MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Growing our donated organ supply

Thu, 04/11/2024 - 3:20pm

For those in need of one, an organ transplant is a matter of life and death. 

Every year, the medical procedure gives thousands of people with advanced or end-stage diseases extended life. This “second chance” is heavily dependent on the availability, compatibility, and proximity of a precious resource that can’t be simply bought, grown, or manufactured — at least not yet.

Instead, organs must be given — cut from one body and implanted into another. And because living donation is viable only in certain cases, many organs become available only after the donor’s death.

Unsurprisingly, the logistical and ethical complexity of distributing a limited number of transplant organs to a growing wait list of patients has received much attention. There’s an important part of the process that has received less focus, however, and which may hold significant untapped potential: organ procurement itself.

“If you have a donated organ, who should you give it to? This question has been extensively studied in operations research, economics, and even applied computer science,” says Hammaad Adam, a graduate student in the Social and Engineering Systems (SES) doctoral program at the MIT Institute for Data, Systems, and Society (IDSS). “But there’s been a lot less research on where that organ comes from in the first place.”

In the United States, nonprofits called organ procurement organizations, or OPOs, are responsible for finding and evaluating potential donors, interacting with grieving families and hospital administrations, and recovering and delivering organs — all while following the federal laws that serve as both their mandate and guardrails. Recent studies estimate that obstacles and inefficiencies lead to thousands of organs going uncollected every year, even as the demand for transplants continues to grow.

“There’s been little transparent data on organ procurement,” argues Adam. Working with MIT computer science professors Marzyeh Ghassemi and Ashia Wilson, and in collaboration with stakeholders in organ procurement, Adam led a project to create a dataset called ORCHID: Organ Retrieval and Collection of Health Information for Donation. ORCHID contains a decade of clinical, financial, and administrative data from six OPOs.

“Our goal is for the ORCHID database to have an impact in how organ procurement is understood, internally and externally,” says Ghassemi.

Efficiency and equity 

A desire to make an impact is what drew Adam to SES and MIT. With a background in applied math and experience in strategy consulting, solving problems with technical components sits squarely in his wheelhouse.

“I really missed challenging technical problems from a statistics and machine learning standpoint,” he says of his time in consulting. “So I went back and got a master’s in data science, and over the course of my master’s got involved in a bunch of academic research projects in a few different fields, including biology, management science, and public policy. What I enjoyed most were some of the more social science-focused projects that had immediate impact.”

As a grad student in SES, Adam’s research focuses on using statistical tools to uncover health-care inequities, and developing machine learning approaches to address them. “Part of my dissertation research focuses on building tools that can improve equity in clinical trials and other randomized experiments,” he explains.

One recent example of Adam’s work: developing a novel method to stop clinical trials early if the treatment has an unintended harmful effect for a minority group of participants. “I’ve also been thinking about ways to increase minority representation in clinical trials through improved patient recruitment,” he adds.

Racial inequities in health care extend into organ transplantation, where a majority of wait-listed patients are not white — far in excess of those groups’ share of the overall population. Yet fewer organs are donated from many of these communities, owing to obstacles that must be better understood if they are to be overcome.

“My work in organ transplantation began on the allocation side,” explains Adam. “In work under review, we examined the role of race in the acceptance of heart, liver, and lung transplant offers by physicians on behalf of their patients. We found that Black race of the patient was associated with significantly lower odds of organ offer acceptance — in other words, transplant doctors seemed more likely to turn down organs offered to Black patients. This trend may have multiple explanations, but it is nevertheless concerning.”

Adam’s research has also found that donor-candidate race match was associated with significantly higher odds of offer acceptance, an association that Adam says “highlights the importance of organ donation from racial minority communities, and has motivated our work on equitable organ procurement.”
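
To give a concrete sense of that kind of association analysis, here is a minimal sketch of a logistic regression on simulated offer data, reported as odds ratios. Every variable, coefficient, and row of data below is invented for illustration; this is not Adam’s model or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
race_match = rng.integers(0, 2, n)      # 1 if donor and candidate race match
organ_quality = rng.normal(size=n)      # stand-in for clinical covariates

# Simulate acceptance with higher odds when race_match == 1.
log_odds = -0.5 + 0.4 * race_match + 0.8 * organ_quality
accepted = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

X = sm.add_constant(np.column_stack([race_match, organ_quality]))
fit = sm.Logit(accepted, X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios; values > 1 mean higher acceptance odds
```

An odds ratio above 1 for the race-match column would correspond to the kind of positive association described above.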

Working with Ghassemi through the IDSS Initiative on Combatting Systemic Racism, Adam was introduced to OPO stakeholders looking to collaborate. “It’s this opportunity to impact not only health-care efficiency, but also health-care equity, that really got me interested in this research,” says Adam.

Making an impact

Creating a database like ORCHID means solving problems in multiple domains, from the technical to the political. Some efforts never clear the first hurdle: obtaining the data at all. Thankfully, several OPOs were already seeking collaborations and looking to improve their performance.

“We have been lucky to have a strong partnership with the OPOs, and we hope to work together to find important insights to improve efficiency and equity,” says Ghassemi.

The value of a database like ORCHID is in its potential for generating new insights, especially through quantitative analysis with statistics and computing tools like machine learning. The potential value in ORCHID was recognized with an MIT Prize for Open Data, an MIT Libraries award highlighting the importance and impact of research data that is openly shared.

“It’s nice that the work got some recognition,” says Adam of the prize. “And it was cool to see some of the other great open data work that’s happening at MIT. I think there’s real impact in releasing publicly available data in an important and understudied domain.”

All the same, Adam knows that building the database is only the first step.

“I’m very interested in understanding the bottlenecks in the organ procurement process,” he explains. “As part of my thesis research, I’m exploring this by modeling OPO decision-making using causal inference and structural econometrics.”

Using insights from this research, Adam also aims to evaluate policy changes that can improve both equity and efficiency in organ procurement. “And we’re hoping to recruit more OPOs, and increase the amount of data we’re releasing,” he says. “The dream state is every OPO joins our collaboration and provides updated data every year.”

Adam is excited to see how other researchers might use the data to address inefficiencies in organ procurement. “Every organ donor saves between three and four lives,” he says. “So every research project that comes out of this dataset could make a real impact.”

New AI method captures uncertainty in medical images

Thu, 04/11/2024 - 11:00am

In biomedicine, segmentation involves annotating pixels from an important structure in a medical image, like an organ or cell. Artificial intelligence models can help clinicians by highlighting pixels that may show signs of a certain disease or anomaly.

However, these models typically only provide one answer, while the problem of medical image segmentation is often far from black and white. Five expert human annotators might provide five different segmentations, perhaps disagreeing on the existence or extent of the borders of a nodule in a lung CT image.

“Having options can help in decision-making. Even just seeing that there is uncertainty in a medical image can influence someone’s decisions, so it is important to take this uncertainty into account,” says Marianne Rakic, an MIT computer science PhD candidate.

Rakic is lead author of a paper with others at MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital that introduces a new AI tool that can capture the uncertainty in a medical image.

Known as Tyche (named for the Greek divinity of chance), the system provides multiple plausible segmentations that each highlight slightly different areas of a medical image. A user can specify how many options Tyche outputs and select the most appropriate one for their purpose.

Importantly, Tyche can tackle new segmentation tasks without needing to be retrained. Training is a data-intensive process that involves showing a model many examples and requires extensive machine-learning experience.

Because it doesn’t need retraining, Tyche could be easier for clinicians and biomedical researchers to use than some other methods. It could be applied “out of the box” for a variety of tasks, from identifying lesions in a lung X-ray to pinpointing anomalies in a brain MRI.

Ultimately, this system could improve diagnoses or aid in biomedical research by calling attention to potentially crucial information that other AI tools might miss.

“Ambiguity has been understudied. If your model completely misses a nodule that three experts say is there and two experts say is not, that is probably something you should pay attention to,” adds senior author Adrian Dalca, an assistant professor at Harvard Medical School and MGH, and a research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

Their co-authors include Hallee Wong, a graduate student in electrical engineering and computer science; Jose Javier Gonzalez Ortiz PhD ’23; Beth Cimini, associate director for bioimage analysis at the Broad Institute; and John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering. Rakic will present Tyche at the IEEE Conference on Computer Vision and Pattern Recognition, where it has been selected as a highlight.

Addressing ambiguity

AI systems for medical image segmentation typically use neural networks. Loosely based on the human brain, neural networks are machine-learning models comprising many interconnected layers of nodes, or neurons, that process data.

After speaking with collaborators at the Broad Institute and MGH who use these systems, the researchers realized two major issues limit their effectiveness. The models cannot capture uncertainty and they must be retrained for even a slightly different segmentation task.

Some methods try to overcome one pitfall, but tackling both problems with a single solution has proven especially tricky, Rakic says. 

“If you want to take ambiguity into account, you often have to use an extremely complicated model. With the method we propose, our goal is to make it easy to use with a relatively small model so that it can make predictions quickly,” she says.

The researchers built Tyche by modifying a straightforward neural network architecture.

A user first feeds Tyche a few examples that show the segmentation task. For instance, examples could include several images of lesions in a heart MRI that have been segmented by different human experts so the model can learn the task and see that there is ambiguity.

The researchers found that just 16 example images, called a “context set,” is enough for the model to make good predictions, but there is no limit to the number of examples one can use. The context set enables Tyche to solve new tasks without retraining.

For Tyche to capture uncertainty, the researchers modified the neural network so it outputs multiple predictions based on one medical image input and the context set. They adjusted the network’s layers so that, as data move from layer to layer, the candidate segmentations produced at each step can “talk” to each other and the examples in the context set.

In this way, the model can ensure that candidate segmentations are all a bit different, but still solve the task.

“It is like rolling dice. If your model can roll a two, three, or four, but doesn’t know you have a two and a four already, then either one might appear again,” she says.

They also modified the training process so that the model is rewarded for maximizing the quality of its best prediction.
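
A toy version of such a best-candidate objective: only the candidate closest to the annotation is penalized, leaving the other candidates free to cover different plausible segmentations. This is a sketch under assumed conventions (PyTorch, a soft Dice loss, batch-first tensors), not the paper’s actual implementation.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    # pred, target: (batch, H, W) soft masks with values in [0, 1].
    inter = (pred * target).sum(dim=(-2, -1))
    denom = pred.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return 1 - (2 * inter + eps) / (denom + eps)  # per-image loss, shape (batch,)

def best_candidate_loss(candidates, target):
    # candidates: (batch, k, H, W) — k candidate segmentations per image.
    k = candidates.shape[1]
    losses = torch.stack(
        [soft_dice_loss(candidates[:, i], target) for i in range(k)], dim=1)
    # Penalize only the best candidate for each image; the rest stay diverse.
    return losses.min(dim=1).values.mean()
```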

If the user asks for five predictions, they can see all five medical image segmentations Tyche produced, even though one might be better than the others.

The researchers also developed a version of Tyche that can be used with an existing, pretrained model for medical image segmentation. In this case, Tyche enables the model to output multiple candidates by making slight transformations to images.

Better, faster predictions

When the researchers tested Tyche with datasets of annotated medical images, they found that its predictions captured the diversity of human annotators, and that its best predictions were better than any from the baseline models. Tyche also ran faster than most models.

“Outputting multiple candidates and ensuring they are different from one another really gives you an edge,” Rakic says.

The researchers also saw that Tyche could outperform more complex models that have been trained using a large, specialized dataset.

For future work, they plan to try using a more flexible context set, perhaps including text or multiple types of images. In addition, they want to explore methods that could improve Tyche’s worst predictions and enhance the system so it can recommend the best segmentation candidates.

This research is funded, in part, by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.

Improving drug development with a vast map of the immune system

Thu, 04/11/2024 - 12:00am

The human immune system is a network made up of trillions of cells that are constantly circulating throughout the body. The cellular network orchestrates interactions with every organ and tissue to carry out an impossibly long list of functions that scientists are still working to understand. All that complexity limits our ability to predict which patients will respond to treatments and which ones might suffer debilitating side effects.

The issue often leads pharmaceutical companies to stop developing drugs that could help certain patients, halting clinical trials even when drugs show promising results for some people.

Now, Immunai is helping to predict how patients will respond to treatments by building a comprehensive map of the immune system. The company has assembled a vast database it calls AMICA, which combines multiple layers of gene and protein expression data in cells with clinical trial data to match the right drugs to the right patients.

“Our starting point was creating what I call the Google Maps for the immune system,” Immunai co-founder and CEO Noam Solomon says. “We started with single-cell RNA sequencing, and over time we’ve added more and more ‘omics’: genomics, proteomics, epigenomics, all to measure the immune system’s cellular expression and function, to measure the immune environment holistically. Then we started working with pharmaceutical companies and hospitals to profile the immune systems of patients undergoing treatments to really get to the root mechanisms of action and resistance for therapeutics.”

Immunai’s big data foundation is a result of its founders’ unique background. Solomon and co-founder Luis Voloch ’13, SM ’15 hold degrees in mathematics and computer science. In fact, Solomon was a postdoc in MIT’s Department of Mathematics at the time of Immunai’s founding.

Solomon frames Immunai’s mission as stopping the decades-long divergence of computer science and the life sciences. He believes the single biggest factor driving the explosion of computing has been Moore’s Law — our ability to exponentially increase the number of transistors on a chip over the past 60 years. In the pharmaceutical industry, the reverse is happening: By one estimate, the cost of developing a new drug roughly doubles every nine years. The phenomenon has been dubbed Eroom’s Law (“Eroom” for “Moore” spelled backward).

Solomon sees the trend eroding the case for developing new drugs, with huge consequences for patients.

“Why should pharmaceutical companies invest in discovery if they won’t get a return on investment?” Solomon asks. “Today, there’s only a 5 to 10 percent chance that any given clinical trial will be successful. What we’ve built through a very robust and granular mapping of the immune system is a chance to improve the preclinical and clinical stages of drug development.”

A change in plans

Solomon entered Tel Aviv University when he was 14 and earned his bachelor’s degree in computer science by 19. He earned two PhDs in Israel, one in computer science and the other in mathematics, before coming to MIT in 2017 as a postdoc to continue his mathematical research career.

That year Solomon met Voloch, who had already earned bachelor’s and master’s degrees in math and computer science from MIT. But the researchers were soon exposed to a problem that would take them out of their comfort zones and change the course of their careers.

Voloch’s grandfather was receiving a cocktail of treatments for cancer at the time. The cancer went into remission, but he suffered terrible side effects that caused him to stop taking his medication.

Voloch and Solomon began wondering if their expertise could help patients like Voloch’s grandfather.

“When we realized we could make an impact, we made the difficult decision to stop our academic pursuits and start a new journey,” Solomon recalls. “That was the starting point for Immunai.”

Voloch and Solomon soon partnered with Immunai scientific co-founders Ansu Satpathy, a researcher at Stanford University at the time, and Danny Wells, a researcher at the Parker Institute for Cancer Immunotherapy. Satpathy and Wells had shown that single-cell RNA sequencing could be used to gain insights into why patients respond differently to a common cancer treatment.

The team began analyzing single-cell RNA sequencing data published in scientific papers, trying to link common biomarkers with patient outcomes. Then they integrated data from the United Kingdom’s Biobank public health database, finding they were able to improve their models’ predictions. Soon they were incorporating data from hospitals, academic research institutions, and pharmaceutical companies, analyzing information about the structure, function, and environment of cells — multiomics — to get a clearer picture of immune activity.

“Single cell sequencing gives you metrics you can measure in thousands of cells, where you can look at 20,000 different genes, and those metrics give you an immune profile,” Solomon explains. “When you measure all of that over time, especially before and after getting therapy, and compare patients who do respond with patients who don’t, you can apply machine learning models to understand why.”

Those data and models make up AMICA, what Immunai calls the world’s largest cell-level immune knowledge base. AMICA stands for Annotated Multiomic Immune Cell Atlas. It analyzes single cell multiomic data from almost 10,000 patients and bulk-RNA data from 100,000 patients across more than 800 cell types and 500 diseases.
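
As a rough illustration of the responder-versus-nonresponder analysis Solomon describes — per-patient immune profiles in, response prediction out — the sketch below fits a classifier to simulated data. The feature counts, the “informative genes,” and the data itself are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 200, 50   # real profiles span ~20,000 genes
X = rng.normal(size=(n_patients, n_features))

# Pretend the first five features genuinely separate responders.
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5).mean())  # rough out-of-sample accuracy
```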

At the core of Immunai’s approach is a focus on the immune system, which other companies shy away from because of its complexity.

“We don’t want to be like other groups that are studying mainly tumor microenvironments,” Solomon says. “We look at the immune system because the immune system is the common denominator. It’s the one system that is implicated in every disease, in your body’s response to everything that you encounter, whether it’s a viral infection or bacterial infection or a drug that you are receiving — even how you are aging.”

Turning data into better treatments

Immunai has already partnered with some of the largest pharmaceutical companies in the world to help them identify promising treatments and set up their clinical trials for success. Immunai's insights can help partners make critical decisions about treatment schedules, dosing, drug combinations, patient selection, and more.

“Everyone is talking about AI, but I think the most exciting aspect of the platform we have built is the fact that it’s vertically integrated, from wet lab to computational modeling with multiple iterations,” Solomon says. “For example, we may do single-cell immune profiling of patient samples, then we upload that data to the cloud and our computational models come up with insights, and with those insights we do in vitro or in vivo validation to see if our models are right and iteratively improve them.”

Ultimately Immunai wants to enable a future where lab experiments can more reliably turn into impactful new recommendations and treatments for patients.

“Scientists can cure nearly every type of cancer, but only in mice,” Solomon says. “In preclinical models we know how to cure cancer. In human beings, in most cases, we still don’t. To overcome that, most scientists are looking for better ex vivo or in vivo models. Our approach is to be more agnostic as to the model system, but feed the machine with more and more data from multiple model systems. We’re demonstrating that our algorithms can repeatedly beat the top benchmarks in identifying the top preclinical immune features that match to patient outcomes.”

MIT-Mexico Program fosters cross-border collaboration

Wed, 04/10/2024 - 2:50pm

Favianna Colón Irizarry spent last summer at Tecnológico de Monterrey, working alongside Mexican biotechnology researchers to develop a biodegradable coating that prolongs the shelf life of local foods. Assisting in this and other innovative projects at one of Mexico’s top research institutions was the opportunity of a lifetime, for sure. But, for Colón Irizarry, it’s the tapestry of experiences that accompanied her MIT-Mexico internship that will always resonate.

“From my internship, I gleaned a vital lesson: Cultural proficiency is indispensable,” she says.

A sophomore majoring in chemical-biological engineering, Colón Irizarry is among nearly 500 interns who have traveled to Mexico for a summer of work and study since the MIT-Mexico Program was launched by MIT International Science and Technology Initiatives (MISTI) in 2004. A flagship program within the Center for International Studies (CIS), MISTI offers tailored global experiential learning opportunities to more than 1,200 students each year.

MIT-Mexico has enlisted the support of over 200 host partners in Mexico during the course of its 20-year history.

“It started as one student in 2004 doing an internship. Now in the summer it’s around 30 interns,” says MIT-Mexico Program Director Griselda Gómez, adding that the program has also placed MIT students at Mexican high schools as temporary STEM teachers through 170 Global Teaching Labs since 2012.

As the program begins its third decade, both Gómez and Faculty Director Paulo Lozano point to the number of students MIT-Mexico has involved over the years — contributing to myriad cross-border research partnerships — as the program’s foremost achievement.

“I think the large number of students that have gone to Mexico is a great accomplishment,” says Lozano, a Tecnológico de Monterrey alumnus and now MIT’s Miguel Alemán Velasco Professor of Aeronautics and Astronautics.

He credits Gómez, director of the program since 2006, with the initiative’s overall success, including “being very careful that the places we send our students are safe.”

For her part, Gómez says accommodating the interests of Mexico-bound students across a wide spectrum of academic subjects and fields “is a personal mission for me.”

“If students want to go to Mexico, I really want them to go and have a great experience. If we don’t have a specific project (matching student interests), we will go and look for one,” she says. “It’s very personalized.”

While MIT-Mexico offers internships in MISTI’s designated “impact areas” of climate and sustainability, health, artificial intelligence, and social impact, over the years it has arranged summer internships in several other fields, including architecture, urban planning, agriculture, and aeronautics.

Last summer, for example, MIT-Mexico interns worked on initiatives ranging from research on the continued value of textiles and craft methods to projects investigating low-carbon affordable housing solutions and employing AI for financial literacy. Internship topics planned for this summer include “Design of 6G Communication Systems for Smart Cities,” based in Mexico City, and “Automatically Assessing Patients for Refractive Surgery,” in the city of Querétaro.

All are designed to promote cross-cultural experiences and strengthen ties between Mexican and MIT students and faculty, while boosting education, innovation, and entrepreneurship in Mexico and extending the reach of MIT research beyond the United States.

Beyond the long-lasting impact interns say the experience has had on their lives (Gómez reports several “love stories” and even marriages have resulted), “it’s also a connection between researchers in Mexico and researchers at MIT — collaborations that may lead to exciting collaborative research later on,” Lozano says.

Lozano is MIT-Mexico’s second faculty director, taking over about a decade ago from now-retired political economy professor Michael Piore, who helped found the program in response to a proposal from a group of Mexican students attending MIT. Gómez says MIT-Mexico is unique among MISTI programs in that students from the host country were the catalyst for forming it and MIT alumni in Mexico were largely responsible for the funding that got it off the ground. It was also MISTI’s first program in a Spanish-speaking country.

Learning and practicing how to speak Spanish “in real life” was a primary motivator for what Matt Smith now calls “one of the best decisions I could have made for myself.” Smith, a second-year computer science and engineering major, was among 35 students who spent their January Independent Activities Period in Mexico through the Global Teaching Lab program. Assigned to teach at a Mexico City high school, Smith says the language barrier gradually melted away — at least partially — over a three-week period in which he immersed himself in local museums, parks, and culture and was amazed and impressed by the number of peaceful gardens and natural areas throughout the bustling city.

Like Global Teaching Lab programs in other countries, the MIT-Mexico program aims to increase interest in STEM topics at host country schools. It matches MIT students with high schools in Mexico, and materials are adapted from MIT online resources to prepare tailored workshops on STEM subjects that complement the local school’s curriculum.

The third piece of MIT-Mexico is the MIT Global Seed Fund (GSF), a grant program administered through CIS. GSF promotes and supports early-stage collaborations among MIT researchers and their counterparts in Mexico. The program has awarded more than 50 such grants to over 100 researchers since 2012 to fund collaborative projects that can involve both MIT and Mexican students.

With his appetite whetted by the Global Teaching Lab, Smith came back from Mexico in January determined to apply for an MIT-Mexico internship this summer.

“I decided that three weeks wasn’t enough for me to fully digest the entire city — so why not go again?” says Smith, who was accepted and leaves in early June for a research position at the Instituto Politécnico Nacional in Mexico City.

“Being in another country made me realize how much I’d like to travel the world and see the experiences that other people are having,” he adds. “I highly recommend the experience for anyone looking to do something impactful in another country while exploring the best parts of the community.”

With inspiration from “Tetris,” MIT researchers develop a better radiation detector

Wed, 04/10/2024 - 11:00am

The spread of radioactive isotopes from the Fukushima Daiichi Nuclear Power Plant in Japan in 2011 and the ongoing threat of a possible release of radiation from the Zaporizhzhia nuclear complex in the Ukrainian war zone have underscored the need for effective and reliable ways of detecting and monitoring radioactive isotopes. Less dramatically, everyday operations of nuclear reactors, mining and processing of uranium into fuel rods, and the disposal of spent nuclear fuel also require monitoring of radioisotope release.

Now, researchers at MIT and the Lawrence Berkeley National Laboratory (LBNL) have come up with a computational basis for designing very simple, streamlined versions of sensor setups that can pinpoint the direction of a distributed source of radiation. They also demonstrated that by moving that sensor around to get multiple readings, they can pinpoint the physical location of the source. The inspiration for their clever innovation came from a surprising source: the popular computer game “Tetris.”

The team’s findings, which could likely be generalized to detectors for other kinds of radiation, are described in a paper published in Nature Communications, by MIT professors Mingda Li, Lin-Wen Hu, Benoit Forget, and Gordon Kohse; graduate students Ryotaro Okabe and Shangjie Xue; research scientist Jayson Vavrek SM ’16, PhD ’19 at LBNL; and a number of others at MIT and Lawrence Berkeley.

Radiation is usually detected using semiconductor materials, such as cadmium zinc telluride, that produce an electrical response when struck by high-energy radiation such as gamma rays. But because radiation penetrates matter so readily, it’s difficult to determine the direction a signal came from with simple counting. Geiger counters, for example, simply provide a click sound when receiving radiation, without resolving the energy or type, so finding a source requires moving around to try to find the maximum sound, much as handheld metal detectors work. The process requires the user to move closer to the source of radiation, which can add risk.

To provide directional information from a stationary device without getting too close, researchers use an array of detector grids along with another grid called a mask, which imprints a pattern on the array that differs depending on the direction of the source. An algorithm interprets the different timings and intensities of signals received by each separate detector or pixel. This often leads to a complex design of detectors.  

Typical detector arrays for sensing the direction of radiation sources are large and expensive and include at least 100 pixels in a 10 by 10 array. However, the group found that using as few as four pixels arranged in the tetromino shapes of the figures in the “Tetris” game can come close to matching the accuracy of the large, expensive systems. The key is proper computerized reconstruction of the angles of arrival of the rays, based on the times each sensor detects the signal and the relative intensity each one detects, as reconstructed through an AI-guided study of simulated systems.

Of the different four-pixel configurations the researchers tried — square, S-, J-, or T-shaped — they found through repeated experiments that the most precise results were provided by the S-shaped array. This array gave directional readings accurate to within about 1 degree, and all three of the irregular shapes performed better than the square. This approach, Li says, “was literally inspired by ‘Tetris.’”

Key to making the system work is placing a shielding material such as a lead sheet between the pixels to increase the contrast between radiation readings coming into the detector from different directions. The lead between the pixels in these simplified arrays serves the same function as the more elaborate shadow masks used in the larger-array systems. Less symmetrical arrangements, the team found, provide more useful information from a small array, explains Okabe, who is the lead author of the work.
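
In the paper, the mapping from the pixel readings back to a source direction is learned from simulations with a neural network. As a simpler stand-in to show the idea, the toy sketch below invents a smooth angular response for a four-pixel array (the lead between pixels is what makes each pixel’s response depend on direction) and recovers the angle by a maximum-likelihood grid search over Poisson counts. Every parameter in it is illustrative, not from the study.

```python
import numpy as np

# Hypothetical per-pixel response offsets for an S-shaped tetromino;
# shielding makes each pixel's sensitivity peak at a different angle.
PIXEL_PHASES = np.array([0.0, 0.9, 2.1, 3.4])

def expected_counts(theta, strength=1000.0):
    # Expected counts in each pixel for a source at angle theta (radians).
    return strength * (1.0 + 0.5 * np.cos(theta - PIXEL_PHASES))

def estimate_angle(observed, grid=np.linspace(0, 2 * np.pi, 3600)):
    # Poisson log-likelihood of the observed counts over a grid of angles.
    lam = expected_counts(grid[:, None])          # shape (angles, pixels)
    loglik = (observed * np.log(lam) - lam).sum(axis=1)
    return grid[np.argmax(loglik)]

rng = np.random.default_rng(0)
true_theta = 1.2
counts = rng.poisson(expected_counts(true_theta))
print(estimate_angle(counts))  # should land near 1.2 radians
```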

“The merit of using a small detector is in terms of engineering costs,” he says. Not only are the individual detector elements expensive, typically made of cadmium-zinc-telluride, or CZT, but all of the interconnections carrying information from those pixels also become much more complex. “The smaller and simpler the detector is, the better it is in terms of applications,” adds Li.

While there have been other versions of simplified arrays for radiation detection, many are only effective if the radiation is coming from a single localized source. They can be confused by multiple sources or those that are spread out in space, while the “Tetris”-based version can handle these situations well, adds Xue, co-lead author of the work.

In a single-blind field test at the Berkeley Lab with a real cesium radiation source, led by Vavrek, in which the MIT researchers did not know the ground-truth source location, the test device found the direction of and distance to the source with high accuracy.

“Radiation mapping is of utmost importance to the nuclear industry, as it can help rapidly locate sources of radiation and keep everyone safe,” says co-author Forget, an MIT professor of nuclear engineering and head of the Department of Nuclear Science and Engineering.

Vavrek, another co-lead author, says that while their study focused on gamma-ray sources, he believes the computational tools they developed to extract directional information from a limited number of pixels are “much, much more general.” The approach isn’t restricted to certain wavelengths; it could also be used for neutrons, or even other forms of light such as ultraviolet, adds Hu, a senior scientist at the MIT Nuclear Reactor Laboratory.

Nick Mann, a scientist with the Defense Systems branch at the Idaho National Laboratory, says, “This work is critical to the U.S. response community and the ever-increasing threat of a radiological incident or accident.”

Additional research team members include Ryan Pavlovsky, Victor Negut, Brian Quiter, and Joshua Cates at Lawrence Berkeley National Laboratory, and Jiankai Yu, Tongtong Liu, and Stephanie Jegelka at MIT. The work was supported by the U.S. Department of Energy.

QS World University Rankings rates MIT No. 1 in 11 subjects for 2024

Wed, 04/10/2024 - 10:00am

QS World University Rankings has placed MIT in the No. 1 spot in 11 subject areas for 2024, the organization announced today.

The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.

MIT also placed second in five subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Chemistry; and Economics and Econometrics.

For 2024, universities were evaluated in 55 specific subjects and five broader subject areas. MIT was ranked No. 1 in the broader subject area of Engineering and Technology and No. 2 in Natural Sciences.

Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.

MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 12 straight years.

Tackling cancer at the nanoscale

Wed, 04/10/2024 - 4:00am

When Paula Hammond first arrived on MIT’s campus as a first-year student in the early 1980s, she wasn’t sure if she belonged. In fact, as she told an MIT audience yesterday, she felt like “an imposter.”

However, that feeling didn’t last long, as Hammond began to find support among her fellow students and MIT’s faculty. “Community was really important for me, to feel that I belonged, to feel that I had a place here, and I found people who were willing to embrace me and support me,” she said.

Hammond, a world-renowned chemical engineer who has spent most of her academic career at MIT, made her remarks during the 2023-24 James R. Killian Jr. Faculty Achievement Award lecture.

Established in 1971 to honor MIT’s 10th president, James Killian, the Killian Award recognizes extraordinary professional achievements by an MIT faculty member. Hammond was chosen for this year’s award “not only for her tremendous professional achievements and contributions, but also for her genuine warmth and humanity, her thoughtfulness and effective leadership, and her empathy and ethics,” according to the award citation.

“Professor Hammond is a pioneer in nanotechnology research. With a program that extends from basic science to translational research in medicine and energy, she has introduced new approaches for the design and development of complex drug delivery systems for cancer treatment and noninvasive imaging,” said Mary Fuller, chair of MIT’s faculty and a professor of literature, who presented the award. “As her colleagues, we are delighted to celebrate her career today.”

In January, Hammond began serving as MIT’s vice provost for faculty. Before that, she chaired the Department of Chemical Engineering for eight years, and she was named an Institute Professor in 2021.

A versatile technique

Hammond, who grew up in Detroit, credits her parents with instilling a love of science. Her father was one of very few Black PhDs in biochemistry at the time, while her mother earned a master’s degree in nursing from Howard University and founded the nursing school at Wayne County Community College. “That provided a huge amount of opportunity for women in the area of Detroit, including women of color,” Hammond noted.

After earning her bachelor’s degree from MIT in 1984, Hammond worked as an engineer before returning to the Institute as a graduate student, earning her PhD in 1993. After a two-year postdoc at Harvard University, she returned to join the MIT faculty in 1995.

At the heart of Hammond’s research is a technique she developed to create thin films that can essentially “shrink-wrap” nanoparticles. By tuning the chemical composition of these films, the particles can be customized to deliver drugs or nucleic acids and to target specific cells in the body, including cancer cells.

To make these films, Hammond begins by layering positively charged polymers onto a negatively charged surface. Then, more layers can be added, alternating positively and negatively charged polymers. Each of these layers may contain drugs or other useful molecules, such as DNA or RNA. Some of these films contain hundreds of layers, others just one, making them useful for a wide range of applications.

“What’s nice about the layer-by-layer process is I can choose a group of degradable polymers that are nicely biocompatible, and I can alternate them with our drug materials. This means that I can build up thin film layers that contain different drugs at different points within the film,” Hammond said. “Then, when the film degrades, it can release those drugs in reverse order. This is enabling us to create complex, multidrug films, using a simple water-based technique.”

Hammond described how these layer-by-layer films can be used to promote bone growth, in an application that could help people born with congenital bone defects or people who experience traumatic injuries.

For that use, her lab has created films with layers of two proteins. One of these, BMP-2, is a protein that interacts with adult stem cells and induces them to differentiate into bone cells, generating new bone. The second is a growth factor called VEGF, which stimulates the growth of new blood vessels that help bone to regenerate. These layers are applied to a very thin tissue scaffold that can be implanted at the injury site.

Hammond and her students designed the coating so that once implanted, it would release VEGF early, over a week or so, and continue releasing BMP-2 for up to 40 days. In a study of mice, they found that this tissue scaffold stimulated the growth of new bone that was nearly indistinguishable from natural bone.
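
As a toy calculation of that staged-release schedule — VEGF over roughly a week, BMP-2 over up to 40 days — the sketch below uses simple linear release profiles. The profiles are illustrative placeholders, not measured kinetics from the study.

```python
def released_fraction(day, duration_days):
    # Linear release over `duration_days`, capped at 100 percent.
    return min(day / duration_days, 1.0)

for day in (1, 7, 20, 40):
    vegf = released_fraction(day, 7)    # released early, over about a week
    bmp2 = released_fraction(day, 40)   # released slowly, over ~40 days
    print(f"day {day:2d}: VEGF {vegf:.0%} released, BMP-2 {bmp2:.0%} released")
```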

Targeting cancer

As a member of MIT’s Koch Institute for Integrative Cancer Research, Hammond has also developed layer-by-layer coatings that can improve the performance of nanoparticles used for cancer drug delivery, such as liposomes or nanoparticles made from a polymer called PLGA.

“We have a broad range of drug carriers that we can wrap this way. I think of them like a gobstopper, where there are all those different layers of candy and they dissolve one at a time,” Hammond said.

Using this approach, Hammond has created particles that can deliver a one-two punch to cancer cells. First, the particles release a dose of a nucleic acid such as short interfering RNA (siRNA), which can turn off a cancerous gene, or microRNA, which can activate tumor suppressor genes. Then, the particles release a chemotherapy drug such as cisplatin, to which the cells are now more vulnerable.

The particles also include a negatively charged outer “stealth layer” that protects them from being broken down in the bloodstream before they can reach their targets. This outer layer can also be modified to help the particles get taken up by cancer cells, by incorporating molecules that bind to proteins that are abundant on tumor cells.

In more recent work, Hammond has begun developing nanoparticles that can target ovarian cancer and help prevent recurrence of the disease after chemotherapy. In about 70 percent of ovarian cancer patients, the first round of treatment is highly effective, but tumors recur in about 85 percent of those cases, and these new tumors are usually highly drug resistant.

By altering the type of coating applied to drug-delivering nanoparticles, Hammond has found that the particles can be designed to either get inside tumor cells or stick to their surfaces. Using particles that stick to the cells, she has designed a treatment that could help to jumpstart a patient’s immune response to any recurrent tumor cells.

“With ovarian cancer, very few immune cells exist in that space, and because they don’t have a lot of immune cells present, it’s very difficult to rev up an immune response,” she said. “However, if we can deliver a molecule to neighboring cells, those few that are present, and get them revved up, then we might be able to do something.”

To that end, she designed nanoparticles that deliver IL-12, a cytokine that stimulates nearby T cells to spring into action and begin attacking tumor cells. In a study of mice, she found that this treatment induced a long-term memory T-cell response that prevented recurrence of ovarian cancer.

Hammond closed her lecture by describing the impact that the Institute has had on her throughout her career.

“It’s been a transformative experience,” she said. “I really think of this place as special because it brings people together and enables us to do things together that we couldn’t do alone. And it is that support we get from our friends, our colleagues, and our students that really makes things possible.”

A faster, better way to prevent an AI chatbot from giving toxic responses

Wed, 04/10/2024 - 12:00am

A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.

To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.

Researchers from the Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.

They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.

The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.

“Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments. Our method provides a faster and more effective way to do this quality assurance,” says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI lab and lead author of a paper on this red-teaming approach.

Hong’s co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.

Automated red-teaming 

Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So, not only can they learn to generate toxic words or describe illegal activities, the models could also leak personal information they may have picked up.

The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.

Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.

But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.

For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.

“If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts,” Hong says.

During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.

Rewarding curiosity

The red-team model’s objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.

First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious they include two novelty rewards. One rewards the model based on the similarity of words in its prompts, and the other rewards the model based on semantic similarity. (Less similarity yields a higher reward.)

To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
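
Putting those pieces together, a schematic of the combined reward might look like the sketch below. The weights, the Jaccard and cosine novelty measures, and the function names are all assumptions made for illustration; the paper’s actual formulation differs in its details.

```python
import numpy as np

def lexical_novelty(tokens, past_token_sets):
    # Jaccard distance to the most similar past prompt (1.0 = fully novel).
    if not past_token_sets:
        return 1.0
    overlap = max(len(tokens & p) / len(tokens | p) for p in past_token_sets)
    return 1.0 - overlap

def semantic_novelty(embedding, past_embeddings):
    # 1 minus the highest cosine similarity to any past prompt embedding.
    if len(past_embeddings) == 0:
        return 1.0
    sims = past_embeddings @ embedding / (
        np.linalg.norm(past_embeddings, axis=1) * np.linalg.norm(embedding))
    return 1.0 - sims.max()

def redteam_reward(toxicity, entropy, tokens, embedding,
                   past_token_sets, past_embeddings, naturalness,
                   weights=(1.0, 0.1, 0.5, 0.5, 0.5)):
    w_tox, w_ent, w_lex, w_sem, w_nat = weights
    return (w_tox * toxicity          # classifier's toxicity rating
            + w_ent * entropy         # entropy bonus: stay exploratory
            + w_lex * lexical_novelty(tokens, past_token_sets)
            + w_sem * semantic_novelty(embedding, past_embeddings)
            + w_nat * naturalness)    # discourage nonsensical text

print(redteam_reward(0.8, 1.2, {"new", "prompt"}, np.ones(4),
                     [{"old", "prompt"}], np.eye(4), 0.9))
```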

With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.

They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this “safe” chatbot.

“We are seeing a surge of models, which is only expected to rise. Imagine thousands of models or even more and companies/labs pushing model updates frequently. These models are going to be an integral part of our lives and it’s important that they are verified before released for public consumption. Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort to ensure a safer and trustworthy AI future,” says Agrawal.  

In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.

“If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming,” says Agrawal.

This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

Has remote work changed how people travel in the U.S.?

Tue, 04/09/2024 - 5:00am

The prevalence of remote work since the start of the Covid-19 pandemic has significantly changed urban transportation patterns in the U.S., according to a new study led by MIT researchers.

The research finds significant variation between the effects of remote work on vehicle miles driven and on mass-transit ridership across the U.S.

“A 1 percent decrease in onsite workers leads to a roughly 1 percent reduction in [automobile] vehicle miles driven, but a 2.3 percent reduction in mass transit ridership,” says Yunhan Zheng SM ’21, PhD ’24, an MIT postdoc who is a co-author of the study.

“This is one of the first studies that identifies the causal effect of remote work on vehicle miles traveled and transit ridership across the U.S.,” adds Jinhua Zhao, an MIT professor and another co-author of the paper.

By accounting for many of the nuances of the issue, across the lower 48 states and the District of Columbia as well as 217 metropolitan areas, the scholars believe they have arrived at a robust conclusion demonstrating the effects of working from home on larger mobility patterns.

The paper, “Impacts of remote work on vehicle miles traveled and transit ridership in the USA,” appears today in the journal Nature Cities. The authors are Zheng, a doctoral graduate of MIT’s Department of Civil and Environmental Engineering and a postdoc at the Singapore–MIT Alliance for Research and Technology (SMART); Shenhao Wang PhD ’20, an assistant professor at the University of Florida; Lun Liu, an assistant professor at Peking University; Jim Aloisi, a lecturer in MIT’s Department of Urban Studies and Planning (DUSP); and Zhao, the Professor of Cities and Transportation, founder of the MIT Mobility Initiative, and director of MIT’s JTL Urban Mobility Lab and Transit Lab.

The researchers gathered data on the prevalence of remote work from multiple sources, including Google location data, travel data from the U.S. Federal Highway Administration and the National Transit Database, and the monthly U.S. Survey of Working Arrangements and Attitudes (run jointly by Stanford University, the University of Chicago, ITAM, and MIT).

The study reveals significant variation among U.S. states when it comes to how much the rise of remote work has affected mileage driven.

“The impact of a 1 percent change in remote work on the reduction of vehicle miles traveled in New York state is only about one-quarter of that in Texas,” Zheng observes. “There is real variation there.”

At the same time, remote work has had the biggest effect on mass-transit revenues in places with widely used systems, with New York City, Chicago, San Francisco, Boston, and Philadelphia making up the top five hardest-hit metro areas.

The overall effect is surprisingly consistent over time, from early 2020 through late 2022.

“In terms of the temporal variation, we found that the effect is quite consistent across our whole study period,” Zheng says. “It’s not just significant in the early stage of the pandemic, when remote work was a necessity for many. The magnitude remains consistent into the later period, when many people have the flexibility to choose where they want to work. We think this may have long-term implications.”

Additionally, the study estimates the impact that still larger numbers of remote workers could have on the environment and mass transit.

“On a national basis, we estimate that a 10 percent decrease in the number of onsite workers compared to prepandemic levels will reduce the annual total vehicle-related CO2 emissions by 191.8 million metric tons,” Wang says.

The study also projects that across the 217 metropolitan areas in the study, a 10 percent decrease in the number of onsite workers, compared to prepandemic levels, would lead to an annual loss of 2.4 billion transit trips and $3.7 billion in fare revenue — equal to roughly 27 percent of the annual transit ridership and fare revenue in 2019.
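
As a back-of-the-envelope check on those projections, one can scale the paper’s estimated elasticities linearly, as in the sketch below. This simple extrapolation is an assumption for illustration and will not exactly reproduce the published totals, which are computed against specific 2019 baselines.

```python
# Per the study's estimates: each 1 percent drop in onsite workers maps to
# roughly a 1 percent drop in vehicle miles traveled (VMT) and a 2.3 percent
# drop in mass-transit ridership.
VMT_PER_PCT = 1.0       # percent VMT reduction per percent fewer onsite workers
TRANSIT_PER_PCT = 2.3   # percent ridership reduction per percent fewer onsite workers

def projected_reductions(onsite_drop_pct):
    return {
        "vmt_drop_pct": VMT_PER_PCT * onsite_drop_pct,
        "transit_drop_pct": TRANSIT_PER_PCT * onsite_drop_pct,
    }

print(projected_reductions(10))
# -> {'vmt_drop_pct': 10.0, 'transit_drop_pct': 23.0}
```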

“The substantial influence of remote work on transit ridership highlights the need for transit agencies to adapt their services accordingly, investing in services tailored to noncommuting trips and implementing more flexible schedules to better accommodate the new demand patterns,” Zhao says.

The research received support from the MIT Energy Initiative; the Barr Foundation; the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise program; the Research Opportunity Seed Fund 2023 from the University of Florida; and the Beijing Social Science Foundation.

Physicist Netta Engelhardt is searching black holes for universal truths

Tue, 04/09/2024 - 12:00am

As Netta Engelhardt sees it, secrets never die. Not even in a black hole.

Engelhardt is a theoretical physicist at MIT who is teasing out the convoluted physics in and around black holes, in search of the fundamental ingredients that shape our universe.  In the process, she’s upending popular ideas in the fields of quantum and gravitational physics.

One of the biggest revelations from her work to date is the way in which information that falls into a black hole can avoid being lost forever. In 2019, shortly before coming to MIT, she and other physicists used gravitational methods to demonstrate that whatever might happen to the information inside a black hole can in principle be undone as the black hole evaporates away.

The team’s conclusion stunned the physics community, as it constituted the most quantitative direct advance toward resolving the longstanding black hole information paradox — a conundrum raised in the work of physicist Stephen Hawking. The paradox sets two results that both appear to be true against each other: one, the pillar of “unitarity,” the principle that information in the universe is neither created nor destroyed; and two, a calculation by Hawking from standard gravitational physics showing that information can indeed be destroyed — specifically, when it radiates out from an evaporating black hole.

“Imagine you had a diary and you set it on fire in the lab,” Engelhardt explains. “According to unitarity, if you knew the fundamental dynamics of the universe, you could take the ashes and reverse-engineer them to see the diary and its contents. It would be very difficult, but you could do it. But Hawking’s calculation shows that, even if you knew the fundamental dynamics of the universe, you still couldn’t reverse-engineer the process of black hole evaporation.”

Engelhardt, then at Princeton University, and her colleagues showed that, contrary to Hawking’s calculation, it is possible to use gravitational physics to see that the process of black hole evaporation does in fact conserve information.

As a newly tenured member of the MIT faculty, Engelhardt is now tackling other longstanding questions about gravity, hoping to fill the last, largest gaps in physicists’ understanding of the universe at the most fundamental scales.

“At the end of the day, I’m driven by questions about nature and how the universe works,” says Engelhardt, who is now an associate professor of physics. “Answering these questions is a vocation.”

Gateway to gravity

Engelhardt was born in Jerusalem, where she developed an early interest in all things science. When she was 9, she and her family moved to Boston, partly so that her mother could enroll in a visiting scholars program in MIT Linguistics. New to America, and having only learned to read in Hebrew, Engelhardt spent those first weeks reading every book the family brought with them, some of them atypical for a 9-year-old.

“I read all the books we had left in Hebrew, until at long last, there was just one left, which was Stephen Hawking’s ‘A Brief History of Time.’”

Hawking’s book was Engelhardt’s first introduction to black holes, the Big Bang, and the fundamental forces and building blocks that shape the universe. What she found especially exciting were the missing pieces to physicists’ understanding.

“People can spend their entire life searching for answers to these very foundational questions that I just found completely fascinating,” Engelhardt says. “Where does the universe come from? What are the fundamental building blocks? Those are questions I realized I just wanted to know the answer to. And from that point on, I wasn’t just set on physics — I was set on quantum gravity at 9.”

She fed that early spark through college, double-majoring in physics and math at Brandeis University. She went on to the University of California at Santa Barbara, where she pursued a PhD in physics and really began to dig into the puzzle of quantum gravity, a field that seeks to describe the effects of gravity according to the principles of quantum mechanics.

The theory of quantum mechanics is a remarkably good blueprint for describing the interactions in nature at the scale of atoms and smaller. These quantum interactions are governed by three of the four fundamental forces that physicists know of. But the fourth force, gravity, has eluded quantum mechanical explanation, particularly in situations where the effect of gravity is overwhelming, such as deep inside black holes.

In such extreme regimes, existing theories offer no prediction for how matter and gravity behave. A complete theory of quantum gravity would fill that gap, rounding out physicists’ understanding of the universe’s workings at the most fundamental scales.

For Engelhardt, quantum gravity is also a gateway to other mysteries, such as how space and time emerge from something even more fundamental. She spent much of her graduate work focused on questions about the geometry of spacetime, and how its curvature may arise from something more basic, as described by quantum gravity.

“Those are big questions to tackle,” Engelhardt admits. “The largest bulk of my time is spent thinking, hmm, how do I take this vague intuition and condense it into a question that can be concretely answered, quantitatively? That’s a large part of the progress you can make.”

A black hole imprint

In 2014, midway through her PhD work, Engelhardt honed one of her questions about quantum gravity and spacetime emergence to a specific problem: how to compute the quantum corrections to the entropy of gravitating systems.

“There are surfaces in spacetime, called extremal surfaces, that are sensitive to gravitational curving,” Engelhardt explains. “There already was a formula that used such surfaces to compute the entropy of gravitational systems in the absence of quantum effects. But in realistic quantum gravity, there are quantum effects, and I wanted a formula that took that into account.”

She and postdoc Aron Wall worked to construct a general equation that would describe how entropy of gravitating regions should be computed when quantum effects are taken into account. The result: quantum extremal surfaces, a quantum generalization of the old classical surfaces.
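
Schematically, the Engelhardt-Wall prescription computes gravitational entropy by extremizing a “generalized entropy” that adds the entropy of quantum fields to the classical area term. In the notation standard in the field (details omitted), it reads:

    S_{\mathrm{gen}}(X) \;=\; \frac{\mathrm{Area}(X)}{4 G_{N} \hbar} \;+\; S_{\mathrm{out}}(X),
    \qquad
    S \;=\; \operatorname{ext}_{X}\, S_{\mathrm{gen}}(X)

The first term is the classical Bekenstein-Hawking contribution of a candidate surface X, and S_out is the entropy of the quantum fields outside it; a quantum extremal surface extremizes the sum rather than the area alone.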

At the time, the exercise was purely theoretical, as the quantum effects from most processes in the universe are too small to even slightly wobble the surrounding spacetime. Their new equation would therefore yield much the same predictions as its purely classical counterpart.

But in 2019, as a postdoc at Princeton, Engelhardt and others realized that this equation might give a very different prediction for what a quantum extremal surface might do, and what the corresponding quantum gravitational entropy would be, in one specific situation: as a black hole evaporates. What’s more, what the equation predicts could be the key to resolving the longstanding black hole information paradox.

“This was a very dramatic moment,” she recalls. “Everyone was working around the clock to try to figure this out, not really sleeping at night because we were so excited.”

After three sleep-deprived weeks, the physicists were convinced that they had taken a dramatic step toward resolving the paradox: As a black hole evaporates and releases radiation in a scrambled form of the information that originally fell into it, a new, completely nonclassical quantum extremal surface emerges, resulting in a gravitational entropy that shrinks as more information radiates away. They reasoned that this surface serves as an imprint of the radiated information, which could in principle be used to reconstruct the original information, the very information Hawking’s calculation said would be lost forever.

“That was a Eureka! moment,” she says. “I remember driving home, and thinking, and maybe even saying out loud, ‘I think this is it!’”

It’s not yet clear exactly what quantity Hawking’s calculation was tracking when it suggested the contrary. But Engelhardt considers the paradox close to resolved, at least in broad strokes, and her team’s work has held up to repeated checks and careful scrutiny. In the meantime, she has set her sights on other questions.

Testing pillars

Engelhardt’s breakthrough came in May of 2019. Just two months later, she headed to Cambridge to start her faculty position at MIT. She first visited the campus and interviewed for the position in 2017.

“There was a palpable sense of excitement about science in the Center for Theoretical Physics, and you feel it everywhere — it permeates the Institute,” she recalls. “That was one of the reasons I wanted to be at MIT.”

She was offered the position, which she accepted and chose to defer for a year to complete her postdoc at Princeton. In July 2019, she started at MIT as an assistant professor of physics.

In the early days on campus, as she set up her research group, Engelhardt followed up on the black hole information paradox, to see if she could find out not only how Hawking got it wrong but what he was actually calculating, if not the entropy of the radiation.

“At the end of the day, if you really want to resolve the paradox, we have to explain what Hawking’s mistake was,” Engelhardt says. 

Her hunch is that Hawking was, in a way, computing a different quantity altogether. The work that raised the paradox to begin with, she believes, might have been tracking another type of gravitational entropy, one that appears to result in information loss when run forward as a black hole evaporates. Because this other form of gravitational entropy does not correspond to information content, its increase would not be paradoxical.

Today, she and her students are following up on questions related to quantum gravity as well as a thornier concept having to do with singularities — instances when an object such as a star collapses into a region so gravitationally intense as to destroy spacetime itself. Physicists historically have predicted that singularities should only be present behind a black hole’s event horizon, though others have seen hints that they exist outside of these gravitational boundaries. 

“A lot of my work now is going into understanding how many pillars of gravitational physics are just not true as we currently understand them,” she says. “Answering these questions is the ultimate motivation.”

MIT community members gather on campus to witness 93 percent totality

Mon, 04/08/2024 - 6:00pm

The stars and other celestial objects truly aligned on MIT’s campus Monday. After a weekend of rain, the community was treated to clear skies and warm temperatures to view the only partial eclipse the area will see for the next 20 years.

Community members took in the celestial spectacle in gatherings large and small. Although many traveled north to view the total eclipse, those in Greater Boston were treated to 93 percent coverage and ample ways to appreciate the cosmic wonder.

As the moon met the sun beginning around 2:15 p.m., Kresge Oval hosted crowds of onlookers, with staff members handing out solar filters of various types and encouraging star-struck viewers to sketch what they saw and tell stories. The event was hosted by the MIT Edgerton Center and inspired by the seminar EC.050/090 (Recreate Experiments from History: Inform the Future with the Past).

On the other side of campus, the MIT Museum also hosted a gathering that included a full afternoon of programming. Attendees could hear from an astronomer and ask questions while they took in the views with solar filter glasses.

In Building 55, home to the Department of Earth, Atmospheric, and Planetary Sciences (EAPS), where the lives of stars take up a bit more headspace each day, sights and sounds from NASA’s livestream appeared on the department’s large new media wall.

Each of the gatherings could have been a scene out of a science fiction movie as everyone donned their glasses and looked up in amazement at the darkening sky. Those with extra eyewear to share quickly found themselves with new friends to experience the moment with.

“The Edgerton Center is really about building communities, and this was an opportunity to get the MIT community together to observe this thing that rarely happens and have some conversations about what's really going on,” said Jim Bales, the associate director of the Edgerton Center.

Such events have evoked fear and confusion in Earthlings throughout history, but this time, MIT’s community members seemed more prone to appreciative reflection. Many students, faculty, and staff took a break from terrestrial life to take in the rare natural phenomenon, a welcome planetary disruption to an otherwise typical Monday on Earth.

“Watch parties are cool because you’re learning from what other people have to say about it and you get to meet new people,” said sophomore Sol Roberts. “You can only stare up for so long, but being with other people it makes it more enjoyable.”

Of course, MIT didn’t abandon its scientific bent entirely. The community, after all, was never going to stop helping humanity understand the fundamental workings of the universe. Myriad community members participated in professional and citizen science initiatives of one sort or another. Meanwhile, MIT’s Haystack Observatory in Westford, Massachusetts, measured changes in the atmosphere, and members of the Department of Physics took measurements of the sun’s intensity using the shiny new radio telescope on the roof of Building 54.

As surreal as the skies appeared, the Earth’s surface offered equally fun sights. The gatherings made the eclipse at once a cosmic event and a hyper-local one, an impossibly distant astronomical spectacle shared between friends.

Extracting hydrogen from rocks

Mon, 04/08/2024 - 5:00pm

It’s commonly thought that the most abundant element in the universe, hydrogen, exists mainly alongside other elements — with oxygen in water, for example, and with carbon in methane. But naturally occurring underground pockets of pure hydrogen are punching holes in that notion — and generating attention as a potentially unlimited source of carbon-free power.
 
One interested party is the U.S. Department of Energy, which last month awarded $20 million in research grants to 18 teams from laboratories, universities, and private companies to develop technologies that can lead to cheap, clean fuel from the subsurface.
 
Geologic hydrogen, as it’s known, is produced when water reacts with iron-rich rocks, causing the iron to oxidize. One of the grant recipients, MIT Assistant Professor Iwnetim Abate’s research group, will use its $1.3 million grant to determine the ideal conditions for producing hydrogen underground — considering factors such as catalysts to initiate the chemical reaction, temperature, pressure, and pH levels. The goal is to improve efficiency for large-scale production, meeting global energy needs at a competitive cost.
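
One commonly cited idealized version of that chemistry uses the iron-rich mineral fayalite as a stand-in for the broader family of rocks involved (real subsurface reactions are messier):

    3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}

Water oxidizes the rock’s ferrous iron to magnetite, releasing hydrogen gas in the process.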
 
The U.S. Geological Survey estimates there are potentially billions of tons of geologic hydrogen buried in the Earth’s crust. Accumulations have been discovered worldwide, and a slew of startups are searching for extractable deposits. Abate is looking to jump-start the natural hydrogen production process, implementing “proactive” approaches that involve stimulating production and harvesting the gas.
                                                                                                                         
“We aim to optimize the reaction parameters to make the reaction faster and produce hydrogen in an economically feasible manner,” says Abate, the Chipman Development Professor in the Department of Materials Science and Engineering (DMSE). Abate’s research centers on designing materials and technologies for the renewable energy transition, including next-generation batteries and novel chemical methods for energy storage. 

Sparking innovation

Interest in geologic hydrogen is growing at a time when governments worldwide are seeking carbon-free energy alternatives to oil and gas. In December, French President Emmanuel Macron said his government would provide funding to explore natural hydrogen. And in February, government and private sector witnesses briefed U.S. lawmakers on opportunities to extract hydrogen from the ground.
 
Today, commercial hydrogen is manufactured at $2 a kilogram, mostly for fertilizer and for chemical and steel production, but most methods involve burning fossil fuels, which releases Earth-heating carbon. “Green hydrogen,” produced with renewable energy, is promising, but at $7 per kilogram, it’s expensive.
 
“If you get hydrogen at a dollar a kilo, it’s competitive with natural gas on an energy-price basis,” says Douglas Wicks, a program director at Advanced Research Projects Agency - Energy (ARPA-E), the Department of Energy organization leading the geologic hydrogen grant program.
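
Wicks’s dollar-a-kilo benchmark is straightforward to restate in energy-price terms. A back-of-the-envelope conversion in Python, assuming hydrogen’s lower heating value of roughly 120 MJ/kg and ignoring delivery and storage costs:

    # Rough energy-price conversion for hydrogen (illustrative only).
    # Assumes a lower heating value of ~120 MJ/kg; delivery costs ignored.
    MJ_PER_KG_H2 = 120.0
    MJ_PER_MMBTU = 1055.06   # megajoules per million BTU

    def usd_per_mmbtu(usd_per_kg):
        """Convert a hydrogen price in $/kg to an energy price in $/MMBtu."""
        return usd_per_kg * MJ_PER_MMBTU / MJ_PER_KG_H2

    for price in (1.0, 2.0, 7.0):   # target, current, and green hydrogen prices
        print(f"${price:.0f}/kg -> ${usd_per_mmbtu(price):.2f}/MMBtu")

On that basis, $1 per kilogram corresponds to roughly $9 per million BTU of contained energy, the scale on which hydrogen gets compared with natural gas.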
 
Recipients of the ARPA-E grants include Colorado School of Mines, Texas Tech University, and Los Alamos National Laboratory, plus private companies including Koloma, a hydrogen production startup that has received funding from Amazon and Bill Gates. The projects themselves are diverse, ranging from applying industrial oil and gas methods for hydrogen production and extraction to developing models to understand hydrogen formation in rocks. The purpose: to address questions in what Wicks calls a “total white space.”
 
“In geologic hydrogen, we don’t know how we can accelerate the production of it, because it’s a chemical reaction, nor do we really understand how to engineer the subsurface so that we can safely extract it,” Wicks says. “We’re trying to bring in the best skills of each of the different groups to work on this under the idea that the ensemble should be able to give us good answers in a fairly rapid timeframe.”
 
Geochemist Viacheslav Zgonnik, one of the foremost experts in the natural hydrogen field, agrees that the list of unknowns is long, as is the road to the first commercial projects. But he says efforts to stimulate hydrogen production — to harness the natural reaction between water and rock — present “tremendous potential.”
 
“The idea is to find ways we can accelerate that reaction and control it so we can produce hydrogen on demand in specific places,” says Zgonnik, CEO and founder of Natural Hydrogen Energy, a Denver-based startup that has mineral leases for exploratory drilling in the United States. “If we can achieve that goal, it means that we can potentially replace fossil fuels with stimulated hydrogen.”

“A full-circle moment”

For Abate, the connection to the project is personal. When he was a child in his hometown in Ethiopia, power outages were a regular occurrence — the lights would be out three, maybe four days a week. Flickering candles or pollutant-emitting kerosene lamps were often the only source of light for doing homework at night.
 
“And for the household, we had to use wood and charcoal for chores such as cooking,” says Abate. “That was my story all the way until the end of high school and before I came to the U.S. for college.”
 
In 1987, well-diggers drilling for water in Mali, in West Africa, uncovered a natural hydrogen deposit, causing an explosion. Decades later, Malian entrepreneur Aliou Diallo and his Canadian oil and gas company tapped the well and used an engine to burn the hydrogen and generate electricity for the nearby village.
 
Ditching oil and gas, Diallo launched Hydroma, the world’s first hydrogen exploration enterprise. The company is drilling wells near the original site that have yielded high concentrations of the gas.
 
“So, what used to be known as an energy-poor continent now is generating hope for the future of the world,” Abate says. “Learning about that was a full-circle moment for me. Of course, the problem is global; the solution is global. But then the connection with my personal journey, plus the solution coming from my home continent, makes me personally connected to the problem and to the solution.”

Experiments that scale

Abate and researchers in his lab are formulating a recipe for a fluid that will induce the chemical reaction that triggers hydrogen production in rocks. The main ingredient is water, and the team is testing “simple” materials for catalysts that will speed up the reaction and in turn increase the amount of hydrogen produced, says postdoc Yifan Gao.
 
“Some catalysts are very costly and hard to produce, requiring complex production or preparation,” Gao says. “A catalyst that’s inexpensive and abundant will allow us to enhance the production rate — that way, we produce it at an economically feasible rate, but also with an economically feasible yield.”
 
The iron-rich rocks in which the chemical reaction happens can be found across the United States and the world. To optimize the reaction across a diversity of geological compositions and environments, Abate and Gao are developing what they call a high-throughput system, consisting of artificial intelligence software and robotics, to test different catalyst mixtures and simulate what would happen when applied to rocks from various regions, with different external conditions like temperature and pressure.
 
“And from that we measure how much hydrogen we are producing for each possible combination,” Abate says. “Then the AI will learn from the experiments and suggest to us, ‘Based on what I’ve learned and based on the literature, I suggest you test this composition of catalyst material for this rock.’”
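
In outline, such a loop alternates between fitting a surrogate model on the experiments run so far and letting the model nominate the next recipe to try. The sketch below is a minimal illustration of that pattern, not the group’s actual software; every variable and number in it is hypothetical.

    # Minimal sketch of a model-guided screening loop (illustrative only;
    # every name, recipe variable, and number here is hypothetical).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def run_robot_experiment(recipe):
        """Stand-in for the robotic rig: returns a measured H2 yield for a
        (catalyst fraction, temperature, pressure, pH) recipe."""
        c, temp, pres, ph = recipe
        return float(2 * c + 0.01 * temp + 0.02 * pres - abs(ph - 7)
                     + rng.normal(0, 0.1))

    # Candidate recipes: catalyst fraction, temperature (C), pressure (bar), pH
    candidates = rng.uniform([0, 25, 1, 4], [1, 300, 50, 10], size=(500, 4))
    tried = [list(r) for r in candidates[:10]]      # initial experiments
    yields = [run_robot_experiment(r) for r in tried]

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    for _ in range(5):   # each round: fit, pick most promising recipe, test it
        model.fit(np.array(tried), np.array(yields))
        best = candidates[int(np.argmax(model.predict(candidates)))]
        tried.append(list(best))
        yields.append(run_robot_experiment(best))

    print(f"Best measured yield so far: {max(yields):.2f} (arbitrary units)")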
 
The team is writing a paper on its project and aims to publish its findings in the coming months.
 
The next milestone for the project, after developing the catalyst recipe, is designing a reactor that will serve two purposes. First, fitted with technologies such as Raman spectroscopy, it will allow researchers to identify and optimize the chemical conditions that lead to improved rates and yields of hydrogen production. The lab-scale device will also inform the design of a real-world reactor that can accelerate hydrogen production in the field.
 
“That would be a plant-scale reactor that would be implanted into the subsurface,” Abate says.
 
The cross-disciplinary project is also tapping the expertise of Yang Shao-Horn, of MIT’s Department of Mechanical Engineering and DMSE, for computational analysis of the catalyst, and Esteban Gazel, a Cornell University scientist who will lend his expertise in geology and geochemistry. He’ll focus on understanding the iron-rich ultramafic rock formations across the United States and the globe and how they react with water.
 
For Wicks at ARPA-E, the questions Abate and the other grant recipients are asking are just the first, critical steps in uncharted energy territory.
 
“If we can understand how to stimulate these rocks into generating hydrogen, safely getting it up, it really unleashes the potential energy source,” he says. Then the emerging industry will look to oil and gas for the drilling, piping, and gas extraction know-how. “As I like to say, this is enabling technology that we hope to, in a very short term, enable us to say, ‘Is there really something there?’”

When an antibiotic fails: MIT scientists are using AI to target “sleeper” bacteria

Mon, 04/08/2024 - 2:00pm

Since the 1970s, modern antibiotic discovery has been in a lull. Now the World Health Organization has declared antimicrobial resistance one of the top 10 global public health threats.

When an infection is treated repeatedly, clinicians run the risk of bacteria becoming resistant to the antibiotics. But why would an infection return after proper antibiotic treatment? One well-documented possibility is that the bacteria become metabolically inert, escaping the reach of traditional antibiotics, which act only on metabolically active cells. When the danger has passed, the bacteria return to life and the infection reappears.

“Resistance is happening more over time, and recurring infections are due to this dormancy,” says Jackie Valeri, a former MIT-Takeda Fellow (a fellowship housed within the MIT Abdul Latif Jameel Clinic for Machine Learning in Health) who recently earned her PhD in biological engineering in the Collins Lab. Valeri is the first author of a new paper published in this month’s print issue of Cell Chemical Biology that demonstrates how machine learning could help screen compounds that are lethal to dormant bacteria.

Tales of bacterial “sleeper-like” resilience are hardly news to the scientific community — in recent years, ancient bacterial strains dating back 100 million years have been discovered alive, in an energy-saving state, beneath the seafloor of the Pacific Ocean.

MIT Jameel Clinic’s Life Sciences faculty lead James J. Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science and Department of Biological Engineering, recently made headlines for using AI to discover a new class of antibiotics, part of the group’s larger mission to use AI to dramatically expand the existing arsenal of antibiotics.

According to a paper published in The Lancet, 1.27 million deaths in 2019 could have been prevented had the infections been susceptible to drugs, and one of the many challenges researchers are up against is finding antibiotics able to target metabolically dormant bacteria.

In this case, researchers in the Collins Lab employed AI to speed up the process of finding antibiotic properties in known drug compounds. With millions of molecules, the process can take years, but researchers were able to identify a compound called semapimod over a weekend, thanks to AI's ability to perform high-throughput screening.
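
The pattern behind that kind of speedup is virtual screening: a model trained on compounds that have already been assayed ranks a much larger untested library, so the wet lab only needs to verify the top candidates. A minimal sketch with entirely synthetic data and hypothetical feature dimensions:

    # Schematic of virtual screening (synthetic data; hypothetical features).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)

    # Pretend each compound is featurized as a 64-dimensional fingerprint.
    X_assayed = rng.normal(size=(2000, 64))          # compounds already tested
    y_assayed = (X_assayed[:, 0] > 1.0).astype(int)  # 1 = killed dormant cells

    model = GradientBoostingClassifier().fit(X_assayed, y_assayed)

    X_library = rng.normal(size=(100_000, 64))       # untested compound library
    scores = model.predict_proba(X_library)[:, 1]    # predicted hit probability

    top = np.argsort(scores)[::-1][:100]             # send top 100 to the lab
    print("Indices of top-ranked candidates:", top[:5])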

Researchers discovered that semapimod, an anti-inflammatory drug typically used for Crohn’s disease, was also effective against stationary-phase Escherichia coli and Acinetobacter baumannii.

Another revelation was semapimod’s ability to disrupt the membranes of so-called “Gram-negative” bacteria, which are known for their high intrinsic resistance to antibiotics, owing to an outer membrane that is difficult for drugs to penetrate.

Examples of Gram-negative bacteria include E. coli, A. baumannii, Salmonella, and Pseudomonas, all of which are challenging to find new antibiotics for.

“One of the ways we figured out the mechanism of sema [sic] was that its structure was really big, and it reminded us of other things that target the outer membrane,” Valeri explains. “When you start working with a lot of small molecules ... to our eyes, it’s a pretty unique structure.” 

By disrupting a component of the outer membrane, semapimod sensitizes Gram-negative bacteria to drugs that are typically only active against Gram-positive bacteria. 

Valeri recalls a quote from a 2013 paper published in Trends in Biotechnology: “For Gram-positive infections, we need better drugs, but for Gram-negative infections we need any drugs.”

MIT engineers design flexible “skeletons” for soft, muscle-powered robots

Mon, 04/08/2024 - 11:40am

Our muscles are nature’s perfect actuators — devices that turn energy into motion. For their size, muscle fibers are more powerful and precise than most synthetic actuators. They can even heal from damage and grow stronger with exercise.

For these reasons, engineers are exploring ways to power robots with natural muscles. They’ve demonstrated a handful of “biohybrid” robots that use muscle-based actuators to power artificial skeletons that walk, swim, pump, and grip. But for every bot, there’s a very different build, and no general blueprint for how to get the most out of muscles for any given robot design.

Now, MIT engineers have developed a spring-like device that could be used as a basic skeleton-like module for almost any muscle-bound bot. The new spring, or “flexure,” is designed to get the most work out of any attached muscle tissues. Like a leg press that’s fit with just the right amount of weight, the device maximizes the amount of movement that a muscle can naturally produce.

The researchers found that when they fit a ring of muscle tissue onto the device, much like a rubber band stretched around two posts, the muscle pulled on the spring reliably and repeatedly, stretching it five times farther than previous device designs.

The team sees the flexure design as a new building block that can be combined with other flexures to build any configuration of artificial skeletons. Engineers can then fit the skeletons with muscle tissues to power their movements.

“These flexures are like a skeleton that people can now use to turn muscle actuation into multiple degrees of freedom of motion in a very predictable way,” says Ritu Raman, the Brit and Alex d'Arbeloff Career Development Professor in Engineering Design at MIT. “We are giving roboticists a new set of rules to make powerful and precise muscle-powered robots that do interesting things.”

Raman and her colleagues report the details of the new flexure design in a paper appearing today in the journal Advanced Intelligent Systems. The study’s MIT co-authors include Naomi Lynch ’12, SM ’23; undergraduate Tara Sheehan; graduate students Nicolas Castro, Laura Rosado, and Brandon Rios; and professor of mechanical engineering Martin Culpepper.

Muscle pull

When left alone in a petri dish in favorable conditions, muscle tissue will contract on its own but in directions that are not entirely predictable or of much use.

“If muscle is not attached to anything, it will move a lot, but with huge variability, where it’s just flailing around in liquid,” Raman says.

To get a muscle to work like a mechanical actuator, engineers typically attach a band of muscle tissue between two small, flexible posts. As the muscle band naturally contracts, it can bend the posts and pull them together, producing some movement that would ideally power part of a robotic skeleton. But in these designs, muscles have produced limited movement, mainly because the tissues are so variable in how they contact the posts. Depending on where the muscles are placed on the posts, and how much of the muscle surface is touching the post, the muscles may succeed in pulling the posts together but at other times may wobble around in uncontrollable ways.

Raman’s group looked to design a skeleton that focuses and maximizes a muscle’s contractions regardless of exactly where and how it is placed on a skeleton, to generate the most movement in a predictable, reliable way.

“The question is: How do we design a skeleton that most efficiently uses the force the muscle is generating?” Raman says.

The researchers first considered the multiple directions that a muscle can naturally move. They reasoned that if a muscle is to pull two posts together along a specific direction, the posts should be connected to a spring that only allows them to move in that direction when pulled.

“We need a device that is very soft and flexible in one direction, and very stiff in all other directions, so that when a muscle contracts, all that force gets efficiently converted into motion in one direction,” Raman says.

Soft flex

As it turns out, Raman found many such devices in Professor Martin Culpepper’s lab. Culpepper’s group at MIT specializes in the design and fabrication of machine elements such as miniature actuators, bearings, and other mechanisms that can be built into machines and systems to enable ultraprecise movement, measurement, and control for a wide variety of applications. Among the group’s precision-machined elements are flexures — spring-like devices, often made from parallel beams, that can flex and stretch with nanometer precision.

“Depending on how thin and far apart the beams are, you can change how stiff the spring appears to be,” Raman says.
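
A textbook first-order model makes that tradeoff concrete. For a flexure built from N parallel fixed-guided beams of length L, depth w, thickness t, and Young’s modulus E (an idealization for illustration, not the study’s actual dimensions), the stiffness in the compliant direction is

    k \;=\; N\,\frac{12\,E\,I}{L^{3}}, \qquad I \;=\; \frac{w\,t^{3}}{12}
    \quad\Longrightarrow\quad
    k \;=\; \frac{N\,E\,w\,t^{3}}{L^{3}}

Because k scales as t³/L³, long, thin beams are dramatically softer in the flexing direction than along their length, which is precisely the anisotropy the design exploits.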

She and Culpepper teamed up to design a flexure specifically tailored with a configuration and stiffness to enable muscle tissue to naturally contract and maximally stretch the spring. The team designed the device’s configuration and dimensions based on numerous calculations they carried out to relate a muscle’s natural forces with a flexure’s stiffness and degree of movement.

The flexure they ultimately designed is 1/100 the stiffness of muscle tissue itself. The device resembles a miniature, accordion-like structure, the corners of which are pinned to an underlying base by a small post, which sits near a neighboring post that is fit directly onto the base. Raman then wrapped a band of muscle around the two corner posts (the team molded the bands from live muscle fibers that they grew from mouse cells), and measured how close the posts were pulled together as the muscle band contracted.

The team found that the flexure’s configuration enabled the muscle band to contract mostly along the direction between the two posts. This focused contraction allowed the muscle to pull the posts much closer together — five times closer — compared with previous muscle actuator designs.

“The flexure is a skeleton that we designed to be very soft and flexible in one direction, and very stiff in all other directions,” Raman says. “When the muscle contracts, all the force is converted into movement in that direction. It’s a huge magnification.”

The team found they could use the device to precisely measure muscle performance and endurance. When they varied the frequency of muscle contractions (for instance, stimulating the bands to contract once versus four times per second), they observed that the muscles “grew tired” at higher frequencies, and didn’t generate as much pull.

“Looking at how quickly our muscles get tired, and how we can exercise them to have high-endurance responses — this is what we can uncover with this platform,” Raman says.

The researchers are now adapting and combining flexures to build precise, articulated, and reliable robots, powered by natural muscles.

“An example of a robot we are trying to build in the future is a surgical robot that can perform minimally invasive procedures inside the body,” Raman says. “Technically, muscles can power robots of any size, but we are particularly excited in making small robots, as this is where biological actuators excel in terms of strength, efficiency, and adaptability.”

This 3D printer can figure out how to print with an unknown material

Mon, 04/08/2024 - 12:00am

While 3D printing has exploded in popularity, many of the plastic materials these printers use to create objects cannot be easily recycled. New sustainable materials are emerging for use in 3D printing, but they remain difficult to adopt because 3D printer settings need to be adjusted for each material, a process generally done by hand.

To print a new material from scratch, one must typically set as many as 100 parameters in the software that controls how the printer will extrude the material as it fabricates an object. Commonly used materials, like mass-manufactured polymers, have established sets of parameters that were perfected through tedious trial-and-error processes.

But the properties of renewable and recyclable materials can fluctuate widely based on their composition, so fixed parameter sets are nearly impossible to create. In this case, users must come up with all these parameters by hand.

Researchers tackled this problem by developing a 3D printer that can automatically identify the parameters of an unknown material on its own.

A collaborative team from MIT’s Center for Bits and Atoms (CBA), the U.S. National Institute of Standards and Technology (NIST), and the National Center for Scientific Research in Greece (Demokritos) modified the extruder, the “heart” of a 3D printer, so it can measure the forces and flow of a material.

These data, gathered through a 20-minute test, are fed into a mathematical function that is used to automatically generate printing parameters. These parameters can be entered into off-the-shelf 3D printing software and used to print with a never-before-seen material. 

The automatically generated parameters can replace about half of the parameters that typically must be tuned by hand. In a series of test prints with unique materials, including several renewable materials, the researchers showed that their method can consistently produce viable parameters.

This research could help to reduce the environmental impact of additive manufacturing, which typically relies on nonrecyclable polymers and resins derived from fossil fuels.

“In this paper, we demonstrate a method that can take all these interesting materials that are bio-based and made from various sustainable sources and show that the printer can figure out by itself how to print those materials. The goal is to make 3D printing more sustainable,” says senior author Neil Gershenfeld, who leads CBA.

His co-authors include first author Jake Read, a graduate student in the CBA who led the printer development; Jonathan Seppala, a chemical engineer in the Materials Science and Engineering Division of NIST; Filippos Tourlomousis, a former CBA postdoc who now heads the Autonomous Science Lab at Demokritos; James Warren, who leads the Materials Genome Program at NIST; and Nicole Bakker, a research assistant at CBA. The research is published in the journal Integrating Materials and Manufacturing Innovation.

Shifting material properties

In fused filament fabrication (FFF), which is often used in rapid prototyping, molten polymers are extruded through a heated nozzle layer-by-layer to build a part. Software, called a slicer, provides instructions to the machine, but the slicer must be configured to work with a particular material.

Using renewable or recycled materials in an FFF 3D printer is especially challenging because there are so many variables that affect the material properties.

For instance, a bio-based polymer or resin might be composed of different mixes of plants based on the season. The properties of recycled materials also vary widely based on what is available to recycle.

“In ‘Back to the Future,’ there is a ‘Mr. Fusion’ blender where Doc just throws whatever he has into the blender and it works [as a power source for the DeLorean time machine]. That is the same idea here. Ideally, with plastics recycling, you could just shred what you have and print with it. But, with current feed-forward systems, that won’t work because if your filament changes significantly during the print, everything would break,” Read says.

To overcome these challenges, the researchers developed a 3D printer and workflow to automatically identify viable process parameters for any unknown material.

They started with a 3D printer their lab had previously developed that can capture data and provide feedback as it operates. The researchers added three instruments to the machine’s extruder that take measurements which are used to calculate parameters.

A load cell measures the pressure being exerted on the printing filament, while a feed rate sensor measures the thickness of the filament and the actual rate at which it is being fed through the printer.

“This fusion of measurement, modeling, and manufacturing is at the heart of the collaboration between NIST and CBA, as we work to develop what we’ve termed ‘computational metrology,’” says Warren.

These measurements can be used to calculate the two most important, yet difficult to determine, printing parameters: flow rate and temperature. Nearly half of all print settings in standard software are related to these two parameters. 
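
To give a sense of the conversions involved, here is an illustrative sketch, with hypothetical variable names and example numbers rather than the team’s code, of how the raw sensor readings become physically meaningful quantities:

    # Illustrative conversion of raw sensor readings into physical quantities
    # (hypothetical names and example numbers, not the team's code).
    import math

    def volumetric_flow_mm3_per_s(feed_rate_mm_s, filament_diameter_mm):
        """Volume of filament entering the hot end per second."""
        area_mm2 = math.pi * (filament_diameter_mm / 2) ** 2
        return feed_rate_mm_s * area_mm2

    def extrusion_pressure_pa(load_cell_force_n, filament_diameter_mm):
        """Pressure driving the melt, from the load-cell force on the filament."""
        area_m2 = math.pi * (filament_diameter_mm / 2000) ** 2
        return load_cell_force_n / area_m2

    # Example: 1.75 mm filament fed at 2 mm/s under a 40 N load
    print(volumetric_flow_mm3_per_s(2.0, 1.75))   # ~4.8 mm^3/s
    print(extrusion_pressure_pa(40.0, 1.75))      # ~1.7e7 Pa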

Deriving a dataset

Once they had the new instruments in place, the researchers developed a 20-minute test that generates a series of temperature and pressure readings at different flow rates. Essentially, the test involves setting the print nozzle at its hottest temperature, flowing the material through at a fixed rate, and then turning the heater off.

“It was really difficult to figure out how to make that test work. Trying to find the limits of the extruder means that you are going to break the extruder pretty often while you are testing it. The notion of turning the heater off and just passively taking measurements was the ‘aha’ moment,” says Read.

These data are entered into a function that automatically generates real parameters for the material and machine configuration, based on relative temperature and pressure inputs. The user can then enter those parameters into 3D printing software and generate instructions for the printer.

In experiments with six different materials, several of which were bio-based, the method automatically generated viable parameters that consistently led to successful prints of a complex object.

Moving forward, the researchers plan to integrate this process with 3D printing software so parameters don’t need to be entered manually. In addition, they want to enhance their workflow by incorporating a thermodynamic model of the hot end, which is the part of the printer that melts the filament.

This collaboration is now more broadly developing computational metrology, in which the output of a measurement is a predictive model rather than just a parameter. The researchers will be applying this in other areas of advanced manufacturing, as well as in expanding access to metrology.

“By developing a new method for the automatic generation of process parameters for fused filament fabrication, this study opens the door to the use of recycled and bio-based filaments that have variable and unknown behaviors. Importantly, this enhances the potential for digital manufacturing technology to utilize locally sourced sustainable materials,” says Alysia Garmulewicz, an associate professor in the Faculty of Administration and Economics at the University of Santiago in Chile who was not involved with this work.

This research is supported, in part, by the National Institute of Standards and Technology and the Center for Bits and Atoms Consortia.

For Julie Greenberg, a career of research, mentoring, and advocacy

Fri, 04/05/2024 - 4:50pm

For Julie E. Greenberg SM ’89, PhD ’94, what began with a middle-of-the-night phone call from overseas became a gratifying career of study, research, mentoring, advocacy, and guiding of the office of a unique program with a mission to educate the next generation of clinician-scientists and engineers.

In 1987, Greenberg was a computer engineering graduate of the University of Michigan living in Tel Aviv, Israel, and working for Motorola when she answered an early-morning call from Roger Mark, then the director of the Harvard-MIT Program in Health Sciences and Technology (HST). A native of Detroit, Michigan, Greenberg had just been accepted into MIT’s electrical engineering and computer science (EECS) graduate program.

HST — one of the world’s oldest interdisciplinary educational programs based on translational medical science and engineering — had been offering the medical engineering and medical physics (MEMP) PhD program since 1978, but it was then still relatively unknown. Mark, an MIT distinguished professor of health sciences and technology and of EECS, and assistant professor of medicine at Harvard Medical School, was calling to ask Greenberg if she might be interested in enrolling in HST’s MEMP program.

“At the time, I had applied to MIT not knowing that HST existed,” Greenberg recalls. “So, I was groggily answering the phone in the middle of the night and trying to be quiet, because my roommate was a co-worker at Motorola, and no one yet knew that I was planning to leave to go to grad school. Roger asked if I’d like to be considered for HST, but he also suggested that I could come to EECS in the fall, learn more about HST, and then apply the following year. That was the option I chose.”

For Greenberg, who retired March 15 from her role as senior lecturer and director of education, that early-morning phone call was the first she would hear of the program where she would eventually spend the bulk of her 37-year career at MIT, first as a student, then as the director of HST’s academic office. During her first year as a graduate student, she enrolled in class HST.582/6.555 (Biomedical Signal and Image Processing), for which she later served as lecturer and eventually course director, teaching the class almost every year for three decades. It was as a first-year graduate student that she found “all the cool kids” were HST students. “It was a small class, so we all got to know each other,” Greenberg remembers. “EECS was a big program. The MEMP students were a tight, close-knit community, so in addition to my desire to work on biomedical applications, that made HST very appealing.”

Also piquing her interest in HST was meeting Martha L. Gray, the Whitaker Professor in Biomedical Engineering. Gray, who is also a professor of EECS and a core faculty member of the MIT Institute for Medical Engineering and Science (IMES), was then a new member of the EECS faculty, and Greenberg met her at an orientation event for graduate student women, who were a smaller cohort then, compared to now. Gray SM ’81, PhD ’86 became Greenberg’s academic advisor when she joined HST. Greenberg’s SM and PhD research was on signal processing for hearing aids, in what was then the Sensory Communication Group in MIT’s Research Laboratory of Electronics (RLE).

Gray later succeeded Mark as director of HST at MIT, and it was she who recruited Greenberg to join as HST director of education in 2004, after Greenberg had spent a decade as a researcher in RLE.

“Julie is amazing — one of my best decisions as HST director was to hire Julie. She is an exceptionally clear thinker, a superb collaborator, and wicked smart,” Gray says. “One of her superpowers is being able to take something that is incredibly complex and to break it down into logical chunks … And she is absolutely devoted to advocating for the students. She is no pushover, but she has a way of coming up with solutions to what look like unfixable problems, before they become even bigger.”

Greenberg’s experience as an HST graduate student herself has informed her leadership, giving her a unique perspective on the challenges for those who are studying and researching in a demanding program that flows between two powerful institutions. HST students have full access to classes and all academic and other opportunities at both MIT and Harvard University, while having a primary institution for administrative purposes, and ultimately to award their degree. HST’s home at Harvard is in the London Society at Harvard Medical School, while at MIT, it is IMES.

In looking back on her career in HST, Greenberg says the overarching theme is one of “doing everything possible to smooth the path. So that students can get to where they need to go, and learn what they need to learn, and do what they need to do, rather than getting caught up in the bureaucratic obstacles of maneuvering between institutions. Having been through it myself gives me a good sense of how to empower the students.”

Rachel Frances Bellisle, an HST MEMP student who is graduating in May and is studying bioastronautics, says that having Julie as her academic advisor was invaluable because of her eagerness to solve the thorniest of issues. “Whenever I was trying to navigate something and was having trouble finding a solution, Julie was someone I could always turn to,” she says. “I know many graduate students in other programs who haven’t had the important benefit of that sort of individualized support. She’s always had my back.”

And Xining Gao, a fourth-year MEMP student studying biological engineering, says that as a student who started during the Covid pandemic, having someone like Greenberg and the others in the HST academic office — who worked to overcome the challenges of interacting mostly over Zoom — made a crucial difference. “A lot of us who joined in 2020 felt pretty disconnected,” Gao says. “Julie being our touchstone and guide in the absence of face-to-face interactions was so key.” The pandemic challenges inspired Gao to take on student government positions, including as PhD co-chair of the HST Joint Council. “Working with Julie, I’ve seen firsthand how committed she is to our department,” Gao says. “She is truly a cornerstone of the HST community.”

During her time at MIT, Greenberg has been involved in many Institute-level initiatives, including as a member of the 2016 class of the Leader to Leader program. She lauded L2L as being “transformative” to her professional development, saying that there have been “countless occasions where I’ve been able to solve a problem quickly and efficiently by reaching out to a fellow L2L alum in another part of the Institute.”

Since Greenberg started leading HST operations, the program has steadily evolved. When Greenberg was a student, the MEMP class was relatively small, admitting 10 students annually, with roughly 30 percent of them being women. Now, approximately 20 new MEMP PhD students and 30 new MD or MD-PhD students join the HST community each year, and half of them are women. Since 2004, the average time-to-degree for HST MEMP PhD students dropped by almost a full year, and is now on par with the average for all graduate programs in MIT’s School of Engineering, despite the complications of taking classes at both Harvard and MIT. 

A search is underway for Greenberg’s successor. But in the meantime, those who have worked with her praise her impact on HST, and on MIT.

“Throughout the entire history of the HST ecosystem, you cannot find anyone who cares more about HST students than Julie,” says Collin Stultz, the Nina T. and Robert H. Rubin Professor in Medical Engineering and Science, and professor of EECS. Stultz is also the co-director of HST, as well as a 1997 HST MD graduate. “She is, and has always been, a formidable advocate for HST students and an oracle of information to me.”

Elazer Edelman ’78, SM ’79, PhD ’84, the Edward J. Poitras Professor in Medical Engineering and Science and director of IMES, says that Greenberg “has been a mentor to generations of students and leaders — she is a force of nature whose passion for learning and teaching is matched by love for our people and the spirit of our institutions. Her name is synonymous with many of our most innovative educational initiatives; indeed, she has touched every aspect of HST and IMES these many decades. It is hard to imagine academic life here without her guiding hand.”

Greenberg says she is looking forward to spending more time on her hobbies, including baking, gardening, and travel, and that she may look into getting involved in some way with STEM programs for underserved communities. She describes leaving now as “bittersweet. But I think that HST is in a strong, secure position, and I’m excited to see what will happen next, but from further away … and as long as they keep inviting alumni to the HST dinners, I will come.”

Reevaluating an approach to functional brain imaging

Thu, 04/04/2024 - 4:25pm

A new way of imaging the brain with magnetic resonance imaging (MRI) does not directly detect neural activity as originally reported, according to scientists at MIT’s McGovern Institute for Brain Research.

The method, first described in 2022, generated excitement within the neuroscience community as a potentially transformative approach. But a study from the lab of MIT Professor Alan Jasanoff, reported March 27 in the journal Science Advances, demonstrates that MRI signals produced by the new method are generated in large part by the imaging process itself, not neuronal activity.

Jasanoff, a professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, as well as an associate investigator of the McGovern Institute, explains that having a noninvasive means of seeing neuronal activity in the brain is a long-sought goal for neuroscientists. The functional MRI methods that researchers currently use to monitor brain activity don’t actually detect neural signaling. Instead, they use blood flow changes triggered by brain activity as a proxy. This reveals which parts of the brain are engaged during imaging, but it cannot pinpoint neural activity to precise locations, and it is too slow to truly track neurons’ rapid-fire communications.

So when a team of scientists reported in 2022 a new MRI method called DIANA, for “direct imaging of neuronal activity,” neuroscientists paid attention. The authors claimed that DIANA detected MRI signals in the brain that corresponded to the electrical signals of neurons, and that it acquired signals far faster than the methods now used for functional MRI.

“Everyone wants this,” Jasanoff says. “If we could look at the whole brain and follow its activity with millisecond precision and know that all the signals that we’re seeing have to do with cellular activity, this would be just wonderful. It could tell us all kinds of things about how the brain works and what goes wrong in disease.”

Jasanoff adds that from the initial report, it was not clear what brain changes DIANA was detecting to produce such a rapid readout of neural activity. Curious, he and his team began to experiment with the method. “We wanted to reproduce it, and we wanted to understand how it worked,” he says.

Recreating the MRI procedure reported by DIANA’s developers, postdoc Valerie Doan Phi Van imaged the brain of a rat as an electric stimulus was delivered to one paw. Phi Van says she was excited to see an MRI signal appear in the brain’s sensory cortex, exactly when and where neurons were expected to respond to the sensation on the paw. “I was able to reproduce it,” she says. “I could see the signal.”

With further tests of the system, however, her enthusiasm waned. To investigate the source of the signal, she disconnected the device used to stimulate the animal’s paw, then repeated the imaging. Again, signals showed up in the sensory processing part of the brain. But this time, there was no reason for neurons in that area to be activated. In fact, Phi Van found, the MRI produced the same kinds of signals when the animal inside the scanner was replaced with a tube of water. It was clear DIANA’s functional signals were not arising from neural activity.

Phi Van traced the source of the specious signals to the pulse program that directs DIANA’s imaging process, which details the sequence of steps the MRI scanner uses to collect data. Embedded within DIANA’s pulse program was a trigger for the device that delivers sensory input to the animal inside the scanner. The trigger synchronizes the two processes, so that the stimulation occurs at a precise moment during data acquisition. That trigger appeared to be causing signals that DIANA’s developers had concluded indicated neural activity.

Phi Van altered the pulse program, changing the way the stimulator was triggered. Using the updated program, the MRI scanner detected no functional signal in the brain in response to the same paw stimulation that had produced a signal before. “If you take this part of the code out, then the signal will also be gone. So that means the signal we see is an artifact of the trigger,” she says.
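
A toy simulation makes the failure mode easy to see. The sketch below is not scanner code; it simply models an acquisition in which a trigger embedded in the pulse program injects a time-locked blip, so a stimulus-synchronized “signal” appears whether or not any neural response exists:

    # Toy model of the artifact (not scanner code): a trigger fired inside the
    # acquisition loop adds a time-locked blip with or without any biology.
    import numpy as np

    rng = np.random.default_rng(2)
    n_frames, stim_frame = 100, 40

    def acquire(trigger_in_pulse_program, neural_response=0.0):
        signal = rng.normal(0, 0.05, n_frames)   # baseline scanner noise
        signal[stim_frame] += neural_response    # true biology, if any
        if trigger_in_pulse_program:
            signal[stim_frame] += 0.5            # electronics-coupled artifact
        return signal

    rat = acquire(trigger_in_pulse_program=True)     # no neural response
    water = acquire(trigger_in_pulse_program=True)   # a tube of water
    print(rat[stim_frame], water[stim_frame])        # both show the "signal"

    decoupled = acquire(trigger_in_pulse_program=False)
    print(decoupled[stim_frame])                     # blip disappears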

Jasanoff and Phi Van went on to find reasons why other researchers have struggled to reproduce the results of the original DIANA report, noting that the trigger-generated signals can disappear with slight variations in the imaging process. With their postdoctoral colleague Sajal Sen, they also found evidence that cellular changes that DIANA’s developers had proposed might give rise to a functional MRI signal were not related to neuronal activity.

Jasanoff and Phi Van say it was important to share their findings with the research community, particularly as efforts continue to develop new neuroimaging methods. “If people want to try to repeat any part of the study or implement any kind of approach like this, they have to avoid falling into these pits,” Jasanoff says. He adds that they admire the authors of the original study for their ambition: “The community needs scientists who are willing to take risks to move the field ahead.”

Propelling atomically layered magnets toward green computers

Thu, 04/04/2024 - 3:30pm

Globally, computation is booming at an unprecedented rate, fueled by the rise of artificial intelligence. With this, the staggering energy demand of the world’s computing infrastructure has become a major concern, and the development of computing devices that are far more energy-efficient is a leading challenge for the scientific community.

Use of magnetic materials to build computing devices like memories and processors has emerged as a promising avenue for creating “beyond-CMOS” computers, which would use far less energy than traditional computers. Magnetization switching in magnets can be used in computation the same way a transistor switches between open and closed to represent the 0s and 1s of binary code.

While much of the research along this direction has focused on using bulk magnetic materials, a new class of magnetic materials — called two-dimensional van der Waals magnets — provides superior properties that can improve the scalability and energy efficiency of magnetic devices to make them commercially viable. 

Although the benefits of shifting to 2D magnetic materials are evident, their practical adoption in computers has been hindered by some fundamental challenges. Until recently, 2D magnetic materials could operate only at very low temperatures, much like superconductors, so bringing their operating temperatures above room temperature has remained a primary goal. Additionally, for use in computers, it is important that they can be controlled electrically, without the need for magnetic fields. Bridging this fundamental gap, so that 2D magnets can be electrically switched above room temperature without any magnetic fields, could potentially catapult them into the next generation of “green” computers.

A team of MIT researchers has now achieved this critical milestone by designing a “van der Waals atomically layered heterostructure” device where a 2D van der Waals magnet, iron gallium telluride, is interfaced with another 2D material, tungsten ditelluride. In an open-access paper published March 15 in Science Advances, the team shows that the magnet can be toggled between the 0 and 1 states simply by applying pulses of electrical current across their two-layer device. 

“Our device enables robust magnetization switching without the need for an external magnetic field, opening up unprecedented opportunities for ultra-low power and environmentally sustainable computing technology for big data and AI,” says lead author Deblina Sarkar, the AT&T Career Development Assistant Professor at the MIT Media Lab and Center for Neurobiological Engineering, and head of the Nano-Cybernetic Biotrek research group. “Moreover, the atomically layered structure of our device provides unique capabilities including improved interface and possibilities of gate voltage tunability, as well as flexible and transparent spintronic technologies.”

Sarkar is joined on the paper by first author Shivam Kajale, a graduate student in Sarkar’s research group at the Media Lab; Thanh Nguyen, a graduate student in the Department of Nuclear Science and Engineering (NSE); Nguyen Tuan Hung, an MIT visiting scholar in NSE and an assistant professor at Tohoku University in Japan; and Mingda Li, associate professor of NSE.

Breaking the mirror symmetries 

When electric current flows through heavy metals like platinum or tantalum, the electrons get segregated in the materials based on their spin component, a phenomenon called the spin Hall effect, says Kajale. The way this segregation happens depends on the material, and particularly its symmetries.

“The conversion of electric current to spin currents in heavy metals lies at the heart of controlling magnets electrically,” Kajale notes. “The microscopic structure of conventionally used materials, like platinum, has a kind of mirror symmetry, which restricts the spin currents to in-plane spin polarization.”

Kajale explains that two mirror symmetries must be broken to produce an “out-of-plane” spin component that can be transferred to a magnetic layer to induce field-free switching. “Electrical current can 'break' the mirror symmetry along one plane in platinum, but its crystal structure prevents the mirror symmetry from being broken in a second plane.”

In their earlier experiments, the researchers used a small magnetic field to break the second mirror plane. To get rid of the need for a magnetic nudge, Kajale, Sarkar, and colleagues looked instead for a material whose structure could break the second mirror plane without outside help. This led them to another 2D material, tungsten ditelluride. The tungsten ditelluride the researchers used has an orthorhombic crystal structure with one intrinsically broken mirror plane. Thus, by applying current along its low-symmetry axis (parallel to the broken mirror plane), the resulting spin current has an out-of-plane spin component that can directly induce switching in the ultra-thin magnet interfaced with the tungsten ditelluride.

“Because it's also a 2D van der Waals material, it can ensure that when we stack the two materials together, we get pristine interfaces and a good flow of electron spins between the materials,” says Kajale.

Becoming more energy-efficient 

Computer memory and processors built from magnetic materials use less energy than traditional silicon-based devices, and van der Waals magnets offer higher energy efficiency and better scalability than bulk magnetic materials, the researchers note.

The electrical current density used to switch the magnet determines how much energy is dissipated during switching: a lower density means a more energy-efficient device. “The new design has one of the lowest current densities in van der Waals magnetic materials,” Kajale says. “This new design requires a switching current an order of magnitude lower than that of bulk materials. This translates to something like two orders of magnitude improvement in energy efficiency.”
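That quadratic payoff follows from Joule heating, where dissipated energy scales roughly with the square of the current density. A back-of-the-envelope sketch of the scaling (the numbers are illustrative placeholders, not the paper's measured values):

```python
# Back-of-the-envelope: Joule dissipation scales roughly as the square
# of the switching current density (P ~ rho * J^2), so a 10x reduction
# in current density yields ~100x lower switching energy.

def relative_switching_energy(j_new: float, j_old: float) -> float:
    """Energy ratio of new vs. old design, assuming E ~ J^2."""
    return (j_new / j_old) ** 2

j_bulk = 1.0  # normalized switching current density of a bulk design
j_vdw = 0.1   # ~an order of magnitude lower, per the article

print(f"Energy per switch: {relative_switching_energy(j_vdw, j_bulk):.0%} of bulk")
# -> Energy per switch: 1% of bulk (two orders of magnitude improvement)
```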

The research team is now looking at similar low-symmetry van der Waals materials to see if they can reduce current density even further. They are also hoping to collaborate with other researchers to find ways to manufacture the 2D magnetic switch devices at commercial scale. 

This work was carried out, in part, using the facilities at MIT.nano. It was funded by the Media Lab, the U.S. National Science Foundation, and the U.S. Department of Energy.

MIT Haystack scientists prepare a constellation of instruments to observe the solar eclipse’s effects

Thu, 04/04/2024 - 10:30am

On April 8, the moon’s shadow will sweep through North America, trailing a diagonal ribbon of momentary, midday darkness across parts of the continent. Those who happen to be within the “path of totality” will experience a total solar eclipse — a few eerie minutes when the sun, moon, and Earth align, such that the moon perfectly blocks out the sun.

The last solar eclipse to pass over the continental United States occurred in August 2017, when the moon’s shadow swept from Oregon down to South Carolina. This time, the moon will be closer to the Earth and will track a wider ribbon, from Mexico through Texas and on up into Maine and eastern Canada. The shadow will move across more populated regions than in 2017, and will completely block the sun for more than 31 million people who live in its path. The eclipse will also partly shade many more regions, giving much of the country a partial eclipse, depending on the local weather.

While many of us ready our eclipse-grade eyewear, scientists at MIT’s Haystack Observatory are preparing a constellation of instruments to study the eclipse and how it will affect the topmost layers of the atmosphere. In particular, they will be focused on the ionosphere — the atmosphere’s outermost layer where many satellites orbit. The ionosphere stretches from 50 to 400 miles above the Earth’s surface and is continually blasted by the sun’s extreme ultraviolet and X-ray radiation. This daily solar exposure ionizes gas molecules in the atmosphere, creating a charged sea of electrons and ions that shifts with changes in the sun’s energy.

As they did in 2017, Haystack researchers will study how the ionosphere responds before, during, and after the eclipse, as the sun’s radiation suddenly dips. With this year’s event, the scientists will be adding two new technologies to the mix, giving them a first opportunity to observe the eclipse’s effects at local, regional, and national scales. What they observe will help scientists better understand how the atmosphere reacts to other sudden changes in solar radiation, such as solar storms and flares.

Two lead members of Haystack’s eclipse effort are research scientists Larisa Goncharenko, who studies the physics of the ionosphere using measurements from multiple observational sources, and John Swoboda, who develops instruments to observe near-Earth space phenomena. While preparing for eclipse day, Goncharenko and Swoboda took a break to chat with MIT News about the ways in which they will be watching the event and what they hope to learn from Monday’s rare planetary alignment.

Q: There’s a lot of excitement around this solar eclipse. Before we dive into how you’ll be observing it, let’s take a step back to talk about what we know so far: How does a total eclipse affect the atmosphere?

Goncharenko: We know quite a bit. One of the largest effects is, as the moon’s shadow moves over part of the continent, we have a significant decrease in electron, or plasma, density in the ionosphere. The sun is an ionization source, and as soon as that source is removed, we have a decrease in electron density. So, we sort of have a hole in the ionosphere that moves behind the moon’s shadow.

During an eclipse, solar heating shuts off and it’s like a rapid sunset and sunrise, and we have significant cooling in the atmosphere. So, we have this cold area of low ionization, moving in latitude and longitude. And because of this change in temperature, you also have disturbances in the wind system that affect how plasma, or electrons in the ionosphere, are distributed. And these are changes on large scales.

From this cold area that follows totality, we also have different kinds of waves emanating. Like a boat moving on the water, you have bow shock waves moving from the shadow. These are waves in electron density. They are small perturbations but can cover really large areas. We saw similar waves in the 2017 eclipse. But every eclipse is different. So, we will be using this eclipse as a unique lab experiment. And we will be able to see changes in electron density, temperature, and winds in the upper atmosphere as the eclipse moves over the continental United States.

Q: How will you be seeing all this? What experiments will you be running to catch the eclipse and its effects on the atmosphere?

Swoboda: We’re going to measure local changes in the atmosphere and ionosphere using two new radar technologies. The first is Zephyr, which was developed by [Haystack research scientist] Ryan Volz. Zephyr looks at how meteors break up in our atmosphere. There are always little bits of sand that burn up in the Earth’s atmosphere, and when they burn up, they leave a trail of plasma that follows the wind patterns in the upper atmosphere. Zephyr sends out a signal that bounces off these plasma trails, so we can see how they are carried by winds moving at very high altitude. We will use Zephyr to observe how these winds in the upper atmosphere change during the eclipse.

The other radar system is EMVSIS [Electro-Magnetic Vector Sensor Ionospheric Sounder], which will measure the electron or plasma density and the bulk velocity of the charged particles in the ionosphere. Both these systems comprise a distributed array of transmitters and receivers that send and receive radio waves at various frequencies to do their measurements. Traditional ionospheric sounders require high-power transmitters and large towers on the order of hundreds of feet, and can cover an area the size of a football field. But we’ve developed a lower-power and physically smaller system, about the size of a refrigerator, and we’re deploying multiple of these systems around New England to make local and regional measurements.

Goncharenko: We will also make regional observations with two antennas at the Millstone Hill Geospace Facility [in Westford, Massachusetts]. One antenna is a fixed vertical antenna, 220 feet in diameter, that we can use to observe parameters in the ionosphere over a huge range of altitudes, from 90 to 1,000 kilometers above the ground. The other is a steerable antenna, 150 feet in diameter, which we can move to look at what happens as far away as Florida and all the way to the central United States. We are planning to use both antennas to see changes during the eclipse.

We’ll also be processing data from a national network of almost 3,000 GNSS [Global Navigation Satellite System] receivers across the United States, and we’re installing new receivers in undersampled regions along the area of totality. These receivers will measure how the ionosphere’s electron content changes before, during, and after the eclipse.
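The interview doesn't go into the processing details, but the standard way to extract electron content from a dual-frequency GNSS receiver is to difference the pseudoranges measured on the two carrier frequencies, since the ionosphere delays the lower frequency more. A minimal sketch using the GPS L1/L2 frequencies (inter-frequency hardware biases, which real pipelines must calibrate out, are ignored here):

```python
# Slant total electron content (TEC) from dual-frequency GNSS pseudoranges.
# Textbook dispersive-delay formula; satellite and receiver hardware
# biases are omitted, so this is a sketch rather than a real pipeline.

F_L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F_L2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def slant_tec_tecu(p1_m: float, p2_m: float) -> float:
    """TEC in TECU (1 TECU = 1e16 electrons/m^2), pseudoranges in meters."""
    k = (F_L1**2 * F_L2**2) / (40.3 * (F_L1**2 - F_L2**2))
    return k * (p2_m - p1_m) / 1e16

# The ionosphere delays L2 (the lower frequency) more, so P2 > P1;
# an 8 m differential delay corresponds to roughly 76 TECU.
print(f"{slant_tec_tecu(20_000_000.0, 20_000_008.0):.1f} TECU")
```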

One of the most exciting things is, this is the first time we’ll have all four of these technologies working together. Each of these technologies provides a unique point of view. And for me as a scientist, I feel like a little kid on Christmas Eve. You know great things are coming, and you know you’ll have new things to play with and new data to analyze.

Q: And speaking of what you’ll find, what do you expect to see from the measurements you collect?

Goncharenko: I expect to see the unexpected. It will be the first time we look at near-Earth space with a combination of four very different technologies at the same time and in the same geographic region. We expect higher sensitivity, which translates into better resolution in time and space. Probing the upper atmosphere with this combination of diagnostic tools will provide simultaneous observations we have never had before: four-dimensional wind flow, electron density, ion temperature, and plasma motion. We will observe how they change during the eclipse and study how and why changes in one area of the upper atmosphere are linked to perturbations in other areas in space and time.

Swoboda: We’re also sort of thinking longer term. What the eclipse is giving us is a chance to show what these technologies can do, and say, what if we could have these going all the time? We could run it as a sort of radar network for space weather, like how we monitor weather in the lower atmosphere. And we need to monitor space weather, because we have so much going on in the near-Earth space environment, with satellites launching all the time that are affected by space weather.

Goncharenko: We have a lot of space to study. The eclipse is just the highlight. But overall, these systems can produce more data for a look at what happens in the upper atmosphere and ionosphere during other disturbances, such as storms and periods of lightning, or coronal mass ejections and solar flares. And all of this is part of a large effort to build up our understanding of near-Earth space to meet the demands of a modern technological society.

Q&A: Tips for viewing the 2024 solar eclipse

Thu, 04/04/2024 - 10:30am

On Monday, April 8, the United States will experience a total solar eclipse — a rare astronomical event where the moon passes directly between the sun and the Earth, blocking out the sun’s light almost completely. The last total solar eclipse in the contiguous U.S. was in 2017, and the next one won’t be until 2044.

If the weather cooperates, people across the United States — from northeastern Maine to southwestern Texas — will be able to observe the eclipse using protective eyewear. Those in the path of totality, where the moon entirely covers the sun, will have the best view, but 99 percent of people in the continental U.S. will be able to see a partial eclipse. Weather permitting, those on the MIT campus and in the surrounding area will see 93 percent of the sun covered, with the partial eclipse starting at 2:15 p.m. and reaching its peak around 3:29 p.m. Gatherings are planned at the Kresge Oval and the MIT Museum, and a live NASA stream will be shown in the Building 55 atrium.

Brian Mernoff, manager of the CommLab in the Department of Aeronautics and Astronautics, is an accomplished astrophotographer and science educator. Mernoff is headed to Vermont with his family to experience the totality from the best possible angle — but has offered a few thoughts on how to enjoy the eclipse safely, wherever you are.

Q: What should viewers expect to see and experience with this solar eclipse?

A: When you’re watching TV (the sun) and your toddler, dog, or other large mammal (the moon) blocks your view, you no doubt move over a bit to try to get a partial or full view of the TV. This is exactly how the path of totality works for an eclipse. If you are exactly in line with the moon and sun, it will be completely blocked, but if you start moving away from this path, your view of the sun will start to increase until the moon is not in the way at all.

The closer you are to the path of totality, the more of the sun will be blocked. At MIT, about 93 percent of the sun will be blocked. Those in the area will notice that things around you will get slightly darker, just like when it starts to become overcast. Even so, the sun will remain very bright in the sky and solar glasses will be required to view the entirety of the eclipse. It really goes to show how incredibly bright the sun is!

Within the narrow path of totality, the moon will continue to move across the sun, reaching 100 percent coverage. For this short period of time, you can remove your glasses and see a black disk where the sun should be. Around the disk will be wispy white lines. This is the corona, the outermost part of the sun, which is normally outshone by the sun’s photosphere (surface). Around the edges of the black disk of the moon, right as totality begins and ends, you can also see bright spots known as Baily’s beads, caused by sunlight shining between mountains and craters on the moon.

But that’s not all! Although you will be tempted to stare up at the sun throughout totality, do not forget to observe the world around you. During totality, it feels like twilight. There is a 360-degree sunset, the temperature changes rapidly, winds change, animals start making different sounds, and shadows start getting weird (look into “shadow bands” if you have a chance).

As soon as totality ends and you start to see Baily’s beads again, put your solar glasses back on: it will get very bright very fast as the moon moves out of the way.

Q: What are the best options for viewing the eclipse safely and to greatest effect?

A: No matter where you are during the eclipse, make sure you have solar glasses. These glasses should meet the ISO 12312-2 standard for solar viewing. Do not use glasses with scratches, holes, or other damage.

If you are unable to obtain solar glasses in time, you can safely view the eclipse using a homemade projection method, such as a pinhole camera, or even by projecting the image of the sun through a colander.
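The geometry of a pinhole projector is easy to work out: the projected image of the sun has a diameter of roughly the pinhole-to-screen distance times the sun's angular diameter, which is about half a degree. A quick sketch of the sizing:

```python
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53  # apparent size of the sun in the sky

def projected_sun_diameter_mm(screen_distance_m: float) -> float:
    """Diameter of the pinhole-projected solar image, in millimeters."""
    return screen_distance_m * 1000 * math.radians(SUN_ANGULAR_DIAMETER_DEG)

# A pinhole held 1 m from the screen casts a ~9 mm solar image; the
# partially eclipsed sun appears as a crescent at the same scale.
for distance in (0.5, 1.0, 2.0):
    print(f"{distance} m -> {projected_sun_diameter_mm(distance):.1f} mm")
```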

The best view of the eclipse will be from within the path of totality, but even if you are not within it, you should still go outside to experience the partial eclipse. Use the NASA Eclipse Explorer to find the start, maximum, and end times, and then find a nice spot outside — preferably with some shade — put on your glasses, and enjoy the show.

For a closer view of the sun, find a friend who has a telescope with the correct ISO-certified solar filter. This will let you see the photosphere (or the chromosphere, if it is an H-alpha scope) in much more detail. If you do not have access to a telescope, NASA plans to livestream a telescope view throughout the eclipse. [The livestream will be displayed publicly on a large screen in Building 55 at MIT, rain or shine.]

The only time you can look at or image the sun without a filter is during 100 percent totality. As soon as this period is done, glasses and filters must be put back on.

After the eclipse, keep your glasses and filters. You can use them to look at the sun on any day (it took me an embarrassing amount of time to realize that I could use the glasses at any time instead of lugging out a telescope). On a really clear day, you can sometimes see sunspots!

Q: How does eclipse photography work?

A: This year I plan to photograph the eclipse in two ways. The first is using a hydrogen-alpha telescope. This telescope filters out all light except for one narrow wavelength given off by hydrogen (the deep-red H-alpha line at 656 nanometers). Because it blocks out most of the light from the sun’s surface, it allows you to see the turbulent upper atmosphere of the sun, including solar prominences that follow magnetic field lines.

Because this telescope does not allow for imaging during totality as too much light is blocked, I also plan to set up a regular camera with a wide-angle lens to capture the total eclipse with the surrounding environment as context. During the 2017 eclipse, I only captured close-ups of the sun using a regular solar filter and missed the opportunity to capture what was going on around me.

Will it work? That depends on whether we get clear skies, and on how many pictures of my 1.5-year-old need to be taken (as well as how much chasing needs to be done).

If you would like to take pictures of the eclipse, make sure you protect your camera sensor. The sun can easily damage lenses, sensors, and other components; there are plenty of examples of solar-damaged cameras. The solution is simple, though. If using a camera phone, you can take pictures through an extra pair of solar glasses, or even tape them to the phone. For cameras with larger lenses, you can buy cardboard filters that slide over the front of your camera, or buy ISO-approved solar film and make your own.

Q: Any fun, unique, cool, or interesting science facts about this eclipse to share?

A: If you want to get even more involved with the eclipse, there are many citizen science projects that plan to collect as much data as possible throughout the eclipse.

NASA is planning to run several experiments during the eclipse, and researchers with MIT Haystack Observatory will also be using four different technologies to monitor changes in the upper atmosphere, both locally and across the continent.

If you are interested in learning more about the eclipse, here are two of my favorite videos, one on “unexpected science from a 0.000001 megapixel home-made telescope” and one on solar eclipse preparation.
