MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Why are some rocks on the moon highly magnetic? MIT scientists may have an answer

Fri, 05/23/2025 - 2:00pm

Where did the moon’s magnetism go? Scientists have puzzled over this question for decades, ever since orbiting spacecraft picked up signs of a high magnetic field in lunar surface rocks. The moon itself has no inherent magnetism today. 

Now, MIT scientists may have solved the mystery. They propose that a combination of an ancient, weak magnetic field and a large, plasma-generating impact may have temporarily created a strong magnetic field, concentrated on the far side of the moon.

In a study appearing today in the journal Science Advances, the researchers show through detailed simulations that an impact, such as from a large asteroid, could have generated a cloud of ionized particles that briefly enveloped the moon. This plasma would have streamed around the moon and concentrated at the opposite location from the initial impact. There, the plasma would have interacted with and momentarily amplified the moon’s weak magnetic field. Any rocks in the region could have recorded signs of the heightened magnetism before the field quickly died away.

This combination of events could explain the presence of highly magnetic rocks detected in a region near the south pole, on the moon’s far side. As it happens, one of the largest impact basins — the Imbrium basin — is located in the exact opposite spot on the near side of the moon. The researchers suspect that whatever made that impact likely released the cloud of plasma that kicked off the scenario in their simulations.

“There are large parts of lunar magnetism that are still unexplained,” says lead author Isaac Narrett, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But the majority of the strong magnetic fields that are measured by orbiting spacecraft can be explained by this process — especially on the far side of the moon.”

Narrett’s co-authors include Rona Oran and Benjamin Weiss at MIT, along with Katarina Miljkovic at Curtin University, Yuxi Chen and Gábor Tóth at the University of Michigan at Ann Arbor, and Elias Mansbach PhD ’24 at Cambridge University. Nuno Loureiro, professor of nuclear science and engineering at MIT, also contributed insights and advice.

Beyond the sun

Scientists have known for decades that the moon holds remnants of a strong magnetic field. Samples from the surface of the moon, returned by astronauts on NASA’s Apollo missions of the 1960s and 70s, as well as global measurements of the moon taken remotely by orbiting spacecraft, show signs of remnant magnetism in surface rocks, especially on the far side of the moon.

The typical explanation for surface magnetism is a global magnetic field, generated by an internal “dynamo,” or a core of molten, churning material. The Earth today generates a magnetic field through a dynamo process, and it’s thought that the moon once may have done the same, though its much smaller core would have produced a much weaker magnetic field that may not explain the highly magnetized rocks observed, particularly on the moon’s far side.

An alternative hypothesis that scientists have tested from time to time involves a giant impact that generated plasma, which in turn amplified any weak magnetic field. In 2020, Oran and Weiss tested this hypothesis with simulations of a giant impact on the moon, in combination with the solar-generated magnetic field, which is weak by the time it stretches out to the Earth and moon.

In simulations, they tested whether an impact to the moon could amplify such a solar field enough to explain the highly magnetic measurements of surface rocks. It turned out that the amplified field was not strong enough, and their results seemed to rule out impact-generated plasma, on its own, as the explanation for the moon’s missing magnetism.

A spike and a jitter

But in their new study, the researchers took a different tack. Instead of accounting for the sun’s magnetic field, they assumed that the moon once hosted a dynamo that produced a magnetic field of its own, albeit a weak one. Given the size of its core, they estimated that such a field would have been about 1 microtesla, or 50 times weaker than the Earth’s field today.

From this starting point, the researchers simulated a large impact to the moon’s surface, similar to what would have created the Imbrium basin, on the moon’s near side. Using impact simulations from Katarina Miljkovic, the team then simulated the cloud of plasma that such an impact would have generated as the force of the impact vaporized the surface material. They adapted a second code, developed by collaborators at the University of Michigan, to simulate how the resulting plasma would flow and interact with the moon’s weak magnetic field.

These simulations showed that as a plasma cloud arose from the impact, some of it would have expanded into space, while the rest would have streamed around the moon and concentrated on the opposite side. There, the plasma would have compressed and briefly amplified the moon’s weak magnetic field. This entire process, from the moment the magnetic field was amplified to the time it decayed back to baseline, would have been incredibly fast — somewhere around 40 minutes, Narrett says.

Would this brief window have been enough for surrounding rocks to record the momentary magnetic spike? The researchers say yes, with some help from another impact-related effect.

They found that an Imbrium-scale impact would have sent a pressure wave through the moon, similar to a seismic shock. These waves would have converged on the other side, where the shock would have “jittered” the surrounding rocks, briefly unsettling the rocks’ electrons — the subatomic particles that naturally orient their spins to any external magnetic field. The researchers suspect the rocks were shocked just as the impact’s plasma amplified the moon’s magnetic field. As the rocks’ electrons settled back, they assumed a new orientation, in line with the momentary high magnetic field.

“It’s as if you throw a 52-card deck in the air, in a magnetic field, and each card has a compass needle,” Weiss says. “When the cards settle back to the ground, they do so in a new orientation. That’s essentially the magnetization process.”
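
Weiss’s analogy translates directly into a toy calculation. The sketch below is purely illustrative (the field strengths, noise level, and rock count are invented, and real paleomagnetic modeling is far more involved): each “rock” is a compass needle that re-freezes along the ambient field plus random jitter, so a strong transient field leaves a strong net magnetization.

```python
import math
import random

def settle_moments(n_rocks, field_strength, noise=1.0, seed=0):
    """Toy shock-remagnetization model: each rock's moment re-freezes
    along the ambient field (angle 0) plus thermal jitter. Units are
    arbitrary; stronger fields overcome the jitter more often."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rocks):
        jitter = rng.gauss(0.0, noise / max(field_strength, 1e-9))
        total += math.cos(jitter)  # projection onto the field direction
    return total / n_rocks  # 1.0 = fully aligned, ~0 = random

# Rocks that settle while the field is briefly amplified align strongly...
print(settle_moments(10_000, field_strength=40.0))  # close to 1.0
# ...while rocks settling in the weak baseline field record much less.
print(settle_moments(10_000, field_strength=1.0))   # roughly 0.6
```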

The researchers say this combination of a dynamo plus a large impact, coupled with the impact’s shockwave, is enough to explain the moon’s highly magnetized surface rocks — particularly on the far side. One way to know for sure is to directly sample the rocks for signs of shock and high magnetism. This could be possible, as the rocks lie on the far side, near the lunar south pole, where upcoming missions, such as those in NASA’s Artemis program, plan to explore.

“For several decades, there’s been sort of a conundrum over the moon’s magnetism — is it from impacts or is it from a dynamo?” Oran says. “And here we’re saying, it’s a little bit of both. And it’s a testable hypothesis, which is nice.”

The team’s simulations were carried out using the MIT SuperCloud. This research was supported, in part, by NASA. 

A magnetic pull toward materials

Thu, 05/22/2025 - 5:10pm

Growing up in Coeur d’Alene, Idaho, with engineer parents who worked in the state’s silver mining industry, MIT senior Maria Aguiar developed an early interest in materials. The star garnet, the state’s mineral, is still her favorite. It’s a sheer coincidence, though, that her undergraduate thesis also focuses on garnets.

Her research explores ways to manipulate the magnetic properties of garnet thin films — work that can help improve data storage technologies. After all, says Aguiar, a major in the Department of Materials Science and Engineering (DMSE), technology and energy applications increasingly rely on the use of materials with favorable electronic and magnetic properties.

Passionate about engineering in high school — science fiction was also her jam — Aguiar applied and got accepted to MIT. But she had only learned about materials engineering through a Google search. She assumed she would gravitate toward aerospace engineering, astronomy, or even physics, subjects that had all piqued her interest at one time or another.

Aguiar was indecisive about a major for a while but began to realize that the topics she enjoyed would invariably center on materials. “I would visit an aerospace museum and would be more interested in the tiles they used in the shuttle to tolerate the heat. I was interested in the process to engineer such materials,” Aguiar remembers.

It was a first-year pre-orientation program (FPOP), designed to help new students test-drive majors, that convinced Aguiar that materials engineering was a good fit for her interests. It helped that the DMSE students were friendly and approachable. “They were proud to be in that major, and excited to talk about what they did,” Aguiar says.

During the FPOP, Associate Professor James LeBeau, a DMSE expert in transmission electron microscopy, asked students about their interests. When Aguiar piped up, saying she loved astronomy, LeBeau compared the subject to microscopy.

“An electron microscope is just a telescope in reverse,” she recalls him saying. Instead of looking at something far away, you go from big to small — zooming in to see the finer details. That comparison stuck with Aguiar and inspired her to pursue her first Undergraduate Research Opportunities Program (UROP) project with LeBeau, where she learned more about microscopy.

Drawn to magnetic materials

It was class 3.152 (Magnetic Materials), taught by Professor Caroline Ross, that stoked Aguiar’s interest in magnetic materials. The subject matter was fascinating, Aguiar says, and she knew related research would make important contributions to modern data storage technology. After starting a UROP in Ross’s magnetic materials lab in the spring of her junior year, Aguiar was hooked, and the work eventually morphed into her undergraduate thesis, “Effects of Annealing on Atomic Ordering and Magnetic Anisotropy in Iron Garnet Thin Films.”

The broad goal of her work was to understand how to manipulate materials’ magnetic properties, such as anisotropy — the tendency of a material’s magnetic properties to change depending on which direction they are measured in. It turns out that changing where certain metal atoms — or cations — sit in the garnet’s crystal structure can influence this directional behavior. By carefully arranging these atoms, researchers can “tune” garnet films to deliver novel magnetic properties, enabling the design of advanced materials for electronics.

When Aguiar joined the lab, she began working with doctoral candidate Allison Kaczmarek, who was investigating the connection between cation ordering and magnetic properties for her PhD thesis. Specifically, Kaczmarek was studying the growth and characterization of garnet films, evaluating different ways to induce cation ordering by varying the parameters in the pulsed laser deposition process — a technique that fires a laser at a target material (in this case, garnet), vaporizing it so it deposits onto a substrate, such as glass. Adjusting variables such as laser energy, pressure, and temperature, along with the composition of the mixed oxides, can significantly influence the resulting film.

Aguiar studied one specific parameter: annealing — heating a material to a high temperature before cooling it. The heat-treatment technique is often used to alter the way atoms are arranged in a material. “So far, I have found that when we anneal these films for times as short as five minutes, the film gets closer to preferring out-of-plane magnetization,” Aguiar says. This property, known as perpendicular magnetic anisotropy, is significant for magnetic memory applications because it offers advantages in performance, scalability, and energy efficiency.
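
For readers who want the textbook version of “out-of-plane magnetization”: perpendicular magnetic anisotropy in a thin film is commonly summarized with a uniaxial energy density (a standard expression, not a formula from Aguiar’s thesis):

```latex
% theta = angle between the magnetization and the film normal
E_a(\theta) = K_{\mathrm{eff}} \sin^2\theta ,
\qquad
K_{\mathrm{eff}} = K_u - \tfrac{1}{2}\mu_0 M_s^2
% K_u: intrinsic (e.g., growth- or annealing-induced) anisotropy.
% -mu_0 M_s^2 / 2: thin-film shape anisotropy, which favors in-plane
% magnetization. K_eff > 0 puts the energy minimum at theta = 0,
% i.e., the out-of-plane state desirable for magnetic memory.
```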

“Maria has been very reliable and quick to be independent. She picks things up very quickly and is very thoughtful about what she’s doing,” Kaczmarek says. That thoughtfulness showed early on. When asked to identify an optimal annealing temperature for the films, Aguiar didn’t just run tests — she first conducted a thorough literature review to understand what had been worked out before, then carefully tested films at different temperatures to find one that worked the best.

Kaczmarek first got to know Aguiar as a teaching assistant for class 3.030 (Microstructural Evolution of Materials), taught by Professor Geoffrey Beach. Even before starting the UROP in Ross’s lab, Aguiar had shared a clear research goal: to gain hands-on experience with advanced techniques such as X-ray diffraction, vibrating sample magnetometry, and ferromagnetic resonance — tools typically used by more senior researchers. “That’s a goal she has certainly achieved,” Kaczmarek says.

Beyond the lab, beyond MIT

Outside of the lab, Aguiar combines her love of materials with a commitment to community outreach and social connection. As co-president of the Society of Undergraduate Materials Scientists in DMSE, she helps organize events that make the department more inclusive. Class dinners are great fun — many seniors recently went to a Cambridge restaurant for sushi — and “Materials Week” every semester functions primarily as a recruitment event for new students. A hot cocoa event near the winter holidays combined seasonal cheer with class evaluations — painful for some, perhaps, but necessary for improving instruction.

After graduating this spring, Aguiar is looking forward to pursuing graduate school at Stanford University and is setting her sights on teaching. She loved her time as a teaching assistant for the popular first-year classes 3.091 (Introduction to Solid-State Chemistry) and 3.010 (Structure of Materials), work that earned her an undergraduate student teaching award.

Ross is convinced that Aguiar is a strong fit for graduate studies. “For graduate school, you need academic excellence and technical skills like being good in the lab, and Maria has both. Then there are the soft skills, which have to do with how well organized you are, how resilient you are, how you manage different responsibilities. Usually, students learn them as they go along, but Maria is well ahead of the curve,” Ross says.

“One thing that makes me hopeful for Maria’s time in grad school is that she is very broadly interested in a lot of aspects of materials science,” Kaczmarek adds.

Aguiar’s passion for the subject spilled over into a fun side project: a DMSE-exclusive “Meow-terials Science” T-shirt she designed — featuring cats doing familiar lab experiments — was a hit among students.

She remains endlessly fascinated by the materials around her, even in the water bottle she drinks from every day. “Studying materials science has changed the way I see the world. I can pick up something as ordinary as this water bottle and think about the metallurgical processing techniques I learned from my classes. I just love that there’s so much to learn from the everyday.”

New research, data advance understanding of early planetary formation

Thu, 05/22/2025 - 3:40pm

A team of international astronomers led by Richard Teague, the Kerr-McGee Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), has gathered the most sensitive and detailed observations of 15 protoplanetary disks to date, giving the astronomy community a new look at the mechanisms of early planetary formation.

“The new approaches we’ve developed to gather this data and images are like switching from reading glasses to high-powered binoculars — they reveal a whole new level of detail in these planet-forming systems,” says Teague.

Their open-access findings were published in a special collection of 17 papers in the Astrophysical Journal Letters, with several more coming out this summer. The collection sheds light on a breadth of questions, including ways to calculate the mass of a disk by measuring its gravitational influence and extracting rotational velocity profiles to a precision of meters per second.

Protoplanetary disks are collections of dust and gas around young stars, from which planets form. Observing the dust in these disks is easier because it is brighter, but dust alone offers only a snapshot of what is going on. Teague’s research has shifted attention to the gas in these systems, which can reveal more about a disk’s dynamics, including properties such as gravity, velocity, and mass.
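
To see why precise gas velocities are so valuable, consider a simplified version of the disk-mass measurement mentioned above. The sketch below is a cartoon, not the exoALMA analysis: it treats the disk’s gravity as a crude enclosed-mass correction to Keplerian rotation, and all of the numbers (stellar mass, disk mass, noise level, outer radius) are invented for illustration.

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def v_rot(r, m_star, m_disk, r_out=100 * AU):
    """Toy rotation curve: Keplerian speed boosted by the disk mass
    enclosed within r (crudely assumed to grow linearly with radius)."""
    m_enclosed = m_star + m_disk * np.minimum(r / r_out, 1.0)
    return np.sqrt(G * m_enclosed / r)

# Synthetic "observations": a 0.1-solar-mass disk around a 1-solar-mass
# star, with a few meters per second of noise on each velocity.
r = np.linspace(20, 100, 40) * AU
rng = np.random.default_rng(1)
v_obs = v_rot(r, M_SUN, 0.1 * M_SUN) + rng.normal(0.0, 5.0, r.size)

# Invert: sweep candidate disk masses and keep the best forward match.
trials = np.linspace(0.0, 0.3, 301) * M_SUN
chi2 = [np.sum((v_obs - v_rot(r, M_SUN, m)) ** 2) for m in trials]
print(f"recovered disk mass ~ {trials[int(np.argmin(chi2))] / M_SUN:.2f} M_sun")
```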

To achieve the resolution necessary to study gas, the exoALMA program spent five years coordinating longer observation windows on the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. As a result, the international team of astronomers, many of whom are early-career scientists, was able to collect some of the most detailed images ever taken of protoplanetary disks.

“The impressive thing about the data is that it’s so good, the community is developing new tools to extract signatures from planets,” says Marcelo Barraza-Alfaro, a postdoc in the Planet Formation Lab and a member of the exoALMA project. The team developed several new techniques to calibrate and improve the images, in order to take full advantage of the array’s higher resolution and sensitivity.

As a result, “we are seeing new things that require us to modify our understanding of what’s going on in protoplanetary disks,” he says.

One of the papers with the largest EAPS influence explores planetary formation through vortices. It has been known for some time that the simple model of formation often proposed, where dust grains clump together and “snowball” into a planetary core, is not enough. One possible aid is vortices: localized perturbations in the gas that pull dust toward their centers. There, the grains are more likely to clump, the way soap bubbles collect in a draining tub.

“We can see the concentration of dust in different regions, but we cannot see how it is moving,” says Lisa Wölfer, another postdoc in the Planet Formation Lab at MIT and first author on the paper. While astronomers can see that the dust has gathered, there isn’t enough information to rule out how it got to that point.

“Only through the dynamics in the gas can we actually confirm that it’s a vortex, and not something else, creating the structure,” she says.

During the data collection period, Teague, Wölfer, and Barraza-Alfaro developed simple models of protoplanetary disks to compare to their observations. When they got the data back, however, the models couldn’t explain what they were seeing.

“We saw the data and nothing worked anymore. It was way too complicated,” says Teague. “Before, everyone thought they were not dynamic. That’s completely not the case.”

The team was forced to reevaluate their models and work with more complex ones incorporating more motion in the gas, which take more time and resources to run. But early results look promising.

“We see that the patterns look very similar; we think this is the best test case to study further with more observations,” says Wölfer.

The new data, which have been made public, come at a fortuitous time: ALMA will be going dark for a period in the next few years while it undergoes upgrades. During this time, astronomers can continue the monumental process of sifting through all the data.

“It’s going to just keep on producing results for years and years to come,” says Teague.

A new approach could fractionate crude oil using much less energy

Thu, 05/22/2025 - 2:00pm

Separating crude oil into products such as gasoline, diesel, and heating oil is an energy-intensive process that accounts for about 6 percent of the world’s CO2 emissions. Most of that energy goes into the heat needed to separate the components by their boiling point.

In an advance that could dramatically reduce the amount of energy needed for crude oil fractionation, MIT engineers have developed a membrane that filters the components of crude oil by their molecular size.

“This is a whole new way of envisioning a separation process. Instead of boiling mixtures to purify them, why not separate components based on shape and size? The key innovation is that the filters we developed can separate very small molecules at an atomistic length scale,” says Zachary P. Smith, an associate professor of chemical engineering at MIT and the senior author of the new study.

The new filtration membrane can efficiently separate heavy and light components from oil, and it is resistant to the swelling that tends to occur with other types of oil separation membranes. The membrane is a thin film that can be manufactured using a technique that is already widely used in industrial processes, potentially allowing it to be scaled up for widespread use.

Taehoon Lee, a former MIT postdoc who is now an assistant professor at Sungkyunkwan University in South Korea, is the lead author of the paper, which appears today in Science.

Oil fractionation

Conventional heat-driven processes for fractionating crude oil make up about 1 percent of global energy use, and it has been estimated that using membranes for crude oil separation could reduce the amount of energy needed by about 90 percent. For this to succeed, a separation membrane needs to allow hydrocarbons to pass through quickly, and to selectively filter compounds of different sizes.

Until now, most efforts to develop a filtration membrane for hydrocarbons have focused on polymers of intrinsic microporosity (PIMs), including one known as PIM-1. Although this porous material allows the fast transport of hydrocarbons, it tends to excessively absorb some of the organic compounds as they pass through the membrane, leading the film to swell, which impairs its size-sieving ability.

To come up with a better alternative, the MIT team decided to try modifying polymers that are used for reverse osmosis water desalination. Since their adoption in the 1970s, reverse osmosis membranes have reduced the energy consumption of desalination by about 90 percent — a remarkable industrial success story.

The most commonly used membrane for water desalination is a polyamide that is manufactured using a method known as interfacial polymerization. During this process, a thin polymer film forms at the interface between water and an organic solvent such as hexane. Water and hexane do not normally mix, but at the interface between them, a small amount of the compounds dissolved in them can react with each other.

In this case, a hydrophilic monomer called MPD, which is dissolved in water, reacts with a hydrophobic monomer called TMC, which is dissolved in hexane. The two monomers are joined together by a connection known as an amide bond, forming a polyamide thin film (named MPD-TMC) at the water-hexane interface.
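
Schematically, the chemistry is a classic condensation (this is standard polyamide chemistry, not notation from the paper): each amine group on MPD attacks an acyl chloride group on TMC, forming an amide bond and releasing HCl, and because TMC carries three reactive groups, the film grows as a crosslinked network.

```latex
% One amide-bond-forming step at the water-hexane interface:
\mathrm{R{-}NH_2} \;+\; \mathrm{R'{-}COCl}
\;\longrightarrow\;
\mathrm{R{-}NH{-}CO{-}R'} \;+\; \mathrm{HCl}
% R  = the MPD (m-phenylenediamine) backbone, with two -NH2 groups;
% R' = the TMC (trimesoyl chloride) core, with three -COCl groups,
% which is what crosslinks the growing MPD-TMC film.
```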

While highly effective for water desalination, MPD-TMC doesn’t have the right pore sizes and swelling resistance that would allow it to separate hydrocarbons.

To adapt the material to separate the hydrocarbons found in crude oil, the researchers first modified the film by changing the bond that connects the monomers from an amide bond to an imine bond. This bond is more rigid and hydrophobic, which allows hydrocarbons to quickly move through the membrane without causing noticeable swelling of the film compared to the polyamide counterpart.

“The polyimine material has porosity that forms at the interface, and because of the cross-linking chemistry that we have added in, you now have something that doesn’t swell,” Smith says. “You make it in the oil phase, react it at the water interface, and with the crosslinks, it’s now immobilized. And so those pores, even when they’re exposed to hydrocarbons, no longer swell like other materials.”

The researchers also introduced a monomer called triptycene. This shape-persistent, molecularly selective molecule further helps the resultant polyimines to form pores that are the right size for hydrocarbons to fit through.

This approach represents “an important step toward reducing industrial energy consumption,” says Andrew Livingston, a professor of chemical engineering at Queen Mary University of London, who was not involved in the study.

“This work takes the workhorse technology of the membrane desalination industry, interfacial polymerization, and creates a new way to apply it to organic systems such as hydrocarbon feedstocks, which currently consume large chunks of global energy,” Livingston says. “The imaginative approach using an interfacial catalyst coupled to hydrophobic monomers leads to membranes with high permeance and excellent selectivity, and the work shows how these can be used in relevant separations.”

Efficient separation

When the researchers used the new membrane to filter a mixture of toluene and triisopropylbenzene (TIPB) as a benchmark for evaluating separation performance, it was able to achieve a concentration of toluene 20 times greater than its concentration in the original mixture. They also tested the membrane with an industrially relevant mixture consisting of naphtha, kerosene, and diesel, and found that it could efficiently separate the heavier and lighter compounds by their molecular size.

If adapted for industrial use, a series of these filters could be used to generate a higher concentration of the desired products at each step, the researchers say.

“You can imagine that with a membrane like this, you could have an initial stage that replaces a crude oil fractionation column. You could partition heavy and light molecules and then you could use different membranes in a cascade to purify complex mixtures to isolate the chemicals that you need,” Smith says.
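
A rough sense of how such a cascade concentrates a product, as a back-of-the-envelope sketch (the separation factor and feed composition below are invented for illustration, and real staged processes involve recycle streams this ignores): an ideal stage with separation factor alpha maps a mole fraction x to alpha*x / (1 + (alpha - 1)*x).

```python
def stage(x, alpha):
    """Mole fraction of the favored component leaving one ideal
    membrane stage, for a binary mixture with separation factor alpha."""
    return alpha * x / (1 + (alpha - 1) * x)

# Hypothetical cascade: the desired product starts at 5 percent of the
# feed, and each stage has a (made-up) separation factor of 10.
x = 0.05
for n in range(1, 4):
    x = stage(x, alpha=10.0)
    print(f"after stage {n}: {x:.1%}")
# after stage 1: 34.5%, stage 2: 84.0%, stage 3: 98.1%
```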

Interfacial polymerization is already widely used to create membranes for water desalination, and the researchers believe it should be possible to adapt those processes to mass produce the films they designed in this study.

“The main advantage of interfacial polymerization is it’s already a well-established method to prepare membranes for water purification, so you can imagine just adopting these chemistries into existing scale of manufacturing lines,” Lee says.

The research was funded, in part, by ExxonMobil through the MIT Energy Initiative. 

MIT physicists discover a new type of superconductor that’s also a magnet

Thu, 05/22/2025 - 1:45pm

Magnets and superconductors go together like oil and water — or so scientists have thought. But a new finding by MIT physicists is challenging this century-old assumption.

In a paper appearing today in the journal Nature, the physicists report that they have discovered a “chiral superconductor” — a material that conducts electricity without resistance, and also, paradoxically, is intrinsically magnetic. What’s more, they observed this exotic superconductivity in a surprisingly ordinary material: graphite, the primary material in pencil lead.

Graphite is made from many layers of graphene — atomically thin, lattice-like sheets of carbon atoms — that are stacked together and can easily flake off when pressure is applied, as when pressing down to write on a piece of paper. A single flake of graphite can contain several million sheets of graphene, which are normally stacked such that every other layer aligns. But every so often, graphite contains tiny pockets where graphene is stacked in a different pattern, resembling a staircase of offset layers.

The MIT team has found that when four or five sheets of graphene are stacked in this “rhombohedral” configuration, the resulting structure can exhibit exceptional electronic properties that are not seen in graphite as a whole.

In their new study, the physicists isolated microscopic flakes of rhombohedral graphene from graphite and subjected the flakes to a battery of electrical tests. They found that when the flakes are cooled to 300 millikelvins (about -273 degrees Celsius), the material turns into a superconductor, meaning that an electrical current can flow through the material without resistance.

They also found that when they swept an external magnetic field up and down, the flakes could be switched between two different superconducting states, just like a magnet. This suggests that the superconductor has some internal, intrinsic magnetism. Such switching behavior is absent in other superconductors.

“The general lore is that superconductors do not like magnetic fields,” says Long Ju, assistant professor of physics at MIT. “But we believe this is the first observation of a superconductor that behaves as a magnet with such direct and simple evidence. And that’s quite a bizarre thing because it is against people’s general impression on superconductivity and magnetism.”

Ju is senior author of the study, which includes MIT co-authors Tonghang Han, Zhengguang Lu, Zach Hadjri, Lihan Shi, Zhenghan Wu, Wei Xu, Yuxuan Yao, Jixiang Yang, Junseok Seo, Shenyong Ye, Muyang Zhou, and Liang Fu, along with collaborators from Florida State University, the University of Basel in Switzerland, and the National Institute for Materials Science in Japan.

Graphene twist

In everyday conductive materials, electrons flow through in a chaotic scramble, whizzing by each other, and pinging off the material’s atomic latticework. Each time an electron scatters off an atom, it has, in essence, met some resistance, and loses some energy as a result, normally in the form of heat. In contrast, when certain materials are cooled to ultracold temperatures, they can become superconducting, meaning that the material can allow electrons to pair up, in what physicists term “Cooper pairs.” Rather than scattering away, these electron pairs glide through a material without resistance. With a superconductor, then, no energy is lost in translation.

Since superconductivity was first observed in 1911, physicists have shown many times over that zero electrical resistance is a hallmark of a superconductor. Another defining property was first observed in 1933, when the physicist Walther Meissner discovered that a superconductor will expel an external magnetic field. This “Meissner effect” is due in part to a superconductor’s electron pairs, which collectively act to push away any magnetic field.

Physicists have assumed that all superconducting materials should exhibit both zero electrical resistance, and a natural magnetic repulsion. Indeed, these two properties are what could enable Maglev, or “magnetic levitation” trains, whereby a superconducting rail repels and therefore levitates a magnetized car.

Ju and his colleagues had no reason to question this assumption as they carried out their experiments at MIT. In the last few years, the team has been exploring the electrical properties of pentalayer rhombohedral graphene. The researchers have observed surprising properties in the five-layer, staircase-like graphene structure, most recently that it enables electrons to split into fractions of themselves. This phenomenon occurs when the pentalayer structure is placed atop a sheet of hexagonal boron nitride (a material similar to graphene), and slightly offset by a specific angle, or twist. 

Curious as to how electron fractions might change with changing conditions, the researchers followed up their initial discovery with similar tests, this time by misaligning the graphene and hexagonal boron nitride structures. To their surprise, they found that when they misaligned the two materials and sent an electrical current through, at temperatures less than 300 millikelvins, they measured zero resistance. It seemed that the phenomenon of electron fractions disappeared, and what emerged instead was superconductivity.

The researchers went a step further to see how this new superconducting state would respond to an external magnetic field. They applied a magnet to the material, along with a voltage, and measured the electrical current coming out of the material. As they dialed the magnetic field from negative to positive (similar to a north and south polarity) and back again, they observed that the material maintained its superconducting, zero-resistance state, except in two instances, once at either magnetic polarity. In these instances, the resistance briefly spiked, before switching back to zero, and returning to a superconducting state.

“If this were a conventional superconductor, it would just remain at zero resistance, until the magnetic field reaches a critical point, where superconductivity would be killed,” Zach Hadjri, a first-year student in the group, says. “Instead, this material seems to switch between two superconducting states, like a magnet that starts out pointing upward, and can flip downwards when you apply a magnetic field. So it looks like this is a superconductor that also acts like a magnet. Which doesn’t make any sense!”

“One of a kind”

As counterintuitive as the discovery may seem, the team observed the same phenomenon in six similar samples. They suspect that the unique configuration of rhombohedral graphene is the key. The material has a very simple arrangement of carbon atoms. When cooled to ultracold temperatures, the thermal fluctuation is minimized, allowing any electrons flowing through the material to slow down, sense each other, and interact.

Such quantum interactions can lead electrons to pair up and superconduct. These interactions can also encourage electrons to coordinate. Namely, electrons can collectively occupy one of two opposite momentum states, or “valleys.” Electrons in one valley effectively circulate in one direction, while those in the other valley circulate in the opposite direction. In conventional superconductors, electrons can occupy either valley, and any pair of electrons is typically made from electrons of opposite valleys that cancel each other out. The pair overall, then, has zero momentum and does not spin.

In the team’s material structure, however, they suspect that all electrons interact such that they share the same valley, or momentum state. When electrons then pair up, the superconducting pair overall has a “non-zero” momentum and a spin that, together with those of many other pairs, can amount to an internal, superconducting magnetism.

“You can think of the two electrons in a pair spinning clockwise, or counterclockwise, which corresponds to a magnet pointing up, or down,” Tonghang Han, a fifth-year student in the group, explains. “So we think this is the first observation of a superconductor that behaves as a magnet due to the electrons’ orbital motion, which is known as a chiral superconductor. It’s one of a kind. It is also a candidate for a topological superconductor which could enable robust quantum computation.”

“Everything we’ve discovered in this material has been completely out of the blue,” says Zhengguang Lu, a former postdoc in the group and now an assistant professor at Florida State University. “But because this is a simple system, we think we have a good chance of understanding what is going on, and could demonstrate some very profound and deep physics principles.”

“It is truly remarkable that such an exotic chiral superconductor emerges from such simple ingredients,” adds Liang Fu, professor of physics at MIT. “Superconductivity in rhombohedral graphene will surely have a lot to offer.”

The part of the research carried out at MIT was supported by the U.S. Department of Energy and a MathWorks Fellowship.

Study: Climate change may make it harder to reduce smog in some regions

Thu, 05/22/2025 - 8:00am

Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits.

However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future. 

The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere.

By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

“Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science & Technology.

Controlling ozone

Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants.

Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight.

“That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxide and volatile organic compounds in nonlinear ways.

Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.

“Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology.

They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate.

Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

The researchers focused on Eastern North America, Western Europe, and Northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data.

They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared them to a historical scenario capturing 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.

Capturing climate variability

“The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.

They were able to overcome that challenge thanks to recent advances in atmospheric chemistry modeling and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.
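
The value of all those extra model years is easy to demonstrate with synthetic data (the sketch below uses made-up numbers, not the study’s output): when interannual noise is comparable to the signal, a trend estimated from one realization is unreliable, while the ensemble mean recovers it.

```python
import numpy as np

rng = np.random.default_rng(42)

def realization(n_years=16, trend=0.05, noise=1.0):
    """One synthetic 16-year series: a small forced trend buried in
    interannual variability (arbitrary units, purely illustrative)."""
    years = np.arange(n_years)
    return trend * years + rng.normal(0.0, noise, n_years)

# Five 16-year realizations, echoing the study's 80-model-year design.
ensemble = np.array([realization() for _ in range(5)])
years = np.arange(16)

single = np.polyfit(years, ensemble[0], 1)[0]
pooled = np.polyfit(years, ensemble.mean(axis=0), 1)[0]
print(f"trend from one realization: {single:+.3f} per year")
print(f"trend from ensemble mean:   {pooled:+.3f} per year (true: +0.050)")
```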

The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature.

Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.

“This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios.

“But I wouldn’t say that is a good thing because it means that, overall, there are higher levels of ozone,” Le Roy adds.

Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health.

“Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning.

“We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.

AI learns how vision and sound are connected, without human intervention

Thu, 05/22/2025 - 12:00am

Humans naturally learn by making connections between sight and sound. For instance, we can watch someone playing the cello and recognize that the cellist’s movements are generating the music we hear.

A new approach developed by researchers from MIT and elsewhere improves an AI model’s ability to learn in this same fashion. This could be useful in applications such as journalism and film production, where the model could help with curating multimodal content through automatic video and audio retrieval.

In the longer term, this work could be used to improve a robot’s ability to understand real-world environments, where auditory and visual information are often closely connected.

Improving upon prior work from their group, the researchers created a method that helps machine-learning models align corresponding audio and visual data from video clips without the need for human labels.

They adjusted how their original model is trained so it learns a finer-grained correspondence between a particular video frame and the audio that occurs in that moment. The researchers also made some architectural tweaks that help the system balance two distinct learning objectives, which improves performance.

Taken together, these relatively simple improvements boost the accuracy of their approach in video retrieval tasks and in classifying the action in audiovisual scenes. For instance, the new method could automatically and precisely match the sound of a door slamming with the visual of it closing in a video clip.

“We are building AI systems that can process the world like humans do, in terms of having both audio and visual information coming in at once and being able to seamlessly process both modalities. Looking forward, if we can integrate this audio-visual technology into some of the tools we use on a daily basis, like large language models, it could open up a lot of new applications,” says Andrew Rouditchenko, an MIT graduate student and co-author of a paper on this research.

He is joined on the paper by lead author Edson Araujo, a graduate student at Goethe University in Germany; Yuan Gong, a former MIT postdoc; Saurabhchand Bhati, a current MIT postdoc; Samuel Thomas, Brian Kingsbury, and Leonid Karlinsky of IBM Research; Rogerio Feris, principal scientist and manager at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Hilde Kuehne, professor of computer science at Goethe University and an affiliated professor at the MIT-IBM Watson AI Lab. The work will be presented at the Conference on Computer Vision and Pattern Recognition.

Syncing up

This work builds upon a machine-learning method the researchers developed a few years ago, which provided an efficient way to train a multimodal model to simultaneously process audio and visual data without the need for human labels.

The researchers feed this model, called CAV-MAE, unlabeled video clips and it encodes the visual and audio data separately into representations called tokens. Using the natural audio from the recording, the model automatically learns to map corresponding pairs of audio and visual tokens close together within its internal representation space.

They found that using two learning objectives balances the model’s learning process, which enables CAV-MAE to understand the corresponding audio and visual data while improving its ability to recover video clips that match user queries.

But CAV-MAE treats audio and visual samples as one unit, so a 10-second video clip and the sound of a door slamming are mapped together, even if that audio event happens in just one second of the video.

In their improved model, called CAV-MAE Sync, the researchers split the audio into smaller windows before the model computes its representations of the data, so it generates separate representations that correspond to each smaller window of audio.

During training, the model learns to associate one video frame with the audio that occurs during just that frame.

“By doing that, the model learns a finer-grained correspondence, which helps with performance later when we aggregate this information,” Araujo says.
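
In spirit, the fine-grained objective looks like a standard contrastive loss applied at the frame/audio-window level. The sketch below is a minimal reading of the idea, not the released CAV-MAE Sync code (the embedding size, batch size, and temperature are placeholders):

```python
import torch
import torch.nn.functional as F

def frame_window_contrastive_loss(frame_emb, audio_emb, temperature=0.07):
    """frame_emb, audio_emb: (batch, dim) tensors in which row i of each
    comes from the same video-frame / audio-window pair. Matching pairs
    are pulled together; every other pairing in the batch is pushed apart."""
    frame_emb = F.normalize(frame_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)
    logits = frame_emb @ audio_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    # Symmetric: frame-to-audio retrieval and audio-to-frame retrieval.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: 8 frame/window pairs with 256-dimensional embeddings.
loss = frame_window_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```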

They also incorporated architectural improvements that help the model balance its two learning objectives.

Adding “wiggle room”

The model incorporates a contrastive objective, where it learns to associate similar audio and visual data, and a reconstruction objective, which aims to recover specific audio and visual data based on user queries.

In CAV-MAE Sync, the researchers introduced two new types of data representations, or tokens, to improve the model’s learning ability.

They include dedicated “global tokens” that help with the contrastive learning objective and dedicated “register tokens” that help the model focus on important details for the reconstruction objective.

“Essentially, we add a bit more wiggle room to the model so it can perform each of these two tasks, contrastive and reconstructive, a bit more independently. That benefitted overall performance,” Araujo adds.
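
One way to picture the “wiggle room” (an illustrative sketch under our own assumptions about sizes and routing; the actual mechanics in CAV-MAE Sync are more involved): the extra learnable tokens ride along with the patch tokens through the encoder, and each objective reads from its own slice of the output.

```python
import torch

batch, n_patches, dim = 8, 196, 256
patch_tokens = torch.randn(batch, n_patches, dim)  # stand-in encoder input

# Learnable extras (counts are placeholders): global tokens serve the
# contrastive objective; register tokens give the reconstruction pathway
# somewhere to stash detail the contrastive loss doesn't need.
global_tokens = torch.nn.Parameter(torch.zeros(1, 1, dim))
register_tokens = torch.nn.Parameter(torch.zeros(1, 4, dim))

x = torch.cat([global_tokens.expand(batch, -1, -1),
               register_tokens.expand(batch, -1, -1),
               patch_tokens], dim=1)  # shape: (8, 1 + 4 + 196, 256)

# After the encoder processes x, slice the output: the global-token
# embedding would feed the contrastive loss, the patch tokens the
# reconstruction loss, and the register tokens would be discarded.
```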

While the researchers had some intuition these enhancements would improve the performance of CAV-MAE Sync, it took a careful combination of strategies to shift the model in the direction they wanted it to go.

“Because we have multiple modalities, we need a good model for both modalities by themselves, but we also need to get them to fuse together and collaborate,” Rouditchenko says.

In the end, their enhancements improved the model’s ability to retrieve videos based on an audio query and predict the class of an audio-visual scene, like a dog barking or an instrument playing.

Its results were more accurate than their prior work, and it also performed better than more complex, state-of-the-art methods that require larger amounts of training data.

“Sometimes, very simple ideas or little patterns you see in the data have big value when applied on top of a model you are working on,” Araujo says.

In the future, the researchers want to incorporate new models that generate better data representations into CAV-MAE Sync, which could improve performance. They also want to enable their system to handle text data, which would be an important step toward generating an audiovisual large language model.

This work is funded, in part, by the German Federal Ministry of Education and Research and the MIT-IBM Watson AI Lab.

Learning how to predict rare kinds of failures

Wed, 05/21/2025 - 4:35pm

On Dec. 21, 2022, just as peak holiday season travel was getting underway, Southwest Airlines went through a cascading series of failures in its scheduling, initially triggered by severe winter weather in the Denver area. The problems spread through its network, and over the course of the next 10 days the crisis ended up stranding over 2 million passengers and causing losses of $750 million for the airline.

How did a localized weather system end up triggering such a widespread failure? Researchers at MIT have examined this widely reported failure as an example of cases where systems that work smoothly most of the time suddenly break down and cause a domino effect of failures. They have now developed a computational system that combines sparse data about a rare failure event with much more extensive data on normal operations, in order to work backwards, pinpoint the root causes of the failure, and, ideally, find ways to adjust the systems to prevent such failures in the future.

The findings were presented at the International Conference on Learning Representations (ICLR), held in Singapore from April 24-28, by MIT doctoral student Charles Dawson, professor of aeronautics and astronautics Chuchu Fan, and colleagues from Harvard University and the University of Michigan.

“The motivation behind this work is that it’s really frustrating when we have to interact with these complicated systems, where it’s really hard to understand what’s going on behind the scenes that’s creating these issues or failures that we’re observing,” says Dawson.

The new work builds on previous research from Fan’s lab, where they looked at problems involving hypothetical failure prediction problems, she says, such as with groups of robots working together on a task, or complex systems such as the power grid, looking for ways to predict how such systems may fail. “The goal of this project,” Fan says, “was really to turn that into a diagnostic tool that we could use on real-world systems.”

The idea was to provide a way that someone could “give us data from a time when this real-world system had an issue or a failure,” Dawson says, “and we can try to diagnose the root causes, and provide a little bit of a look behind the curtain at this complexity.”

The intent is for the methods they developed “to work for a pretty general class of cyber-physical problems,” he says. These are problems in which “you have an automated decision-making component interacting with the messiness of the real world,” he explains. There are available tools for testing software systems that operate on their own, but the complexity arises when that software has to interact with physical entities going about their activities in a real physical setting, whether it be the scheduling of aircraft, the movements of autonomous vehicles, the interactions of a team of robots, or the control of the inputs and outputs on an electric grid. In such systems, what often happens, he says, is that “the software might make a decision that looks OK at first, but then it has all these domino, knock-on effects that make things messier and much more uncertain.”

One key difference, though, is that in systems like teams of robots, unlike the scheduling of airplanes, “we have access to a model in the robotics world,” says Fan, who is a principal investigator in MIT’s Laboratory for Information and Decision Systems (LIDS). “We do have some good understanding of the physics behind the robotics, and we do have ways of creating a model” that represents their activities with reasonable accuracy. But airline scheduling involves processes and systems that are proprietary business information, and so the researchers had to find ways to infer what was behind the decisions, using only the relatively sparse publicly available information, which essentially consisted of just the actual arrival and departure times of each plane.

“We have grabbed all this flight data, but there is this entire scheduling system behind it, and we don’t know how the system is working,” Fan says. And the amount of data relating to the actual failure is just several days’ worth, compared to years of data on normal flight operations.

The impact of the weather events in Denver during the week of Southwest’s scheduling crisis clearly showed up in the flight data, just from the longer-than-normal turnaround times between landing and takeoff at the Denver airport. But the way that impact cascaded through the system was less obvious, and required more analysis. The key turned out to involve the concept of reserve aircraft.

Airlines typically keep some planes in reserve at various airports, so that if problems are found with one plane that is scheduled for a flight, another plane can be quickly substituted. Southwest uses only a single type of plane, so they are all interchangeable, making such substitutions easier. But most airlines operate on a hub-and-spoke system, with a few designated hub airports where most of those reserve aircraft may be kept, whereas Southwest does not use hubs, so their reserve planes are more scattered throughout their network. And the way those planes were deployed turned out to play a major role in the unfolding crisis.

“The challenge is that there’s no public data available in terms of where the aircraft are stationed throughout the Southwest network,” Dawson says. “What we’re able to find using our method is, by looking at the public data on arrivals, departures, and delays, we can use our method to back out what the hidden parameters of those aircraft reserves could have been, to explain the observations that we were seeing.”

What they found was that the way the reserves were deployed was a “leading indicator” of the problems that cascaded into a nationwide crisis. Some parts of the network that were affected directly by the weather were able to recover quickly and get back on schedule. “But when we looked at other areas in the network, we saw that these reserves were just not available, and things just kept getting worse.”

For example, the data showed that Denver’s reserves were rapidly dwindling because of the weather delays, but then “it also allowed us to trace this failure from Denver to Las Vegas,” he says. While there was no severe weather there, “our method was still showing us a steady decline in the number of aircraft that were able to serve flights out of Las Vegas.”

He says that “what we found was that there were these circulations of aircraft within the Southwest network, where an aircraft might start the day in California and then fly to Denver, and then end the day in Las Vegas.” What happened in the case of this storm was that the cycle got interrupted. As a result, “this one storm in Denver breaks the cycle, and suddenly the reserves in Las Vegas, which is not affected by the weather, start to deteriorate.”

In the end, Southwest was forced to take a drastic measure to resolve the problem: They had to do a “hard reset” of their entire system, canceling all flights and flying empty aircraft around the country to rebalance their reserves.

Working with experts in air transportation systems, the researchers developed a model of how the scheduling system is supposed to work. Then, “what our method does is, we’re essentially trying to run the model backwards.” Looking at the observed outcomes, the model allows them to work back to see what kinds of initial conditions could have produced those outcomes.

While the data on the actual failures were sparse, the extensive data on typical operations helped in teaching the computational model “what is feasible, what is possible, what’s the realm of physical possibility here,” Dawson says. “That gives us the domain knowledge to then say, in this extreme event, given the space of what’s possible, what’s the most likely explanation” for the failure.
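
Conceptually, running the model backwards means searching for the hidden parameters that make a forward model best reproduce what was observed. The sketch below conveys only that framing; the team’s actual tooling (the CalNF tool mentioned below) involves far more sophisticated learned models, and every number here is invented.

```python
# Conceptual sketch of inferring hidden initial conditions from
# observed outcomes. The delay formula and all values are invented.

def forward_model(hidden_reserves, storm_severity):
    """Hypothetical simulator mapping hidden parameters to observable delays."""
    shortfall = max(0, storm_severity - hidden_reserves)
    return shortfall * 45  # assumed minutes of delay per missing aircraft

observed_delay = 90   # what public arrival/departure data might show
storm_severity = 4    # estimated from weather records

best_guess = min(range(10),
                 key=lambda r: abs(forward_model(r, storm_severity) - observed_delay))
print(f"most plausible hidden reserve level: {best_guess}")  # -> 2
```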

This could lead to a real-time monitoring system, he says, in which data on normal operations are constantly compared with current data to determine what the trend looks like. “Are we trending toward normal, or are we trending toward extreme events?” Seeing signs of impending issues could allow for preemptive measures, such as redeploying reserve aircraft in advance to areas of anticipated problems.
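
In its simplest form, such a comparison could be a statistical check of current operations against the distribution learned from normal data, as in this hypothetical sketch (the turnaround times and alert threshold are invented):

```python
# Sketch of trend monitoring against a baseline of normal operations.
import statistics

baseline = [35, 38, 36, 40, 37, 39, 36, 38]   # normal turnaround times (min)
mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)

def trend(recent, z_threshold=3.0):
    z = (statistics.mean(recent) - mu) / sigma
    return z, z > z_threshold   # large positive z: trending toward extreme

print(trend([44, 47, 52]))      # roughly (6.1, True): flag for preemptive action
```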

Work on developing such systems is ongoing in her lab, Fan says. In the meantime, they have produced an open-source tool for analyzing system failures, called CalNF, which is available for anyone to use. Meanwhile, Dawson, who earned his doctorate last year, is working as a postdoc to apply the methods developed in this work to understanding failures in power networks.

The research team also included Max Li from the University of Michigan and Van Tran from Harvard University. The work was supported by NASA, the Air Force Office of Scientific Research, and the MIT-DSTA program.

A new technology for extending the shelf life of produce

Wed, 05/21/2025 - 11:00am

We’ve all felt the sting of guilt when fruit and vegetables go bad before we could eat them. Now, researchers from MIT and the Singapore-MIT Alliance for Research and Technology (SMART) have shown they can extend the shelf life of harvested plants by injecting them with melatonin using biodegradable microneedles.

That’s a big deal because the problem of food waste goes way beyond our salads. More than 30 percent of the world’s food is lost after it’s harvested — enough to feed more than 1 billion people. Refrigeration is the most common way to preserve foods, but it requires energy and infrastructure that many regions of the world can’t afford or lack access to.

The researchers believe their system could offer an alternative or complement to refrigeration. Central to their approach are patches of silk microneedles. The microneedles can get through the tough, waxy skin of plants without causing a stress response, and deliver precise amounts of melatonin into plants’ inner tissues.

“This is the first time that we’ve been able to apply these microneedles to extend the shelf life of a fresh-cut crop,” says Benedetto Marelli, the study’s senior author, associate professor of civil and environmental engineering at MIT, and the director of the Wild Cards mission of the MIT Climate Project. “We thought we could use this technology to deliver something that could regulate or control the plant’s post-harvest physiology. Eventually, we looked at hormones, and melatonin is already used by plants to regulate such functions. The food we waste could feed about 1.6 billion people. Even in the U.S., this approach could one day expand access to healthy foods.”

For the study, which appears today in Nano Letters, Marelli and researchers from SMART applied small patches of the microneedles containing melatonin to the base of the leafy vegetable pak choy. After application, the researchers found the melatonin was able to extend the vegetables’ shelf life by four days at room temperature and 10 days when refrigerated, which could allow more crops to reach consumers before they’re wasted.

“Post-harvest waste is a huge issue. This problem is extremely important in emerging markets around Africa and Southeast Asia, where many crops are produced but can't be maintained in the journey from farms to markets,” says Sarojam Rajani, co-senior author of the study and a senior principal investigator at the Temasek Life Sciences Laboratory in Singapore.

Plant destressors

For years, Marelli’s lab has been exploring the use of silk microneedles for things like delivering nutrients to crops and monitoring plant health. Microneedles made from silk fibroin protein are nontoxic and biodegradable, and Marelli’s previous work has described ways of manufacturing them at scale.

To test the microneedles’ ability to extend the shelf life of food, the researchers wanted to study their capacity to deliver a hormone known to affect the senescence process. Aside from helping humans sleep, melatonin is also a natural hormone in many plants that helps them regulate growth and aging.

“The dose of melatonin we’re delivering is so low that it’s fully metabolized by the crops, so it would not significantly increase the amount of melatonin normally present in the food; we would not ingest more melatonin than usual,” Marelli says. “We chose pak choy because it's a very important crop in Asia, and also because pak choy is very perishable.”

Pak choy is typically harvested by cutting the leafy plant from the root system, exposing the shoot base, which provides easy access to the vascular bundles that distribute water and nutrients to the rest of the plant. To begin their study, the researchers first used their microneedles to inject a fluorescent dye into the base to confirm that the vasculature could spread the dye throughout the plant.

The researchers then compared the shelf life of regular pak choy plants with that of plants that had been sprayed with or dipped in melatonin, and found no difference.

With their baseline shelf life established, the researchers applied small patches of the melatonin-filled microneedles to the bottom of pak choy plants by hand. They then stored the treated plants, along with controls, in plastic boxes both at room temperature and under refrigeration.

The team evaluated the plants by monitoring their weight, visual appearance, and concentration of chlorophyll, a green pigment that decreases as plants age.

At room temperature, the leaves of the untreated control group began yellowing within two or three days. By the fourth day, the yellowing accelerated to the point that the plants likely could not be sold. Plants treated with the melatonin-loaded silk microneedles, in contrast, remained green on day five, and the yellowing process was significantly delayed. The weight loss and chlorophyll reduction of treated plants also slowed significantly at room temperature. Overall, the researchers estimated the microneedle-treated plants retained their saleable value until the eighth day.

“We clearly saw we could enhance the shelf life of pak choy without the cold chain,” Marelli says.

In refrigerated conditions of about 40 degrees Fahrenheit, plant yellowing was delayed by about five days on average, with treated plants remaining relatively green until day 25.

“Spectrophotometric analysis of the plants indicated the treated plants had higher antioxidant activity, while gene analysis showed the melatonin set off a protective chain reaction inside the plants, preserving chlorophyll and adjusting hormones to slow senescence,” says Monika Jangir, co-first author and former postdoc at the Temasek Life Sciences Laboratory.

“We studied melatonin’s effects and saw it improves the stress response of the plant after it’s been cut, so it’s basically decreasing the stress that plants experience, and that extends its shelf life,” says Yangyang Han, co-first author and research scientist at the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group at SMART.

Toward postharvest preservation

While the microneedles could make it possible to minimize waste when compared to other application methods like spraying or dipping crops, the researchers say more work is needed to deploy microneedles at scale. For instance, although the researchers applied the microneedle patches by hand in this experiment, the patches could be applied using tractors, autonomous drones, and other farming equipment in the future.

“For this to be widely adopted, we’d need to reach a performance versus cost threshold to justify its use,” Marelli explains. “This method would need to become cheap enough to be used by farmers regularly.”

Moving forward, the research team plans to study the effects of a variety of hormones on different crops using its microneedle delivery technology. The team believes the technique should work with all kinds of produce.

“We’re going to continue to analyze how we can increase the impact this can have on the value and quality of crops,” Marelli says. “For example, could this let us modulate the nutritional values of the crop, how it’s shaped, its texture, etc.? We're also going to continue looking into scaling up the technology so this can be used in the field.”

The work was supported by the Singapore-MIT Alliance for Research and Technology (SMART) and the National Research Foundation of Singapore.

Startup enables 100-year bridges with corrosion-resistant steel

Wed, 05/21/2025 - 12:00am

According to the American Road and Transportation Builders Association, one in three bridges needs repair or replacement, amounting to more than 200,000 bridges across the country. A key culprit of America’s aging infrastructure is rebar that has accumulated rust, which cracks the concrete around it, making bridges more likely to collapse.

Now Allium Engineering, founded by two MIT PhDs, is tripling the lifetime of bridges and other structures with a new technology that uses a stainless steel cladding to make rebar resilient to corrosion. By eliminating corrosion, infrastructure lasts much longer, fewer repairs are required, and carbon emissions are reduced. The company’s technology is easily integrated into existing steelmaking processes to make America’s infrastructure more resilient, affordable, and sustainable over the next century.

“Across the U.S., the typical bridge deck lasts about 30 years on average — we’re enabling 100-year lifetimes,” says Allium co-founder and CEO Steven Jepeal PhD ’21. “There’s a huge backlog of infrastructure that needs to be replaced, and that has frankly aged faster than it was expected to, largely because the materials we were using at the time weren’t cut out for the job. We’re trying to ride the momentum of rebuilding America’s infrastructure, but rebuild in a way that makes it last.”

To accomplish that, Allium adds a thin protective layer of stainless steel on top of traditional steel rebar to make it more resistant to corrosion. About 100,000 pounds of Allium’s stainless steel-clad rebar have already been used in construction projects around the U.S., and the company believes its process can be quickly scaled alongside steel mills.

“We integrate our system into mills so they don’t have to do anything differently,” says Jepeal, who co-founded Allium with Sam McAlpine PhD ’22. “We add everything we need to make a normal product into a stainless-clad product so that any mill out there can make a material that won’t corrode. That’s what needs to happen for all of the world’s infrastructure to be longer lasting.”

Toward better bridges

Jepeal completed his PhD in the MIT Department of Nuclear Science and Engineering (NSE) under Professor Zach Hartwig. During that time, he saw Hartwig and fellow NSE researchers spin out Commonwealth Fusion Systems to create the first commercial fusion reactors, which he says sparked his interest in startups.

“It definitely helped me catch the startup bug,” Jepeal says. “MIT is also where I got my materials science chops.”

McAlpine completed his PhD under Associate Professor Michael Short. In 2019, McAlpine and Short were working on an ARPA-E-funded project in which they would combine metals to improve corrosion-resistance in extreme environments.

Jepeal and McAlpine decided to start a company around applying a similar approach to improve the resilience of metals in everyday settings, working with MIT’s Venture Mentoring Service and speaking with Tata Steel, one of the largest steelmakers in the world, which has worked with the MIT Industrial Liaison Program (ILP). Members of Tata told the founders that one of their biggest problems was steel corrosion.

A key early problem the founders set out to solve was depositing corrosion-resistant material without adding significant costs or disrupting existing processes. Steelmaking traditionally begins by putting huge pieces of precursor steel through machines called rollers at extremely high temperatures to stretch out the material. Jepeal compares the process to making pasta on an industrial scale.

The founders decided to add their cladding before the rolling process. Although Allium’s system is customized, the company today adapts existing equipment from other metal-processing applications, such as welding, to add its cladding.

“We go into the mills and take big chunks of steel that are going through the steelmaking process but aren’t the end-product, and we deposit stainless steel on the outside of their cheap carbon steel, which is typically just recycled scrap from products like cars and fridges,” Jepeal says. “The treated steel then goes through the mill’s typical process for making end products like rebar.”

Each 40-foot piece of thick precursor steel turns into about a mile of rebar following the rolling process. Rebar treated by Allium is still more than 95 percent regular rebar and doesn’t need any special post-processing or handling.
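
That elongation follows from volume conservation, which also explains why a thin cladding on the precursor ends up thinner, but proportionally intact, on the finished rebar. In the back-of-envelope check below, only the 40-foot and one-mile figures come from the process described here; the cladding thickness is an assumed example.

```python
import math

elongation = 5280 / 40                 # one mile of rebar from a 40-ft billet: 132x
area_ratio = 1 / elongation            # rolling conserves volume
linear_ratio = math.sqrt(area_ratio)   # each cross-section dimension shrinks ~11.5x

clad_billet_mm = 6.0                   # assumed cladding thickness on the billet
print(f"cladding after rolling: {clad_billet_mm * linear_ratio:.2f} mm")  # ~0.52 mm
# The cladding's share of the cross-section (and thus the more-than-95-
# percent regular-rebar figure) is unchanged by the rolling.
```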

“What comes out of the mill looks like regular rebar,” Jepeal says. “It is just as strong and can be bent, cut, and installed in all the same ways. But instead of being put into a bridge and lasting an average of 30 years, it will last 100 years or more.”

Infrastructure to last

Last year, Allium’s factory in Billerica, Massachusetts, began producing its first commercial cladding material, helping to manufacture about 100 tons of the company’s stainless steel-clad rebar in collaboration with a partner steel mill. That rebar has since been placed into construction projects in California and Florida.

Allium’s first facility has the capacity to produce about 1,000 tons of its long-lasting rebar each year, but the company is hoping to build more facilities closer to the steel mills it partners with, eventually integrating them into mill operations.

“Our mission of reducing emissions and improving this infrastructure is what’s driving us to scale very quickly to meet the needs of the industry,” Jepeal says. “Everyone we talk to wants this to be bigger than it is today.”

Allium is also experimenting with other cladding materials and composites. Down the line, Jepeal sees Allium’s tech being used for things beyond rebar like train tracks, steel beams, and pipes. But he stresses the company’s focus on rebar will keep it busy for the foreseeable future.

“Almost all of our infrastructure has this corrosion problem, so it’s the biggest problem we could imagine solving with our set of skills,” Jepeal says. “Tunnels, bridges, roads, industrial buildings, power plants, chemical factories — all of them have this problem.”

Fueling social impact: PKG IDEAS Challenge invests in bold student-led social enterprises

Tue, 05/20/2025 - 4:25pm

On Wednesday, April 16, members of the MIT community gathered at the MIT Welcome Center to celebrate the annual IDEAS Social Innovation Challenge Showcase and Awards ceremony. Hosted by the Priscilla King Gray Public Service Center (PKG Center), the event celebrated 19 student-led teams who spent the spring semester developing and implementing solutions to complex social and environmental challenges, both locally and globally.

Founded in 2001, the IDEAS Challenge is an experiential learning incubator that prepares students to take their early-stage social enterprises to the next level. As the program approaches its 25th anniversary, IDEAS serves a vital role in the Institute’s innovation ecosystem — with a focus on social impact that encourages students across disciplines to think boldly, act compassionately, and engineer for change.

This year’s event featured keynote remarks by Amy Smith, co-founder of IDEAS and founder of D-Lab, who reflected on IDEAS’ legacy and the continued urgency of its mission. She emphasized the importance of community-centered design and celebrated the creativity and determination of the program’s participants over the years. 

“We saw the competition as a vehicle for MIT students to apply their technical skills to problems that they cared about, with impact and community engagement at the forefront,” Smith said. “I think that the goal of helping as many teams as possible along their journey has continued to this day.”

A legacy of impact and a vision for the future

Since its inception, the IDEAS Challenge has fueled over 1,200 ventures through training, mentorship, and seed funding; the program has also awarded more than $1.3 million to nearly 300 teams. Many of these have gone on to effect transformative change in the areas of global health, civic engagement, energy and the environment, education, and employment.  

Over the course of the spring semester, MIT student-led teams engage in a rigorous process of ideating, prototyping, and stakeholder engagement, supported by a robust series of workshops on the topics of systems change, social impact measurement, and social enterprise business models. Participants also benefit from mentorship, an expansive IDEAS alumni network, and connections with partners across MIT’s innovation ecosystem. 

“IDEAS continues to serve as a critical home to MIT students determined to meaningfully address complex systems challenges by building social enterprises that prioritize social impact and sustainability over profit,” said Lauren Tyger, the PKG Center’s assistant dean of social innovation, who has overseen the program since 2023. 

Voices of innovation

For many of this year’s participants, IDEAS offered the chance to turn their academic and professional experience into real-world impact. Blake Blaze, co-founder of SamWise, was inspired to design a platform that provides personalized education for incarcerated students after teaching classes in Boston-area jails and prisons in partnership with The Educational Justice Institute (TEJI) at MIT.

“Our team began the year motivated by a good idea, but IDEAS gave us the frameworks, mindset, and, more simply, the language to be effective collaborators with the communities we aim to serve,” said Blaze. “We learned that sometimes building technology for a customer requires more than product-market fit — it requires proper orientation for meaningful outcomes and impact.”

Franny Xi Wu, who co-founded China Dispossession Watch, a platform to document and raise awareness of grassroots anti-displacement activism in China, highlighted the niche space that IDEAS occupies within the entrepreneurship ecosystem. “IDEAS provided crucial support by helping us achieve federated, trust-based program rollout rather than rapid extractive scaling, pursue diversified funding aligned with community-driven incentives, and find like-minded collaborators equally invested in human rights and spatial justice.” 

A network of alumni and other volunteers plays an invaluable mentorship role in IDEAS, fostering remarkable growth in mentees over the course of the semester.

“Engaging with mentors, judges, and peers profoundly validated our vision, reinforcing our confidence to pursue what initially felt like audacious goals,” said Xi Wu. “Their insightful feedback and genuine encouragement created a supportive environment that inspired and energized us. They also provided us valuable perspectives on how to effectively launch and scale social ventures, communicate compellingly with funders, and navigate the multifaceted challenges in impact entrepreneurship.”

“Being a PKG IDEAS mentor for the last two years has been an incredible experience. I have met a group of inspiring entrepreneurs trying to solve big problems, helped them on their journeys, and developed my own mentoring skills along the way,” said IDEAS mentor Dheera Ananthakrishnan SM ’90, EMBA ’23. “The PKG network is an incredible resource, a reinforcing loop, giving back so much more than it gets — I’m so proud to be a part of it. I look forward to seeing the impact of IDEAS teams as they continue on their journey, and I am excited to mentor and learn with the MIT PKG Center in the future.”

Top teams recognized with over $60K in awards

The 2025 IDEAS Challenge culminated with the announcement of this year’s winners. Teams were evaluated by a panel of volunteer expert judges representing a wide range of industries, who assessed each proposal for innovation, feasibility, and potential for social impact; eight teams were selected to receive awards and additional mentorship that will jump-start their social innovations.

The showcase was not just a celebration of projects — it was a testament to the value of systems-driven design, collaborative problem-solving, and sustained engagement with community partners.

The 2025 grantees include:

  • $20,000 award: SamWise is an AI-powered oral assessment tool that provides personalized education for incarcerated students, overcoming outdated testing methods. By leveraging large language models, it enhances learning engagement and accessibility.
  • $15,000 award: China Dispossession Watch is developing a digital platform to document and raise awareness of grassroots anti-displacement activism and provide empirical analysis of forced expropriation and demolition in China.
  • $10,000 award: Liberatory Computing is an educational framework that empowers African-American youth to use data science and AI to address systemic inequities.
  • $7,500 Award: POLLEN is a purpose-driven card game and engagement framework designed to spark transnational conversations around climate change and disaster preparedness.
  • $5,000 Award: Helix Carbon is transforming carbon conversion by producing electrolyzers with enhanced system lifetimes, enabling the onsite conversion of carbon dioxide into useful chemicals at industrial facilities.
  • $2,000 Award: Forma Systems has developed a breakthrough in concrete floor design, using up to 72 percent less cement and 67 percent less steel, with the potential for significant environmental impact.
  • $2,000 Award: Precisia empowers women with real-time, data-driven insights into their hormonal health through micro-needle patch technology, allowing them to make informed decisions about their well-being.
  • $2,000 Award: BioBoost is experimenting with converting Caribbean sargassum seaweed waste into carbon-neutral energy using pyrolysis, addressing both the region's energy challenges and the environmental threat of seaweed accumulation.

Looking ahead: Supporting the next generation

As IDEAS nears its 25th anniversary, the PKG Center is launching a year-long celebration and campaign to ensure the program’s longevity and expand its reach. Christine Ortiz, the Morris Cohen Professor of Materials Science and Engineering, announced the IDEAS25 campaign during the event.

“Over the past quarter-century, close to 300 teams have launched projects through the support of IDEAS Awards, and several hundred more have entered the challenge — working on projects in over 60 countries,” Ortiz said. “IDEAS has supported student-led work that has had real-world impact across sectors and regions.”

In honor of the program’s 25th year, the PKG Center will measure the collective impact of IDEAS teams, showcase the work of alumni and partners at an Alumni Showcase this fall, and rally support to sustain the program for the next 25 years. 

“Whether you're a past team member, a mentor, a friend of IDEAS, or someone who just learned about the program tonight,” Ortiz said, “we invite you to join us. Let’s keep the momentum going together.”

A cool new way to study gravity

Tue, 05/20/2025 - 4:10pm

One of the most profound open questions in modern physics is: “Is gravity quantum?” 

The other fundamental forces — electromagnetic, weak, and strong — have all been successfully described by quantum theory, but no complete and consistent quantum theory of gravity yet exists.

“Theoretical physicists have proposed many possible scenarios, from gravity being inherently classical to fully quantum, but the debate remains unresolved because we’ve never had a clear way to test gravity’s quantum nature in the lab,” says Dongchel Shin, a PhD candidate in the MIT Department of Mechanical Engineering (MechE). “The key to answering this lies in preparing mechanical systems that are massive enough to feel gravity, yet quiet enough — quantum enough — to reveal how gravity interacts with them.”

Shin, who is also a MathWorks Fellow, researches quantum and precision metrology platforms that probe fundamental physics and are designed to pave the way for future industrial technology. He is the lead author of a new paper that demonstrates laser cooling of a centimeter-long torsional oscillator. The open-access paper, “Active laser cooling of a centimeter-scale torsional oscillator,” was recently published in the journal Optica.

Lasers have been routinely employed to cool down atomic gases since the 1980s, and have been used to cool the linear motion of nanoscale mechanical oscillators since around 2010. The new paper marks the first time this technique has been extended to torsional oscillators, which are key to a worldwide effort to study gravity using these systems.

“Torsion pendulums have been classical tools for gravity research since [Henry] Cavendish’s famous experiment in 1798. They’ve been used to measure Newton’s gravitational constant, G, test the inverse-square law, and search for new gravitational phenomena,” explains Shin.

By using lasers to remove nearly all thermal motion from atoms, in recent decades scientists have created ultracold atomic gases at micro- and nanokelvin temperatures. These systems now power the world’s most precise clocks — optical lattice clocks — with timekeeping precision so high that they would gain or lose less than a second over the age of the universe.

“Historically, these two technologies developed separately — one in gravitational physics, the other in atomic and optical physics,” says Shin. “In our work, we bring them together. By applying laser cooling techniques originally developed for atoms to a centimeter-scale torsional oscillator, we try to bridge the classical and quantum worlds. This hybrid platform enables a new class of experiments — ones that could finally let us test whether gravity needs to be described by quantum theory.”

The new paper demonstrates laser cooling of a centimeter-scale torsional oscillator from room temperature down to 10 millikelvins (a millikelvin is 1/1,000th of a kelvin) using a mirrored optical lever.

“An optical lever is a simple but powerful measurement technique: You shine a laser onto a mirror, and even a tiny tilt of the mirror causes the reflected beam to shift noticeably on a detector. This magnifies small angular motions into easily measurable signals,” explains Shin, noting that while the premise is simple, the team faced challenges in practice. “The laser beam itself can jitter slightly due to air currents, vibrations, or imperfections in the optics. These jitters can falsely appear as motion of the mirror, limiting our ability to measure true physical signals.”

To overcome this, the team used the mirrored optical lever approach, which employs a second, mirrored version of the laser beam to cancel out the unwanted jitter.

“One beam interacts with the torsional oscillator, while the other reflects off a corner-cube mirror, reversing any jitter without picking up the oscillator’s motion,” Shin says. “When the two beams are combined at the detector, the real signal from the oscillator is preserved, and the false motion from [the] laser jitter is canceled.”

This approach reduced noise by a factor of a thousand, which allowed the researchers to detect motion with extreme precision, nearly 10 times better than the oscillator’s own quantum zero-point fluctuations. “That level of sensitivity made it possible for us to cool the system down to just 10 millikelvins using laser light,” Shin says.
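
The cancellation itself is simple arithmetic: the oscillator beam carries signal plus jitter, the corner-cube beam carries reversed jitter and no signal, and summing the two leaves mostly signal. The numerical sketch below uses invented magnitudes, with a slightly imperfect reversal to mimic a roughly thousandfold noise reduction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = 1e-9 * np.sin(np.linspace(0, 20 * np.pi, n))  # true angular motion (rad)
jitter = 1e-7 * rng.standard_normal(n)                 # shared beam-pointing noise

beam_a = signal + jitter        # beam reflected off the torsional oscillator
beam_b = -0.999 * jitter        # corner-cube beam: jitter reversed, no signal

single = beam_a                 # single-beam readout: jitter swamps the signal
combined = beam_a + beam_b      # differential readout: jitter nearly cancels

for name, trace in [("single", single), ("combined", combined)]:
    print(f"{name:8s} noise/signal = {np.std(trace - signal) / np.std(signal):.2f}")
# single beam: roughly 140x noise; combined: roughly 0.14x,
# a thousandfold reduction from the common-mode cancellation
```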

Shin says this work is just the beginning. “While we’ve achieved quantum-limited precision below the zero-point motion of the oscillator, reaching the actual quantum ground state remains our next goal,” he says. “To do that, we’ll need to further strengthen the optical interaction — using an optical cavity that amplifies angular signals, or optical trapping strategies. These improvements could open the door to experiments where two such oscillators interact only through gravity, allowing us to directly test whether gravity is quantum or not.”

The paper’s other authors from the Department of Mechanical Engineering include Vivishek Sudhir, assistant professor of mechanical engineering and the Class of 1957 Career Development Professor, and PhD candidate Dylan Fife. Additional authors are Tina Heyward and Rajesh Menon of the Department of Electrical and Computer Engineering at the University of Utah. Shin and Fife are both members of Sudhir’s lab, the Quantum and Precision Measurements Group.

Shin says one thing he’s come to appreciate through this work is the breadth of the challenge the team is tackling. “Studying quantum aspects of gravity experimentally doesn’t just require deep understanding of physics — relativity, quantum mechanics — but also demands hands-on expertise in system design, nanofabrication, optics, control, and electronics,” he says.

“Having a background in mechanical engineering, which spans both the theoretical and practical aspects of physical systems, gave me the right perspective to navigate and contribute meaningfully across these diverse domains,” says Shin. “It’s been incredibly rewarding to see how this broad training can help tackle one of the most fundamental questions in science.”

How to solve a bottleneck for CO2 capture and conversion

Tue, 05/20/2025 - 9:00am

Removing carbon dioxide from the atmosphere efficiently is often seen as a crucial need for combatting climate change, but systems for removing carbon dioxide suffer from a tradeoff. Chemical compounds that efficiently remove CO₂ from the air do not easily release it once captured, and compounds that release CO₂ efficiently are not very efficient at capturing it. Optimizing one part of the cycle tends to make the other part worse.

Now, using nanoscale filtering membranes, researchers at MIT have added a simple intermediate step that facilitates both parts of the cycle. The new approach could improve the efficiency of electrochemical carbon dioxide capture and release sixfold and cut costs by at least 20 percent, they say.

The new findings are reported today in the journal ACS Energy Letters, in a paper by MIT doctoral students Simon Rufer, Tal Joseph, and Zara Aamer, and professor of mechanical engineering Kripa Varanasi.

“We need to think about scale from the get-go when it comes to carbon capture, as making a meaningful impact requires processing gigatons of CO₂,” says Varanasi. “Having this mindset helps us pinpoint critical bottlenecks and design innovative solutions with real potential for impact. That’s the driving force behind our work.”

Many carbon-capture systems work using chemicals called hydroxides, which readily combine with carbon dioxide to form carbonate. That carbonate is fed into an electrochemical cell, where the carbonate reacts with an acid to form water and release carbon dioxide. The process can take ordinary air with only about 400 parts per million of carbon dioxide and generate a stream of 100 percent pure carbon dioxide, which can then be used to make fuels or other products.

Both the capture and release steps operate in the same water-based solution, but the first step needs a solution with a high concentration of hydroxide ions, and the second step needs one high in carbonate ions. “You can see how these two steps are at odds,” says Varanasi. “These two systems are circulating the same sorbent back and forth. They’re operating on the exact same liquid. But because they need two different types of liquids to operate optimally, it’s impossible to operate both systems at their most efficient points.”

The team’s solution was to decouple the two parts of the system and introduce a third part in between. Essentially, after the hydroxide in the first step has been mostly chemically converted to carbonate, special nanofiltration membranes then separate ions in the solution based on their charge. Carbonate ions carry a charge of minus 2, while hydroxide ions carry a charge of minus 1. “The nanofiltration is able to separate these two pretty well,” Rufer says.

Once separated, the hydroxide ions are fed back to the absorption side of the system, while the carbonates are sent ahead to the electrochemical release stage. That way, both ends of the system can operate at their more efficient ranges. Varanasi explains that in the electrochemical release step, protons are being added to the carbonate to cause the conversion to carbon dioxide and water, but if hydroxide ions are also present, the protons will react with those ions instead, producing just water.
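
The stoichiometry shows why stray hydroxide is so costly in the release step: each hydroxide ion consumes a proton that could otherwise help liberate carbon dioxide. A toy mass balance, with invented quantities, illustrates the payoff of separating the two ions.

```python
# Toy mass balance for the release cell (illustrative quantities only).

def co2_released(carbonate_mol, hydroxide_mol, protons_mol):
    # Hydroxide is the stronger base, so protons neutralize it first:
    #   OH- + H+ -> H2O            (no CO2; energy wasted)
    protons_left = max(0.0, protons_mol - hydroxide_mol)
    #   CO3(2-) + 2 H+ -> CO2 + H2O
    return min(carbonate_mol, protons_left / 2)

feed = dict(carbonate_mol=1.0, protons_mol=2.0)
print("no separation:  ", co2_released(hydroxide_mol=1.0, **feed))   # 0.5 mol CO2
print("95% separation: ", co2_released(hydroxide_mol=0.05, **feed))  # 0.975 mol CO2
```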

“If you don’t separate these hydroxides and carbonates,” Rufer says, “the way the system fails is you’ll add protons to hydroxide instead of carbonate, and so you’ll just be making water rather than extracting carbon dioxide. That’s where the efficiency is lost. Using nanofiltration to prevent this was something that we aren’t aware of anyone proposing before.”

Testing showed that the nanofiltration could separate the carbonate from the hydroxide solution with about 95 percent efficiency, validating the concept under realistic conditions, Rufer says. The next step was to assess how much of an effect this would have on the overall efficiency and economics of the process. They created a techno-economic model, incorporating electrochemical efficiency, voltage, absorption rate, capital costs, nanofiltration efficiency, and other factors.

The analysis showed that present systems cost at least $600 per ton of carbon dioxide captured, while with the nanofiltration component added, that drops to about $450 a ton. What’s more, the new system is much more stable, continuing to operate at high efficiency even under variations in the ion concentrations in the solution. “In the old system without nanofiltration, you’re sort of operating on a knife’s edge,” Rufer says; if the concentration varies even slightly in one direction or the other, efficiency drops off drastically. “But with our nanofiltration system, it kind of acts as a buffer where it becomes a lot more forgiving. You have a much broader operational regime, and you can achieve significantly lower costs.”
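
A techno-economic model of this kind ultimately reduces to a cost-per-ton function of such parameters. The stripped-down sketch below is not the team’s model; every parameter value is invented, and it shows only how a higher effective efficiency pulls the cost down.

```python
# Stripped-down cost-per-ton sketch (all parameter values invented).

def cost_per_ton(energy_kwh, price_per_kwh, capital_per_ton, efficiency):
    # Charge wasted on hydroxide inflates energy use inversely with efficiency.
    return energy_kwh * price_per_kwh / efficiency + capital_per_ton

print(cost_per_ton(2500, 0.05, 350, efficiency=0.50))  # -> 600.0 ($/ton)
print(cost_per_ton(2500, 0.05, 350, efficiency=0.90))  # -> ~488.9 ($/ton)
```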

He adds that this approach could apply not only to the direct air capture systems they studied specifically, but also to point-source systems — which are attached directly to emissions sources such as power plants — or to the next stage of the process, converting captured carbon dioxide into useful products such as fuel or chemical feedstocks. Those conversion processes, he says, “are also bottlenecked in this carbonate and hydroxide tradeoff.”

In addition, this technology could lead to safer alternative chemistries for carbon capture, Varanasi says. “A lot of these absorbents can at times be toxic, or damaging to the environment. By using a system like ours, you can improve the reaction rate, so you can choose chemistries that might not have the best absorption rate initially but can be improved to enable safety.”

Varanasi adds that “the really nice thing about this is we’ve been able to do this with what’s commercially available,” and with a system that can easily be retrofitted to existing carbon-capture installations. If the costs can be further brought down to about $200 a ton, it could be viable for widespread adoption. With ongoing work, he says, “we’re confident that we’ll have something that can become economically viable” and that will ultimately produce valuable, saleable products.

Rufer notes that even today, “people are buying carbon credits at a cost of over $500 per ton. So, at this cost we’re projecting, it is already commercially viable in that there are some buyers who are willing to pay that price.” But by bringing the price down further, that should increase the number of buyers who would consider buying the credit, he says. “It’s just a question of how widespread we can make it.” Recognizing this growing market demand, Varanasi says, “Our goal is to provide industry scalable, cost-effective, and reliable technologies and systems that enable them to directly meet their decarbonization targets.”

The research was supported by Shell International Exploration and Production Inc. through the MIT Energy Initiative and by the U.S. National Science Foundation, and made use of the facilities at MIT.nano.

Technique rapidly measures cells’ density, reflecting health and developmental state

Tue, 05/20/2025 - 5:00am

Measuring the density of a cell can reveal a great deal about the cell’s state. As cells proliferate, differentiate, or undergo cell death, they may gain or lose water and other molecules, which is revealed by changes in density.

Tracking these tiny changes in cells’ physical state is difficult to do at a large scale, especially with single-cell resolution, but a team of MIT researchers has now found a way to measure cell density quickly and accurately — measuring up to 30,000 cells in a single hour.

The researchers also showed that density changes could be used to make valuable predictions, including whether immune cells such as T cells have become activated to kill tumors, or whether tumor cells are susceptible to a specific drug.

“These predictions are all based on looking at very small changes in the physical properties of cells, which can tell you how they’re going to respond,” says Scott Manalis, the David H. Koch Professor of Engineering in the departments of Biological Engineering and Mechanical Engineering, and a member of the Koch Institute for Integrative Cancer Research.

Manalis is the senior author of the new study, which appears today in Nature Biomedical Engineering. The paper’s lead author is MIT Research Scientist Weida (Richard) Wu.

Measuring density

As cells enter new states, their molecular contents, including lipids, proteins, and nucleic acids, can become more or less crowded. Measuring the density of a cell offers an indirect view of this crowding.

The new density measurement technique reported in this study builds on work that Manalis’ lab has done over the past two decades on technologies for making measurements of cells and tiny particles. In 2007, his lab developed a microfluidic device known as a suspended microchannel resonator (SMR), which consists of a microchannel across a tiny silicon cantilever that vibrates at a specific frequency. As a cell passes through the channel, the frequency of the vibration changes slightly, and the magnitude of that change can be used to calculate the cell’s mass.

In 2011, the researchers adapted the technique to measure the density of cells. To achieve that, cells are sent through the device twice, suspended in two liquids of different densities. A cell’s buoyant mass (its mass as it floats in fluid) depends on its absolute mass and volume, so by measuring two different buoyant masses for a cell, its mass, volume, and density can be calculated.

That technique works well, but swapping fluids and flowing cells through each one is time-consuming, so it can only be used to measure a few hundred cells at a time.

To create a faster, more streamlined system, the researchers combined their SMR device with a fluorescent microscope, which enables measurements of cell volume. The microscope is positioned at the entrance to the resonator, and cells flow through the device while floating in a fluorescent dye that can’t be absorbed by cells. When cells pass by the microscope, the dip in the fluorescent signal can be used to determine the volume of the cell.

After that volume measurement is taken, the cells flow into the resonator, which measures their mass. This process, which allows for rapid calculation of density, can be used to measure up to 30,000 cells in an hour.

“Instead of trying to flow the cells back and forth at least twice through the cantilever to get cell density, we wanted to try to create a method to do a streamlined measurement, so the cells only need to pass through the cantilever once,” Wu says. “From a cell’s mass and volume, we can then derive its density, without compromising the throughput or the precision.”
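
The arithmetic behind both schemes is compact: a buoyant mass is the cell’s mass minus the mass of fluid it displaces, so two buoyant masses measured in fluids of known densities pin down mass and volume, while the streamlined method measures mass and volume directly. The values below are hypothetical (note that 1 picogram per femtoliter equals 1 gram per milliliter).

```python
# Density from two buoyant masses (earlier scheme) versus direct mass
# and volume (streamlined scheme). Units: picograms and femtoliters;
# all values are hypothetical.

def density_two_fluid(mb1, rho1, mb2, rho2):
    # Buoyant mass: mb = m - rho_fluid * V. Measuring in two fluids of
    # known density gives two equations; solve for m and V.
    V = (mb1 - mb2) / (rho2 - rho1)
    m = mb1 + rho1 * V
    return m / V

def density_streamlined(m, V):
    # New scheme: mass from the resonator, volume from the fluorescence dip.
    return m / V

print(density_two_fluid(mb1=32.0, rho1=1.00, mb2=12.0, rho2=1.05))  # 1.08 pg/fL
print(density_streamlined(m=432.0, V=400.0))                        # 1.08 pg/fL
```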

Evaluating T cells

The researchers used their new technique to track what happens to the density of T cells after they are activated by signaling molecules.

As T cells transition from a quiescent state to an active state, they gain new molecules, as well as water, the researchers found. From their pre-activation state to the first day of activation, the densities of the cells dropped from an average of 1.08 grams per milliliter to 1.06 grams per milliliter. This means that the cells are becoming less crowded, as they gain water faster than they gain other molecules.

“This is suggesting that cell density is very likely reflecting an increase in cellular water content as the cells transit from a quiescent, non-proliferative state to a high-growth state,” Wu says. “These data are pointing to the notion that cell density is an interesting biomarker that is changing during T-cell activation and may have functional relevance to how well the T cells could proliferate.”

Travera, a clinical-stage company co-founded by Manalis, is working on using the SMR mass measurements to predict whether individual cancer patients’ T cells will respond to drugs meant to stimulate a strong anti-tumor immune response. The company has also begun using the density measurement technique, and preliminary studies have found that using mass and density measurements together gives a much more accurate prediction than using either one alone.

“Both mass and density are revealing something about the overall fitness of the immune cells,” Manalis says.

Using physical measurements of cells to monitor their immune activation “is very exciting and may offer a new way of evaluating and measuring changes in immune cells in circulation,” says Genevieve Boland, an associate professor of surgery at Harvard Medical School and vice chair of research for the Integrated Department of Surgery at Mass General Brigham, who was not involved in the study.

“This is a complementary, but very different, method than those currently used for immune assessments in cancer and other diseases, potentially offering a novel tool to assist in clinical decision-making regarding the need for and the choice of a specific cancer therapy, allow monitoring of response to therapy, and/or in early detection of side effects of immune-based therapies,” she says.

Making predictions

Another potential application for this approach is predicting how tumor cells will respond to different types of cancer drugs. In previous work, Manalis has shown that tracking changes in cell mass after treatment can predict whether a tumor cell is undergoing drug-induced apoptosis. In the new study, he found that density could also reveal these responses.

In those experiments, the researchers treated pancreatic cancer cells with one of two different drugs — one that the cells are susceptible to, and one they are resistant to. They found that density changes after treatment accurately reflected the cells’ known responses to treatment.

“We capture something about the cells that is highly predictive within the first couple of days after they get taken out from the tumor,” Wu says. “Cell density is a rapid biomarker to predict in vivo drug response in a very timely manner.”

Manalis’ lab is now working on using measurements of cell mass and density as a way to evaluate the fitness of cells used to synthesize complex proteins such as therapeutic antibodies.

“As cells are producing these proteins, we can learn from these markers of cell fitness and metabolic state to try to make predictions about how well these cells can produce these proteins, and hopefully in the future also guide design and control strategies to even further improve the yield of these complex proteins,” Wu says.

The research was funded by the Paul G. Allen Frontiers Group, the Virginia and Daniel K. Ludwig Fund for Cancer Research, the MIT Center for Precision Cancer Medicine, the Stand up to Cancer Convergence Program, Bristol Myers Squibb, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Scientists discover potential new targets for Alzheimer’s drugs

Tue, 05/20/2025 - 5:00am

By combining information from many large datasets, MIT researchers have identified several new potential targets for treating or preventing Alzheimer’s disease.

The study revealed genes and cellular pathways that haven’t been linked to Alzheimer’s before, including one involved in DNA repair. Identifying new drug targets is critical because many of the Alzheimer’s drugs that have been developed to this point haven’t been as successful as hoped.

Working with researchers at Harvard Medical School, the team used data from humans and fruit flies to identify cellular pathways linked to neurodegeneration. This allowed them to identify additional pathways that may be contributing to the development of Alzheimer’s.

“All the evidence that we have indicates that there are many different pathways involved in the progression of Alzheimer’s. It is multifactorial, and that may be why it’s been so hard to develop effective drugs,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering and the senior author of the study. “We will need some kind of combination of treatments that hit different parts of this disease.”

Matthew Leventhal PhD ’25 is the lead author of the paper, which appears today in Nature Communications.

Alternative pathways

Over the past few decades, many studies have suggested that Alzheimer’s disease is caused by the buildup of amyloid plaques in the brain, which triggers a cascade of events that leads to neurodegeneration.

A handful of drugs have been developed to block or break down these plaques, but these drugs usually do not have a dramatic effect on disease progression. In hopes of identifying new drug targets, many scientists are now working on uncovering other mechanisms that might contribute to the development of Alzheimer’s.

“One possibility is that maybe there’s more than one cause of Alzheimer’s, and that even in a single person, there could be multiple contributing factors,” Fraenkel says. “So, even if the amyloid hypothesis is correct — and there are some people who don’t think it is — you need to know what those other factors are. And then if you can hit all the causes of the disease, you have a better chance of blocking and maybe even reversing some losses.”

To try to identify some of those other factors, Fraenkel’s lab teamed up with Mel Feany, a professor of pathology at Harvard Medical School and an expert in fruit fly genetics.

Using fruit flies as a model, Feany and others in her lab did a screen in which they knocked down nearly every conserved gene expressed in fly neurons. Then, they measured whether each of these gene knockdowns had any effect on the age at which the flies develop neurodegeneration. This allowed them to identify about 200 genes that accelerate neurodegeneration.

Some of these were already linked to neurodegeneration, including genes for the amyloid precursor protein and for proteins called presenilins, which play a role in the formation of amyloid proteins.

The researchers then analyzed this data using network algorithms that Fraenkel’s lab has been developing over the past several years. These are algorithms that can identify connections between genes that may be involved in the same cellular pathways and functions.

In this case, the aim was to try to link the genes identified in the fruit fly screen with specific processes and cellular pathways that might contribute to neurodegeneration. To do that, the researchers combined the fruit fly data with several other datasets, including genomic data from postmortem tissue of Alzheimer’s patients.
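
One standard ingredient in such network analyses is propagating evidence from the screen hits across a gene-interaction graph, so that unscreened connector genes linking several hits rise to the top. The toy example below uses off-the-shelf personalized PageRank on an invented four-gene graph purely to convey the idea; it is not the study’s algorithm.

```python
import networkx as nx

# Invented toy graph: two screen hits share a neighbor ("linker"),
# while "bystander" touches only one hit.
G = nx.Graph([("hit_A", "linker"), ("hit_B", "linker"), ("hit_A", "bystander")])

# Random walks that restart at the screen hits spread their evidence
# through the interaction graph.
scores = nx.pagerank(G, personalization={"hit_A": 0.5, "hit_B": 0.5})

for gene, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{gene:10s} {score:.3f}")
# "linker" scores well above "bystander" even though neither was itself
# a screen hit, the kind of hidden connection such algorithms surface.
```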

The first stage of their analysis revealed that many of the genes identified in the fruit fly study also decline as humans age, suggesting that they may be involved in neurodegeneration in humans.

Network analysis

In the next phase of their study, the researchers incorporated additional data relevant to Alzheimer’s disease, including eQTL (expression quantitative trait locus) data — a measure of how different gene variants affect the expression levels of certain proteins.

Using their network optimization algorithms on this data, the researchers identified pathways that link genes to their potential role in Alzheimer’s development. The team chose two of those pathways to focus on in the new study.

The first is a pathway, not previously linked to Alzheimer’s disease, related to RNA modification. The network suggested that when either of two genes in this pathway — MEPCE and HNRNPA2B1 — is missing, neurons become more vulnerable to the Tau tangles that form in the brains of Alzheimer’s patients. The researchers confirmed this effect by knocking down those genes in studies of fruit flies and in human neurons derived from induced pluripotent stem cells (IPSCs).

The second pathway reported in this study is involved in DNA damage repair. This network includes two genes called NOTCH1 and CSNK2A1, which have been linked to Alzheimer’s before, but not in the context of DNA repair. Both genes are best known for their roles in regulating cell growth.

In this study, the researchers found evidence that when these genes are missing, DNA damage builds up in cells, through two different DNA-damaging pathways. Buildup of unrepaired DNA has previously been shown to lead to neurodegeneration.

Now that these targets have been identified, the researchers hope to collaborate with other labs to help explore whether drugs that target them could improve neuron health. Fraenkel and other researchers are working on using IPSCs from Alzheimer’s patients to generate neurons that could be used to evaluate such drugs.

“The search for Alzheimer’s drugs will get dramatically accelerated when there are very good, robust experimental systems,” he says. “We’re coming to a point where a couple of really innovative systems are coming together. One is better experimental models based on IPSCs, and the other one is computational models that allow us to integrate huge amounts of data. When those two mature at the same time, which is what we’re about to see, then I think we’ll have some breakthroughs.”

The research was funded by the National Institutes of Health.

Imaging technique removes the effect of water in underwater scenes

Tue, 05/20/2025 - 12:00am

The ocean is teeming with life. But unless you get up close, much of the marine world can easily remain unseen. That’s because water itself can act as an effective cloak: Light that shines through the ocean can bend, scatter, and quickly fade as it travels through the dense medium of water and reflects off the persistent haze of ocean particles. This makes it extremely challenging to capture the true color of objects in the ocean without imaging them at close range.

Now a team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.

The researchers have dubbed the new tool “SeaSplat,” in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.

“With SeaSplat, it can model explicitly what the water is doing, and as a result it can in some ways remove the water, and produce better 3D models of an underwater scene,” says MIT graduate student Daniel Yang.

The researchers applied SeaSplat to images of the sea floor taken by divers and underwater vehicles, in various locations including the U.S. Virgin Islands. The method generated 3D “worlds” from the images that were truer and more vivid and varied in color, compared to previous methods.

The team says SeaSplat could help marine biologists monitor the health of certain ocean communities. For instance, as an underwater robot explores and takes pictures of a coral reef, SeaSplat would simultaneously process the images and render a true-color, 3D representation that scientists could then virtually “fly” through, at their own pace and along their own path, to inspect the underwater scene for signs of coral bleaching, for instance.

“Bleaching looks white from close up, but could appear blue and hazy from far away, and you might not be able to detect it,” says Yogesh Girdhar, an associate scientist at WHOI. “Coral bleaching, and different coral species, could be easier to detect with SeaSplat imagery, to get the true colors in the ocean.”

Girdhar and Yang will present a paper detailing SeaSplat at the IEEE International Conference on Robotics and Automation (ICRA). Their study co-author is John Leonard, professor of mechanical engineering at MIT.

Aquatic optics

In the ocean, the color and clarity of objects is distorted by the effects of light traveling through water. In recent years, researchers have developed color-correcting tools that aim to reproduce the true colors in the ocean. These efforts involved adapting tools that were developed originally for environments out of water, for instance to reveal the true color of features in foggy conditions. One recent work accurately reproduces true colors in the ocean, with an algorithm named “Sea-Thru,” though this method requires a huge amount of computational power, which makes its use in producing 3D scene models challenging.

In parallel, others have made advances in 3D Gaussian splatting, with tools that seamlessly stitch images of a scene together, and intelligently fill in any gaps to create a whole, 3D version of the scene. These 3D worlds enable “novel view synthesis,” meaning that someone can view the generated 3D scene, not just from the perspective of the original images, but from any angle and distance.

But 3DGS has only successfully been applied to environments out of water. Efforts to adapt 3D reconstruction to underwater imagery have been hampered, mainly by two optical underwater effects: backscatter and attenuation. Backscatter occurs when light reflects off of tiny particles in the ocean, creating a veil-like haze. Attenuation is the phenomenon by which light of certain wavelengths fades with distance. In the ocean, for instance, red objects appear to fade more than blue objects when viewed from farther away.

Out of water, the color of objects appears more or less the same regardless of the angle or distance from which they are viewed. In water, however, color can quickly change and fade depending on one’s perspective. When 3DGS methods attempt to stitch underwater images into a cohesive 3D whole, they are unable to resolve objects due to aquatic backscatter and attenuation effects that distort the color of objects at different angles.

“One dream of underwater robotic vision that we have is: Imagine if you could remove all the water in the ocean. What would you see?” Leonard says.

A model swim

In their new work, Yang and his colleagues developed a color-correcting algorithm that accounts for the optical effects of backscatter and attenuation. The algorithm determines the degree to which every pixel in an image must have been distorted by backscatter and attenuation effects, and then essentially takes away those aquatic effects, and computes what the pixel’s true color must be.

Yang then worked the color-correcting algorithm into a 3D Gaussian splatting model to create SeaSplat, which can quickly analyze underwater images of a scene and generate a true-color, 3D virtual version of the same scene that can be explored in detail from any angle and distance.
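
The correction rests on a widely used underwater image-formation model, in which the observed color is the true color dimmed by attenuation plus a haze term from backscatter; inverting that model per pixel recovers the true color. The sketch below implements the standard model with invented coefficients, and SeaSplat’s exact parameterization may differ.

```python
import numpy as np

def observed(J, z, b_att, b_back, B_inf):
    # True color J dims with distance z; backscatter adds a veiling haze.
    return J * np.exp(-b_att * z) + B_inf * (1 - np.exp(-b_back * z))

def restored(I, z, b_att, b_back, B_inf):
    # Invert: subtract the backscatter veil, then undo the attenuation.
    return (I - B_inf * (1 - np.exp(-b_back * z))) * np.exp(b_att * z)

J_true = np.array([0.80, 0.30, 0.20])   # a red object (RGB), invented
b_att  = np.array([0.60, 0.20, 0.10])   # red fades fastest underwater
b_back = np.array([0.10, 0.30, 0.40])   # blue haze builds fastest
B_inf  = np.array([0.05, 0.30, 0.50])   # veiling light is blue

I = observed(J_true, 5.0, b_att, b_back, B_inf)
print("seen through water:", I.round(3))   # dim and blue-shifted
print("water removed:     ", restored(I, 5.0, b_att, b_back, B_inf).round(3))
```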

The team applied SeaSplat to multiple underwater scenes, including images taken in the Red Sea, in the Caribbean off the coast of Curaçao, and in the Pacific Ocean near Panama. These images, which the team drew from a pre-existing dataset, represent a range of ocean locations and water conditions. The researchers also tested SeaSplat on images taken by a remote-controlled underwater robot in the U.S. Virgin Islands.

From the images of each ocean scene, SeaSplat generated a true-color 3D world that the researchers were able to virtually explore, for instance by zooming in and out of a scene and viewing features from different perspectives. Even when viewed from different angles and distances, objects in every scene retained their true color, rather than fading as they would if viewed through the actual ocean.

“Once it generates a 3D model, a scientist can just ‘swim’ through the model as though they are scuba-diving, and look at things in high detail, with real color,” Yang says.

For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, in which a vehicle tied to a ship explores and takes images that are sent up to the ship’s computer.

“This is the first approach that can very quickly build high-quality 3D models with accurate colors, underwater, and it can create them and render them fast,” Girdhar says. “That will help to quantify biodiversity, and assess the health of coral reef and other marine communities.”

This work was supported, in part, by the Investment in Science Fund at WHOI, and by the U.S. National Science Foundation.

MIT students turn vision to reality

Mon, 05/19/2025 - 4:45pm

Life is a little brighter in Kapiyo these days.

For many in this rural Kenyan town, nightfall used to signal the end to schoolwork and other family activities. Now, however, the darkness is pierced by electric lights from newly solar-powered homes. Inside, children in this off-the-grid area can study while parents extend daily activities past dusk, thanks to a project conceived by an MIT mechanical engineering student and financed by the MIT African Students Association (ASA) Impact Fund.

There are changes coming, too, in the farmlands of Kashusha in the Democratic Republic of Congo (DRC), where another ASA Impact Fund project is working with local growers to establish an energy-efficient mill for processing corn — adding value, creating jobs, and sparking new economic opportunities. Similarly, plans are underway to automate the processing of locally grown cashews in the Mtwara area of Tanzania — an Impact Fund project meant to increase the income of farmers who now send over 90 percent of their nuts abroad for processing.

Inspired by MIT students’ desire to turn promising ideas into practical solutions for people in their home countries, the ASA Impact Fund is a student-run initiative that launched during the 2023-24 academic year. Backed by an alumni board, the fund empowers students to conceive, design, and lead projects with social and economic impact in communities across Africa.

After financing three projects in its first year, the ASA Impact Fund received eight project proposals earlier this year and plans to announce its second round of two to four grants sometime this spring, says Pamela Abede, last year’s fund president. Last year’s awards totaled approximately $15,000.

The fund is an outgrowth of MIT’s African Learning Circle, a seminar open to the entire MIT community where biweekly discussions focus on ways to apply MIT’s educational resources, entrepreneurial spirit, and innovation to improve lives on the African continent.

“The Impact Fund was created,” says MIT African Students Association president Victory Yinka-Banjo, “to take this to the next level … to go from talking to execution.”

Aimed at bridging a gap between projects Learning Circle participants envision and resources available to fund them, the ASA Impact Fund “exists as an avenue to assist our members in undertaking social impact projects on the African continent,” the initiative’s website states, “thereby combining theoretical learning with practical application in alignment with MIT's motto.”

The fund’s value extends to the Cambridge campus as well, says ASA Impact Fund board member and 2021 MIT graduate Bolu Akinola.

“You can do cool projects anywhere,” says Akinola, who is originally from Nigeria and currently pursuing a master’s degree in business administration at Harvard University. “Where this is particularly catalyzing is in incentivizing folks to go back home and impact life back on the continent of Africa.”

MIT-Africa managing director Ari Jacobovits, who helped students get the fund off the ground last year, agrees.

“I think it galvanized the community, bringing people together to bridge a programmatic gap that had long felt like a missed opportunity,” Jacobovits says. “I’m always impressed by the level of service-mindedness ASA members have towards their home communities. It’s something we should all be celebrating and thinking about incorporating into our home communities, wherever they may be.”

Alumni Board president Selam Gano notes that a big part of the Impact Fund’s appeal is the close connections project applicants have with the communities they’re working with. MIT engineering major Shekina Pita, for example, is from Kapiyo, and recalls “what it was like growing up in a place with unreliable electricity,” which “would impact every aspect of my life and the lives of those that I lived around.” Pita’s personal experience and familiarity with the community informed her proposal to install solar panels on Kapiyo homes.

So far, the ASA Impact Fund has financed the installation of solar panels for five households where families had been relying on candles so their children could do homework after dark.

“A candle is 15 Kenya shillings, and I don’t always have that amount to buy candles for my children to study. I am grateful for your help,” comments one beneficiary of the Kapiyo solar project.

Pita anticipates expanding the project, 10 homes at a time, and involving some college-age residents of those homes in solar panel installation apprenticeships.

“In general, we try to balance projects where we fund some things that are very concrete solutions to a particular community’s problems — like a water project or solar energy — and projects with a longer-term view that could become an organization or a business — like a novel cashew nut processing method,” says Gano, who conducted projects in his father’s homeland of Ethiopia while an MIT student. “I think striking that balance is something I am particularly proud of. We believe that people in the community know best what they need, and it’s great to empower students from those same communities.”  

Vivian Chinoda, who received a grant from the ASA Impact Fund and was part of the African Students Association board that founded it, agrees.

“We want to address problems that can seem trivial without the lived experience of them,” says Chinoda. “For my friend and I, getting funding to go to Tanzania and drive more than 10 hours to speak to remotely located small-scale cashew farmers … made a difference. We were able to conduct market research and cross-check our hypotheses on a project idea we brainstormed in our dorm room in ways we would not have otherwise been able to access remotely.”

Similarly, Florida Mahano’s Impact Fund-financed project is benefiting from her experience growing up near farms in the DRC. Partnering with her brother, a mechanical engineer in her home community of Bukavu in eastern DRC, Mahano is developing a processing plant that will serve the needs of local farmers. Informed by market research conducted in January with about 500 farmers, consumers, and retailers, the plant will likely be operational by summer 2026, says Mahano, who has also received funding from MIT’s Priscilla King Gray (PKG) Public Service Center.

“The ASA Impact Fund was the starting point for us,” paving the way for additional support, she says. “I feel like the ASA Impact Fund was really amazing because it allowed me to bring my idea to life.”

Importantly, Chinoda notes that the Impact Fund has already had early success in fostering ties between undergraduate students and MIT alumni.

“When we sent out the application to set up the alumni board, we had a volume of respondents coming in quite quickly, and it was really encouraging to see how the alums were so willing to be present and use their skill sets and connections to build this from the ground up,” she says.

Abede, who is originally from Ghana, would like to see that enthusiasm continue — increasing alumni awareness about the fund “to get more alums involved … more alums on the board and mentoring the students.”

Mentoring is already an important aspect of the ASA Impact Fund, says Akinola. Grantees, she says, get paired with alumni to help them through the process of getting projects underway. 

“This fund could be a really good opportunity to strengthen the ties between the alumni community and current students,” Akinola says. “I think there are a lot of opportunities for funds like this to tap into the MIT alumni community. I think where there is real value is in the advisory nature — mentoring and coaching current students, helping the transfer of skills and resources.”

As more projects are proposed and funded each year, awareness of the ASA Impact Fund among MIT alumni will increase, Gano predicts.

“We’ve had just one year of grantees so far, and all of the projects they’ve conducted have been great,” he says. “I think even if we just continue functioning at this scale, if we’re able to sustain the fund, we can have a real lasting impact as students and alumni and build more and more partnerships on the continent.”

The sweet taste of a new idea

Mon, 05/19/2025 - 4:30pm

Behavioral economist Sendhil Mullainathan has never forgotten the pleasure he felt the first time he tasted a delicious, crisp yet gooey Levain cookie. He compares the experience to encountering new ideas.

“That hedonic pleasure is pretty much the same pleasure I get hearing a new idea, discovering a new way of looking at a situation, or thinking about something, getting stuck and then having a breakthrough. You get this kind of core basic reward,” says Mullainathan, the Peter de Florez Professor with dual appointments in the MIT departments of Economics and Electrical Engineering and Computer Science, and a principal investigator at the MIT Laboratory for Information and Decision Systems (LIDS).

Mullainathan’s love of new ideas, and by extension of going beyond the usual interpretation of a situation or problem by looking at it from many different angles, seems to have started very early. As a child in school, he says, the multiple-choice answers on tests all seemed to offer possibilities for being correct.

“They would say, ‘Here are three things. Which of these choices is the fourth?’ Well, I was like, ‘I don’t know.’ There are good explanations for all of them,” Mullainathan says. “While there’s a simple explanation that most people would pick, natively, I just saw things quite differently.”

Mullainathan says the way his mind works, and has always worked, is “out of phase” — that is, not in sync with how most people would readily pick the one correct answer on a test. He compares the way he thinks to “one of those videos where an army’s marching and one guy’s not in step, and everyone is thinking, what’s wrong with this guy?”

Luckily, Mullainathan says, “being out of phase is kind of helpful in research.”

And apparently it is. Mullainathan has received a MacArthur “Genius Grant,” been designated a “Young Global Leader” by the World Economic Forum, been named a “Top 100 thinker” by Foreign Policy magazine, been included in the “Smart List: 50 people who will change the world” by Wired magazine, and won the Infosys Prize, the largest monetary award in India recognizing excellence in science and research.

Another key aspect of who Mullainathan is as a researcher — his focus on financial scarcity — also dates back to his childhood. When he was about 10, just a few years after his family moved to the Los Angeles area from India, his father lost his job as an aerospace engineer because of a change in security clearance laws regarding immigrants. When his mother told him that without work, the family would have no money, he says he was incredulous.

“At first I thought, that can’t be right. It didn’t quite process,” he says. “So that was the first time I thought, there’s no floor. Anything can happen. It was the first time I really appreciated economic precarity.”

His family got by running a video store and then other small businesses, and Mullainathan made it to Cornell University, where he studied computer science, economics, and mathematics. Although he was doing a lot of math, he found himself drawn not to standard economics, but to the behavioral economics of an early pioneer in the field, Richard Thaler, who later won the Nobel Memorial Prize in Economic Sciences for his work. Behavioral economics brings the psychological, and often irrational, aspects of human behavior into the study of economic decision-making.

“It’s the non-math part of this field that’s fascinating,” says Mullainathan. “What makes it intriguing is that the math in economics isn’t working. The math is elegant, the theorems. But it’s not working because people are weird and complicated and interesting.”

Behavioral economics was so new when Mullainathan was graduating that Thaler advised him to study standard economics in graduate school and make a name for himself before concentrating on behavioral economics, “because it was so marginalized. It was considered super risky because it didn’t even fit a field,” Mullainathan says.

Unable to resist thinking about humanity’s quirks and complications, however, Mullainathan focused on behavioral economics, got his PhD at Harvard University, and says he then spent about 10 years studying people.

“I wanted to get the intuition that a good academic psychologist has about people. I was committed to understanding people,” he says.

As Mullainathan was formulating theories about why people make certain economic choices, he wanted to test these theories empirically.

In 2013, he published a paper in Science titled “Poverty Impedes Cognitive Function.” The research measured sugarcane farmers’ performance on intelligence tests in the days before their yearly harvest, when they were out of money, sometimes nearly to the point of starvation. In the controlled study, the same farmers took tests after their harvest was in and they had been paid for a successful crop — and they scored significantly higher.

Mullainathan says he is gratified that the research had far-reaching impact, and that those who make policy often take its premise into account.

“Policies as a whole are kind of hard to change,” he says, “but I do think it has created sensitivity at every level of the design process, that people realize that, for example, if I make a program for people living in economic precarity hard to sign up for, that’s really going to be a massive tax.”

To Mullainathan, the most important effect of the research was on individuals, an impact he saw in reader comments that appeared after the research was covered in The Guardian.

“Ninety percent of the people who wrote those comments said things like, ‘I was economically insecure at one point. This perfectly reflects what it felt like to be poor.’”

Such insights into the way outside influences affect personal lives could be among important advances made possible by algorithms, Mullainathan says.

“I think in the past era of science, science was done in big labs, and it was actioned into big things. I think the next age of science will be just as much about allowing individuals to rethink who they are and what their lives are like.”

Last year, Mullainathan came back to MIT (after having previously taught at MIT from 1998 to 2004) to focus on artificial intelligence and machine learning.

“I wanted to be in a place where I could have one foot in computer science and one foot in a top-notch behavioral economic department,” he says. “And really, if you just objectively said ‘what are the places that are A-plus in both,’ MIT is at the top of that list.”

While AI can automate tasks and systems, such automation of abilities humans already possess is “hard to get excited about,” he says. Computer science can be used to expand human abilities, a notion only limited by our creativity in asking questions.

“We should be asking, what capacity do you want expanded? How could we build an algorithm to help you expand that capacity? Computer science as a discipline has always been so fantastic at taking hard problems and building solutions,” he says. “If you have a capacity that you’d like to expand, that seems like a very hard computing challenge. Let’s figure out how to take that on.”

The sciences that “are very far from having hit the frontier that physics has hit,” like psychology and economics, could be on the verge of huge developments, Mullainathan says. “I fundamentally believe that the next generation of breakthroughs is going to come from the intersection of understanding of people and understanding of algorithms.”

He describes a possible use of AI in which a decision-maker, such as a judge or doctor, could have access to what their average decision would be given a particular set of circumstances. Such an average would be potentially freer of day-to-day influences — such as a bad mood, indigestion, slow traffic on the way to work, or a fight with a spouse.

Mullainathan sums the idea up as “average-you is better than you. Imagine an algorithm that made it easy to see what you would normally do. And that’s not what you’re doing in the moment. You may have a good reason to be doing something different, but asking that question is immensely helpful.”
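As a purely hypothetical illustration of the “average-you” idea — not a system described in the research — one could average a decision-maker’s own past decisions on the most similar past cases. All data and names below are invented for the sketch.

```python
import numpy as np

def average_you(past_cases, past_decisions, new_case, k=25):
    """Toy 'average-you': return the decision-maker's average decision
    on the k past cases most similar to the new one."""
    dists = np.linalg.norm(past_cases - new_case, axis=1)  # similarity by distance
    nearest = np.argsort(dists)[:k]                        # k closest past cases
    return past_decisions[nearest].mean()

# Hypothetical example: a judge's past binary decisions (1 = yes) on
# cases described by two standardized features.
rng = np.random.default_rng(0)
cases = rng.normal(size=(500, 2))
decisions = (cases[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(float)
print(average_you(cases, decisions, np.array([0.2, -0.1])))
```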

Going forward, Mullainathan will keep working toward such new ideas — because to him, they offer such a delicious reward.

Study in India shows several tactics together boost vaccination against deadly diseases

Mon, 05/19/2025 - 12:00am

Around the world, low immunization rates for children are a persistent problem. Now, an experiment conducted in India shows that an inexpensive combination of methods, including text reminders and small financial incentives, has a major impact on immunization rates.

Led by MIT economists, the research finds that a trifecta of incentives, text messages, and information provided by local residents creates a 44 percent increase in child immunizations, at low cost. Alternatively, without financial incentives but still using text messages and local information, there is a 9 percent increase in immunizations at virtually no expense — the most cost-effective increase the researchers found.

“The most effective package overall has incentives, reminders, and enlisting of community ambassadors to remind people,” says MIT economist Esther Duflo, who helped lead the research. “The cost is very low. And an even more cost-effective package is to not have incentives — you can increase immunization just from reminders through social networks. That’s basically a free lunch because you are making a more effective use of the immunization infrastructure in place. So the small cost of the program is more than compensated by the fact that the full cost of administering an immunization goes down.”
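Duflo’s “free lunch” point is simple arithmetic: the immunization infrastructure is largely a fixed cost, so a cheap program that raises the number of shots delivered lowers the cost per shot. A toy calculation, with invented numbers:

```python
# Spreading fixed infrastructure costs over more immunizations lowers
# the cost per shot. All figures are invented for illustration.
fixed_cost = 10000.0          # running the immunization camps either way
program_cost = 200.0          # inexpensive SMS-reminder program
base_shots, boosted_shots = 1000, 1090   # a 9 percent increase

print(fixed_cost / base_shots)                      # 10.00 per shot before
print((fixed_cost + program_cost) / boosted_shots)  # ~9.36 per shot after
```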

The experiment is also notable for the sophisticated new method the research team developed to combine a variety of these approaches in the experiment — and then see precisely what effects were produced by different combinations as well as their component parts.

“What is good about this is that it triangulates and links all these pieces of evidence together,” says MIT economist Abhijit Banerjee, who also helped lead the project. “In terms of our confidence in saying this is a reasonable policy recipe, that’s very important.”

A new paper detailing the results and the method, “Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization,” is being published in the journal Econometrica. Duflo and Banerjee are among 11 co-authors of the paper, along with several staff members of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL).

Duflo and Banerjee are also two of the co-founders of J-PAL, a global leader in field experiments on antipoverty programs. In 2019, they were awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, along with Michael Kremer of Harvard University.

Analyzing 75 approaches at once

About 2 million children die per year globally from vaccine-preventable diseases. As of 2016, when the current study began, only 62 percent of children in India were fully immunized against tuberculosis, measles, diphtheria, tetanus, and polio.

Prior research by Duflo and Banerjee helped establish the value of finding new ways to boost immunization rates. In one earlier study, the economists found that immunization rates for rural children in the state of Rajasthan, India, jumped from 5 percent to 39 percent when their families were offered a modest quantity of lentils as an incentive. (That finding was mentioned in their Nobel citation.) Subsequently, many other researchers have studied new methods of increasing immunization.

To conduct the current study, the research team partnered with the state government of Haryana, India, to conduct an experiment spanning more than 900 villages, from 2016 through 2019.

The researchers built the experiment around three basic ways of encouraging parents to get their children vaccinated: financial incentives, text messages, and information from local “ambassadors,” that is, well-connected residents. The team then developed a set of varying combinations of these elements. In some cases they offered larger or smaller incentives, along with different numbers of text messages and different kinds of exposure to local information.
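A few simple levers multiply quickly into many distinct policy packages. The factor levels below are hypothetical stand-ins, not the study’s actual arms, chosen only so the arithmetic matches the total described next.

```python
from itertools import product

# Hypothetical factor levels (the study's actual arms differ in detail):
incentives = ["none", "low-flat", "low-sloped", "high-flat", "high-sloped"]
reminders = ["none", "basic SMS", "SMS to wider network"]
ambassadors = ["none", "random", "trusted", "information hub", "trusted hub"]

combinations = list(product(incentives, reminders, ambassadors))
print(len(combinations))  # 5 * 3 * 5 = 75 candidate policy packages
```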

In all, the researchers wound up with 75 combinations of these elements, and they developed a new method to evaluate them all, which they call treatment variant aggregation (TVA). Essentially, the scholars developed an algorithm that uses a systematic, data-driven approach to pool variants that are effectively identical and to discard those that are ineffective. To select the best package, they also adjusted their results for the so-called “winner’s curse” of social-science studies, in which the policy option that looks best in a particular experiment tends to be one that did well partly due to random chance.
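The winner’s-curse adjustment matters because, with 75 noisy estimates, the largest one is biased upward even when nothing truly stands out. TVA’s actual correction is a formal statistical procedure; this toy simulation, with invented numbers, only shows why some correction is needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# 75 policy packages with the SAME modest true effect, each estimated
# with noise. The best-looking package overstates its true effect.
true_effect = np.full(75, 0.05)                             # all truly add 5 points
estimates = true_effect + rng.normal(scale=0.03, size=75)   # noisy trial estimates

winner = np.argmax(estimates)
print(f"naive estimate for the winner: {estimates[winner]:.3f}")  # inflated by luck
print(f"its true effect:               {true_effect[winner]:.3f}")  # 0.050
```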

All told, the scholars believe they have developed a way of evaluating many “treatments” — the individual elements, such as financial incentives — within the same experiment, rather than just trying out one concept, like distributing lentils, per every large study.

“It’s not one experiment where you compare A with B,” says Banerjee, who is also the Ford Foundation International Professor of Economics. “What we do here is evaluate a combination of things. Even in scenarios where you see no effect, there is information to be harvested. It may be that in a combination of treatments, maybe one element works well, and the others have a negative effect and the net is zero, but there is information there. So, you want to keep track of all the possibilities as you go along, although it is a mathematically difficult exercise.”

The researchers were also able to discern that differences among local populations have an impact on the effectiveness of the different elements being tested. Generally, groups with lower immunization rates will respond more to incentives to immunize.

“In a way, we are landing back where we were in [the lentil study in] Rajasthan, where low immunization rates lead to super-high effects for these incentives,” says Duflo, who is also the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics. “We replicated the result in this context.” However, she emphasizes, the new method allows scholars to acquire more information about that process more quickly.

An actionable path

The research team is hopeful that the new TVA method will gain wider adoption among scholars and lead to more experiments with multifaceted approaches, in which numerous potential solutions are evaluated simultaneously. The method could apply to antipoverty research, medical trials, and more.

Beyond that, they note, these kinds of results give governments and other organizations the ability to see how different policy options will play out, in both medical and fiscal terms.

“The reason why we did this was to be able to give the government of Haryana an actionable path, moving forward,” Duflo says.

She adds: “People before thought in order to say something with confidence, you should try just one treatment at a time,” meaning, one type of intervention at a time, such as incentives, or text messages. However, Duflo notes, “I’m very happy to say you can have more than one, and you can analyze all of them. It takes many steps, but such is life: many steps.”

In addition to Duflo and Banerjee, the co-authors of the study are Arun G. Chandrasekhar of J-PAL; Suresh Dalpath of the Government of Haryana; John Floretta of J-PAL; Matthew O. Jackson, an economist at Stanford University; Harini Kannan of J-PAL; Francine Loza of J-PAL; Anirudh Sankar of Stanford; Anna Schrimpf of J-PAL; and Maheshwor Shrestha of the World Bank.

The research was made possible through cooperation with the Haryana Department of Health and Family Welfare. 

A day in the life of MIT MBA student David Brown

Fri, 05/16/2025 - 1:25pm

“MIT Sloan was my first and only choice,” says MIT graduate student David Brown. After receiving his BS in chemical engineering at the U.S. Military Academy at West Point, Brown spent eight years as a helicopter pilot in the U.S. Army, serving as a platoon leader and troop commander.

Now in the final year of his MBA, Brown has co-founded a climate tech company — Helix Carbon — with Ariel Furst, an MIT assistant professor in the Department of Chemical Engineering, and Evan Haas MBA ’24, SM ’24. Their goal: erase the carbon footprint of tough-to-decarbonize industries like ironmaking, polyurethanes, and olefins by generating competitively priced, carbon-neutral fuels directly from waste carbon dioxide (CO2). It’s an ambitious project; they’re looking to scale the company large enough to have a gigaton-per-year impact on CO2 emissions. They have lab space off campus, and after graduation, Brown will be taking a full-time job as chief operating officer.

“What I loved about the Army was that I felt every day that the work I was doing was important or impactful in some way. I wanted that to continue, and felt the best way to have the greatest possible positive impact was to use my operational skills learned from the military to help close the gap between the lab and impact in the market.”

The following photo essay provides a snapshot of what a typical day for Brown has been like as an MIT student.
