MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; and others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
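
The limit alluded to here is most commonly expressed as the thermionic, or “Boltzmann,” limit on a transistor’s subthreshold swing: in any switch that works by pushing electrons over an energy barrier, as a silicon transistor does, the gate voltage needed to change the current tenfold cannot fall below roughly 60 millivolts at room temperature. A minimal statement of that textbook bound (standard device physics, not drawn from the paper itself) is:

$$ SS \;=\; \frac{\partial V_G}{\partial \log_{10} I_D} \;\ge\; \frac{k_B T}{q}\,\ln 10 \;\approx\; 60\ \text{mV/decade at } T = 300\ \text{K}, $$

where $V_G$ is the gate voltage, $I_D$ the drain current, $k_B$ Boltzmann’s constant, $T$ the temperature, and $q$ the electron charge. This floor on the swing in turn sets a floor on the supply voltage and on the energy consumed per switching event.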

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Bringing AI-driven protein-design tools to biologists everywhere

Fri, 04/17/2026 - 12:00am

Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments we need to get the latest, most powerful models into the hands of scientists.

The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.

The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.

“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”

Advancing biology with AI

Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.

“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”

Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before Google DeepMind released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.

“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”

After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.

“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”

Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.

“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”

OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.

PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.

“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”
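
As a rough illustration of the machine-learning-in-the-loop cycle described above (design a library in silico, rank it with a learned model, send only the top candidates to the lab, and fold the measurements back into training), here is a toy sketch. The surrogate scorer and simulated assay are hypothetical stand-ins written for this example; they do not represent OpenProtein.AI’s PoET models or platform APIs.

```python
# Toy sketch of a machine-learning-in-the-loop protein engineering cycle.
# The surrogate model and "lab assay" below are hypothetical stand-ins, not
# OpenProtein.AI's PoET models or platform APIs.
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq, n_mutations=2):
    """Propose a variant by randomly substituting a few residues."""
    chars = list(seq)
    for _ in range(n_mutations):
        pos = random.randrange(len(chars))
        chars[pos] = random.choice(AMINO_ACIDS)
    return "".join(chars)

def surrogate_score(seq, measured):
    """Stand-in for a learned model: favor sequences close to measured high-fitness ones."""
    return max(fit - 0.1 * sum(a != b for a, b in zip(seq, known))
               for known, fit in measured.items())

def lab_assay(seq):
    """Stand-in for a wet-lab fitness measurement (higher is better)."""
    target = "MKTAYIAKQR"  # made-up "ideal" sequence for this toy
    return -sum(a != b for a, b in zip(seq, target))

seed = "MKTAYIAKQA"                 # made-up starting sequence
measured = {seed: lab_assay(seed)}  # experimental data gathered so far

for round_num in range(1, 4):
    library = [mutate(seed) for _ in range(200)]                  # design a library in silico
    ranked = sorted(library, key=lambda s: surrogate_score(s, measured), reverse=True)
    for candidate in ranked[:5]:                                  # assay only the top picks
        measured[candidate] = lab_assay(candidate)
    seed = max(measured, key=measured.get)                        # start next round from the best
    print(f"round {round_num}: best measured fitness = {measured[seed]}")
```

The point of the loop is that each round of (simulated) measurements improves the ranking for the next round, so far fewer candidates ever need to be tested experimentally.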

The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.

Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.

“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”

Enabling the next generation of therapies

The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.

Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.

“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”

Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.

“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.

As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.

“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”

With navigating nematodes, scientists map out how brains implement behaviors

Thu, 04/16/2026 - 6:30pm

Animal behavior reflects a complex interplay between an animal’s brain and its sensory surroundings. Only rarely have scientists been able to discern how actions emerge from this interaction. A new open-access study in Nature Neuroscience by researchers in The Picower Institute for Learning and Memory at MIT offers one example by revealing how circuits of neurons within C. elegans nematode worms respond to odors and generate movement as they pursue smells they like and evade ones they don’t.

“Across the animal kingdom, there are just so many remarkable behaviors,” says study senior author Steven Flavell, associate professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences and an investigator of the Howard Hughes Medical Institute. “With modern neuroscience tools, we are finally gaining the ability to map their mechanistic underpinnings.”

By the end of the study, which former graduate student Talya Kramer PhD ’25 led as her doctoral thesis research, the team was able to show exactly which neurons in the worm’s brain did which of the jobs needed to sense where smells were coming from, plan turns toward or away from them, shift to reverse (like old-fashioned radio-controlled cars, C. elegans worms turn in reverse), execute the turns, and then go back to moving forward. Not only did the study reveal the sequence and each neuron’s role in it, but it also demonstrated that worms are more skillful and intentional in these actions than perhaps they’ve received credit for. And finally, the study demonstrated that it’s all coordinated by the neuromodulatory chemical tyramine.

“One thing that really excited us about this study is that we were able to see what a sensorimotor arc looks like at the scale of a whole nervous system: all the bits and pieces, from responses to the sensory cue until the behavioral response is implemented,” Flavell says.

Seeing the sequence

To do the research, Kramer put worms in dishes with spots of odors they’d either want to navigate toward or slither away from. With the lab’s custom microscopes and software, she and her co-authors could track how the worms navigated and all the electrical activity of more than 100 neurons in their brains during those behaviors (the worms only have 302 neurons total).

The surveillance enabled Kramer, Flavell, and their colleagues to observe that the worms weren’t just ambling randomly until they happened to get where they’d want to be. Instead, the worms would execute turns with advantageous timing and at well-chosen angles. The worms seemed to know what they were doing as they navigated along the gradients of the odors.

Inside their heads, patterns of electrical activity among a cohort of 10 neurons (indicated by flashing green light tied to the flux of calcium ions in the cells) revealed the sequence of neural activation that enabled the worms to execute these sensible sensory-guided motions: forward, then into reverse, then into the turn, and then back to forward. Particular neurons guided each of these steps, including detecting the odors, planning the turn, switching into reverse, and then executing the turns.

A couple of neurons stood out as key gears in the sequence. A neuron called SAA proved pivotal for integrating odor detection with planning movement, as its activity predicted the direction of the eventual turn. Several neurons were flexible enough to show different activity patterns depending on factors such as where the odors were and whether the worm was moving forward or in reverse.

And if the neurons are the gears that turn and shift in this sequence, the neuromodulator tyramine (the worm analog of norepinephrine) is the signal that shifts them. After the worms started moving in reverse, tyramine from the neuron RIM enabled other neurons in the sequence to change their activity appropriately to execute the turns. In several experiments the scientists knocked out RIM tyramine and saw that the navigation behaviors and the sequence of neural activity largely fell apart.

“The neuromodulator tyramine plays a central role in organizing these sequential brain activity patterns,” Flavell says.

In addition to Flavell and Kramer, the paper’s other authors are Flossie Wan, Sara Pugliese, Adam Atanas, Sreeparna Pradhan, Alex Hiser, Lillie Godinez, Jinyue Luo, Eric Bueno, and Thomas Felt.

A MathWorks Science Fellowship, the National Institutes of Health, the National Science Foundation, The McKnight Foundation, The Alfred P. Sloan Foundation, the Freedom Together Foundation, and HHMI provided funding to support the work.

Understanding community effects of Asian immigrants’ US housing purchases

Thu, 04/16/2026 - 6:00pm

Asian immigrants are both the fastest-growing and highest-earning immigrant ethnic group in the United States, facts that have caught the attention of many economists interested in how these groups — whether investors or residents — impact housing prices, K-12 education, and other important aspects of community life.

A new study by economists at MIT and the University of Cincinnati delves into this trend, focusing on the potential mechanisms at work behind the correlation between rising home prices and subsequent improvements in education at the county level. Their findings suggest that home prices rise not simply due to increased demand, but because the new neighbors have a positive influence on the quality of K-12 education, which in turn increases desirability.

The study focuses on 2008 to 2019, a period that saw a relative spike in US immigration from six Asian countries in particular — China, India, Japan, Korea, the Philippines, and Vietnam. Among this group, the economists focused specifically on those who arrived on non-permanent visas for study or work — a cohort that represents a distinct and growing channel of new immigrant inflow, and is often pre-selected by universities and employers.

“We’re looking at a window when the influx of Asian immigrants has a particularly strong preference for education, and who themselves were also highly educated,” says Eunjee Kwon, the West Shell, Jr. Assistant Professor of Real Estate in the Department of Finance at the University of Cincinnati, a co-author on the study published in the May issue of the Journal of Urban Economics. “This period also marks a notable shift in the socioeconomic profile of Asian immigrants to the U.S., with this cohort arriving with higher levels of education and income relative to earlier waves of Asian immigrants and, in many cases, relative to the native-born population.”

While county data is not granular down to the neighborhood or even municipality level, the researchers found that 30 to 40 percent of the rise in the value of homes purchased in areas where Asian immigrant buyers have school-age children correlates with improved quality of education, as indicated by the average rise in standardized test scores of all children in the county.

“Maybe some Asian buyers are pure investors, but many of them become residents who buy homes for themselves and their families, and transform the neighborhoods,” says co-author Siqi Zheng, the Samuel Tak Lee Professor of Urban and Real Estate Sustainability at the MIT Center for Real Estate and the Department of Urban Studies and Planning. “We show that this is not negligible; it is a big component. We can attribute at least one-third of housing price increases to improved education.”

Amanda Ang, a postdoc in the Department of Economics at Aalto University in Helsinki, is the third co-author of the paper. The work is somewhat personal for the scientists, who undertook the study without funding in order to see for themselves what impact this particular group of immigrants had on neighborhoods.

“We wanted to understand what this group contributes to the communities where they settle,” Kwon says. “We found that their presence benefits children of all other backgrounds, too.”

Ang, Kwon, and Zheng use an econometric approach called an instrumental variable to home in on a causal relationship, not just an association. To help ensure accuracy, they carefully omitted counties that have long been home to large Asian communities — such as San Francisco, Los Angeles, and New York — in order to capture the impact of recent immigrants on other counties.
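
For readers unfamiliar with the method, the sketch below illustrates the instrumental-variable (two-stage least squares) logic on synthetic data: an instrument that shifts immigrant inflows but plausibly affects prices only through those inflows lets the estimate skirt unobserved confounders. Everything in the sketch is hypothetical; it is not the paper’s actual specification or data.

```python
# Minimal two-stage least squares (2SLS) sketch on synthetic data, illustrating the
# instrumental-variable logic described above. All variable names, the instrument,
# and the numbers are hypothetical; they do not reproduce the paper's specification.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Unobserved confounder (e.g., a county's general desirability) that drives both
# immigrant inflows and home-price growth, which plain OLS cannot separate out.
confounder = rng.normal(size=n)

# Hypothetical instrument: predicted inflow based on historical settlement patterns,
# assumed to move home prices only through the actual inflow.
instrument = rng.normal(size=n)

# Endogenous regressor: actual inflow share, partly driven by the confounder.
inflow = 0.8 * instrument + 0.5 * confounder + rng.normal(size=n)

# Outcome: home-price growth, with a true causal effect of 0.3 per unit of inflow.
price_growth = 0.3 * inflow + 0.7 * confounder + rng.normal(size=n)

def ols_coefs(y, X):
    """Least-squares coefficients for y ~ X, where X already includes a constant."""
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

const = np.ones(n)

# Naive OLS is biased upward because inflow is correlated with the confounder.
naive = ols_coefs(price_growth, np.column_stack([const, inflow]))[1]

# Stage 1: project the endogenous regressor onto the instrument.
b0, b1 = ols_coefs(inflow, np.column_stack([const, instrument]))
inflow_hat = b0 + b1 * instrument

# Stage 2: regress the outcome on the fitted values to recover the causal effect.
iv = ols_coefs(price_growth, np.column_stack([const, inflow_hat]))[1]

print(f"naive OLS: {naive:.2f}   2SLS: {iv:.2f}   true effect: 0.30")
```

With these made-up numbers, the naive estimate comes out well above the true 0.30, while the two-stage estimate lands close to it.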

“I believe that this will be a highly influential paper because it asks a very important question and uses credible statistical methods to try to disentangle selection effects from treatment effects, using a subtle analysis accounting for displacement,” says Matthew Kahn, the Provost Professor of Economics and Spatial Sciences at the University of Southern California, who was not involved with the research. 

“What really interests me about this paper is that it suggests that there can be a positive spillover effect: that U.S. areas that attract Asian immigrants also gain from improved school quality,” Kahn says. “It’s the first I’ve seen undertaken on this very important hypothesis, which certainly merits additional future research, possibly using school-level and individual-level data.”

Light-activated gel could impact wearables, soft robotics, and more

Thu, 04/16/2026 - 5:10pm

Consider the chief difference between living systems and electronics: The former is generally soft and squishy, while the latter is hard and rigid. Now, in work that could impact human-machine interfaces, biocompatible devices, soft robotics, and more, MIT engineers and colleagues have developed a soft, flexible gel that dramatically changes its conductivity upon the application of light.

Enter the growing field of ionotronics, which involves transferring data through ions, or charged atoms and molecules. Electronics does the same, with electrons. But while the latter is well established, ionotronics is still being developed, with one huge exception: living systems. The cells in our bodies communicate with a variety of ions, from potassium to sodium.

Ionotronics, in turn, can provide a bridge between electronics and biological tissues. Potential applications range from soft wearable technology to human-machine interfaces.

“We’ve found a mechanism to dynamically control local ion population in a soft material,” says Thomas J. Wallin, the John F. Elliott Career Development Professor in MIT’s Department of Materials Science and Engineering and leader of the work. “That could allow a system that is self-adaptive to environmental stimuli, in this case light.” In other words, the system could automatically change in response to changes in light, which could allow complex signal processing in soft materials.

An open-access paper about the work was published online recently in Nature Communications.

A growing field

Although others have developed ionotronic materials with high conductivities that allow the quick movement of ions, those conductivities cannot be controlled. “What we’re doing is using light to switch a soft material from insulating to something that is 400 times more conductive,” says Xu Liu, first author of the paper and former MIT postdoc in materials science and engineering who is now an incoming assistant professor at King’s College London.

Key to the work is a class of materials known as photo-ion generators (PIGs). These can become some 1,000 times more conductive upon the application of light. The MIT team optimized a way to incorporate a PIG into polyurethane rubber by first dissolving a PIG powder into a solvent, and then using a swelling method to get it into the rubber.

Much potential

In the material reported in the current work, the change in conductivity is irreversible. But Liu is confident that future versions could switch back and forth between insulating and conducting states.

She notes that the current material was developed using only one kind of PIG, one polymer (the polyurethane rubber), and one solvent, but there are many other options for all three. So there is great potential for creating even better light-responsive soft materials.

Liu also notes the potential for developing soft materials that respond to other environmental stimuli, such as heat or magnetism. “We’re inspired to do more work in this field by changing the driving force from light to other forms of environmental stimuli,” she says.

“Our work has the potential to lead to the creation of a subfield that we call soft photo-ionotronics,” Liu continues. “We are also very excited about the opportunities from our work to create new soft machines impacting soft wearable technology, human-machine interfaces, robotics, biomedicine, and other fields.”

Additional authors of the paper are Steven M. Adelmund, Shahriar Safaee, and Wenyang Pan of Reality Labs at Meta. 

3 Questions: A running shoe that adapts to the runner

Thu, 04/16/2026 - 11:25am

Granular convection takes place everywhere: candy in a box, sand on the beach, foam in a cushion. Often referred to as the “Brazil nut effect,” granular convection occurs when solid, independent, irregularly shaped particles reorder themselves following agitation. One might intuitively expect the larger pieces to fall to the bottom, but it is size, not density, that determines where the pieces end up, and the larger ones rise to the top.

In the world of competitive running, elite athletes have their footwear individually designed for needs such as foot shape and pressure points. Comfortable and supportive footwear can assist optimal performance. However, most footwear is standardized and doesn’t offer personalized performance.

MIT associate professor of architecture Skylar Tibbits, founder and co-director of the Self-Assembly Lab in the MIT School of Architecture and Planning, along with various MIT colleagues, has been developing tests surrounding the phenomenon of granular convection within the midsole — or middle layer, between the outsole (bottom) and insole (top) — of running shoes to create a shoe that evolves over time to provide an individualized product. As we approach the running of the 130th Boston Marathon — one of the world's most prominent displays of footwear supporting athletes — Tibbits answers three questions about bead-based technologies as applied to running shoes. 

Q. What are the advantages of an adaptive midsole over the current bead-based midsole technology?

A. Currently, the standard midsoles in running shoes are static. They aren’t customized to the shape of our foot or the force we deliver when running or walking. They also don’t change or improve over time as we run in them. Some products — blue jeans, baseball gloves, and hats, for example — get more comfortable as you wear them. We were exploring how this could be taken even further with a running shoe so that you would have the cushion, support, and stiffness where you need it and have it improve these features as you use it so that, over time, the actual performance of the shoe gets better. It’s not a personalized fit; it’s a performance-driven adaptation.

There are three advantages to this technology. The first is that customization is not only for elite athletes. Most elite athletes are already getting gear personalized for their specific needs by their sponsoring brands. Now, customized gear can be available for everyone. Second, customized gear currently does not adapt to an athlete’s performance. But you need your footwear to evolve because your needs as a runner evolve. You need to get the comfort, cushioning, and protection to support your performance.

A third advantage is the manufacturability of this type of shoe. Custom shoes are now made in a factory for the specifications of a single athlete. That doesn’t scale. You can’t produce a manufacturing process where every single person’s shoe is going to be custom-made for them. We’ve shown that every shoe can be the same and mass produced, but, over time, the shoe will evolve to your personal needs. That is a way to get customization without having to change the manufacturing process.

Q: Why the interest in granular systems, and granular convection in particular?

A: We’ve worked on reversible construction techniques with granular jamming over the years, which is at the opposite end of the spectrum. Granular convection promotes the movement of particles; the more they are mixed, the more they separate. Our vision was looking at footwear that adapts with you over time. We thought we could use granular convection as a mechanism for the footwear to evolve.

We put in particles with different stiffnesses, different material properties, and different sizes, so that over time, we know the softer particles, which are the larger particles, will rise to the top, and the stiffer particles that are smaller will sink to the bottom, towards the outsole. We designed how these particles moved based on the vibration and the impact of walking and running.

We also designed the container. We had three different particle sizes; we conducted tests to try to dial in the right number of steps for it to evolve over the course of about 20,000 steps, roughly the length of a marathon. We could either speed up or slow down that process.

Q. Are there future applications of customization for granular convection? If so, where do you see your research going next?

A: Any products that need cushioning systems that improve over time would benefit from this technology. With custom packaging, you have molded foam that fits around a product — a flat-screen television, for example — that is tossed out after it has been shipped from factory to distributor to customer. I worked with a furniture company that wrapped blankets around chairs for transport, but there were still some chairs that sustained damage. Maybe we could develop a blanket or some kind of material that adapts over the journey so that it creates just the right amount of cushion for the shape and property of that product and, once it’s delivered, its shape could be “released” and then reused. How can we reset this product in a timely manner so it can be used again?

Wheelchairs are another product where we would want seat cushions that can adapt to how a person sits, the force distribution, and the environment in which they are being used, such as a sidewalk or a gravel path. We considered this as it relates to footwear. You might want to reset your shoes because you’re going to be running road races on a given day and trail races another day. How can we empty and refill the midsole with different particles so it can adapt again? More importantly, how can we upgrade or change our shoes without throwing them away? This is exciting future work for us to explore.

A regulatory loophole could delay ozone recovery by years

Thu, 04/16/2026 - 5:00am

Often hailed as the most successful international environmental agreement of all time, the 1987 Montreal Protocol continues to successfully phase out the global production of chemicals that were creating a growing hole in the ozone layer, causing skin cancer and other adverse health effects.

MIT-led studies have since shown the subsequent reduction in ozone-depleting substances is helping stratospheric ozone to recover. (It could return to 1980 levels by as early as 2040, according to some estimates.) But the Montreal Protocol made an exception in its rules for the use of ozone-depleting substances as feedstocks in the production of other materials. That’s because it was thought that only a small amount — just 0.5 percent — of the ozone-depleting substances used for this purpose would leak into the atmosphere.

In recent years, however, scientists have observed more ozone-depleting substances in the atmosphere than expected, and have increased their estimates of leakage from feedstocks.

Now an international group of scientists, including researchers from MIT, has calculated the impact of different feedstock leakage rates on the ozone’s fragile recovery. They find that the higher leakage rates, if not addressed by the Montreal Protocol, could delay ozone recovery by about seven years.

“We’ve realized in the last few years that these feedstock chemicals are a bug in the system,” says author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, who was part of the original research team that linked the chemicals to the ozone hole. “Production of ozone-depleting substances has pretty much ceased around the world except for this one use, which is when you have a chemical you convert into something else.”

The paper, which was published in Nature Communications today, is the first to comprehensively quantify the impact of leaked feedstocks, which are currently used to make plastics and nonstick chemicals. They are also used to make substitute chemicals for the ones regulated under the Montreal Protocol. The researchers say it shows the importance of curbing use and preventing leakage of such feedstocks, especially as the production of their end products, like plastic, is projected to grow.

“We’ve gotten to the point where, if we want the protocol to be as successful in the future as it has been in the past, the parties really need to think about how to tighten up the emissions of these industrial processes,” says first author Stefan Reimann of the Swiss Federal Laboratories for Materials Science and Technology.

“To me, it’s only fair, because so many other things have already been completely discontinued. So why should this exemption exist if it’s going to be damaging?” says Solomon.

Joining Reimann on the paper are his colleagues Martin K. Vollmer and Lukas Emmenegger; Luke Western and Susan Solomon of the MIT Center for Sustainability Science and Strategy and the Department of Earth, Atmospheric and Planetary Sciences; David Sherry of Nolan-Sherry and Associates Ltd; Megan Lickley of Georgetown University; Lambert Kuijpers of the A/gent Consultancy b.v.; Stephen A. Montzka and John Daniel of the National Oceanic and Atmospheric Administration; Matthew Rigby of the University of Bristol; Guus J.M. Velders of Utrecht University; Qing Liang of the NASA Goddard Space Flight Center; and Sunyoung Park of Kyungpook National University.

Repairing the ozone

In 1985, scientists discovered a growing hole in the ozone layer over Antarctica that was allowing more of the sun’s harmful ultraviolet radiation to reach Earth’s surface. The following year, researchers including Solomon traveled to Antarctica and discovered the cause of the ozone deterioration: a class of chemicals called chlorofluorocarbons, or CFCs, which were then used in refrigeration, air conditioning, and aerosols.

The revelations led to the Montreal Protocol, an international treaty involving 197 countries and the European Union restricting the use of CFCs. The subsequent decision to exempt the use of ozone-depleting substances for use as feedstocks was based partially on industry estimates of how much of their feedstocks leaked.

“It was thought that the emissions of these substances as a feedstock were minor compared to things like refrigerants and foams,” Western says. “It was also believed that leakage from these sources was minor — around half a percent of what went in — because people would essentially be leaking their profits if their feedstocks were released into the atmosphere.”

Unfortunately, some of those assumptions are no longer true. Western and Reimann are part of the Advanced Global Atmospheric Gases Experiment (AGAGE), a global monitoring network co-founded by Ronald Prinn, MIT’s TEPCO Professor of Atmospheric Science. AGAGE monitors emissions of ozone-depleting substances around the world, and in recent years researchers have revised their estimates of feedstock leakage upwards, to about 3.6 percent. For some chemicals, the number was even higher.

In the new paper, the researchers estimated a 3.6 percent feedstock leakage as the baseline for most chemicals. They compared that with a scenario where 0.5 percent of feedstocks are leaked from 2025 onward and a scenario with zero feedstock-related emissions. The researchers also looked at production trends between 2014 and 2024 to project how much of each specific ozone-depleting chemical would be used as feedstock between 2025 and 2100.
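
As a back-of-the-envelope illustration of how the leakage scenarios translate into emissions, the short sketch below applies each leakage rate to a single, hypothetical flat production figure; the study’s actual projections vary by chemical and over time.

```python
# Back-of-the-envelope comparison of the three feedstock-leakage scenarios described
# above. The flat 1,000-kilotonne-per-year production figure is hypothetical; the study
# projects production chemical by chemical and year by year.
years = range(2025, 2101)
production_kt_per_year = 1_000  # hypothetical annual feedstock use, in kilotonnes

scenarios = {
    "baseline leakage (3.6%)": 0.036,
    "original assumption (0.5%)": 0.005,
    "zero leakage": 0.0,
}

for label, leak_rate in scenarios.items():
    cumulative_kt = production_kt_per_year * leak_rate * len(years)
    print(f"{label:>27}: roughly {cumulative_kt:,.0f} kt emitted over 2025-2100")
```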

The analysis shows that until 2050, total ozone-depleting chemical emissions decrease in all scenarios as rising feedstock emissions are offset by declining uses enforced by the Montreal Protocol. In the scenario with continued 3.6 percent leakage, however, emissions level off around 2045, and total emissions only decrease by 50 percent overall by 2100.

The researchers then evaluated the impact of feedstock-related emissions on stratospheric ozone depletion. In the scenario where feedstock leakage is 0.5 percent, the ozone returns to its 1980 status by 2066. In the scenario with zero feedstock leakage, the ozone reclaims its 1980 health in 2065. But in the baseline scenario, the recovery is delayed about seven years, to 2073.

“This paper sends an important message that these emissions are too high and we have to find a way to reduce them,” Reimann says. “Either that means no longer using these substances as feedstocks, swapping out chemicals, or reducing the leakage emissions when they are used.”

A global response

Solomon is confident industries will be able to adjust to the latest findings.

“There are a lot of innovators in the chemical industry,” Solomon says. “They make new chemicals and improve chemicals for a living. It’s true they can perhaps get too entrenched with certain chemicals, but it doesn’t happen that often. Actually, they’re usually quite willing to consider alternatives. There are thousands of other chemicals that could be used instead, so why not switch? That’s been the attitude.”

Solomon says the fact that AGAGE can detect the impact of feedstock emissions is a testament to the progress the world has made in reducing emissions from other sources up to this point. She believes raising awareness of the feedstock problem is the first step.

“This isn’t the first time that the AGAGE Network has made measurements that have allowed the world to see we need to do a little better here or there,” Western says. “Often, it’s just a mistake. Sometimes all it takes is making people more aware of these things to tighten up some processes.”

Members of the Montreal Protocol meet every year. In those meetings, they split into working groups around different topics. Feedstock emissions are already one of those topics, so participants will review the evidence together. Typically, they release a statement about mitigation strategies if needed.

“We wanted to raise the warning flag that something is wrong here,” Reimann says. “We could reduce the period of ozone depletion by years. It might not sound like a long time, but if you could count the skin cancer cases you’d avoid in that time, it would seem quite significant.”

The work was supported, in part, by the U.S. National Science Foundation, the U.S. National Aeronautics and Space Administration (NASA), the Swiss Federal Office for the Environment, the VoLo Foundation, the United Kingdom Natural Environment Research Council, and the Korea Meteorological Administration Research and Development Program.

Youth may increase vulnerability to a carcinogen found in contaminated water and some drugs

Thu, 04/16/2026 - 12:00am

A new study from MIT suggests that a carcinogen that has been found in medications and in drinking water contaminated by chemical plants may have a much more severe impact on children than adults.

In a study of mice, the researchers found that juveniles exposed to drinking water containing this compound, known as NDMA, showed dramatically higher rates of DNA damage and cancer than adults.

The findings may help to explain an epidemiological association between childhood cancer and prenatal exposure to NDMA in people living near a contaminated site in Wilmington, Massachusetts, the researchers say. The study also suggests that it is critical to evaluate the impact of potential carcinogens across all ages.

“We really hope that groups that do safety testing will change their paradigm and start looking at young animals, so that we can catch potential carcinogens before people are exposed,” says Bevin Engelward, an MIT professor of biological engineering. “As a solution to cancer, cancer prevention is clearly much better than cancer treatment, so we hope we can spot dangerous chemicals before people are exposed, and therefore prevent extensive cancer risk.”

MIT postdoc Lindsay Volk is the lead author of the paper. Engelward is the senior author of the study, which appears in Nature Communications.

From DNA damage to cancer

NDMA (N-Nitrosodimethylamine) can be generated as a byproduct of many industrial chemical processes, and it is also found in cigarette smoke and processed meats. In recent years, NDMA has been detected in some formulations of the drugs valsartan, ranitidine, and metformin. It was also found in drinking water in Wilmington, Massachusetts, in the 1990s, as a result of contamination from the Olin Chemical site.

In 2021, a study from the Massachusetts Department of Health suggested a link between that water contamination and an elevated incidence of childhood cancer in Wilmington. Between 1990 and 2000, 22 Wilmington children were diagnosed with cancer. The contaminated wells were closed in 2003.

Also in 2021, Engelward and others at MIT published a study on the mechanism of how NDMA can lead to cancer. In the new Nature Communications paper, Engelward and her colleagues set out to see if they could determine why the compound appears to affect children more than adults.

Most studies that evaluate potential carcinogens are performed in mice that are at least 4 to 6 weeks old, and often older. For this study, the researchers studied two groups of mice — one 3 weeks old (juvenile), and one 6 months old (adult). Each group was given drinking water with low levels of NDMA, about five parts per million, for two weeks.

Inside the body, NDMA is metabolized by a liver enzyme called CYP2E1. This produces toxic metabolites that can damage DNA by adding a small chemical group known as a methyl group to DNA bases, creating lesions known as adducts.

When the researchers examined the livers of the mice, they found that juveniles and adults showed similar levels of DNA adducts. However, there were dramatic differences in what happened after that initial damage. In juvenile mice, DNA adducts led to significant accumulation of double-stranded DNA breaks, which occur when cells try to repair adducts. These breaks produce mutations that eventually lead to the development of liver cancer.

In the adult mice, the researchers saw essentially no double-stranded breaks and significantly fewer mutations compared to juveniles. Furthermore, the livers did not develop severe pathology, including tumors, even though they experienced the same initial level of DNA adducts.

“The initial structural changes to the DNA had very different consequences depending on age,” Engelward says. “The double-stranded breaks were exclusively observed in the young.”

Further experiments revealed that these differences stem from differences in the rates of cell proliferation. Cells in the juvenile liver divide rapidly, giving them more opportunity to turn DNA adducts into mutations, while cells of the adult liver rarely divide.

“This really emphasizes the overall problem that we’re trying to highlight in the paper,” Volk says. “With toxicological studies, oftentimes the standard is to use fully grown mice. At that point, they’re already slowing down cell division, so if we are testing the harmful effects of NDMA in adult mice, then we’re completely missing how vulnerable particular groups are, such as younger animals.”

While most of these effects were seen in the liver, because that is where NDMA is metabolized, a few of the mice developed other types of cancer, including lung cancer and lymphoma.

Adult risk is not zero

For most of these studies, the researchers used mice that had two of their DNA repair systems knocked out. This speeds up the mutation process, allowing the researchers to see the effects of NDMA exposure more easily, without needing to study a large population of mice.

However, a small study in mice with normal DNA repair showed that juveniles experienced NDMA-induced double-strand breaks, regenerative proliferation, and large-scale mutations that were completely absent in adults. This occurs because the fast-growing juveniles possess highly active DNA replication machinery that encounters the DNA adducts before the cell has time to repair them.

The researchers also found that if they treated adult mice with thyroid hormone, which stimulates proliferation of liver cells, those cells began accumulating mutations as quickly as the juvenile liver cells. Previous work done in the Engelward laboratory has shown that inflammation can also stimulate cell proliferation-driven vulnerability to DNA damage, so the findings of this study suggest that anything that causes liver inflammation could make the adult liver more vulnerable to damage caused by agents such as NDMA.

“We certainly don’t want to say that adults are completely resistant to NDMA,” Volk says. “Everything impacts your susceptibility to a carcinogen, whether that’s your genetics, your age, your diet, and so forth. In adults, if they have a viral infection, or a high fat diet, or chronic binge alcohol drinking, this can impact proliferation within the liver and potentially make them susceptible to NDMA.”

The researchers are now investigating how a high-fat diet might influence cancer development in mice that also have exposure to NDMA.

This collaborative effort across several MIT labs was funded by the National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program, an NIEHS Core Center Grant, a National Institutes of Health Training Grant, and the Anonymous Fund for Climate Action. 

MIT study reveals a new role for cell membranes

Thu, 04/16/2026 - 12:00am

Cells are enveloped by a lipid membrane that gives them structure and provides a barrier between the cell and its environment. However, evidence has recently emerged suggesting that these membranes do more than simply provide protection — they also influence the behavior of the protein receptors embedded in them.

A new study from MIT chemists adds further support to that idea. The researchers found that changing the composition of the cell membrane can alter the function of a membrane receptor that promotes proliferation.

Epidermal growth factor receptor (EGFR) can be locked into an overactive state when the cell membrane has a higher than normal concentration of negatively charged lipids, the researchers found. This may help to explain why cancer cells with high levels of those lipids enter a highly proliferative state that allows them to divide uncontrollably.

“The longstanding dogma of what a membrane does is that it’s just a scaffold, an organizational structure. However, there have been increasing observations that suggest that maybe these membrane lipids are actually playing a role in receptor function,” says Gabriela Schlau-Cohen, the Robert T. Haslam and Bradley Dewey Professor of Chemistry at MIT and the senior author of the study.

The findings open up the possibility of discovering new ways to treat tumors by neutralizing the negative charge, which might turn down EGFR signaling, she adds.

Shwetha Srinivasan PhD ’22 is the lead author of the paper, which appears in the journal eLife. Other authors include former MIT postdocs Xingcheng Lin and Raju Regmi, Xuyan Chen PhD ’25, and Bin Zhang, an associate professor of chemistry at MIT.

Receptor dynamics

The EGF receptor, which is found on cells that line body surfaces and organs, is one of many receptors that help control cell growth. Some types of cancer, especially lung cancer and glioblastoma, overexpress the EGF receptor, which can lead to uncontrolled growth.

Like most receptor proteins, EGFR spans the entire cell membrane. Until recently, it has been challenging to study how signals are conveyed across the entire receptor, because of the difficulty of creating membranes that have proteins going all the way through them and then studying both ends of those proteins.

To make it easier to study these signaling processes, Schlau-Cohen’s lab uses nanodiscs, a special type of self-assembling membrane that mimics the cell membrane. When making these discs, the researchers can embed receptors in them, allowing the team to study the function of the full-length receptor.

Using a technique called single molecule FRET (fluorescence resonance energy transfer), the researchers can study how the shape of the receptor changes under different conditions. Single molecule FRET allows them to measure the distance between different parts of the protein by labeling them with fluorescent tags and then measuring how efficiently energy is transferred between the tags.
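
The distance readout described here rests on the standard Förster relationship, a piece of textbook photophysics rather than anything specific to this paper: the efficiency of energy transfer between the two tags falls off steeply with their separation,

$$ E = \frac{1}{1 + (r / R_0)^6}, $$

where $r$ is the distance between the fluorescent tags and $R_0$ is the Förster radius, the separation at which half the energy is transferred (typically a few nanometers). Because of the sixth-power dependence, even small conformational changes in the receptor produce large, easily measured changes in $E$.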

In previous work, Schlau-Cohen and Zhang used single molecule FRET and molecular dynamics simulations to reveal what happens when EGFR binds to EGF. They found that this binding causes the transmembrane section of the receptor to change shape, and that shape-shift triggers the section of the receptor that extends inside the cell to activate cellular machinery that stimulates growth.

Stuck in an overactive state

In the new study, the researchers used a similar approach to investigate how altering the composition of the membrane affects the function of the receptor. First, they explored how elevated levels of negatively charged lipids would affect the cell membrane and EGFR function.

Normally, about 15 percent of the cell membrane is made up of negatively charged lipids. The researchers found that membranes with negatively charged lipids in the range of 15 to 30 percent behaved normally, but if that level reached 60 percent, then the EGFR receptor would become locked into an active state.

In that state, the pro-growth signaling pathway is turned on all the time, even when no EGF is bound to the receptor. Many cancer cells show increased levels of these lipids, and this mechanism could help to explain why those cells are able to grow unchecked, Schlau-Cohen says.

“If the membrane has high levels of negatively charged lipids, then it’s always in that open conformation. It doesn’t matter if ligand is bound or unbound,” she says. “It’s always in the conformation that’s telling the cell to grow, not just when EGF binds.”

The researchers also used this system to explore the role of cholesterol in EGFR function. When the researchers created nanodiscs with elevated cholesterol levels, they found that the membranes became more rigid, and this rigidity suppressed EGFR signaling.

The research was funded by the National Institutes of Health and MIT’s Department of Chemistry.

Waves hit different on other planets

Thu, 04/16/2026 - 12:00am

On a calm day, a light breeze might barely ripple the surface of a lake on Earth. But on Saturn’s largest moon Titan, a similar mild wind would kick up 10-foot-tall waves.

This otherworldly behavior is one prediction from a new wave model developed by scientists at MIT. The model is the first to capture the full dynamics of waves and what it takes to whip them up under different planetary conditions.

In a study published in the Journal of Geophysical Research: Planets, the MIT team introduces the model, which they’ve aptly coined “PlanetWaves.” They apply the model to predict how waves behave on planetary bodies that might host liquid lakes and oceans, including Titan, ancient Mars, and three planets beyond the solar system.

The model predicts that a gentle wind would be enough to stir up huge waves on Titan, where lakes are filled with light liquid hydrocarbons. In contrast, it would take hurricane-force winds to barely move the surface of a lake on the exoplanet 55 Cancri e, which is thought to be a lava world covered in hot, dense liquid rock.

“On Earth, we get accustomed to certain wave dynamics,” says study author Andrew Ashton, associate scientist at the Woods Hole Oceanographic Institution (WHOI) and faculty member of the MIT-WHOI Joint Program. “But with this model, we can see how waves behave on planets with different liquids, atmospheres, and gravity, which can kind of challenge our intuition.”

The team is particularly keen to understand how waves form on Titan. The large moon is the only planetary body in the solar system besides Earth that is known to currently host liquid lakes.

“Anywhere there’s a liquid surface with wind moving over it, there’s potential to make waves,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “For Titan, the tantalizing thing is that we don’t have any direct observation of what these lakes look like. So we don’t know for sure what kind of waves might exist there. Now this model gives us an idea.”

If humans were one day to send a probe to Titan’s lakes, the team’s new model could inform the design of wave-resilient spacecraft.

“You would want to build something that can withstand the energy of the waves,” says lead author Una Schneck, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So it’s important to know what kind of waves these instruments would be up against.”

The study’s co-authors include Charlene Detelich and Alexander Hayes of Cornell University and Milan Curcic of the University of Miami.

“The first puff”

When wind blows over water, it creates waves that can be strong enough to carve out coastlines and redistribute sediment brought to the coast by rivers. Through this process, waves can be a significant force in shaping a landscape over time. Schneck and her colleagues, who study landscape evolution on Earth and other planets, wondered how waves might behave on other worlds where gravity, atmospheric conditions, and liquid compositions can be very different from what is found on Earth.

“There have been attempts in the past to predict how gravity will affect waves on other planets,” Schneck says. “But they don’t quantify other factors such as the composition of the liquid that is making waves. That was the big leap with this project.”

She and her colleagues developed a full wave model that takes into account not just a planet’s gravity, but also properties of its surface liquid, such as its density, viscosity, and surface tension, or how resistant a liquid is to rippling. The team also incorporated the effect of a planet’s atmospheric pressure. With this model, they aimed to predict how a planet’s liquid surface would evolve in response to winds of a given speed.

“Imagine a completely still lake,” Ashton offers. “We’re trying to figure out the first puff that will make those first little tiny ripples, on up to a full ocean wave.”
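One classical piece of that “first puff” threshold is the minimum phase speed of capillary-gravity waves, which is set by gravity, surface tension, and liquid density: wind much slower than this speed cannot sustain even the smallest ripples. The short Python sketch below evaluates that textbook formula for water on Earth; it is only an illustration of how liquid properties enter the problem, not the PlanetWaves model itself, and the function name and any non-Earth inputs are hypothetical.

    # Minimum phase speed of capillary-gravity waves, c_min = (4 g sigma / rho)^(1/4).
    # A rough floor for the wind needed to raise the first ripples on a still surface.
    def min_ripple_speed(gravity_m_s2, surface_tension_n_m, liquid_density_kg_m3):
        return (4.0 * gravity_m_s2 * surface_tension_n_m / liquid_density_kg_m3) ** 0.25

    # Earth: liquid water at roughly 20 C
    print(min_ripple_speed(9.81, 0.072, 1000.0))   # ~0.23 m/s

    # For another world, plug in that body's gravity and its liquid's properties;
    # lower gravity and a lighter liquid push this threshold down, a denser liquid pushes it up.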

Making waves

The team first tested their new model against wave data from Earth, using measurements collected by buoys across Lake Superior over 20 years. They found that the model, which took into account Earth’s gravity, the composition of the liquid (water), and atmospheric conditions, accurately predicted the wind speeds needed to generate waves across the lake and how high the waves grew for a given wind strength.

The researchers then applied the model to predict how waves would behave on other planetary bodies that are known to host liquid on their surface. They looked first to Titan, where NASA’s Cassini mission previously captured radar images of lake formations, which scientists suspect are currently filled with liquid methane and ethane. The team used the new model to calculate the moon’s wave dynamics given its gravity, atmospheric pressure, and liquid composition.

They found that on Titan, it’s surprisingly easy to make waves. The relatively light liquid, combined with the moon’s low gravity and dense atmosphere, means that even a gentle wind can stir up huge waves.

“It kind of looks like tall waves moving in slow motion,” Schneck says. “If you were standing on the shore of this lake, you might feel only a soft breeze but you would see these enormous waves flowing toward you, which is not what we would expect on Earth.”

The researchers also considered wave activity on ancient Mars. The Red Planet hosts many impact basins that may have once been filled with water, before the planet’s atmosphere dissipated and the water evaporated away. One of those basins is Jezero Crater, which is currently being explored by NASA’s Perseverance rover. With the new model, the team showed that as Mars’ atmosphere gradually disappeared, reducing its pressure over time, it would have required stronger winds to make the same waves.

Beyond the solar system, the researchers applied the model to three different exoplanets. The first, LHS1140b, is a “cool super-Earth,” meaning that it is colder and larger than Earth. The planet hosts liquid water, though because it is so large, its gravity is stronger. The model showed that the same wind would generate much smaller water waves on the super-Earth than on Earth, owing to the difference in gravity.

The team also considered Kepler 1649b, a Venus-like planet with gravity similar to Earth’s and lakes of sulfuric acid, which is about twice as dense as water. Under these conditions, the researchers found, it would take much stronger winds than on Earth to make even a ripple on the exo-Venus.

This effect is even more pronounced for the third planet, 55 Cancri e — a lava world with both higher gravity than Earth and a much denser, more viscous surface liquid. Scientists suspect that the planet hosts oceans of liquefied rock. In this environment, the model predicts that winds that would be hurricane-force on Earth, about 80 miles per hour, would generate only small waves a few centimeters high on the lava world.

Aside from illuminating new ways that waves can behave on other planets, Perron hopes the model will answer longstanding questions of planetary landscape formation.

“Unlike on Earth where there is often a delta where a river meets the coast, on Titan there are very few things that look like deltas, even though there are plenty of rivers and coasts. Could waves be responsible for this?” Perron wonders. “These are the kinds of mysteries that this model will help us solve.”

This work was supported, in part, by NASA and the National Science Foundation.

Geothermal energy turns red hot

Wed, 04/15/2026 - 7:30pm

Drill deep and drill differently. That’s what’s needed to exploit the nearly bottomless promise of geothermal energy in the United States and around the globe, according to participants at the 2026 Spring Symposium, titled “Next-generation geothermal energy for firm power.” 

Sponsored by the MIT Energy Initiative (MITEI), the March 4 event drew 120 people, including MIT faculty and students, investors, and representatives from startups, multinational energy companies, and zero-carbon advocacy groups.

“The time feels right to pull together good policy, great corporate partners, and the research and technological innovations … to make significant advances in the widespread utilization of this incredible resource,” said Karen Knutson, the vice president for government affairs at MIT, in welcoming attendees.

Technology from the oil and gas industry helped usher in a first wave of geothermal energy. But chewing vertical holes through rocks in traditional ways can’t deliver on the full potential of this resource. And the real treasure — geologic formations radiating heat at 374 degrees Celsius and above — lies kilometers beneath Earth’s surface, far beyond the reach of most conventional drilling rigs.

Panelists explored the many innovations in accessing and circulating subsurface heat, as well as digging to unprecedented depths through extremely challenging geological conditions, discussing advanced drilling technologies, materials, and subsurface imaging.

This work is needed urgently, as demand for firm (always-on) power skyrockets in response to the electrification of industry and rise of data centers, said Pablo Dueñas‑Martínez, a MITEI research scientist. “We cannot get through this only with solar and wind; we need dense, deployable energy like geothermal.”

From “minuscule” to “almost inexhaustible” energy

In her opening remarks, Carolyn Ruppel, MITEI’s deputy director of science and technology, noted that despite decades of successful projects in places like the United States, Kenya, Iceland, Indonesia, and Turkey, geothermal still contributes only a “minuscule” share of global electricity. “The tremendous heat beneath our feet remains largely untouched,” she said.

Citing MIT’s milestone 2006 study “The Future of Geothermal Energy,” keynote speaker John McLennan, a professor at the University of Utah and co–principal investigator of the U.S. Department of Energy’s Utah FORGE enhanced geothermal systems (EGS) field laboratory, reminded attendees that the continental crust holds enough accessible heat to supply power for generations. “For practical purposes, it’s almost inexhaustible,” he said.

The question now, he said, is how to access that resource economically and responsibly.

At the Utah FORGE test site, McLennan has been part of a team investigating one method — adapting the oil and gas industry’s drilling and reservoir engineering expertise for hot, relatively impermeable rocks.

The project has drilled multiple deep wells into crystalline granitic rock, including a pair of wells that have been hydraulically stimulated and connected. In a recent circulation test, cold water was pumped down one well, flowed through fractures, and returned hot through the other.

“On a commercial basis … this hot water would be converted to electricity at the surface,” McLennan said. “This has now been demonstrated at Utah FORGE.”

The basic physics, in other words, work. The harder problems now are cost, repeatability, and scale.

Geothermal on the grid

Several panels highlighted the fact that next-generation geothermal is already beginning to deliver firm power.

At Lightning Dock, New Mexico, geothermal company Zanskar used a probabilistic modeling framework that simulated thousands of possible subsurface configurations to identify where to drill a new production well at an underperforming geothermal field. By thermal power delivered, the resulting well is now “the most-productive pumped geothermal well in the country,” said Joel Edwards, Zanskar’s co-founder and chief technology officer — powering the entire 15 megawatt (MW) Lightning Dock plant from a single well.

This data-driven approach enables the company to find and develop new resources faster and more cheaply than traditional methods, said Edwards.
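Zanskar’s framework itself is not detailed here, but the general idea of scoring drill targets across many sampled subsurface realizations can be sketched in a few lines. Everything in the example below (the candidate sites, priors, and thresholds) is hypothetical and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000   # number of sampled subsurface realizations per candidate site

    # Hypothetical candidate well sites: (mean reservoir temperature in C, uncertainty)
    sites = {"site_A": (180.0, 20.0), "site_B": (210.0, 40.0)}

    # Uncertain well flow rate (kg/s), drawn from an assumed lognormal prior
    flow = rng.lognormal(mean=3.0, sigma=0.4, size=n)

    for name, (t_mean, t_std) in sites.items():
        temp = rng.normal(t_mean, t_std, n)
        # Thermal power ~ flow * specific heat of water * usable temperature drop
        power_mw = flow * 4186.0 * np.clip(temp - 70.0, 0.0, None) / 1e6
        print(name, round(power_mw.mean(), 1), "MWth expected,",
              round(float(np.mean(power_mw >= 15.0)), 2), "chance of >= 15 MWth")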

José Bona, the director of next-generation geothermal at Turboden, explained how his company’s technology uses specialized turbines to circulate organic fluids that conserve heat better than water, and then convert that heat efficiently into electrical power. This closed-cycle technology can utilize low- to medium-temperature heat sources. Turboden is supplying its technology both to the Lightning Dock geothermal facility in New Mexico and to Fervo Energy’s Cape Station in southwest Utah, an EGS project that will begin delivering 100 MW of baseload, clean electricity to the grid this year, aiming for 500 MW by 2028.

In Geretsried, Germany, Eavor has developed its own proprietary closed-loop system by creating a kind of underground radiator.

“We drilled to about 4.5 kilometers vertical depth, completed six horizontal multilateral pairs, and we delivered the first power to the grid in December,” said Christian Besoiu, the team lead of technology development at Eavor. The project will ultimately be capable of supplying 8.2 MW of electricity to the 32,000 households in the Bavarian town of Geretsried and 64 MW of thermal energy to the district in which the town lies, prioritizing heat when needed.

Beyond oil and gas technology

Early geothermal exploration typically targeted preexisting faults using vertical wells left by oil and gas drilling. Today, companies are experimenting with rock fracturing at multiple subsurface levels and creating heat reservoirs in previously untenable formations by using propping materials.

“Instead of vertical wells, we’re going to horizontal wells, we’re going to cased wells, we’re introducing proppants [solid materials that hold open hydraulically fractured rock] … we do dozens of stages with these designs,” said Koenraad Beckers, the geothermal engineering lead at ResFrac. This shale-style approach has already yielded much higher flow rates and more-reliable performance than earlier EGS.

Some current geothermal wells manage to achieve depths close to 15,000 feet using the oil and gas industry’s polycrystalline diamond compact drill bits, which can bore through hard rock like granite at more than 100 feet per hour. But these bits and the rigs that drive them are no match for conditions six or more kilometers down — and it is at those depths that the heat on hand begins to make an overwhelming economic case for geothermal.

“If we go to around 300 to 350 degrees, your power potential increases 10 times,” said Lev Ring, CEO of Sage Geosystems. “At that point, with reasonable CAPEX [capital expenditure] assumptions, levelized cost of electricity [a metric for comparing the cost of electricity across different generation technologies] is around 4 cents, and geothermal becomes cheaper than any other alternative.”
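The levelized-cost figure Ring cites comes from a standard calculation: discounted lifetime costs divided by discounted lifetime generation. A minimal sketch follows, using entirely hypothetical plant numbers rather than anything presented at the symposium.

    # Levelized cost of electricity: discounted lifetime cost / discounted lifetime energy
    def lcoe_usd_per_mwh(capex, annual_opex, annual_mwh, discount_rate, years):
        costs = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
        energy = sum(annual_mwh / (1 + discount_rate) ** t for t in range(1, years + 1))
        return costs / energy

    # Hypothetical 50 MW plant, 90 percent capacity factor, 30-year life
    annual_mwh = 50 * 8760 * 0.90
    print(lcoe_usd_per_mwh(200e6, 5e6, annual_mwh, 0.07, 30))   # ~54 $/MWh, i.e. ~5.4 cents/kWh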

But “at 10 kilometers down … the largest land rigs in existence today cannot handle it,” Ring added. “We need alternatives — new materials, new ways to handle pressure, maybe even welding on the rig … a whole space that has not been addressed yet.”

One panel, featuring Quaise Energy, an MIT spinout with MITEI roots, spotlighted just how radically drilling might change. Co-founder Matt Houde described the company’s millimeter-wave drilling approach, which uses high-frequency electromagnetic waves derived from fusion research to vaporize rock instead of grinding it, as with conventional drilling. In a recent Texas field test, the team drilled 100 meters of hard basement rock in about a month, and is now planning kilometer-scale trials aimed at reaching superhot rock temperatures around 400 C, where each well could deliver many times the power of today’s geothermal projects.

Innovations for deep drilling

Moderating a panel on “MIT innovations for next-generation geothermal,” Andrew Inglis, the venture builder in residence with MIT Proto Ventures, whose position is sponsored by the U.S. Department of Energy GEODE program, framed the Institute’s role in getting such hard-tech ideas out of the lab and into the field. “The way MIT thinks about tech development, uniquely from other universities, can play a very singular role in geothermal commercial liftoff,” he said.

Materials researchers on that panel illustrated the point. Matěj Peč, an associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences, outlined work to build sensors that survive up to 900 C so that rock deformation and fracturing can be studied at supercritical conditions. Michael Short, the Class of 1941 Professor in the Department of Nuclear Science and Engineering, and C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering, respectively described coatings and alloys designed to resist corrosion, fouling, and cracking in extreme environments. In response to audience questions after their talks, Tasan made an important point, highlighting how academics need input from industry to understand the real-world problems (e.g., corrosion of pipes by geofluids) that require engineering solutions.

Other researchers are rethinking how to detect geothermal resources: Wanju Yuan, a research scientist with the Geological Survey of Canada at Natural Resources Canada, is using satellite imagery and thermal infrared sensing to screen vast regions for subtle hot spots and structures, processing thousands of images to identify promising sites in just a few months of work. “It’s a very efficient way to screen potential areas before more expensive exploration, thus reducing exploration and drilling risks,” he said.

Policy as backdrop, not center stage

Policy loomed in the background of many discussions — from bipartisan support for geothermal exploration and tax incentives to issues of regulation and permitting.

For Ruppel, that was by design.

“We wanted this meeting to showcase what’s technically possible and what’s already happening on the ground,” she said. “The policy world is starting to pay attention. Our job is to make sure that when that spotlight turns our way, next-generation geothermal is ready.”

MITEI’s Spring Symposium was followed by a gathering of geothermal entrepreneurs, investors, and energy industry experts co-hosted by MITEI and the Clean Air Task Force. “GeoTech Summit: Accelerating geothermal technology, projects, and deal flow” explored the financing challenges and opportunities of geothermal energy today.

MIT faculty, alumni receive 2025-26 American Physical Society honors

Wed, 04/15/2026 - 2:50pm

The American Physical Society (APS) recently honored two MIT faculty members — professors Yoel Fink PhD ’00 and Mehran Kardar PhD ’83 — as well as six alumni with prizes and awards for their contributions to physics and academic leadership.

In addition, several MIT faculty members — Professor Jorn Dunkel, Professor Yen-Jie Lee PhD ’11, Associate Professor Mingda Li PhD ’15, and Associate Professor Julien Tailleur — as well as 12 additional alumni were named APS Fellows.

Yoel Fink PhD ’00, the Danae and Vasilis (1961) Salapatas Professor in the Department of Materials Science and Engineering, received the Andrei Sakharov Prize “for defending the academic freedom and human rights of scientists working in the U.S.”

The prize, named for physicist and human rights advocate Andrei Sakharov, recognizes scientists whose leadership and impact advance the principles of intellectual freedom and human dignity. Fink’s research focuses on “computing fabrics” — fibers and textiles that sense, communicate, store, and process information. By embedding functionality at the fiber level, fabrics become computing systems that can infer human activity and context while keeping the traditional qualities of garments. These textiles enable noninvasive monitoring of physiological and health conditions, with applications ranging from fetal and maternal health to human performance analytics, injury prevention in challenging environments, and defense.

Mehran Kardar PhD ’83, the Francis Friedman Professor of Physics, received the Lars Onsager Prize “for ground-breaking contributions to statistical physics, including the Kardar-Parisi-Zhang equation, Casimir forces, active matter, and aspects of biological physics.”

Kardar’s research focuses on how complex behavior emerges from simple interactions in systems both in and far from equilibrium, including stable ones like a still pond and rapidly changing ones such as growing surfaces. The Kardar-Parisi-Zhang equation, which he helped develop, provides a unifying framework for understanding how randomness and fluctuations shape evolving phenomena, from fluids and interfaces to biological and quantum systems. His work has also advanced the theoretical understanding of disordered materials, soft matter such as polymers and gels, and fluctuation-induced forces — including Casimir forces arising from quantum and thermal effects. More recently, he has applied these ideas to active matter — systems of self-driven units — and biological systems, helping reveal patterns in living and evolving systems.

Alumni receiving awards

Joel Butler PhD ’75 was presented the W.K.H. Panofsky Prize in Experimental Particle Physics “for wide-ranging scientific, technical, and strategic contributions to particle physics, particularly exceptional leadership in fixed-target quark flavor experiments at Fermilab and collider physics at the Large Hadron Collider.”

Anthony Duncan PhD ’75 is the recipient of the Abraham Pais Prize for History of Physics “for research on the history of quantum physics between 1900 and 1927 that culminated in 'Constructing Quantum Mechanics,' an exemplary work that uses primary sources masterfully and employs scaffold and arch metaphors to describe developments in the quantum revolution.”

Laura A. Lopez ’04 was presented the Edward A. Bouchet Award “for pioneering contributions to X-ray astronomy, including foundational studies of supernova remnants, compact objects, and stellar feedback in galaxies, and for transformative leadership in advancing equity and inclusion in physics through innovative mentorship programs, national advocacy, and unwavering support for students from historically marginalized communities.”

Zhiquan Sun PhD ’25 is the recipient of the J.J. and Noriko Sakurai Dissertation Award in Theoretical Particle Physics “for applying effective field theory to advance our understanding of QCD [quantum chromodynamics], including establishing a new formalism to study heavy quark fragmentation, determining how confinement affects energy correlators, and revealing an overlooked complexity of the axion solution to the strong CP [charge conjugation symmetry and parity symmetry] problem.”

Charles B. Thorn III ’68 received the Dannie Heineman Prize for Mathematical Physics for “fundamental contributions to elementary particle physics, primarily the theory of strong interactions and the development of string theory.”

Christina Wang ’19 received the Mitsuyoshi Tanaka Dissertation Award in Experimental Particle Physics “for pioneering a novel technique using CMS [Compact Muon Solenoid] muon chambers to search for weakly-coupled sub-GeV [giga-electronvolt] mass dark matter using long-lived particle searches, and for groundbreaking work in quantum sensing to enable new probes of dark matter.”

APS Fellows

Several MIT faculty were elected 2025 APS Fellows:

Jorn Dunkel, MathWorks Professor of Mathematics, is the recipient of the Division of Statistical and Nonlinear Physics Fellowship “for pioneering contributions to statistical, nonlinear, and biological physics, notably in understanding pattern formation in soft matter and biology, cell positioning in tissues, and turbulence in active media.”

Yen-Jie Lee PhD '11, professor of physics, received the Division of Nuclear Physics Fellowship “for pioneering measurements of jet quenching, medium response and heavy-quark diffusion in the quark-gluon plasma, and for using electron-positron collisions as an innovative control to understand collectivity in small collision systems.”

Mingda Li PhD '15, associate professor of nuclear science and engineering, is the recipient of the Topical Group on Data Science Fellowship “for pioneering the integration of artificial intelligence with scattering and spectroscopy, enabling breakthroughs in phonons, topological states, optical and time-resolved spectra, and data-driven discovery for quantum and energy applications.”

Julien Tailleur, associate professor of physics, is the recipient of the Division of Soft Matter Fellowship “for foundational theoretical work on motility-induced phase separation and emergent collective behavior in scalar active matter.”

The following additional MIT alumni were also honored as APS Fellows:

Andrew Cross SM ’05, PhD ’08 (EECS), Division of Quantum Information Fellowship 

Kevin D. Dorfman SM '01, PhD '02 (ChemE), Division of Polymer Physics Fellowship

Geoffroy Hautier PhD '11 (DMSE), Division of Computational Physics Fellowship

Douglas J. Jerolmack PhD '06 (EAPS), Division of Statistical and Nonlinear Physics Fellowship

Brian Lantz '92, PhD '99 (Physics), Division of Gravitational Physics Fellowship

Valerio Lucarini SM '03 (EAPS), Topical Group on Physics of Climate Fellowship

Giles Novak '81 (Physics), Division of Astrophysics Fellowship

Steve Presse PhD '08 (Physics), Division of Biological Physics Fellowship

Jonathan Rothstein PhD '01 (MechE), Division of Fluid Dynamics Fellowship

Gray Rybka PhD '07 (Physics), Division of Particles and Fields Fellowship

Sarah Sheldon '08, PhD '13 (Physics, NSE), Forum on Industrial and Applied Physics Fellowship

Lian Shen ScD '01 (MechE), Division of Fluid Dynamics Fellowship

Multitasking quantum sensors can measure several properties at once

Wed, 04/15/2026 - 12:00am

A special class of sensors leverages quantum properties to measure tiny signals at levels that would be impossible using classical sensors alone. Such quantum sensors are currently being used to study the inner workings of cells and the outer depths of our universe.

Particularly promising are solid-state quantum sensors, which can operate at room temperature. Unfortunately, most solid-state quantum sensors today only measure one physical quantity at a time — such as the magnetic field, temperature, or strain in a material. Trying to measure both the magnetic field and temperature of a material at the same time causes their signals to get mixed up and measurements to become unreliable.

Now, MIT researchers have created a way to simultaneously measure multiple physical quantities with a solid-state quantum sensor. They achieved this by exploiting entanglement, where particles become correlated into a single quantum state. In a new paper, the team demonstrated its approach in a commonly used quantum sensor at room temperature, measuring the amplitude, frequency, and phase of a microwave field in a single measurement. They also showed the approach works better than sequentially measuring each property or using traditional sensors.

The researchers say the approach could enable quantum sensors that can deepen our understanding of the behavior of atoms and electrons inside materials and living systems like cancer cells.

“Quantum multiparameter estimation has been mostly theoretical to date,” says co-lead author of the paper Takuya Isogawa, a graduate student in nuclear science and engineering. “There have been very few experiments that actually demonstrate it, and that work focused on photons. We wanted to demonstrate multiparameter estimation in a more application-oriented setup: a solid-state quantum sensor in use today.”

Joining Isogawa on the paper are co-lead authors Guoqing Wang PhD ’23 and MIT PhD candidate Boning Li. The other authors on the paper are former MIT visiting students Zhiyao Hu and Ayumi Kanamoto; University of Tokyo PhD candidate Shunsuke Nishimura; Chinese University of Hong Kong Professor Haidong Yuan; and Paola Cappellaro, MIT’s Ford Professor of Engineering, a professor of nuclear science and engineering and of physics, and a member of the Research Laboratory of Electronics.

Quantum effects for measurement

Quantum sensors exploit quantum effects like entanglement, spin states, and superposition to measure changes in magnetic fields, electric fields, gravity, acceleration, and more. As such, they can be used to measure the activity of single molecules in ways that are useful for understanding biology and space, like tracking the activity of metabolites or enzymes inside cells.

One particularly useful sensor in biology leverages what’s known as nitrogen-vacancy (NV) centers in diamonds, a defect where a carbon atom in the diamond’s crystal lattice is replaced by a nitrogen atom, and a neighboring lattice site is missing, or vacant. The defect hosts an electronic spin whose transition frequencies can be read out optically. The NV center’s spin state is extremely sensitive to external effects, such as magnetic fields and temperature, which can shift the spin state in ways that can be measured at extremely high resolution.

Unfortunately, different external effects change the energy resonances of the spin in similar ways, making it difficult to measure multiple effects at once. The result is that most solid-state quantum sensor applications measure a single physical quantity at one time.

“If you can only measure one quantity at a time, you have to repeat experiments to measure quantities one by one,” Isogawa says. “That takes more time, which means less sensitivity. It also makes experiments more susceptible to errors.”

For their experiment, the researchers used NV centers inside a 5-square-millimeter diamond. They pointed a laser into the diamond and studied its fluorescence to make their measurements, a common approach for such sensors. To study the electronic spin of the NV center, they used a microwave antenna. To study the spin of the nitrogen atom, they used a radio-frequency field.

“We used those two spins as two qubits,” Isogawa says, referring to the building blocks of quantum computing systems. “If you have only one qubit, you can only measure one outcome: basically, 0 or 1. It’s the probability that it spins up or down. Think of it like a coin toss, with the probability of getting heads or tails. With two qubits, we increased the parameters that we could extract.”

The system worked because the spins of the sensor qubit and auxiliary qubit were entangled, a quantum property in which the state of one particle depends on that of another. With one qubit, you get a binary outcome. With two, you get four possible outcomes, whose probabilities carry three independent parameters.

The two qubits allowed researchers to measure those three quantities simultaneously using a technique known as the Bell state measurement.
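The parameter counting works because the four Bell-basis outcome probabilities must sum to one, leaving three independent numbers per readout. The numpy sketch below illustrates that bookkeeping with a generic two-qubit state; it is not the NV-center pulse sequence used in the experiment, and the random state is purely hypothetical.

    import numpy as np

    # Computational basis |00>, |01>, |10>, |11>
    e = np.eye(4)

    # The four Bell states form an entangled measurement basis
    bell = [
        (e[0] + e[3]) / np.sqrt(2),   # |Phi+>
        (e[0] - e[3]) / np.sqrt(2),   # |Phi->
        (e[1] + e[2]) / np.sqrt(2),   # |Psi+>
        (e[1] - e[2]) / np.sqrt(2),   # |Psi->
    ]

    # A generic (hypothetical) two-qubit sensor state after interacting with a field
    rng = np.random.default_rng(7)
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)

    # Projecting onto the Bell basis gives four probabilities that sum to one,
    # so a single measurement carries three independent parameters
    probs = np.array([abs(np.vdot(b, psi)) ** 2 for b in bell])
    print(probs, probs.sum())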

Other researchers had used the Bell state measurement at extremely low temperatures before, but the MIT researchers developed a new technique to perform the measurement at room temperature. That technique was first proposed by Wang, who was previously a graduate student in Professor Cappellaro’s lab.

The researchers used the approach to simultaneously measure the amplitude, detuning, and phase of a microwave magnetic field. The researchers also say the approach could be used to measure electric fields, temperature, pressure, and strain.

“Measuring these parameters simultaneously can help us explore spin waves in materials, which is an important topic in condensed matter physics,” Isogawa says. “NV center sensors have extremely high spatial resolution and versatility. It can measure a lot of different physical quantities.”

More practical quantum sensing

The researchers say this work is an important step toward using solid-state quantum sensors to more fully characterize systems in biomedical research and materials characterization. That’s because multiparameter estimation had never been achieved in realistic settings or in widely used quantum sensors.

“What makes the NV center quantum sensors so special is they can operate at room temperature,” Isogawa says. “It’s very suitable for biological measurements or condensed matter physics experiments.”

The researchers note that their sensor didn’t measure each quantity at the highest possible precision; in future work, they plan to explore whether their approach can achieve higher precision for each parameter.

They also plan to explore how their approach works for characterizing heterogeneous materials.

“In an extremely uniform environment, you could use many different classical and quantum sensors and measure each physical quantity at the same time,” Isogawa says. “But if the physical quantities change at different locations, you need high spatial sensors, and you need a sensor that can measure multiple physical quantities. This approach has major advantages in such situations.”

The work was supported, in part, by the U.S. National Science Foundation, the National Research Foundation of Korea, and the Research Grants Council of Hong Kong.

Human-machine teaming dives underwater

Tue, 04/14/2026 - 9:00am

The electricity to an island goes out. To find the break in the underwater power cable, a ship pulls up the entire line or deploys remotely operated vehicles (ROVs) to traverse the line. But what if an autonomous underwater vehicle (AUV) could map the line and pinpoint the location of the fault for a diver to fix?

Such underwater human-robot teaming is the focus of an MIT Lincoln Laboratory project funded through an internally administered R&D portfolio on autonomous systems and carried out by the Advanced Undersea Systems and Technology Group. The project seeks to leverage the respective strengths of humans and robots to optimize maritime missions for the U.S. military, including critical infrastructure inspection and repair, search and rescue, harbor entry, and countermine operations.

"Divers and AUVs generally don't team at all underwater," says principal investigator Madeline Miller. "Underwater missions requiring humans typically do so because they involve some sort of manipulation a robot can't do, like repairing infrastructure or deactivating a mine. Even ROVs are challenging to work with underwater in very skilled manipulation tasks because the manipulators themselves aren't agile enough."

Beyond their superior dexterity, humans excel at recognizing objects underwater. But humans working underwater can't perform complex computations or move very quickly, especially if they are carrying heavy equipment; robots have an edge over humans in processing power, high-speed mobility, and endurance. To combine these strengths, Miller and her team are developing hardware and algorithms for underwater navigation and perception — two key capabilities for effective human-robot teaming.

As Miller explains, divers may only have a compass and fin-kick counts to guide them. With few landmarks and potentially murky conditions caused by a lack of light at depth or the presence of biological matter in the water column, they can easily become disoriented and lost. For robots to help divers navigate, they need to perceive their environment. However, in the presence of darkness and turbidity, optical sensors (cameras) cannot generate images, while acoustic sensors (sonar) generate images that lack color and only show the shapes and shadows of objects in the scene. The historical lack of large, labeled sonar image datasets has hindered training of underwater perception algorithms. Even if data were available, the dynamic ocean can obscure the true nature of objects, confusing artificial intelligence. For instance, a downed aircraft broken into multiple pieces, or a tire covered in an overgrowth of mussels, may no longer resemble an aircraft or tire, respectively.

"Ultimately, we want to devise solutions for navigation and perception in expeditionary environments," Miller says. "For the missions we're thinking about, there is limited or no opportunity to map out the area in advance. For the harbor entry mission, maybe you have a satellite map but no underwater map, for example."

On the navigation side, Miller's team picked up on work started by the MIT Marine Robotics Group, led by John Leonard, to develop diver-AUV teaming algorithms. With their navigation algorithms, Leonard's group ran simulations under optimal conditions and performed field testing in calm waters using human-paddled kayaks as proxies for both divers and AUVs. Miller's team then integrated these algorithms into a mission-relevant AUV and began testing them under more realistic ocean conditions, initially with a support boat acting as a diver surrogate, and then with actual divers.

"We quickly learned that you need more sensing capabilities on the diver when you factor in ocean currents," Miller explains. "With the algorithms demonstrated by MIT, the vehicle only needed to calculate the distance, or range, to the diver at regular intervals to solve the optimization problem of estimating the positions of both the vehicle and diver over time. But with the real ocean forces pushing everything around, this optimization problem blows up quickly."

On the perception side, Miller's team has been developing an AI classifier that can process both optical and sonar data mid-mission and solicit human input for any objects classified with uncertainty.

"The idea is for the classifier to pass along some information — say, a bounding box around an image — to the diver and indicate, "I think this is a tire, but I'm not sure. What do you think?" Then, the diver can respond, "Yes, you've got it right, or no, look over here in the image to improve your classification," Miller says.

This feedback loop requires an underwater acoustic modem to support diver-AUV communication. State-of-the-art data rates in underwater acoustic communications would require tens of minutes to send an uncompressed image from the AUV to the diver. So, one aspect the team is investigating is how to compress information into a minimum amount to be useful, working within the constraints of the low bandwidth and high latency of underwater communications and the low size, weight, and power of the commercial off-the-shelf (COTS) hardware they're using. For their prototype system, the team procured mostly COTS sensors and built a sensor payload that would easily integrate into an AUV routinely employed by the U.S. Navy, with the goal of facilitating technology transition. Beyond sonar and optical sensors, the payload features an acoustic modem for ranging to the diver and several data processing and compute boards.

Miller's team has tested the sensor-equipped AUV and algorithms around coastal New England — including in the open ocean near Portsmouth, New Hampshire, with the University of New Hampshire's (UNH) Gulf Surveyor and Gulf Challenger coastal research vessels as diver surrogates, and on the Boston-area Charles River, with an MIT Sailing Pavilion skiff as the surrogate.

"The UNH boats are well-equipped and can access realistic ocean conditions. But pretending to be a diver with a large boat is hard. With the skiff, we can move more slowly and get the relative motion in tune with how a diver and AUV would navigate together."

Last summer, the team started testing equipment with human divers at Michigan Technological University's Great Lakes Research Center. Although the divers lacked an interface to feed back information to the AUV, each swam holding the team's tube-shaped prototype tablet, dubbed a "tube-let." The tube-let was equipped with a pressure and depth sensor, inertial measurement unit (to track relative motion), and ranging modem — all necessary components for the navigation algorithms to solve the optimization problem.

"A challenge during testing was coordinating the motion of the diver and vehicle, because they don't yet collaborate," Miller says. "Once the divers go underwater, there is no communication with the team on the surface. So, you have to plan where to put the diver and vehicle so they don't collide."

The team also worked on the perception problem. The water clarity of the Great Lakes at that time of year allowed for underwater imaging with an optical sensor. Caroline Keenan, a Lincoln Scholars Program PhD student jointly working in the laboratory's Advanced Undersea Systems and Technology Group and Leonard's research group at MIT, took the opportunity to advance her work on knowledge transfer from optical sensors to sonar sensors. She is exploring whether optical classifiers can train sonar classifiers to recognize objects for which sonar data doesn't exist. The motivation is to reduce the human operator load associated with labeling sonar data and training sonar classifiers.

With the internally funded research program coming to an end, Miller's team is now seeking external sponsorship to refine and transition the technology to military or commercial partners.

"The modern world runs on undersea telecommunication and power cables, which are vulnerable to attack by disruptive actors. The undersea domain is becoming increasingly contested as more nations develop and advance the capabilities of autonomous maritime systems. Maintaining global economic security and U.S. strategic advantage in the undersea domain will require leveraging and combining the best of AI and human capabilities," Miller says.

Q&A: MIT SHASS and the future of education in the age of AI

Tue, 04/14/2026 - 9:00am

The MIT School of Humanities, Arts, and Social Sciences (SHASS) was founded in 1950 in response to “a new era emerging from social upheaval and the disasters of war,” as outlined in the 1949 Lewis Committee Report.

The report’s findings emphasized MIT’s role and responsibility in the new nuclear age, which called for doubling down on genuine “integration” of scientific and technical topics with humanistic scholarship and teaching. Only that way, the committee wrote, could MIT tackle “the most difficult and complicated problems confronting our generation.”

As SHASS marks its 75th anniversary, Dean Agustín Rayo answers questions about why the need for developing students with broad minds and human understanding is as urgent as ever, given pressing challenges in the midst of a new technological revolution.

Q: Many universities are responding to artificial intelligence by launching new technical programs or updating curricula. You’ve suggested the change is deeper than that. Why?

A: Artificial intelligence isn’t just changing the way students learn — it’s transforming every aspect of society. The labor market is experiencing a dramatic shift, upending traditional paths to financial stability. And AI is changing the ways we bring meaning to our lives: the ways we build relationships, the ways we pay attention, and the things we enjoy doing.

The upshot is that the most important question universities need to ask is not how to adapt our pedagogy to AI — although we certainly need to address that. The most important question we need to ask is how to provide an education that brings real value to students in the age of AI. 

We need to ensure that universities provide students with the tools they need to find a path to financial security and to build meaningful lives.

We need to produce students with minds that are both nimble and broad. We need our students to not only be able to execute tasks effectively, but also have the judgment to determine which tasks are worth executing. We need students who have a moral compass, and who understand how the world works, in all of its political, economic, and human complexity. We need students who know how to think critically, and who have excellent communication and leadership skills.

Q: What role do the humanities, arts, and social sciences play in preparing MIT students for that future?

A: They’re essential, and are rightly a core part of an MIT education: MIT has long required its undergraduates to take at least eight courses in HASS disciplines to graduate.

Fields like philosophy, political science, economics, literature, history, music, and anthropology are crucial to developing the parts of our lives that are essentially human — the parts that will not be replaced by AI.

They are crucial to developing critical thinking and a moral compass. They are crucial to understanding people — our values, institutions, cultures, and ways of thinking. They are crucial to creating students who are broad thinkers who understand the way the world works. They are crucial to developing students who are excellent communicators and are able to describe their projects — and their lives — in a way that endows them with meaning.

Our students understand this. Here is how one of them put the point: “Engineering gives me the tools to measure the world; the humanities teach me how to interpret it. That balance has shaped both how I do science and why I do it.”

Q: Some people worry that emphasizing humanistic study could dilute MIT’s technological edge. How do you respond to that concern?

A: I think the opposite is true. 

MIT is an important engine for social mobility in the United States, and a catalyst for entrepreneurship, which has added billions of dollars to the American economy. That cannot be separated from the fact that we are a technical institution, which brings together the country’s most talented undergraduates — regardless of socioeconomic background — and transforms them into the next generation of our country's top scientific and engineering leaders. 

MIT plays an incredibly important role in our country. So, the last thing I want to do is mess with our secret sauce.

But I also think that the age of AI is forcing us to rethink what it means to be a top engineer. 

Think about artificial intelligence itself. The challenges we face are not just technical. Issues like bias, accountability, governance, and the societal impact of automation are no less important. Understanding those dimensions helps technologists design better systems and anticipate real-world consequences.

Strengthening the humanities at MIT isn’t a departure from our core mission — it’s a way of ensuring that our technical leadership continues to matter in the world.

Q: What kinds of changes is MIT SHASS pursuing to support this vision?

A: There’s a lot going on! 

We’ve launched the MIT Human Insight Collaborative (MITHIC) as a way of strengthening research in the humanities, arts, and social sciences, and of deepening collaboration with colleagues across MIT.

We’re shaping the undergraduate experience to ensure that every MIT student engages with the big societal questions shaping our time, from democratic resilience to climate change to the ethics of new technologies.

We’re building stronger connections through initiatives like the creation of shared faculty positions with the MIT Schwarzman College of Computing (SCC). And we recently launched a new Music Technology and Computation Graduate Program with the School of Engineering.

We’re partnering with SERC (the SCC’s Social and Ethical Responsibilities of Computing) to design new classes on the intersection of computing and human-centered issues, such as ethics.

And we’re elevating the humanities — for their own sake, and as a space for experimentation, bringing together students, faculty, and partners to explore new forms of research, teaching, and public engagement.

This is a very exciting time for SHASS.

Flying at the edge of the stratosphere

Tue, 04/14/2026 - 9:00am

All the ingredients to leave the first layer of the atmosphere were lying on a picnic table. T-minus 30 minutes before launch from the New York Catskills, students in MIT's reborn 16.00 (Introduction to Aerospace Engineering) course tore open hand warmers to fight the December morning chill. One hot pack for cold hands. One for the electronics payload, which would need the warmth on the way up. The series of balloon launches would rise to more than 20 kilometers above the surface.

Five student teams completed stratospheric balloon launches for a final project in the MIT Department of Aeronautics and Astronautics (AeroAstro) first-year exploratory course. This fall semester was the first iteration of the reimagined 16.00. The course was co-taught by MIT professors Jeffrey Hoffman, a former NASA astronaut, and Oliver de Weck, Apollo Program Professor of Astronautics and Engineering Systems. The course was reintroduced to the curriculum in 2025 to give first-year students a design-build experience from the very start, says de Weck, who is also AeroAstro's associate department head.

"This course had been taught for more than 25 years. And then the pandemic came," he explains. "We felt that it was time to bring the course back, to revive it, give it new life."

De Weck taught a version of this hands-on project from 2012 to 2016 in Unified Engineering, with 20 balloon launches over that time. Hoffman taught a version that focused on blimps, indoor flights, and achieving neutral buoyancy and control. Those prior courses inspired the new program. The current 16.00 course is an early introduction to design-build flying, offered before the well-known Unified Engineering course for Course 16 sophomores.

"Students don't want to sit through long lectures, with lots of PowerPoints and notes and blackboards," says de Weck. He referenced feedback from students that is framing the department's upcoming strategic plan. "Those hands-on visceral experiences is what we want to provide them."

The AeroAstro program adds about 60 undergraduates per year. Future students can expect to see different versions of the 16.00 course, including those focused on fixed-wing aircraft, quadcopter drones, and rockets. Future balloon courses will be called 16.00B. A fixed-wing remote-controlled aircraft course will be 16.00A.

Over 13 weeks, the students attended lectures on subjects including atmospheric composition, radio waves, and flight planning and regulations. In labs, they practiced building Arduino-based pressure and temperature sensors, and testing communication systems.

On that cold launch day, Jackson Lunfelt kept his grip against the pull of an oversized helium balloon moments before his team's launch. His team worked for weeks configuring GPS and radio communications and testing balloon buoyancy. Among their trials and errors, they had to find the right weight for a 3D printed frame to attach the balloon and parachute. It was too heavy at first. They figured out how to reduce the weight of the plastic to keep the payload buoyant.

"Fortunately, a lot of preparation had helped us," he says.

Lunfelt, a first-year student, grew up just a few hours away from the Catskills in upstate New York. In high school, he was active in Future Farmers of America, welding, and robotics. On launch day, his team was worried their onboard GoPro would shut off in the cold high-altitude temperatures. They got the green light to add a battery bank. That meant recalculating the weight and the helium required at the final hour.

"It was one of those things that if you don't do this, you're not gonna launch,” says Lunfelt.

That first week of December brought frigid air, gusts, and wind patterns that meant the class would have to rethink its launch site. The team aimed to fly east, over Massachusetts, and land before reaching the ocean. The new weather pattern pushed the team even farther west across the New York border.

The balloon lifted the 3.5-pound payload from the Catskills while the mission control group monitored progress from Cambridge, Massachusetts. It rose hundreds of feet per minute, climbed out of the troposphere, and flew across Western Massachusetts at 100 miles per hour, pushed by the strong upper-level winds of the jet stream. It climbed to an estimated 22 kilometers above the surface, where an onboard GoPro camera recorded the curvature of the Earth.

"Every single moment of that video was amazing. It was truly a story in itself," says Lunfelt.

Then the latex balloon burst, as designed, and descended back down — aided by a parachute. The GoPros captured that spectacular moment, too. The winds carried them just north of the Massachusetts-New Hampshire border. They landed in a neighborhood around Nashua, New Hampshire. Locals saw the MIT identifiers written on the side of the payloads and helped the teams recover them. The landing made it onto the local news.

After a very early morning and late evening monitoring the launch returns, de Weck, alongside teaching assistant Jonathan Stoppani and Senior Technical Instructor Dave Robertson, agreed that the feeling of pride from the whole class was palpable. The payloads all came back in one piece, a test of successful design-builds and last-minute adjustments. The AeroAstro flying tradition is back for first-year students. 

Carbon removal project supports Maine’s blue economy, broader marine health

Tue, 04/14/2026 - 12:00am

Oceans absorb roughly 25 to 30 percent of the carbon dioxide (CO2) that is released into the atmosphere. When this CO2 dissolves in seawater, it forms carbonic acid, making the water more acidic and altering its chemistry. Elevated levels of acidity are harmful to marine life like corals, oysters, and certain plankton that rely on calcium carbonate to build shells and skeletons.

“As the oceans absorb more CO2, the chemistry shifts — increasing bicarbonate while reducing carbonate ion availability — which means shellfish have less carbonate to form shells,” explains Kripa Varanasi, professor of mechanical engineering at MIT. “These changes can propagate through marine ecosystems, affecting organism health and, over time, broader food webs.”

Loss of shellfish can lead to water quality decline, coastal erosion, and other ecosystem disruptions, including significant economic consequences for coastal communities. “The U.S. has such an extensive coastline, and shellfish aquaculture is globally valued at roughly $60 billion,” says Varanasi. “With the right innovations, there is a substantial opportunity to expand domestic production.”

“One might think, ‘this [depletion] could happen in 100 years or something,’ but what we’re finding is that they are already affecting hatcheries and coastal systems today,” he adds. “Without intervention, these trends could significantly alter marine ecosystems and the coastal economies that rely on them over time.”

Varanasi and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering, Post-Tenure, at MIT, have been collaborating for years to develop methods for removing carbon dioxide from seawater and turning acidic water back to alkaline. In recent years, they’ve partnered with researchers at the University of Maine Darling Marine Center to deploy the method in hatcheries.

“The way we farm oysters, we spawn them in special tanks and rear them through about a two-week larval period … until they’re big enough so that they can be transferred out into the river as the water warms up,” explains Bill Mook, founder of Mook Sea Farm. Around 2009, he noticed problems with production of early-stage larvae. “It was a catastrophe. We lost several hundred thousand dollars’ worth of production,” he says.

Ultimately, the problem was identified as the low pH of the water that was being brought in: The water was too acidic. The farm’s initial strategy, a common practice in oyster farming, was to buffer the water by adding sodium bicarbonate. The new approach avoids the use of chemicals or minerals.

“A lot of researchers are studying direct air capture, but very few are working in the ocean-capture space,” explains Hatton. “Our approach is to use electricity, in an electrochemical manner, rather than add chemicals to manipulate the solution pH.”

The method uses reactive electrodes to release protons into seawater that is collected and fed into the cells, driving the release of the dissolved carbon dioxide from the water. The cyclic process acidifies the water to convert dissolved inorganic bicarbonates to molecular carbon dioxide, which is collected as a gas under vacuum. The water is then fed to a second set of cells with a reversed voltage to recover the protons and turn the acidic water back to alkaline before releasing it back to the sea.
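The acid-base cycling rests on standard carbonate chemistry: lowering pH pushes dissolved inorganic carbon toward molecular CO2, while raising it pushes carbon back toward bicarbonate and carbonate. The sketch below computes that speciation from textbook freshwater equilibrium constants; seawater constants differ somewhat, and this is an illustration of the underlying chemistry, not the MIT cell design.

    # Freshwater carbonic acid dissociation constants at ~25 C (textbook values)
    pK1, pK2 = 6.35, 10.33
    K1, K2 = 10.0 ** -pK1, 10.0 ** -pK2

    def speciation(pH):
        """Fractions of dissolved inorganic carbon as CO2/H2CO3, HCO3-, and CO3^2-."""
        h = 10.0 ** -pH
        denom = h * h + K1 * h + K1 * K2
        return h * h / denom, K1 * h / denom, K1 * K2 / denom

    for pH in (4.0, 8.1):   # acidified stream vs. roughly typical surface seawater
        co2, hco3, co3 = speciation(pH)
        print(pH, round(co2, 3), round(hco3, 3), round(co3, 3))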

Maine’s Damariscotta River Estuary, where Mook Sea Farm is located, provides about 70 percent of the state’s oyster crop. Damian Brady, a professor of oceanography based at the University of Maine and a key collaborator on the project, says the Damariscotta community has “grown into an oyster-producing powerhouse … [that is] not only part of the economy, but part of the culture.” He adds, “there’s actually a huge amount that we could learn if we couple the engineering at MIT with the aquaculture science here at the University of Maine.”

“The scientific underpinning of our hypothesis was that these bivalve shellfish, including oysters, need calcium carbonate in order to form their shells,” says Simon Rufer PhD ’25, a former student in Varanasi’s lab and now CEO and co-founder of CoFlo Medical. “By alkalizing the water, we actually make it easier for the oysters to form and maintain their shells.”

In trials conducted by the team, results first showed that the approach is biocompatible and doesn’t kill the larvae, and later showed that oysters raised in water treated with the MIT approach did better than those treated with mineral or chemical buffers. Importantly, Hatton also notes, the process creates no waste products: ocean water goes in, CO2 comes out. The captured CO2 can potentially be used for other applications, including growing algae as food for shellfish.

Varanasi and Hatton first introduced their approach in 2023. Their most recent paper, “Thermodynamics of Electrochemical Marine Inorganic Carbon Removal,” published last year in the journal Environmental Science & Technology, outlines the overall thermodynamics of the process and presents a design tool for comparing different carbon removal processes. The team received a “plus-up award” from ARPA-E to collaborate with the University of Maine and further develop and scale the technology for application in aquaculture environments.

Brady says the project represents another avenue for aquaculture to contribute to climate change mitigation and adaptation. “It pushes a new technology for removing carbon dioxide from ocean environments forward simultaneously,” says Brady. “If they can be coupled, aquaculture and carbon dioxide removal improve each other’s bottom line."

Through the collaboration, the team is improving the robustness of the cells and learning about their function in real ocean environments. The project aims to scale up the technology, and to have significant impact on climate and the environment, but it includes another big focus.

“It’s also about jobs,” says Varanasi. “It’s about supporting the local economy and coastal communities who rely on aquaculture for their livelihood. We could usher in a whole new resilient blue economy. We think that this is only the beginning. What we have developed can really be scaled.”

Mook says the work is very much an applied science, “[and] because it’s applied science, it means that we benefit hugely from being connected and plugged into academic institutions that are doing research very relevant to our livelihoods. Without science, we don’t have a prayer of continuing this industry.”

Jazz in the key of life

Sun, 04/12/2026 - 12:00am

It is not hard to find glowing reviews of saxophonist Miguel Zenón, a creative jazz artist whose compositions incorporate musical elements from his native Puerto Rico.

For instance, The Jazz Times called “Jibaro,” Zenón’s breakthrough 2005 album, “profound yet joyful.” The New York Times called the same music “strong and light,” adding that we have “rarely seen a jazz composer step forward with a project so impressively organized, intellectually powerful and well played from the start.”

In 2009, when Zenón won a prestigious MacArthur Fellowship, the MacArthur Foundation called Zenón’s work “elegant and innovative,” with “a high degree of daring and sophistication.” In 2012, The New York Times reviewed another Zenón work, “Puerto Rico Nació en Mi: Tales From the Diaspora,” by calling the music “deeply hybridized and original, complex but clear.”

As you may have noticed, these reviews all contain multiple descriptive terms. That’s because Zenón’s work is many things at once: jazz, combined with other musical genres; technically rigorous, and supple; novel, yet steeped in tradition. Indeed, Zenón has always seen jazz as being multifaceted.

“What I discovered, when I first encountered jazz, was this idea that you were using improvisation to portray your personality directly to your listeners,” Zenón explains. “And it was connected to a very interesting and intricate improvisational language. That provided something I hadn’t encountered in music before, this idea that you could have something personal and heartfelt walking hand in hand with something that was intellectual and brainy. That balance spoke to me.”

It is still speaking. In 2024, Zenón won the Grammy Award for Best Latin Jazz Album for “El Arte Del Bolero Vol. 2,” a collaboration with Venezuelan pianist Luis Perdomo, a musical partner in the Miguel Zenón Quartet.

Zenón has taught at MIT for three years now. He became a tenured faculty member last year, in MIT’s Music and Theater Arts program, where he helps students find the same satisfaction in music that he does.

“When I first got into music, I was looking for fulfillment,” Zenón says. “It wasn’t about success. I was just looking for music to fulfill something within me. And I still search for that now. And sometimes it still feels like it did 25 or 30 years ago, when I first encountered that feeling. It’s nice to have that in your pocket, to say, this is what I’m looking for, that initial feeling.”

Paradise in the Back Bay

Zenón grew up in San Juan, Puerto Rico. Around age 11, he started attending a performing arts school and playing the saxophone. In his last year of school, Zenón was admitted to college to study engineering. However, a few years before, he had encountered something new: jazz. Zenón’s training had been in classical music. But jazz felt different.

“Discovering jazz music ignited a passion for music in me that had not existed up to that point,” says Zenón, who decided to pursue music in college. “I kind of jumped ship, and it was a blind jump. I didn’t know what to expect, I didn’t know what was on the other side, I didn’t have any artists or any musicians in my family. I just followed a hunch, followed my heart.”

After teachers recommended he study at the renowned Berklee College of Music in Boston, Zenón worked to find a scholarship and funding.

“This was way before the internet. I was looking at catalogs,” Zenón recalls. “I had never been to Boston in my life, I didn’t even know what Berklee looked like. But at Berklee it was the first time I was able to connect with a jazz teacher in a formal way, to learn about history, theory, harmony, and I soaked in it. Also, I was surrounded by young people like myself, who were as enamored and passionate about music as I was. It really felt like paradise.”

After earning his BA from Berklee in 1998, Zenón moved to New York City. He earned an MA from the Manhattan School of Music in 2001 and began playing more extensively with new bandmates.

“I just wanted to be able to play with people who were better than me, and learn from the experience,” Zenón says. He started generating new ideas, writing music, and performing publicly. With Antonio Sánchez, Hans Glawischnig, and Perdomo, he founded the Miguel Zenón Quartet.

“That led to going into the studio and making an album,” Zenón recounts. “And that led to more experience, and more albums.”

Did it ever. Zenón has now been the leader on about 20 albums, mostly featuring the quartet. (After several years, Henry Cole replaced Sánchez as the group’s drummer.) Zenón has played on many recordings by other artists, and helped found the SFJAZZ Collective.

Like most prolific musicians, Zenón will not name any one recording as his best, but he is willing to cite a few that were milestones for him.

“Jibaro” draws on the music of Puerto Rico’s jibaro singers, troubadours using 10-line stanzas with eight-syllable lines, something Zenón adopted for jazz-quartet use. “Esta Plena,” a 2009 record, fuses jazz and the structures of “plena,” a traditional percussion-based Puerto Rican song form. “Alma Adentro,” a 2011 album, covers classic songs from Puerto Rico.

“It would be impossible for me to pick one favorite, but what I would say is, there are a couple of albums in the earlier part of my career that explored a balance between things coming from a jazz world and things coming from traditional Puerto Rican music and folklore. When I was able to feel like that balance was right, it felt like me,” Zenón says. “This is what I have to give. This is my persona.”

In 2008, Zenón was also honored with a Guggenheim Fellowship, which helped him conduct music research, another facet of his career. Zenón has often extensively interviewed traditional Puerto Rican musicians about the intricacies of their works before writing material in those forms.

And Zenón has made a point of giving back, founding the Caravana Cultural, a project that brings free jazz concerts to rural Puerto Rico.

Work, joy, and love

Zenón is now settled in at MIT, which boasts a vibrant music program. More than 1,500 MIT students take a music class each year, and over 500 students participate in one of 30 campus ensembles. Last year, MIT opened its new Edward and Joyce Linde Music Building, a purpose-built performance, rehearsal, and teaching space.

“There are definitely students at MIT who could be at some of the best music schools in the world,” Zenón says. “That’s not in question.”

Moreover, among MIT students, Zenón says, “There is a communal approach to music. Everything they do, they do for each other. They look out for each other, they work together. And that has been one of the most rewarding things to see.”

He continues: “Of course the students are brilliant and the faculty are too. In terms of what I like to teach, it’s been a good fit for me personally, and I couldn’t be happier about the opportunity. There’s more and more interest in jazz, more and more interest in creating things together, and there’s a unique mindset being built in front of our eyes.”

He is also pleased to work in the Linde Music Building: “It’s amazing to have the building, not only in terms of the facilities, but it’s also a symbol of the place music has within the Institute. We’re not just talking about music, we’re creating it. It’s a great commitment from the school and says a lot about our leadership.”

Meanwhile, along with teaching, Zenón’s own recording career continues at full speed. With Luis Perdomo, he is working on “El Arte Del Bolero Vol. 3,” the follow-up to his Grammy-winning album. And Zenón has plans for still another album, to be recorded in Puerto Rico with a large ensemble, based on music he is writing about Puerto Rico’s history and present.

“Things are always linked,” Zenón explains. “Once you finish one project, the next one starts. It feels natural for me to do it that way.”

In conversation, Zenón is engaging, genial, and reflective. So what advice does he have for younger musicians? Not everyone who plays an instrument will become Miguel Zenón. But what about people who want to pursue music, not knowing how far it will take them?

“If you find something you enjoy, just enjoy it for the sake of it,” Zenón says. “Find what brings joy, and make sure you don’t lose that. Having said that, with music, like any art form, or anything else in life, in order to make progress, it takes work and commitment. There’s no hiding that. So if music is something you’re serious about, set goals you can achieve over time, so you always have something to work for. In my experience, that’s key. But I always pair that with the idea of joy and love for music — keeping that love close to your heart.”

Professor Emeritus Jack Dennis, pioneering developer of dataflow models of computation, dies at 94

Fri, 04/10/2026 - 5:40pm

Jack Dennis, an influential MIT professor emeritus of computer science and engineering, died on March 14 at age 94. The original leader of the Computation Structures Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), he pioneered the development of dataflow models of computation, and, subsequently, many novel principles of computer architecture inspired by dataflow models.

The second child of an engineer and a textile designer, Dennis showed early interest in both engineering and music, rewriting Gilbert and Sullivan lyrics with his parents, playing piano with the Norwalk Symphony Orchestra in Connecticut as a teen, and building a canoe at home with his father. As an undergraduate at MIT, he developed his wide array of interests further, joining the VI-A Cooperative Program in Electrical Engineering; working at the Air Force Cambridge Research Laboratories on projects in speech processing and novel radar systems; participating in the model railroad club; and joining the MIT Symphony Orchestra, where he met his first wife, Jane Hodgson ’55, SM ’56, PhD ’61. (The two later separated when she went to study medicine in Florida.)

Dennis earned his BS (1953), MS (1954), and ScD (1958) from MIT before joining the then-Department of Electrical Engineering as a faculty member. He was promoted to full professor in 1969. His doctoral thesis, “Mathematical Programming and Electrical Networks,” explored analogies between electric circuit theory and quadratic programming problems. Ideas he developed in that work further crystallized in his 1964 paper, “Distributed solution of network programming problems,” which created an important early class of digital distributed optimization solvers.
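
To make that analogy concrete, here is a minimal illustrative sketch, not taken from Dennis’s thesis or his 1964 paper: for a fixed injected current, the way current splits across two parallel resistors is the split that minimizes total dissipated power, so a tiny quadratic minimization reproduces the familiar current-divider rule from circuit theory. The component values and the simple gradient-descent loop below are made up purely for illustration.

# Illustrative sketch only: the circuit/quadratic-programming analogy.
# Two resistors R1 and R2 in parallel share a fixed total current I_total.
R1, R2, I_total = 2.0, 3.0, 1.0            # ohms, ohms, amperes (made-up values)

# Total dissipated power is i1^2*R1 + i2^2*R2, with i2 = I_total - i1 so that
# current conservation holds by construction; minimize it by gradient descent.
i1 = 0.0
for _ in range(1000):
    grad = 2 * R1 * i1 - 2 * R2 * (I_total - i1)
    i1 -= 0.05 * grad

print(round(i1, 3))                         # 0.6 ampere through R1
print(I_total * R2 / (R1 + R2))             # current-divider rule gives the same 0.6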

In a 2003 piece that Dennis wrote for his undergraduate class’s 50th reunion, he remembered his earliest encounters with computers at the Institute: “I prepared programs written in assembly language on punched paper tape using Frieden 'Flexowriters,' and stood aside watching the myriad lights blink and flash while operator Mike Solamita fed the tapes [...] That was 1954. Fifty years later, much has changed: A room full of vacuum tubes has become a tiny chip with millions of transistors. A phenomenon once limited to research laboratories has become an industry producing commodity products that anyone can own and use beneficially.”

Dennis’ influence in steering that change was profound. As a collaborator with the teams behind both Project MAC and Multics, the earliest attempts to allow multiple users to work with a single computer seemingly simultaneously (i.e., a time-shared operating system), Dennis helped to specify the unique segment addressing and paging mechanisms that became a fundamental part of the General Electric Model 645 computer. His insights stemmed from a tendency to pay equal attention to both hard- and software when others considered themselves specialists in one or the other. 

“I formed the Computation Structures Group [within CSAIL] and focused on architectural concepts that could narrow the acknowledged gap between programming concepts and the organization of computer hardware,” Dennis explained in his 2003 recollection. “I found myself dismayed that people would consider themselves to be either hardware or software experts, but paid little heed to how joint advances in programming and architecture could lead to a synergistic outcome that might revolutionize computing practice.”

Dennis’ emphasis on synergy did not go unnoticed. Gerald Sussman, the Panasonic Professor of Electrical Engineering, points out “the relationship of [Dennis’] dataflow architecture to single-assignment programs, and thus to pure functional programs. This coupled the virtue of referential transparency in programming to the effective use of hardware parallelism. Dennis also pioneered the use of self-timed circuits in digital systems. The ideas from that work generalize to much of the work on highly distributed systems.” 
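
As a minimal, hypothetical illustration of the point Sussman describes (this is ordinary Python, not Dennis’s dataflow notation or any particular dataflow machine): when each name is assigned exactly once, the data dependencies alone determine what may run in parallel, which is the ordering a dataflow machine would infer from the program graph.

from concurrent.futures import ThreadPoolExecutor

def compute(x):
    # Single-assignment style: a, b, and c are each bound exactly once.
    with ThreadPoolExecutor() as pool:
        a = pool.submit(lambda: x * x)    # depends only on x
        b = pool.submit(lambda: x + 1)    # depends only on x, so it can run alongside a
        c = a.result() + b.result()       # depends on both, so it must wait for them
    return c

print(compute(4))  # 21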

The Computation Structures Group attracted multiple scholars interested in developing asynchronous computing and dataflow architecture, many of whom became lifelong friends and collaborators. These included Peter Denning, with whom Dennis and Joseph Qualitz co-authored the textbook “Machines, Languages, and Computation” (1978); the late Arvind, who became faculty head of computer science for the Department of Electrical Engineering and Computer Science (EECS); and the late Guang R. Gao, who became distinguished professor of electrical and computer engineering at the University of Delaware.

In recognition of his contributions to the Multics project, Dennis was elected a fellow of the Institute of Electrical and Electronics Engineers (IEEE). Many additional honors would follow: He received the Association for Computing Machinery (ACM)/IEEE Eckert-Mauchly Award in 1984; was inducted as a fellow of the ACM (1994); was named to the National Academy of Engineering (2009); was elected to the ACM Special Interest Group on Operating Systems (SIGOPS) Hall of Fame (2012); and was awarded the IEEE John von Neumann Medal (2013).

A successful researcher, Dennis was perhaps equally influential in the development of EECS’ curriculum, creating six subjects in the areas of computer theory and systems: Theoretical Models for Computation; Computation Structures; Structure of Computer Systems; Semantic Theory for Computer Systems; Semantics of Parallel Computation; and Computer System Architecture (taught in collaboration with Arvind). Several of the courses that Dennis developed continue to be taught, in updated form, to this day.

Following his retirement from teaching in 1987, he consulted on projects relating to parallel computer hardware and software for such varied groups as NASA Research Institute for Advanced Computer Science; Boeing Aerospace; McGill University; the Architecture Group of Carlstedt Elektronik in Gothenburg, Sweden; and Acorn Networks, Inc. His fruitful relationship with former student Guang Gao continued in the form of a lecture tour through China, as well as co-authorship of a book, “Dataflow Architecture,” currently in progress at MIT Press. 

A voracious lifelong learner, Dennis was fond of repeating a friend’s observation that “a scholar is just a book’s way of making another book.” In a full and active retirement, he still made room for music, trying his hand at composing; performing at Tanglewood as a tenor in Chorus Pro Musica; playing piano at the marriage of Guang Gao’s son Nick; and joining the chorus at the First Church in Belmont, Massachusetts, where his celebration of life (with concurrent livestreaming) will be held on Monday, June 8, at 2 p.m. 

Dennis is survived by his wife Therese Smith ’75; children David Hodgson Dennis of North Miami, Florida; Randall Dennis of Connecticut; and Galen Dennis, a resident of Australia. 

Learning with audiobooks

Thu, 04/09/2026 - 2:00pm

Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute for Brain Research finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction — and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.

“It is an exciting moment in this ed-tech space,” says Grover Hermann Professor of Health Sciences and Technology John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: Can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study — one of few randomized, controlled trials to evaluate educational technology — suggests a nuanced approach is needed as these tools are deployed in the classroom. “What you can get out of a software package will be great for some people, but not so great for other people,” Gabrieli says. “Different people need different levels of support.” Gabrieli is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute. 

Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the United States had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study — but it also underscored the urgency of understanding which educational technologies are effective, and for whom.

“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers — the summer slide that affects poor readers and disadvantaged children to a greater extent — would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than 10 percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”

So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.

Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.

“The idea is, they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”

Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors — college students with no formal training in education — learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.

Students in the study were randomly assigned to one of three groups for an eight-week intervention. One group was asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.

A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design — with flexibly scheduled testing and tutoring sessions conducted over Zoom — helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoc who was a graduate student in Gabrieli’s lab.

Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.

Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and that it does not require highly trained professionals.

For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction — further emphasizing that different students have different needs. “I think this carefully done study is a note of caution about who benefits from what,” Gabrieli says.

The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies — and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning. 
