MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
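
For context, the limit in question (a standard device-physics figure, not one this article spells out) is the thermionic, or “Boltzmann,” limit: because a silicon transistor switches by pushing thermally distributed electrons over an energy barrier, its current cannot rise more steeply than roughly

$$ SS_{\min} = \frac{k_B T}{q}\,\ln 10 \approx 60\ \text{mV per decade of current at room temperature}, $$

so cutting the supply voltage much further stops producing a clean contrast between the on and off states.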

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Plants can sense the sound of rain, a new study finds

16 hours 34 min ago

The next time you find yourself lulled by the patter of rain outside your window, think how that same sprinkle might sound if you were a tiny seed planted directly below a free-falling droplet. Would you still be similarly soothed?

In fact, MIT engineers have found the opposite to be the case: Some seeds may come alive to the sound of rain. In experiments with rice seeds, the team found that the sound of falling droplets effectively shook the seeds out of a dormant state, stimulating them to germinate at a faster rate compared with seeds that were not exposed to the same sound vibrations.

The team’s findings, which are published today in the journal Scientific Reports, are the first direct evidence that plant seeds and seedlings can sense sounds in nature. Their experiments involved rice seeds that they submerged in shallow water. Rice can germinate in both soil and shallow water. The researchers suspect that many similar seed types may also respond to the sound of rain.

The team worked out a hypothesis to explain how the seeds might be doing this. They found that when a raindrop hits the surface of a puddle or the ground, it generates a sound wave that makes the surroundings vibrate, including any shallowly submerged seeds. These vibrations can be strong enough to dislodge a seed’s “statoliths,” which are tiny gravity-sensing organelles within certain cells of a seed. When these statoliths are jostled, their movement is a signal for seeds and seedlings to grow and sprout.

“What this study is saying is that seeds can sense sound in ways that can help them survive,” says study author Nicholas Makris, a professor of mechanical engineering at MIT. “The energy of the rain sound is enough to accelerate a seed’s growth.”

Makris and his co-author, Cadine Navarro, a former graduate student in MIT’s Department of Urban Studies and Planning, suspect that the sound of rain is similar to the vibrations generated by other natural phenomena such as wind. They plan to follow up this work to investigate other natural vibrations and sounds plants may perceive.

Sound vibration

Plants are surprisingly perceptive. To help them survive, plants have evolved to sense and respond to stimuli in their surroundings. Some plants snap shut when touched, while others curl inward when exposed to toxic smells. And of course, most plants respond to light, reaching toward the sun to help them grow.

Plants can also sense gravity. A plant’s roots grow down, while its shoots push up against gravity’s pull. One way that plants sense and respond to gravity is through their statoliths. Statoliths are denser than a cell’s cytoplasm and can drift and sink through the cell, like a bit of sand in a jar of water. When a statolith finally settles to the bottom, its resting place on the cell’s membrane is a reflection of gravity’s direction and a signal for where a seed’s root or shoot should grow. If the statolith is dislodged, scientists have found that this can also trigger the seed to grow more.

Makris, whose work focuses on acoustics across a range of disciplines, became curious when Navarro asked him questions about seeds and sound. They wondered: Could sound be enough to jostle the statoliths and stimulate a seed to grow? And if so, what sounds in nature could be strong enough to have such an effect?

“I went back to look at work done by colleagues in the 1980s, who measured the sound of rain underwater. If you check, you’ll see it’s much greater than in the air,” Makris says. “It has to do with the fact that water is denser than air, so the same drop makes larger pressure waves underwater. So if you’re a seed that’s within a few centimeters of a raindrop’s impact, the kind of sound pressures that you would experience in water or in the ground are equivalent to what you’d be subject to within a few meters of a jet engine in the air.”
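
A standard acoustics estimate helps make the comparison plausible (textbook background, not a figure from the study): for a source driving the surrounding medium at the same particle velocity $u$, the acoustic pressure is $p = \rho c\,u$, and the characteristic impedance $\rho c$ of water (roughly $1.5 \times 10^6$ Pa·s/m) is about 3,600 times that of air (roughly $4 \times 10^2$ Pa·s/m), a difference of about 70 decibels.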

Such rain-induced soundwaves, Makris and Navarro suspected, might be enough to jostle statoliths and subsequently stimulate a seed’s growth.

Connecting a droplet’s dots

To test this idea, the researchers carried out experiments with rice seeds, which naturally grow in shallow watery fields. Over a large number of repeated experiments, the team submerged roughly 8,000 individual seeds of rice in shallow tubs of water and exposed groups of them to dripping water. The seeds were placed sufficiently far away from the falling droplets that only sound waves would reach them. The team varied the size of each water droplet and the height from which it fell to mimic raindrops during light, moderate, and heavy rainstorms.

The sound of rain, recorded by MIT researchers from underwater, within a rain puddle in Massachusetts during a moderate to heavy rainstorm. 
Credit: Courtesy of the researchers

They also used a hydrophone to measure the acoustic vibrations created underwater by the water droplets. They compared these measurements to recordings they took in the field, such as in puddles, ponds, wetlands, and soils during rainstorms. The comparisons confirmed that their water droplets in the lab were generating rain-induced acoustic vibrations as in nature.

As they observed the rice seeds, the researchers found that the groups of seeds that were exposed to the sound of water were able to germinate 30 to 40 percent faster than the seed groups that were not exposed to rain sounds but were otherwise in identical conditions. They also found that seeds that were closer to the surface could better sense the droplets’ sounds and grow faster, compared to more deeply submerged or more distant seeds.

These experiments showed that there is a connection between the sound of a water droplet and a seed’s ability to grow. The researchers propose that there may be a biological advantage to seeds that can sense rain: If they are close enough to the surface to respond to the sound of rain, they are likely at an optimal depth to soak up moisture and safely grow to the surface.

The team then worked out calculations to see whether the physical vibrations of the droplets would be enough to jostle the seeds’ microscopic statoliths. If so, this would point to the mechanism by which sound can directly stimulate a plant’s growth.

In their calculations, the researchers factored in a rain droplet’s size and terminal velocity (the constant speed that a falling object eventually reaches), and worked out the amplitude of sound vibration the droplet would generate. From this, they determined to what degree these vibrations in water or soil would displace, or shake, a submerged or buried seed, and how a shaking seed would affect microscopic statoliths within individual cells.
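
As a rough illustration of the first ingredient in that kind of calculation, the sketch below estimates a raindrop’s terminal velocity from its size by balancing gravity against aerodynamic drag. It uses generic textbook values for the drag coefficient and the densities of air and water, not the authors’ model, so the numbers are order-of-magnitude only.

```python
import math

# Simplified terminal-velocity estimate for a falling water drop:
# at terminal velocity, weight equals drag, i.e. m*g = 0.5 * rho_air * Cd * A * v^2.
# All constants are generic textbook values, not taken from the study.
RHO_WATER = 1000.0  # kg/m^3, density of water
RHO_AIR = 1.2       # kg/m^3, density of air near the ground
G = 9.81            # m/s^2, gravitational acceleration
CD = 0.5            # drag coefficient of a sphere (real drops flatten and fall somewhat slower)

def terminal_velocity(diameter_m: float) -> float:
    """Return the terminal velocity (m/s) of a spherical drop of the given diameter (m)."""
    radius = diameter_m / 2.0
    mass = RHO_WATER * (4.0 / 3.0) * math.pi * radius**3  # drop mass
    area = math.pi * radius**2                            # frontal cross-section
    return math.sqrt(2.0 * mass * G / (RHO_AIR * CD * area))

if __name__ == "__main__":
    # Drop diameters spanning drizzle to heavy rain, in millimeters.
    for d_mm in (0.5, 2.0, 5.0):
        v = terminal_velocity(d_mm / 1000.0)
        print(f"A {d_mm:.1f} mm drop falls at roughly {v:.1f} m/s")
```

From the impact speed and the drop’s mass one can then estimate the energy delivered to the water surface and, with an acoustics model, the pressure amplitude of the resulting sound wave; the study’s own treatment of those later steps is more detailed than this sketch.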

Makris and Navarro found that the experiments they performed on rice seeds were consistent with their calculations: The sound of rain can indeed dislodge and jostle a seed’s statoliths. This mechanism is likely at the root of a plant’s ability to “sense” the sound of rain and grow in response.

“Brilliant research has been done around the world to reveal the mechanisms behind the ability of plants to sense gravity,” Makris notes. “Our study has shown that these same mechanisms seem to be providing plant seeds a means of perceiving submergence depths in the soil or water that are beneficial to their survival by sensing the sound of rain. It gives new meaning to the fourth Japanese microseason, entitled ‘Falling rain awakens the soil.’”

This work was supported, in part, by the MIT Bose Fellowship and the MIT Koch Chair.

T.L. Taylor named 2026-27 CASBS Fellow

Tue, 04/21/2026 - 7:10pm

MIT Comparative Media Studies/Writing Professor T.L. Taylor has been named a 2026-27 fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University (CASBS), a highly selective residential program that convenes scholars from a wide range of disciplines for a year of focused research, collaborative exchange, and intellectual engagement.

Professor Taylor — an ethnographer whose work sits at the intersection of sociology; media studies; and science, technology, and society — will be focusing on her current project exploring the rise of “immersion” in physical spaces as a contemporary cultural pursuit. While new entertainment undertakings like The Sphere in Las Vegas, interactive theater like Sleep No More, or Meow Wolf’s growing list of city-based immersive art projects have captured popular attention, Taylor’s project turns to their progenitor, a much older, more widespread instantiation of the immersive experience — the theme park.

Building on fieldwork undertaken over the last several years in Disney parks around the world, as well as interviews with both designers and attendees, she will be working on a new book that examines theme parks as sitting at the analytically rich intersection of design, infrastructure, and play. Extending her influential work on digital environments and online communities, this project bridges from game and virtual world studies to an examination of physical, immersive environments.

As in her prior work, Taylor treats leisure as an area of study worth taking seriously. As with gaming, there is a tendency to underestimate, or simply dismiss, the economic and cultural significance of these environments. In 2025, theme parks worldwide boasted 976 million visitors, and the Walt Disney Co.’s “Experiences” division alone reported $10 billion in profit that year. Spaces of play and experiential engagement also regularly embody some of our most pressing contemporary conversations. Theme parks, she notes, are “at the heart of economic and media systems, technological development, and cultural imaginaries despite — like video games before them — often being dismissed as peripheral to ‘serious’ matters.”

The fellowship project frames theme parks as simultaneously operating on several levels: intentionally designed worlds “that invite people to step into them,” socio-technical infrastructures “meant to facilitate affective, embodied experience,” and as “playgrounds” that sometimes afford participation beyond corporate control and governance.

At the center of the work is a tension familiar from digital environments. “You invite people into a designed space,” she says, “but what happens when emergent culture collides with expectations of use?” One of the most interesting examples of this tension she has encountered in her fieldwork is fan-organized live-action role-play within a theme park, a moment in which the environment functions as a playground for emergent experience within an otherwise tightly controlled commercial frame.

The CASBS fellowship will offer Taylor the time and intellectual cross-pollination needed to best situate, and even challenge, her new work. The program’s interdisciplinary cohort is drawn from across the social sciences, humanities, law, health, and other fields; it includes 36 scholars from 30 institutions. “It’s an amazing opportunity to work through the data and write in a really vibrant setting where conversation and cross-disciplinary engagement is at the heart of the experience,” she says.

New study bridges the worlds of classical and quantum physics

Tue, 04/21/2026 - 7:05pm

When you throw a ball in the air, the equations of classical physics will tell you exactly what path the ball will take as it falls, and when and where it will land. But if you were to squeeze that same ball down to the size of an atom or smaller, it would behave in ways beyond anything that classical physics can predict.

Or so we’ve thought.

MIT scientists have now shown that certain mathematical ideas from everyday classical physics can be used to describe the often weird and nonintuitive behavior that occurs at the quantum, subatomic scale.

In a paper appearing today in the journal Proceedings of the Royal Society, the team shows that the motion of a quantum object can be calculated by applying an idea from classical physics known as “least action.” With their new formulation, they show they can arrive at exactly the same solution as the Schrödinger equation — the main description of quantum mechanics — for a number of textbook quantum-mechanical scenarios, including the double-slit experiment and quantum tunneling.

Such mysterious phenomena, which previously could be understood only through the equations of quantum mechanics, can now also be described using the team’s new classical formulation. In essence, the researchers have built an exact mathematical bridge between the classical, everyday physical world and the world at dimensions smaller than an atom.

“Before, there was a very tenuous bridge that worked only for reasonably large [quantum] particles,” says study co-author Winfried Lohmiller, a research associate in the Nonlinear Systems Laboratory at MIT. “Now we have a strong bridge — a common way to describe quantum mechanics, classical mechanics, and relativity, that holds at all scales.”

“We’re not saying there’s anything wrong with quantum mechanics,” emphasizes co-author Jean-Jacques Slotine, an MIT professor of mechanical engineering and information sciences, and of brain and cognitive sciences. “We’re just showing a different way to compute quantum mechanics, which is based on well-known classical ideas that we put together in a simple way.”

To infinity and far below

Slotine and Lohmiller derived the quantum bridge while working on solidly classical problems. The researchers are members of the MIT Nonlinear Systems Laboratory, which Slotine directs. He and his colleagues develop models to describe complex behavior in problems of robotic and aircraft control, neuroscience, and machine learning. To predict the behavior of such systems, engineers often look to the Hamilton-Jacobi equation, which is one of the major formulations of classical mechanics and is related to Newton’s famous laws of motion.

The Hamilton-Jacobi equation essentially represents an object’s motion as minimizing a quantity called the action. Take, for instance, a simple scenario in which a ball is thrown from point A to point B. Theoretically, the ball could take any number of zigzagging paths between the two points. But the equation states that the actual path should be one where the ball’s “action” is minimized at every single point along that path.

In this case, the term “action” refers to the sum over time of the difference between an object’s kinetic energy (the energy that is generating the motion) and its potential energy (the object’s stored energy). The actual path that a ball takes between points A and B should then be a sequence of positions where the overall difference between kinetic and potential energy is minimized.
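
In the standard textbook notation (not drawn from the paper itself), the action is the time integral of the kinetic energy $T$ minus the potential energy $V$, and the principle of least action says the path actually taken makes this integral stationary:

$$ S = \int_{t_A}^{t_B} \big( T - V \big)\, dt, \qquad \delta S = 0 . $$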

Slotine and Lohmiller were applying the Hamilton-Jacobi equation, and the principle of least action, to a number of classical mechanics problems with constraints when they realized that the equation, with some mathematical extensions, could solve a famous problem in quantum mechanics known as the double-slit experiment.

The double-slit experiment illustrates one of the weird, nonclassical behaviors that arises at quantum scales. In the experiment, two slits are cut out of a metal wall. When a single photon — a quantum-scale particle of light — is shot toward the wall, classical physics predicts that you should see a spot of light on the other side of the wall, assuming that the photon flew straight through either one of the holes, following a single path.

But experimentalists have instead observed alternating bright and dark stripes. The reality-bending pattern is a result of a quantum mechanical phenomenon by which a photon takes more than one path simultaneously. In this context, when a single photon is shot toward the wall, it can pass through both holes at the same time, along two paths that end up interfering with each other. The pattern of stripes that results means that the photon’s two interfering paths must be wave-like. The experiment therefore demonstrates how a quantum particle can also behave, however improbably, like a wave.

Since the discovery of quantum mechanics, physicists have tried to explain the double-slit experiment using tools from classical, everyday physics. But they’ve only ever been able to approximate the experiment’s results.

Even the noted physicist Richard Feynman ’39 found the task impossible. He assumed that one would have to consider and average over every single theoretical path that a photon could take, whether it be a straight line or any variation of a zigzagging path through either of the two holes. Such an exercise would require calculating an infinite number of possible zigzag paths, none of which resembles the smooth classical path one would expect.

This last point is what Slotine and Lohmiller realized could be tweaked. Where classical physics assumes that an object must only take a single path from point A to B, quantum mechanics allows for an object to take multiple paths and multiple states simultaneously — a fundamental quantum property known as superposition.

The team wondered: What if classical physics could also entertain, at least mathematically, this notion of multiple paths? Then, they reasoned that an infinite number of paths wouldn’t have to be calculated. Instead, a much smaller number of “least action” classical paths might produce the exact same quantum result.

With this idea in mind, they looked back to the Hamilton-Jacobi equation to see how they might adapt its principles of least action to predict the double-slit experiment and other quantum phenomena.

“For a while we thought it was a little too good to be true,” Slotine says.

A particle’s destiny is in its density

In their new study, the team adds another ingredient of classical physics: “density,” which is, essentially, a probability that a given path is taken.

“We think of density in terms of fluid dynamics,” Lohmiller explains. “For the double-slit experiment, imagine pumping a hose toward the wall. What will happen is, most of the water will hit the center, but some droplets will also go toward the sides. A high density of water at the center means there is a high probability of finding a droplet along that path. And there will be a distribution, which we can compute.”

He and Slotine tweaked the Hamilton-Jacobi equation to include terms of density and multiple least action paths, and applied it to the double-slit experiment. They found that with this formulation, they only had to consider two classical paths through the two slits, as compared to Feynman’s infinity of zigzag paths. Ultimately, their calculations of classical density and action produced a wave function, or distribution of most probable paths that a photon could take, that was exactly the same as what was predicted by the Schrödinger equation, which is the central equation used to describe quantum-mechanical behavior.

“We show that Schrödinger’s equation of quantum mechanics and the Hamilton-Jacobi equation of classical physics are actually identical given a suitable computation of density,” Slotine says. “That’s a purely mathematical result. We’re not saying that quantum phenomena happen at classical scales. We’re saying you can compute this quantum behavior with very simple classical tools.”
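
For readers who want the textbook version of such a correspondence, the Madelung transformation (offered here only as standard background, not necessarily the authors’ exact construction) writes the wave function in terms of a density and an action, $\psi = \sqrt{\rho}\, e^{iS/\hbar}$. Substituting this into Schrödinger’s equation splits it into a continuity equation for the density $\rho$ and a Hamilton-Jacobi-type equation for the action $S$:

$$ \frac{\partial \rho}{\partial t} + \nabla \cdot \Big( \rho\, \frac{\nabla S}{m} \Big) = 0, \qquad \frac{\partial S}{\partial t} + \frac{|\nabla S|^2}{2m} + V - \frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0 . $$

The final, density-dependent term is the only piece that separates the quantum equation from its classical counterpart.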

In addition to the double-slit experiment, the researchers showed the reworked equation can also predict other quantum mechanical behavior, such as quantum tunneling, in which particles such as electrons can pass through energy barriers in a way that would not be possible according to classical physics. They could also derive the exact quantum wave of the electron in a hydrogen atom from the classical orbit of a planet. Finally, they revisited from this perspective the famous Einstein-Podolsky-Rosen experiment, which started the modern study of quantum entanglement.

The researchers envision that scientists could use the new formula as a simple method to predict how certain quantum systems and devices will perform.

“There could be important implications for quantum computing, where quantum bits have these nonlinear energies that physicists must approximate, or for better understanding problems involving both quantum physics and general relativity,” Slotine offers. “In principle at least, we should now be able to characterize this quantum behavior exactly, with simple classical tools, and show that it’s not so mysterious after all.”

Two MIT alumnae named 2026 Gates Cambridge Scholars

Tue, 04/21/2026 - 6:35pm

Mitali Chowdhury ’24 and Christina Kim ’24 have been selected as 2026 Gates Cambridge Scholars. The highly competitive fellowship offers fully funded opportunities for postgraduate study in any field at Cambridge University in the U.K. Kim is a second-time Gates Cambridge Scholar.

MIT students interested in the Gates Cambridge Scholar program should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.

Mitali Chowdhury

Chowdhury graduated from MIT with a BS in biological engineering and minors in both urban planning and environment and sustainability. She has had a longstanding interest in reducing inequities in global health. At MIT, she pursued research in point-of-care diagnostics to identify and treat disease with accessible biotechnologies. She also helped develop low-cost testing for bacterial contamination in water in South Asia.

Chowdhury currently works at a startup advancing sequencing-based diagnostics. At Cambridge University, she will study for MPhil and PhD degrees in the Centre for Doctoral Training in Sensor Technologies. Her research will focus on CRISPR-based diagnostics to address antimicrobial resistance and expand equitable access to care.

Christina Kim

After graduating from MIT with a bachelor’s degree in chemistry and biology, Kim worked as a researcher in women’s health at the Wellcome Sanger Institute in Cambridge, U.K. 

As a 2025 Gates Cambridge Scholar, Kim pursued an MPhil in research at the institute, focusing on using bioinformatics and tissue engineering to design novel in vitro models. Her second Gates Cambridge scholarship will fund her PhD studies.

How morality and ethics shaped India’s economic development

Tue, 04/21/2026 - 6:30pm

In a world leaning away from globalization, governments face a tough choice: Should they block dominant foreign companies to protect local businesses, or welcome them in hopes of fast-tracking economic growth and modernization? 

In his recently published book, “Traders, Speculators, and Captains of Industry: How Capitalist Legitimacy Shaped Foreign Investment Policy in India” (Harvard University Press, November 2025), Jason Jackson, associate professor in political economy and urban planning in the MIT Department of Urban Studies and Planning, explains that these policy decisions aren’t just math, but long-standing and often heated moral debates over how businesses should conduct themselves, and who they serve.

Jackson argues that morality has a long history in economics and deserves more attention because, while ever-present in economic policy discourse, moral beliefs are often under-recognized or underappreciated.

“India is an exemplary case of ways in which moral beliefs shape economic policy decisions,” says Jackson. “But at the same time, I think it’s representative of a general feature of capitalism. It’s the perfect case.”

Jackson’s focus on India for this book stems from his interest in industrial policy and the politics of international development. Multinational firms have long been a source of controversy. They are seen as bringing two crucial resources to developing countries: finance and technology. However, while multinationals are potentially valuable contributors to economic development through the mechanism of foreign direct investment (FDI), they can also be monopolistic, dominating local industries and displacing domestic firms.

This long-standing tension in foreign investment policy became the backdrop for several emerging markets in developing countries — Brazil, Russia, India, China, and South Africa (BRICS) — in the early 2000s. India was growing at an extremely high rate — 6-7 percent annually — and Indian companies were doing well, including those in industries that were seen as key to development, such as autos. Jackson wanted to understand why Indian companies were holding their own relative to foreign firms, which dominated manufacturing in many other places, and planned to focus on the period from the 1980s through the 2010s, which coincides with the period of economic liberalization in India and, more broadly, with globalization. But while conducting field work, Jackson noticed that in describing how they made industrial policy decisions, Indian policymakers drew distinctions between firms that were framed in moral terms. There were some firms that policymakers believed would invest in technology and provide good jobs, and other firms — both foreign and domestic — seen as exploitative and not interested in engaging in activities that would advance economic growth and industrial transformation.

“I realized these distinctions had deep salience,” says Jackson. “My interlocutors would describe firms — especially foreign firms they saw as simply trading, or as exploitative — as ‘New East India’ companies, referencing the famous East India Company that was the governance authority in colonial India, but had been defunct for more than 150 years. That forced my research to become more historical, increasingly relying on archival work to make sense of these moralized distinctions between different types of business actors, whether foreign or domestic, and to understand how these beliefs became so powerful across Indian society.”

“Moral categories of capitalist legitimacy”

Jackson says there are several ways in which social scientists think that policymakers make decisions. One view considers the competing interest groups policymakers must negotiate with, in which case outcomes may depend on one group having more influence or power than others. Another approach assumes these individuals make decisions based on self-interest, particularly when their choices are perceived as corrupt.

“But what I found is that neither of these approaches gave enough credence to the ways in which policymakers in India grapple with quite technical and complex policy decisions regarding the type of development they want to promote in their country, and the types of companies they thought could help to achieve their development goals,” says Jackson. “Therefore, I was more interested in trying to understand what kind of ideas and beliefs animated their decision-making.”

What Jackson found was that Indian policymakers viewed both foreign firms and local Indian companies through what he terms “moral categories of capitalist legitimacy.” Would these firms invest in productive technologies? Would they provide good employment for the local population? Or would they be exploitative? These criteria were not only applied to multinational corporations. Even Indian family-controlled business groups were evaluated as to whether the gains accrued stayed within the confines of the extended family or whether they provided broader societal benefits. 

Coca-Cola goes to India

The story of Coca-Cola in India is an example of the tension experienced with regulating foreign investment where multinational companies were seen as exploitative. The company made its initial foray into India in the 1950s, and over the next two decades its reach became extensive. In the late 1970s, India’s Minister of Industry George Fernandes was visiting a village in Bihar — a state with one of the highest levels of poverty — when he asked for a glass of water. Instead, he was told the water was not suitable to drink, and was given Coca-Cola.

“This struck Fernandes as deeply problematic,” says Jackson. “He later recalled thinking that ‘after 30 years of freedom in India, our villages do not have clean drinking water, but they do have Coca-Cola — which, of course, is made with purified water, so safe to drink. How was this possible?’” Fernandes returned to his office in New Delhi determined to do something about it.

Just a few years earlier, India had passed a law, the Foreign Exchange Regulation Act (FERA), which required foreign companies to dilute their equity to no more than 40 percent. The law was explicitly designed to encourage technology transfer, but Coca-Cola had not complied. Fernandes told Coca-Cola that it had to take on an Indian partner or it would have to leave. Coca-Cola chose the latter. In the following year, IBM was also kicked out of India when it similarly balked at complying with FERA and sharing its technology.

“These companies were very much seen in the mold of the East India Co.,” says Jackson. “A firm comes from abroad and extracts resources from India while giving little benefit to the country. These are all very clearly morally coded beliefs that played a crucial role in these policy decisions.”

With Coca-Cola out of India, the beverage market became wide open, and several Indian companies emerged. Thums Up, an Indian cola brand — founded by Ramesh Chauhan ’62 — took off and became the dominant cola by the 1980s. Chauhan developed the brand’s unique formula independently.

In 1991, India accelerated its economic liberalization, especially around FDI, and FERA’s standards were diluted. Coca-Cola returned to India, again without a partner. Other major brands, including Pepsi, had also entered the market. By then, Thums Up had a market share in India of well over 80 percent, but, concerned with its ability to compete in a war between the deep-pocketed American multinational giants, Thums Up sold out to Coca-Cola for $60 million in 1993, a figure that was later deemed to be small.

Trader, speculator, or captain of industry?

Jackson says that in India, there were two competing interpretations of this story. In one version, Fernandes kicking out a global multinational firm was seen as a developing country establishing its economic sovereignty by making a bold policy decision and “risking all kind of geopolitical blowback that might follow from the U.S.,” says Jackson. “In this view, the Indian government’s bold move allowed local entrepreneurs and local companies like Chauhan and Thums Up to emerge.”

Yet an important counter-narrative emerged that challenged the view that companies like Thums Up and figures like Chauhan were enterprising entrepreneurs.

“Maybe they just took advantage of protectionism to form a company and make some money,” says Jackson. “So rather than being an intrepid captain of industry, observers wondered whether maybe Chauhan was ‘simply a trader’ who took advantage of policy protection, but sold out as soon as the market became competitive.”

Later developments added some credibility to this view. Ironically, Coca-Cola was unable to remove Thums Up and Limca, another soda brand from Chauhan’s company, from its product lineup, and both remained extremely popular and widely consumed. This suggested to many observers that Thums Up could have survived the cola wars had it not sold out to the American multinational. The public had acquired a taste for the distinctly Indian beverages that Chauhan had created.

“This narrative encapsulates this kind of tension policymakers face: If we provide policy support to our enterprising entrepreneurs and they thrive, will they also do well for the country? Or are they simply opportunists who will take advantage of policy support in ways that benefit themselves but have little broader benefit to the country?” says Jackson.

This episode was just one of dozens of instances of conflicts between Indian companies and multinational firms in the liberalizing 1990s and 2000s, which the government was often compelled to adjudicate. Throughout this period, the question persisted: How would policymakers identify the business figures who could be agents of industrial development and economic transformation, whether foreign or domestic? 

Ramesh Chauhan, for one, continued on an enterprising path. He turned his attention to the bottled water industry in India, and his brand — Bisleri — remains one of the country’s leading bottled water brands today.

PSFC showcases technologies applicable to both fusion and geothermal energy during representative’s visit

Tue, 04/21/2026 - 6:00pm

The MIT Plasma Science and Fusion Center (PSFC) showcased its high-temperature superconducting (HTS) magnet technology, essential for fusion energy and increasingly relevant to superhot geothermal applications, to Representative Jake Auchincloss (D-Mass.) during his March 12 visit.

High-field electromagnets are required to confine plasma in fusion reactors, and PSFC’s HTS technology enables dramatically higher magnetic fields, allowing for more compact and cost-effective reactor designs. The same HTS technology can also be applied to gyrotrons, which are high-power microwave sources that operate more efficiently at higher frequencies, enabling new energy applications.

One such application is millimeter-wave drilling for superhot geothermal energy, where microwave energy is used to heat, melt, or vaporize rock. Because drilling rates scale with input power and costs increase less rapidly with depth than in conventional drilling, this approach could overcome key economic barriers to accessing deep geothermal resources and enable scalable, baseload clean energy.

During Auchincloss’ recent visit to PSFC, MIT researchers explained the technology development and testing underway to take millimeter-wave technology from laboratory to the real world.

“I visited MIT’s Plasma Science and Fusion Center to learn more about the science and engineering necessary to make this technology work at utility scale. Superhot geothermal uses microwaves to melt rock, going much deeper and hotter than is possible with contact drilling. This can generate clean, baseload power in America east of the Rocky Mountains, where the geology has conventionally not been suitable for industrial geothermal,” says Auchincloss.

“The technology is still years away from working in a state with ‘cool rock’ like Massachusetts, but the ultimate benefit for the Bay State could be tremendous. In addition to lower utility bills, a new industry with good jobs could thrive here. Indeed, this is already starting to happen, as spinouts from MIT — and the suppliers for these spinouts — are already setting up shop in Massachusetts,” he says.

Staff from MIT startup Quaise Energy participated in Auchincloss’ visit to PSFC. Quaise Energy, which has an office in Cambridge, completed a successful drilling demonstration using gyrotron-based millimeter-wave technology last fall in Texas. One of the first rounds of MIT Energy Initiative (MITEI) seed funding provided support for PSFC’s initial development of the technology in 2008.

Superhot rock geothermal energy refers to tapping temperatures of nearly 400 degrees Celsius to generate large amounts of electricity. Conventional drilling approaches can fail at the great depths (several kilometers) and high temperatures required to reach this geothermal resource. The millimeter-wave drilling technology invented at PSFC and being commercialized by Quaise Energy could be faster and more effective than conventional drilling, especially at high temperatures and great depths. PSFC is planning a new laboratory facility to further study millimeter-wave drilling and test improvements to the existing technology.

“This initiative will leverage MIT’s extensive capabilities in geophysics, geochemistry, millimeter-wave technology, and AI, along with existing infrastructure including power, water, and experimental facilities. The goal is to anchor next-generation geothermal innovation within an integrated academic-industry ecosystem, accelerating technology maturation, de-risking deployment pathways, and developing the needed workforce,” says Steve Wukitch, the interim director and a principal research scientist at PSFC.

Oliver Jagoutz, the Cecil and Ida Green Professor of Geology and director of the Earth Resources Laboratory (ERL), also participated in the representative’s visit to PSFC. ERL is teaming with PSFC on the planned laboratory facility for testing millimeter-wave drilling under representative pressure and temperature conditions and on realistic rock samples.

Earlier in March, MITEI’s Spring Symposium, titled “Next-generation geothermal for firm power,” explored the current state of the geothermal industry, innovative technologies, and the opportunities ahead. During the symposium, Wukitch served as moderator of a panel on drilling advances and described the planned PSFC laboratory facility for millimeter-wave testing, and Quaise Energy’s Matt Houde described the company’s recent advances and future plans. On the following day, MITEI and the Clean Air Task Force co-hosted a gathering of MITEI member companies, next-generation geothermal companies, and investors for a GeoTech Summit, titled “Accelerating geothermal technology, projects, and deal flow.”

Tackling the housing shortage with robotic microfactories

Tue, 04/21/2026 - 5:45pm

A national housing shortage is straining finances and communities across the United States. In Massachusetts, at least 222,000 homes will have to be built in the next 10 years to meet the population's needs. At the same time, there are numerous challenges in traditional construction. There's a shortage of skilled construction workers. Most projects involve multiple contractors and subcontractors, adding complexity and lag time. And the construction process, as well as the buildings themselves, can be a major source of emissions that contribute to climate change.

Reframe Systems, co-founded by Vikas Enti SM '20, uses robotics, software, and high-performance materials to address these problems. Founded in 2022, the company deploys microfactories that bring housing fabrication and production closer to the regions where the homes are needed. The first homes designed and manufactured in Reframe's first microfactory have been fully built in Arlington and Somerville, Massachusetts. 

Enti's experiences in MIT System Design and Management (SDM) shaped the company from its start. "Learning how to navigate the system and finding the optimal value for each stakeholder has been a key part of the business strategy," he says, "and that's rooted in what I learned at SDM."

Better tools for system-level problems

Enti applied to SDM's master of science in engineering and management while he was working at Kiva Systems, overseeing its acquisition by Amazon and transformation into Amazon Robotics. He found that the SDM program's fundamentals of systems engineering, system architecture, and project management provided him with the tools he needed to address system-level problems in his work.

While he was at MIT, Enti also served as an associate director for the MIT $100K Entrepreneurship Competition, which offers students and researchers mentorship, feedback, and potential funding for their startup ideas. He realized that "there isn't a single formula for how businesses start, or how long it takes to get them started," he says, which helped shape his plans to start his own business.

Enti took a leave of absence from MIT to oversee the expansion of Amazon Robotics in Europe. He returned and completed his degree in 2020, writing his thesis on developing technology that could mitigate falls for elderly people. This instinct to use his education for a good cause resurfaced when his daughters were born. He wanted his future business to address a real-world problem and have a social impact, while also reducing carbon emissions.

Growing housing, shrinking emissions

Enti concluded that housing, with immediate real-world impact and a significant share of global carbon emissions, was the right problem to work on. He reached out to his colleagues Aaron Small and Felipe Polido from Amazon Robotics to share his idea for advanced, low-cost factories that could be deployed quickly and close to where they were needed. The two joined him as co-founders.

Currently, the microfactory in Andover, Massachusetts, produces structural panels, with robotics completing wall and ceiling framing and people completing the rest of the work, including wiring and plumbing. Eventually, Reframe hopes to automate more of the building process through further use of robotics. The modular construction process allows for reduced waste and disruption on the eventual home site. And the finished homes are designed to be energy-efficient and ready for solar panel installation. The company is set to start work soon on a group of homes in Devens, Massachusetts.

In addition to the Andover location, Reframe is setting up in southern California to help rebuild homes that were destroyed in the area's January 2025 wildfires. The company's software-assisted design process and the adjustability of the microfactories allow it to meet local zoning and building codes and align with the local architectural aesthetic. This means that in Somerville, Reframe's completed buildings look like modernized versions of the neighboring three-story buildings, known locally as "triple-deckers." On the other side of the country, Reframe's design offerings include Spanish-style and craftsman homes.

"Housing is a complex systems problem," Enti says, explaining the impact SDM has had on his work at Reframe. The methods and tools taught in the integrated core class EM.412 (Foundations of System Design and Management) help him tackle systems-level problems and take the needs of multiple stakeholders into account. The Reframe team used technology roadmapping as they devised their overall business plan, inspired by the work of Olivier de Weck, associate head of the MIT Department of Aeronautics and Astronautics. And lectures on project management from Bryan Moser, SDM's academic director, remain relevant. 

"Embracing the fact that this is a systems problem, and learning how to navigate the system and the stakeholders to make sure we're finding the optimal value, has been a key part of the business strategy," Enti says.

Reframe Systems is set to continue learning through iteration as it plans to expand its network of microfactories. The company remains committed to the core vision of sustainably meeting the country's need for more housing. "I'm grateful we get to do this," Enti says. "Once you strip away all the robotics, the advanced algorithms, and the factories, these are high-quality, healthy homes that families get to live in and grow."

How to expand the US economy

Tue, 04/21/2026 - 12:00pm

It’s an essential insight about our world: Innovation drives economic growth. For the U.S. to thrive, it must keep innovating. But how, and in what areas?

A new book co-authored by MIT faculty members focuses on six key areas where technology advances can drive the economy and support national security.

Those sectors — semiconductors, biotechnology, critical minerals, drones, quantum computing, and advanced manufacturing — are all built on U.S. know-how but are also areas where the country has either yielded a lead in production or innovation, or could yet fall behind.

As the book explains, a roadmap for U.S. prosperity and security involves sustaining notable areas of innovation and the national research ecosystem behind them, while rebuilding domestic manufacturing.

“In each of these areas, there are breakthroughs to be had, where the U.S. can leapfrog competitors and gain an advantage,” says Elisabeth Reynolds, an MIT expert on industrial innovation and editor of the new volume. “That’s a very exciting part of this.” She adds: “These areas are front and center for U.S. national economic and security policy.”

The book, “Priority Technologies: Ensuring U.S. Security and Shared Prosperity,” is published this week by the MIT Press. It features chapters by MIT faculty with expertise on the industrial sectors in question. Reynolds, a professor of the practice in MIT’s Department of Urban Studies and Planning, is a leading expert on industrial innovation and has long advocated for innovation-based growth that helps the U.S. workforce.

“All of this can be good for everyone,” says MIT economist Simon Johnson, who wrote the foreword to the book. “Out of that flow of innovations and ideas, we can create more good jobs for all Americans. Pushing the technological frontier and turning that into jobs is definitely going to help.”

Making more chips

“Priority Technologies” grew out of an ongoing MIT seminar by the same name, which Reynolds and Johnson began holding in 2023, often with appearances by other MIT faculty.

Both Reynolds and Johnson bring vast experience to the subject of innovation and production. Among other things, Reynolds headed MIT’s Industrial Performance Center for over a decade and was executive director of the MIT Task Force on the Work of the Future. She served in the White House National Economic Council as special assistant to the president for manufacturing and development.

Johnson, the Ronald A. Kurtz (1954) Professor of Entrepreneurship at the MIT Sloan School of Management, shared the 2024 Nobel Prize in economics, with MIT’s Daron Acemoglu and the University of Chicago’s James Robinson, for work about the historical relationship between institutions and economic growth. He has co-authored numerous books, including, with Acemoglu, the 2023 book “Power and Progress,” about the trajectory and implications of artificial intelligence.

As it happens, “Priority Technologies” does not focus on AI, instead opting to examine other vital, and often related, areas of innovation.

“We do not think this is the entire list of priority technologies,” Johnson says. “This is a partial list, and there are lots of other ideas.”

In the chapter on semiconductors, Jesús A. del Alamo, the Donner Professor of Science in MIT’s Department of Electrical Engineering and Computer Science, calls them “the oxygen of modern society.” This U.S.-born industry has seen a large manufacturing shift away from the country, however, leaving it vulnerable in terms of security and the economy; about one-third of inflation experienced in 2021 stemmed from a chip shortage. As he notes, the U.S. is now in the process of rebuilding its capacity to make leading-edge logic chips, for one thing.

“With semiconductors, people thought the U.S. could lose the manufacturing, stay on top of the innovation and design side, and would be fine,” Reynolds says. “But it’s turned out to make the country quite vulnerable. So we’ve had a massive shift to rebuild semiconductor manufacturing capabilities here in the U.S., and I would argue that’s been a successful strategy in recent years.”

Bringing biotech back home

In biotechnology, relocating manufacturing in the U.S. is also key, using new technologies in the process. As J. Christopher Love, the Laurent Professor of Chemical Engineering, puts it in his chapter, while the U.S. is the leader in biotech research, it “lacks the manufacturing infrastructure and expertise necessary to bring these ideas to the market at the same pace as it generates innovative new products.” Among other remedies, he suggests that smaller, more flexible production facilities can help the U.S. “leapfrog” other countries on the manufacturing side. Love is also co-director of MIT’s Initiative for New Manufacturing, which aims to drive advances in U.S. production across industries.

“We have tremendous biotech innovation, we’re the leaders, but we have a bottleneck when it comes to manufacturing,” Reynolds observes. “If we can break through that with new technologies, new production processes, we’re in a position to make us less vulnerable, from a supply chain point of view, and capture more of what is going to be a $4 trillion market over the next 15 years.”

A similar story holds in other areas. Many drone innovations were developed in the U.S., while much manufacturing has shifted to China. Fiona Murray, the William Porter (1967) Professor of Entrepreneurship, writes that the U.S. has an “opportunity to rebuild its production at scale,” although that will also require significant strengthening of its supply chains, too.

Elsa Olivetti, the Jerry McAfee (1940) Professor of Engineering and a professor of materials science and engineering, recommends a multifaceted approach to help the U.S. regain traction in the production of critical minerals, including better forms of extraction, manufacturing, and recycling, to reduce potential scarcities.

And in the quantum computing chapter, two MIT co-authors — William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and a professor of physics; and Jonathan Ruane, a senior lecturer at MIT Sloan — note that the sector could help accelerate drug discovery, materials science, and energy applications. Noting that the U.S. still leads in private-sector investment in the field but trails China in public-sector investment, they urge more research support and stronger supply chains for quantum computing components, among other recommendations.

“The country that achieves quantum leadership will gain decisive advantages in these strategically important industries,” they write.

The university engine

From industry to industry, the book makes clear that certain key issues are broadly important to U.S. competitiveness and growth. The partnership between the federal government and the world-leading research capacities of U.S. universities, for one thing, has given the country an initial lead in many economic sectors and promises to continue driving innovation.

At the same time, the U.S. would benefit from expanding and strengthening its domestic supply chains as it builds up more domestic manufacturing, and it needs capital investment that supports growth in physical, hardware-intensive industries.

“These common themes include supply chain resilience and manufacturing capability,” Reynolds says. “Can we help drive the country’s innovation ecosystem through expansion of our industrial system and manufacturing? That’s a big question.”

On the research front, she reflects, over the years, “It’s been amazing how much MIT-led research has aligned with national priorities — or maybe that’s not so surprising.”

The partnership between the U.S. federal government and universities as research engines was formalized in the 1940s, thanks in part to then-MIT president Vannevar Bush. According to some estimates, government investment in non-defense research and development alone has accounted for up to 25 percent of U.S. economic growth since World War II.

“Vannevar Bush realized it wasn’t about a stock of technology, it was about a flow of innovation,” Johnson says. “And that brilliant insight is still relevant today. I think that is the insight of the last century. And that’s what we’re trying to capture and reiterate and repeat.”

“This is not even the future. This is current.”

Scholars and industry leaders have praised “Priority Technologies.” Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon University, has stated that when it comes to “ensuring American national security, economic competitiveness, and societal well-being,” the book underscores “the positive role technology can play in those outcomes.” Hemant Taneja, CEO of the venture capital firm General Catalyst, calls the volume “required reading for anyone interested in building the abundant, resilient future America deserves.”

For their part, Reynolds and Johnson hope the book will draw many kinds of readers interested in the economy, innovation, prosperity, and national security.

“We tried to make the volume accessible,” Reynolds says, noting that the book directly lays out “challenges for the country, and what we see as recommendations for next steps in how we position the country to succeed, and lead globally. Each of these chapters has something important to say.”

Johnson also notes the MIT scholars participating in the project want to enhance the ongoing policy conversation, in Washington and across the country, about supporting innovation and using it to drive U.S. economic and technological leadership.

“One reason to write a book is, you can’t pound the table with a podcast,” quips Johnson, who co-hosts a podcast, “Power and Consequences,” on major policy issues. In conversations with political leaders and their staffs, he adds, there is a core message to be transmitted about America and technology-driven growth: We have the knowledge and resources, but need to focus on supporting innovation while trying to increase domestic production.

“Here are the technologies we currently need,” Johnson says. “This is not imagination, this is not fanciful, this is not science fiction. This is not even the future. This is current. These are the technologies needed to defend the country and its interests. And we need to invest in these, and in everything we need to drive them forward.”

Managing traffic in space

Sun, 04/19/2026 - 10:05am

Chances are, you’ve already used a satellite today. Satellites make it possible for us to stream our favorite shows, call and text a friend, check weather and navigation apps, and make an online purchase. Satellites also monitor the Earth’s climate, the extent of agricultural crops, wildlife habitats, and impacts from natural disasters.

As we’ve found more uses for them, satellites have exploded in number. Today, there are more than 10,000 satellites operating in low-Earth orbit. Another 5,000 decommissioned satellites drift through this region, along with over 100 million pieces of debris comprising everything from spent rocket stages to flecks of spacecraft paint.

For MIT’s Richard Linares, the rapid ballooning of satellites raises pressing questions: How can we safely manage traffic and growing congestion in space? And at what point will we reach orbital capacity, where adding more satellites is not sustainable, and may in fact compromise spacecraft and the services that we rely on?

“It is a judgement that society has to make, of what value do we derive from launching more satellites,” says Linares, who recently received tenure as an associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the things we try to do is approach these questions of traffic management and orbital capacity as engineering problems.”

Linares leads the MIT Astrodynamics, Space Robotics, and Controls Lab (ARCLab), a research group that applies astrodynamics (the motion and trajectory of orbiting objects) to help track and manage the millions of objects in orbit around the Earth. The group also develops tools to predict how space traffic and debris will change as operators launch large satellite “mega-constellations” into space.

He is also exploring the effects of space weather on satellites, as well as how climate change on Earth may limit the number of satellites that can safely orbit in space. And, anticipating that satellites will have to be smarter and faster to navigate a more cluttered environment, Linares is looking into artificial intelligence to help satellites autonomously learn and reason to adapt to changing conditions and fix issues onboard.

“Our research is pretty diverse,” Linares says. “But overall, we want to enable all these economic opportunities that satellites give us. And we are figuring out engineering solutions to make that possible.”

Grounding practical problems

Linares was born and raised in Yonkers, New York. His parents both worked as school bus drivers to support their children, Linares being the youngest of six. He was an active kid and loved sports, playing football throughout high school.

“Sports was a way to stay focused and organized, and to develop a work ethic,” Linares says. “It taught me to work hard.”

When applying for colleges, rather than aim for Division I schools like some of his teammates, Linares looked for programs that were strong in science, specifically in aerospace. Growing up, he was fascinated with Carl Sagan’s “Cosmos” docuseries. And being close to Manhattan, he took regular trips to the Hayden Planetarium to take in the center’s immersive projections of space and the technologies used to explore it.

“My interest in science came from the universe and trying to understand our place within it,” Linares recalls.

Choosing to stay close to home, he applied to in-state schools with strong aeronautical engineering departments, and happily landed at the State University of New York at Buffalo (SUNY Buffalo), where he would ultimately earn his bachelor’s, master’s, and doctoral degrees, all in aerospace engineering.

As an undergraduate, Linares took on a research project in astrodynamics, looking to solve the problem of how to determine the relative orientation of satellites flying in formation.

“Formation flying was a big topic in the early 2000s,” Linares says. “I liked the flavor of the math involved, which allowed me to go a layer deeper toward a solution.”

He worked out the math to show that when three satellites fly together, they essentially form a triangle, the angles of which can be calculated to determine where each satellite is in relation to the other two at any moment in time. His work introduced a new controls approach to enable satellites to fly safely together. The research had direct applications for the U.S. Air Force, which helped to sponsor the work.
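To make the geometry concrete, here is a minimal Python sketch showing how the interior angles of the triangle formed by three satellite positions can be computed. It is only an illustration of the underlying geometry, not the controls approach developed in the research, and the positions and reference frame are invented for the example.

    import numpy as np

    def triangle_angles(p1, p2, p3):
        # Interior angles (in degrees) of the triangle formed by three satellite
        # positions, given as 3-D coordinates in any shared reference frame.
        def angle_at(a, b, c):
            # Angle at vertex a, between the vectors pointing toward b and c.
            u, v = b - a, c - a
            cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
        return angle_at(p1, p2, p3), angle_at(p2, p3, p1), angle_at(p3, p1, p2)

    # Hypothetical example: three satellites a few kilometers apart (positions in km).
    sats = [np.array([7000.0, 0.0, 0.0]),
            np.array([7000.0, 5.0, 0.0]),
            np.array([7003.0, 2.0, 0.0])]
    print(triangle_angles(*sats))  # the three angles sum to roughly 180 degrees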

As he expanded the research into a master’s thesis, Linares also took opportunities to work directly with the Air Force on issues of satellite tracking and orientation. He served two internships with the U.S. Air Force Research Lab, one at Kirtland Air Force Base in Albuquerque, New Mexico, and the other in Maui, Hawaii.

“Being able to collaborate with the Air Force back then kind of grounded the research in practical problems,” Linares says.

For his PhD, he turned to another practical problem: “uncorrelated tracks.” At the time, the Air Force operated a network of telescopes to observe more than 20,000 objects in space, which they were working to label and record in a catalog to help them track the objects over time. But while detecting objects was relatively straightforward, the challenge came in correlating a detected object with what was already in the catalog. In other words, was what they were seeing something they had already seen?

Linares developed image analysis techniques to identify key characteristics of objects such as their shape and orientation, which helped the Air Force “fingerprint” satellites and pieces of space debris, and track their activity — and potential for collisions — over time.
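One way to picture this correlation step is as nearest-neighbor matching of feature “fingerprints” against the catalog. The Python sketch below is a toy illustration of that idea only; the features, threshold, and object IDs are invented, and it is not the image-analysis pipeline developed in this work.

    import numpy as np

    # Toy catalog: object ID -> fingerprint vector (e.g., apparent size, elongation,
    # brightness variation). Real systems use richer, physics-based features.
    catalog = {
        "SAT-001": np.array([2.1, 0.30, 0.05]),
        "SAT-002": np.array([0.4, 0.90, 0.40]),
        "DEB-117": np.array([0.1, 0.10, 0.80]),
    }

    def correlate(detection, catalog, threshold=0.25):
        # Return the closest catalog object, or None if the track is uncorrelated.
        best_id = min(catalog, key=lambda k: np.linalg.norm(detection - catalog[k]))
        if np.linalg.norm(detection - catalog[best_id]) < threshold:
            return best_id
        return None

    print(correlate(np.array([0.42, 0.88, 0.35]), catalog))  # "SAT-002": seen before
    print(correlate(np.array([5.0, 0.0, 0.0]), catalog))     # None: a new, uncorrelated object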

After completing his PhD, Linares worked as a postdoc at Los Alamos National Laboratory and the U.S. Naval Observatory. During that time he expanded his aerospace work to other areas including space weather, using satellite measurements to model how Earth’s ionosphere — the upper layer of the atmosphere that is ionized by the sun’s radiation — affects satellite drag.

He then accepted a position as assistant professor of aerospace engineering at the University of Minnesota at Minneapolis. For the next three years, he continued his research in modeling space weather, tracking space objects and coordinating satellites to fly in swarms.

Making space

In 2018, Linares made the move to MIT.

“I had a lot of respect for the people and for the history of the work that was done here,” says Linares, who was especially inspired by the legendary Charles Stark “Doc” Draper, who developed the first inertial guidance systems in the 1940s that would enable the self-navigation of airplanes, submarines, satellites, and spacecraft for decades to come. “This was essentially my field, and I knew MIT was the best place to continue my career.”

As a junior faculty member in AeroAstro, Linares spent his first years focused on an emerging challenge: space sustainability. Around that time, the first satellite constellations were launching into low-Earth orbit with SpaceX’s Starlink, which aimed to provide global internet coverage via a huge network of several thousand coordinating satellites. Launching so many satellites into orbits that already held other active and inactive satellites, along with millions of pieces of space debris, raised questions about how to safely manage the satellite traffic and how much traffic an orbit can sustain.

“At what level do we reach a tipping point, where we have too many satellites in certain orbital regimes?” Linares says. “It was kind of a known problem at the time, but there weren’t many solutions.”

Linares’ group applied an understanding of astrodynamics, and the physics of how objects move in space, to figure out the best way to pack satellites in orbital “shells,” or lanes that would most likely prevent collisions. They also developed a state-of-the-art model of orbital traffic that was able to simulate the trajectories of more than 10 million individual objects in space. Previous models were much more limited in the number of objects they could accurately simulate. Linares’ open-source model, called the MIT Orbital Capacity Assessment Tool, or MOCAT, could account for the millions of pieces of space debris, in addition to the many intact satellites in orbit.
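The flavor of this kind of modeling can be conveyed by a toy “source-sink” population model for a single orbital shell, sketched below in Python. All of the rate constants and initial values are invented for illustration; this is not MOCAT, which propagates millions of individual objects with far more physics.

    # Toy one-shell population model: launches add satellites, collisions create
    # debris, and atmospheric drag slowly removes debris (all rates are invented).
    sats, debris = 10_000.0, 1_000_000.0   # rough initial orders of magnitude
    launch_rate = 2_000.0                  # satellites launched per year
    retire_rate = 0.10                     # fraction of satellites retired per year
    collision_coeff = 1e-9                 # collisions per object pair per year
    fragments_per_collision = 1_000.0      # debris fragments created per collision
    debris_decay = 0.02                    # fraction of debris re-entering per year

    for year in range(1, 31):
        collisions = collision_coeff * sats * debris
        sats += launch_rate - retire_rate * sats - collisions
        debris += fragments_per_collision * collisions - debris_decay * debris
        if year % 10 == 0:
            print(f"year {year}: {sats:,.0f} satellites, {debris:,.0f} debris objects")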

The tools that his group has developed are used today by satellite operators to plan and predict safe spacecraft trajectories. His team is continuing to work on problems of space traffic management and orbital capacity. They are also branching out into space robotics. The team is testing ways to teleoperate a humanoid robot, which could potentially help to build future infrastructure and carry out long-duration tasks in space.

Linares is also exploring artificial intelligence, including ways that a satellite can autonomously “learn” from its experience and safely adapt to uncertain environments.

“Imagine if each satellite had a virtual Doc Draper onboard that could do the debugging that we did from the ground during the Apollo missions,” Linares says. “That way, satellites would become instantaneously more robust. And it’s not taking the human out of the equation. It’s allowing the human to be amplified. I think that’s within reach.”

Professor Michael Laub and MIT alumni named 2025 AAAS Fellows

Fri, 04/17/2026 - 2:15pm

MIT Professor Michael T. Laub as well as 21 MIT alumni have been elected as fellows of the American Association for the Advancement of Science (AAAS).

The 2025 class of AAAS Fellows includes 449 scientists, engineers, and innovators, spanning all 24 AAAS disciplinary sections, who are recognized for their scientific achievements.

Laub, the Salvador E. Luria Professor in the MIT Department of Biology and an HHMI Investigator, studies the biological mechanisms and evolution of how cells process information to regulate their own growth and proliferation, using bacteria as a model organism to develop a deeper, fundamental understanding of how bacteria function and evolve. Laub was honored as a AAAS Fellow for distinguished contributions to the field of bacterial information processing, particularly to the understanding of coevolution of host-pathogen response and immunity.

“This year’s AAAS Fellows have demonstrated research excellence, made notable contributions to advance science, and delivered important services to their communities,” said Sudip S. Parikh, AAAS chief executive officer and executive publisher of the Science family of journals. “These fellows and their accomplishments validate the importance of investing in science and technology for the benefit of all.”

The following alumni were also named fellows of the AAAS:

  • Debra Auguste ’99
  • Julie Claycomb PhD ’04
  • Chris Clifton ’85, SM ’86
  • Kevin Crowston PhD ’91
  • Maitreya Dunham ’99
  • David Fike PhD ’07
  • Jianping Fu PhD ’07
  • Peter A. Gilman SM ’64, PhD ’66
  • Diane M. Harper ’80, SM ’82
  • Cherie R. Kagan PhD ’96
  • Elizabeth A. Kensinger PhD ’03
  • Kenro Kusumi PhD ’97
  • Charla Lambert ’96
  • Bennett A. Landman ’01, MNG ’02
  • Michael E. Matheny SM ’06
  • Paul David Ronney ScD ’83
  • Steven Semken ’80, PhD ’89
  • Sudipta Sengupta SM ’99, PhD ’06
  • Lawrence R. Sita PhD ’86
  • Jan M. Skotheim ’99
  • Beverly Park Woolf ’66

Why bother with plausible deniability?

Fri, 04/17/2026 - 11:00am

Picture this scenario in a business: An employee, Brad, disclosed some information that wound up in the hands of a competitor. He may not have meant to, but he did, and a few people at the firm know this. So, at the next company meeting, another employee, Linda, looks pointedly at Brad and says, “I know that no one would ever dream of leaking information, intentionally or otherwise, from our discussions.”

Linda means the opposite of what she says, of course. She is letting people know that Brad is to blame. However, while Linda is making her message public, she also wants what we often call “plausible deniability” for her statement. If anyone asks later if she was insinuating anything about Brad, she can claim she was just making a general comment about the firm.

From the boardroom to the courtroom, the talk show, and beyond, people frequently seek plausible deniability for their statements. It seems to work, too. Indeed, to have plausible deniability, the denial need not be plausible.

“People can say, ‘That’s not what I meant,’ and completely get away with it, even though it’s totally obvious they’re lying,” says MIT philosopher Sam Berstler. “They wouldn’t be getting away with it in the same respect by putting the content in explicit words.”

She adds: “This should be very puzzling to us, because in both cases the intent is maximally obvious.”

So why does plausible deniability work, and work like this? And what does it tell us about how we interact? Berstler, who studies language and communication, has published a new paper on plausible deniability, examining these issues. It is part of a larger body of work Berstler is generating, focused on everyday interactions involving deception.

To understand plausible deniability, Berstler thinks we should recognize that our conversations cannot be understood simply by analyzing the words we use. Our interactions always take place in social contexts, often have a performative aspect, and occasionally intersect with “non-acknowledgement norms,” the practice of keeping quiet about what we all know. Plausible deniability is bound up with social practices that incentivize us to not be fully transparent.

“A lot of indirect speech is designed, as it were, to facilitate this kind of deniability,” Berstler says.

The paper, “Non-Epistemic Deniability,” is published in the journal MIND. Berstler, the Laurance S. Rockefeller Career Development Chair and assistant professor of philosophy at MIT, is the sole author.

Managing a personal “Cold War”

In Berstler’s view, there are multiple ways to create plausible deniability. One is through the practice of open secrets, the subject of one of her previous papers. An open secret is widely known information that is never acknowledged, for reasons of power or in-group identification, among other things. Indeed, no one even acknowledges that they are not acknowledging the open secret.

Examining open secrets led Berstler directly to her analysis of plausible deniability. However, the new paper focuses more on another way of creating plausible deniability, which she calls “two-tracking norms.” Two-tracking is when a group divides its communications into two parts: One track consists of official, limited, courteous interaction, and the second track consists more of informal, resentful, uncooperative interactions. Linda, in our example, is engaging in two-tracking.

But why do we two-track at all? Why not just be fully transparent? Well, in an office scenario, if Linda is mad that Brad divulged some company secrets, calling out Brad directly might lead to recriminations and conflict beyond what Linda is willing to tolerate for the sake of criticizing Brad on the record.

“It’s like a Cold War situation where we each have an interest in not letting the conflict go to a state where we’re firing warheads at each other, but we can’t just purely manage relations around the negotiating table because we’re adversaries,” Berstler says. “We’re going to aggress against each other, but in a limited way. In a two-track conversation, communicating in the second track is like fighting a proxy battle, but we’re also providing evidence to each other that we’re only going to engage in a proxy battle.”

In this way, Linda takes Brad to task and some people pick up on it, but Brad is not explicitly publicly shamed. And though he might be unhappy, he is less likely to wreck all company norms in an attempt to retaliate. The firm more or less rolls on as usual.

Waiting for Goffman

Where Berstler differs in part from other philosophers is in her emphasis on the extent to which social practices are integral to our ways of deploying deniability. Our interactions are not just limited to rhetoric, but have additional layers.

“What we mean can often be different from what we say, or enhanced from what we say,” Berstler says. “Sometimes we figure out what others mean by relying on what they say in literal language. But sometimes we’re relying on other things, like the context.”

So, back at the firm, the colleagues of Linda and Brad might have some knowledge of a confidentiality breach, or they might know that Linda does not usually speak up at meetings, or they might read things into her tone of voice and the way she appeared to look at Brad. There is more to be gleaned than her literal words.

In this kind of analysis, Berstler finds illumination in the work of the midcentury sociologist Erving Goffman, who studied in minute detail the performative parts of our everyday interactions and speech. Goffman, as Berstler notes in the paper, proposed that we have a ritualized, social self (or “face”) and that normal, everyday behavior generally allows us, and others, to keep this face intact.

Relatedly, Goffman and some of his intellectual followers concluded that habits such as two-tracking are very common in everyday life; the price we pay for saving face is a bit less transparency, and a bit more secrecy and deniability.

“What I’m suggesting is we have these other established practices like two-tracking and open secrecy, where the deniability is just a byproduct,” Berstler says.

What’s the solution?

By bringing sociological ideas into her work, Berstler is moving beyond the normal philosophical discussion of the subject. On the other hand, she is not directly disputing core ideas in linguistics or the philosophy of language; she is just suggesting we add another layer to our analysis of communication and meaning.

Digging into issues of plausible deniability also raises the question of what to do about it. There may be something pernicious in the practice, but calling out plausible deniability threatens to dismantle our social guardrails and break the “Cold War” norms used to help people co-exist.

Berstler, though, has another suggestion: Instead of calling out such subterfuge, we can become verbally and performatively skilled enough to counteract it.

“I think the actual answer is becoming rhetorically clever,” Berstler says. “It’s being the person who uses indirect speech to respond strategically, without violating these norms. That is possible. It also means you have agency. You could become very good at verbal sparring.”

Besides, Berstler says, “Often that can be more powerful than just calling them out, and demonstrates your own verbal fluency. I think we admire it when we see it. Conversational skill is an important component of being morally good, in these cases by reprimanding someone in a way that’s not going to be counterproductive.”

She adds: “People who buy into the rhetoric of transparency can be setting back their own interests. Maybe speaking transparently is morally virtuous in some respects, but given the reality of our speech practices, transparency is not necessarily going to be the most effective way of handling things.”

Jacob Andreas and Brett McGuire named Edgerton Award winners

Fri, 04/17/2026 - 9:40am

MIT Associate Professor Jacob Andreas of the Department of Electrical Engineering and Computer Science [EECS] and MIT Associate Professor Brett McGuire of the Department of Chemistry have been selected as the winners of the 2026 Harold E. Edgerton Faculty Achievement Award. Established in 1982 as a permanent tribute to Institute Professor Emeritus Harold E. Edgerton’s great and enduring support for younger faculty members, this award is given annually in recognition of exceptional distinction in teaching, research, and service.

“The Department of Chemistry is extremely delighted to see Brett recognized for science that has changed how we think about carbon in space,” says Class of 1942 Professor of Chemistry and Department Head Matthew D. Shoulders. “Brett’s lab combines laboratory spectroscopy, radio astronomy, and sophisticated signal-analysis methods to pull definitive molecular fingerprints out of extraordinarily faint data. His discovery of polycyclic aromatic hydrocarbons in the cold interstellar medium has opened a powerful new window on astrochemistry. Moreover, Brett is inventing the creative and unique tools that make discoveries like this possible.”

“Jacob Andreas represents the very best of MIT EECS,” says Asu Ozdaglar, EECS department head. “He is an innovative researcher whose work combines computational and linguistically informed approaches to build foundations of language learning. He is an extraordinary educator who has brought these forefront ideas into our core classes in natural language processing and machine learning. His ability to bridge foundational theory with real-world impact, while also advancing the social and ethical dimensions of computing, makes him truly deserving of the Edgerton Faculty Achievement Award.”

Andreas joined the MIT faculty in July 2019, and is affiliated with the Computer Science and Artificial Intelligence Laboratory. His work is in natural language processing (NLP), and more broadly in AI. He aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Among other honors, Andreas has received Samsung’s AI Researcher of the Year award, MIT’s Kolokotrones and Junior Bose teaching awards, a 2024 Sloan Research Fellowship, and paper awards at the North American Chapter of the Association for Computational Linguistics, the International Conference on Machine Learning, and the Association for Computational Linguistics.

Andreas received his BS from Columbia University, his MPhil from Cambridge University (where he studied as a Churchill scholar), and his PhD in natural language processing from the University of California at Berkeley. His work in natural language processing has taken on thorny problems in the capability gap between humans and computers. “The defining feature of human language use is our capacity for compositional generalization,” explains Antonio Torralba, Delta Electronics Professor and faculty head of Artificial Intelligence and Decision-Making in the Department of EECS. “Many of the core challenges in natural language processing are addressed by simply training larger and larger neural models, but this kind of compositional generalization remains a persistent difficulty, and without the ability to generalize compositionally, the deep learning toolkit will never be robust enough for the most challenging real-world NLP tasks. Jacob’s work on compositional modeling draws new connections between NLP and work in computer vision and physics aimed at modeling systems governed by symmetries and other algebraic structures and, using them, they have been able to build NLP models exhibiting a number of new, human-like language acquisition behaviors, including one-shot word learning, learning via mutual exclusivity constraints, and learning of grammatical rules in extremely low-resource settings.”

Within EECS, Andreas has developed multiple advanced courses in natural language processing, as well as new exercises designed to get students to grapple with important social and ethical considerations in machine learning deployment. “Jacob has taken a leading role in completely modernizing and extending our course offerings in natural language processing,” says award nominator Leslie Pack Kaelbling, Panasonic Professor in the Department of EECS. “He has led the development of a modern two-course sequence, which is a cornerstone of the new AI+D [artificial intelligence and decision-making] major, routinely enrolling several hundred students each semester. His command of the area is broad and deep, and his classes integrate classical structural understanding of language with the most modern learning-based approaches. He has put MIT EECS on the worldwide map as a place to study natural language at every level.”

Brett McGuire joined the MIT faculty in 2020 and was promoted to associate professor in 2025. His research operates at the intersection of physical chemistry, molecular spectroscopy, and observational astrophysics, where he seeks to uncover how the chemical building blocks of life evolve alongside and help shape the birth of stars and planets. A former Jansky Fellow and then Hubble Postdoctoral Fellow at the National Radio Astronomy Observatory, McGuire has a BS in chemistry from the University of Illinois and a PhD in physical chemistry from Caltech. His honors include a 2026 Sloan Fellowship, the Beckman Young Investigator Award, the Helen B. Warner Prize for Astronomy, and the MIT Award for Teaching with Digital Technology.

The faculty who nominated McGuire for this award praised his extraordinary public outreach, his immediate willingness to take on teaching class 5.111 (Principles of Chemical Science), a General Institute Requirement (GIR) course with 150–500 students, and his service to both the MIT and astrochemical communities.

“Brett is at the very top of astrochemical scientists in his age group due to his discovery of fused carbon ring compounds in the cold region of the ISM [interstellar medium], an observation that provides a route for carbon incorporation in planets,” says Sylvia Ceyer, the John C. Sheehan Professor of Chemistry in her nomination statement. “His extensive involvement in service-oriented activities within the astrochemical/physical community is highly unusual for a junior scientist, and is testament to the value that the astronomical community places in his wisdom and judgement. His phenomenal organizational skills have made his contributions to graduate admission protocols and seminar administration at MIT the envy of the department. And most importantly, Brett is a superb teacher, who cares deeply about students’ understanding and success, not only in his course, but in their future endeavors.”

“As an assistant professor, Brett volunteered to teach 5.111, a large GIR course with 150–500 students, and has received some of the best teaching evaluations among all faculty who have led the subject,” says Mei Hong, the David A. Leighty Professor of Chemistry. “He has a natural talent in explaining abstract physical chemistry concepts in an engaging manner. His slides, which he prepared from scratch instead of modifying from previous years’ material from other professors, are clear, and … the combination of lucid explanation and humor has generated great enthusiasm and interest in chemistry among students.”

Subject evaluations from McGuire’s courses praised his humor, the clarity of his explanations, and his ability to transform a lecture into a “science show.” “I haven't felt this sort of desire for the depth of understanding in a subject beyond just a straight grade [in some time],” says one student. “Brett definitely stimulated that love of learning for me.” 

“Brett is an outstanding faculty member who is dedicated to fostering student learning and success,” says Jennifer Weisman, assistant director of academic programs in chemistry. “He is thoughtful, caring, and goes above and beyond to help his colleagues, students, and staff.”

“I’m thrilled to be selected for the Edgerton Award this year,” says McGuire. “The award is nominally for teaching, research, and service; MIT and the chemistry department in particular have been an incredible place to learn and grow in all these areas. I’m incredibly grateful for the mentorship, enthusiasm, and support I have received from my colleagues, from my students both in the lab and in the classroom, and from the MIT community during my time here. I look forward to many more years of exciting discovery together with this one-of-a-kind community.”

Bringing AI-driven protein-design tools to biologists everywhere

Fri, 04/17/2026 - 12:00am

Artificial intelligence is already proving it can accelerate drug development and improve our understanding of disease. But to turn AI into novel treatments we need to get the latest, most powerful models into the hands of scientists.

The problem is that most scientists aren’t machine-learning experts. Now the company OpenProtein.AI is helping scientists stay on the cutting edge of AI with a no-code platform that gives them access to powerful foundation models and a suite of tools for designing proteins, predicting protein structure and function, and training models.

The company, founded by Tristan Bepler PhD ’20 and former MIT associate professor Tim Lu PhD ’07, is already equipping researchers in pharmaceutical and biotech companies of all sizes with its tools, including internally developed foundation models for protein engineering. OpenProtein.AI also offers its platform to scientists in academia for free.

“It’s a really exciting time right now because these models can not only make protein engineering more efficient — which shortens development cycles for therapeutics and industrial uses — they can also enhance our ability to design new proteins with specific traits,” Bepler says. “We’re also thinking about applying these approaches to non-protein modalities. The big picture is we’re creating a language for describing biological systems.”

Advancing biology with AI

Bepler came to MIT in 2014 as part of the Computational and Systems Biology PhD Program, studying under Bonnie Berger, MIT’s Simons Professor of Applied Mathematics. It was there that he realized how little we understand about the molecules that make up the building blocks of biology.

“We hadn’t characterized biomolecules and proteins well enough to create good predictive models of what, say, a whole genome circuit will do, or how a protein interaction network will behave,” Bepler recalls. “It got me interested in understanding proteins at a more fine-grained level.”

Bepler began exploring ways to predict the chains of amino acids that make up proteins by analyzing evolutionary data. This was before Google DeepMind released AlphaFold, a powerful prediction model for protein structure. The work led to one of the first generative AI models for understanding and designing proteins — what the team calls a protein language model.

“I was really excited about the classical framework of proteins and the relationships between their sequence, structure, and function. We don’t understand those links well,” Bepler says. “So how could we use these foundation models to skip the ‘structure’ component and go straight from sequence to function?”

After earning his PhD in 2020, Bepler entered Lu’s lab in MIT’s Department of Biological Engineering as a postdoc.

“This was around the time when the idea of integrating AI with biology was starting to pick up,” Lu recalls. “Tristan helped us build better computational models for biologic design. We also realized there’s a disconnect between the most cutting-edge tools available and the biologists, who would love to use these things but don’t know how to code. OpenProtein came from the idea of broadening access to these tools.”

Bepler had worked at the forefront of AI as part of his PhD. He knew the technology could help scientists accelerate their work.

“We started with the idea to build a general-purpose platform for doing machine learning-in-the-loop protein engineering,” Bepler says. “We wanted to build something that was user friendly because machine-learning ideas are kind of esoteric. They require implementation, GPUs, fine-tuning, designing libraries of sequences. Especially at that time, it was a lot for biologists to learn.”

OpenProtein’s platform, in contrast, features an intuitive web interface for biologists to upload data and conduct protein engineering work with machine learning. It features a range of open-source models, including PoET, OpenProtein’s flagship protein language model.

PoET, short for Protein Evolutionary Transformer, was trained on protein groups to generate sets of related proteins. Bepler and his collaborators showed it could generalize about evolutionary constraints on proteins and incorporate new information on protein sequences without retraining, allowing other researchers to add experimental data to improve the model.

“Researchers can use their own data to train models and optimize protein sequences, and then they can use our other tools to analyze those proteins,” Bepler says. “People are generating libraries of protein sequences in silico [on computers] and then running them through predictive models to get validation and structural predictors. It’s basically a no-code front-end, but we also have APIs for people who want to access it with code.”
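As a rough illustration of the kind of in-silico scoring such predictive models perform, the Python sketch below builds a simple per-position profile from a handful of related sequences and ranks candidate designs against it. It is a deliberately simple stand-in with made-up toy sequences; it is not OpenProtein.AI's code, its API, or the PoET model itself.

    import math
    from collections import Counter

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def build_profile(family, pseudocount=1.0):
        # Per-position log-probabilities of each amino acid, estimated from a set
        # of aligned, related sequences (a toy stand-in for a learned model).
        profile = []
        for i in range(len(family[0])):
            counts = Counter(seq[i] for seq in family)
            total = len(family) + pseudocount * len(AMINO_ACIDS)
            profile.append({aa: math.log((counts[aa] + pseudocount) / total)
                            for aa in AMINO_ACIDS})
        return profile

    def score(seq, profile):
        # Higher scores mean the candidate looks more like the family.
        return sum(profile[i][aa] for i, aa in enumerate(seq))

    family = ["MKTAY", "MKSAY", "MRTAY", "MKTAF"]      # toy aligned variants
    profile = build_profile(family)
    for candidate in sorted(["MKTAY", "MKTAW", "GGGGG"],
                            key=lambda s: score(s, profile), reverse=True):
        print(candidate, round(score(candidate, profile), 2))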

The models help researchers design proteins faster, then decide which ones are promising enough for further lab testing. Researchers can also input proteins of interest, and the models can generate new ones with similar properties.

Since its founding, OpenProtein’s team has continued to add tools to its platform for researchers regardless of their lab size or resources.

“We’ve tried really hard to make the platform an open-ended toolbox,” Bepler says. “It has specific workflows, but it’s not tied specifically to one protein function or class of proteins. One of the great things about these models is they are very good at understanding proteins broadly. They learn about the whole space of possible proteins.”

Enabling the next generation of therapies

The large pharmaceutical company Boehringer Ingelheim began using OpenProtein’s platform in early 2025. Recently, the companies announced an expanded collaboration that will see OpenProtein’s platform and models embedded into Boehringer Ingelheim’s work as it engineers proteins to treat diseases like cancer and autoimmune or inflammatory conditions.

Last year, OpenProtein also released a new version of its protein language model, PoET-2, that outperforms much larger models while using a small fraction of the computing resources and experimental data.

“We really want to solve the question of how we describe proteins,” Bepler says. “What’s the meaningful, domain-specific language of protein constraints we use as we generate them? How can we bring in more evolutionary constraints? How can we describe an enzymatic reaction a protein carries out such that a model can generate sequences to do that reaction?”

Moving forward, the founders are hoping to make models that factor in the changing, interconnected nature of protein function.

“The area I am excited about is going beyond protein binding events to use these models to predict and design dynamic features, where the protein has to engage two, three, or four biological mechanisms at the same time, or change its function after binding,” says Lu, who currently serves in an advisory role for the company.

As progress in AI races forward, OpenProtein continues to see its mission as giving scientists the best tools to develop new treatments faster.

“As work gets more complex, with approaches incorporating things like protein logic and dynamic therapies, the existing experimental toolsets become limiting,” Lu says. “It’s really important to create open ecosystems around AI and biology. There’s a risk that AI resources could get so concentrated that the average researcher can’t use them. Open access is super important for the scientific field to make progress.”

With navigating nematodes, scientists map out how brains implement behaviors

Thu, 04/16/2026 - 6:30pm

Animal behavior reflects a complex interplay between an animal’s brain and its sensory surroundings. Only rarely have scientists been able to discern how actions emerge from this interaction. A new open-access study in Nature Neuroscience by researchers in The Picower Institute for Learning and Memory at MIT offers one example by revealing how circuits of neurons within C. elegans nematode worms respond to odors and generate movement as they pursue smells they like and evade ones they don’t.

“Across the animal kingdom, there are just so many remarkable behaviors,” says study senior author Steven Flavell, associate professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences and an investigator of the Howard Hughes Medical Institute. “With modern neuroscience tools, we are finally gaining the ability to map their mechanistic underpinnings.”

By the end of the study, which former graduate student Talya Kramer PhD ’25 led as her doctoral thesis research, the team was able to show exactly which neurons in the worm’s brain did which of the jobs needed to sense where smells were coming from, plan turns toward or away from them, shift to reverse (like old-fashioned radio-controlled cars, C. elegans worms turn in reverse), execute the turns, and then go back to moving forward. Not only did the study reveal the sequence and each neuron’s role in it, but it also demonstrated that worms are more skillful and intentional in these actions than perhaps they’ve received credit for. And finally, the study demonstrated that it’s all coordinated by the neuromodulatory chemical tyramine.

“One thing that really excited us about this study is that we were able to see what a sensorimotor arc looks like at the scale of a whole nervous system: all the bits and pieces, from responses to the sensory cue until the behavioral response is implemented,” Flavell says.

Seeing the sequence

To do the research, Kramer put worms in dishes with spots of odors they’d either want to navigate toward or slither away from. With the lab’s custom microscopes and software, she and her co-authors could track how the worms navigated and all the electrical activity of more than 100 neurons in their brains during those behaviors (the worms only have 302 neurons total).

The surveillance enabled Kramer, Flavell, and their colleagues to observe that the worms weren’t just ambling randomly until they happened to get where they’d want to be. Instead, the worms would execute turns with advantageous timing and at well-chosen angles. The worms seemed to know what they were doing as they navigated along the gradients of the odors.

Inside their heads, patterns of electrical activity among a cohort of 10 neurons (indicated by flashing green light tied to the flux of calcium ions in the cells) revealed the sequence of neural activation that enabled the worms to execute these sensible sensory-guided motions: forward, then into reverse, then into the turn, and then back to forward. Particular neurons guided each of these steps, including detecting the odors, planning the turn, switching into reverse, and then executing the turns.

A couple of neurons stood out as key gears in the sequence. A neuron called SAA proved pivotal for integrating odor detection with planning movement, as its activity predicted the direction of the eventual turn. Several neurons were flexible enough to show different activity patterns depending on factors such as where the odors were and whether the worm was moving forward or in reverse.

And if the neurons are indeed turning and shifting gears, then the neuromodulator tyramine (the worm analog of norepinephrine) was the signal essential to switch their gears. After the worms started moving in reverse, tyramine from the neuron RIM enabled other neurons in the sequence to change their activity appropriately to execute the turns. In several experiments the scientists knocked out RIM tyramine and saw that the navigation behaviors and the sequence of neural activity largely fell apart.

“The neuromodulator tyramine plays a central role in organizing these sequential brain activity patterns,” Flavell says.

In addition to Flavell and Kramer, the paper’s other authors are Flossie Wan, Sara Pugliese, Adam Atanas, Sreeparna Pradhan, Alex Hiser, Lillie Godinez, Jinyue Luo, Eric Bueno, and Thomas Felt.

A MathWorks Science Fellowship, the National Institutes of Health, the National Science Foundation, The McKnight Foundation, The Alfred P. Sloan Foundation, the Freedom Together Foundation, and HHMI provided funding to support the work.

Understanding community effects of Asian immigrants’ US housing purchases

Thu, 04/16/2026 - 6:00pm

Asian immigrants are both the fastest-growing and highest-earning immigrant ethnic group in the United States, facts that have caught the attention of many economists interested in how these groups — whether investors or residents — impact housing prices, K-12 education, and other important aspects of community life.

A new study by economists at MIT and the University of Cincinnati delves into this trend, focusing on the potential mechanisms at work behind the correlation of rising home prices and subsequent improvements in education at the county level. Their findings suggest that home prices rise not simply due to increased demand, but because the new neighbors have a positive influence on the quality of K-12 education, which in turn increases desirability.

The study focuses on 2008 to 2019, a period that saw a relative spike in U.S. immigration from six Asian countries in particular — China, India, Japan, Korea, the Philippines, and Vietnam. Among this group, the economists focused specifically on those who arrived on non-permanent visas for study or work — a cohort that represents a distinct and growing channel of new immigrant inflow, and is often pre-selected by universities and employers.

“We’re looking at a window when the influx of Asian immigrants has a particularly strong preference for education, and who themselves were also highly educated,” says Eunjee Kwon, the West Shell, Jr. Assistant Professor of Real Estate in the Department of Finance at the University of Cincinnati, a co-author on the study published in the May issue of the Journal of Urban Economics. “This period also marks a notable shift in the socioeconomic profile of Asian immigrants to the U.S., with this cohort arriving with higher levels of education and income relative to earlier waves of Asian immigrants and, in many cases, relative to the native-born population.”

While the county-level data cannot be broken down to the neighborhood or even municipality level, the researchers found that 30 to 40 percent of the rise in home values in areas where Asian immigrant buyers have school-age children is attributable to improved quality of education, as indicated by the average rise in standardized test scores of all children in the county.

“Maybe some Asian buyers are pure investors, but many of them become residents who buy homes for themselves and their families, and transform the neighborhoods,” says co-author Siqi Zheng, the Samuel Tak Lee Professor of Urban and Real Estate Sustainability at the MIT Center for Real Estate and the Department of Urban Studies and Planning. “We show that this is not negligible; it is a big component. We can attribute at least one-third of housing price increases to improved education.”

Amanda Ang, a postdoc in the Department of Economics at Aalto University in Helsinki, is the third co-author of the paper. The work is somewhat personal for the scientists, who undertook the study without funding in order to see for themselves what impact this particular group of immigrants had on neighborhoods.

“We wanted to understand what this group contributes to the communities where they settle,” Kwon says. “We found that their presence benefits children of all other backgrounds, too.”

Ang, Kwon, and Zheng use an econometric approach called an instrumental variable to home in on a causal relationship, not just an association. To help ensure accuracy, they carefully omitted counties that have long been home to large Asian communities — such as San Francisco, Los Angeles, and New York — in order to capture the impact of recent immigrants on other counties.
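An instrumental-variable analysis is commonly implemented as two-stage least squares: the endogenous variable is first regressed on the instrument, and its fitted values then replace it in the outcome regression. The Python sketch below demonstrates those mechanics on synthetic data; the variables, coefficients, and instrument are invented and do not reflect the study's actual specification.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2_000

    # Synthetic county-level data (purely illustrative).
    z = rng.normal(size=n)                    # instrument
    confound = rng.normal(size=n)             # unobserved desirability of the county
    inflow = 0.8 * z + 0.5 * confound + rng.normal(size=n)       # endogenous regressor
    prices = 1.5 * inflow + 1.0 * confound + rng.normal(size=n)  # outcome (true effect = 1.5)

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    ones = np.ones(n)
    print("naive OLS:", ols(np.column_stack([ones, inflow]), prices)[1])  # biased upward

    # Stage 1: predict the endogenous variable from the instrument.
    inflow_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), inflow)
    # Stage 2: regress the outcome on the predicted (exogenous) part of the inflow.
    print("2SLS:", ols(np.column_stack([ones, inflow_hat]), prices)[1])   # near 1.5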

“I believe that this will be a highly influential paper because it asks a very important question and uses credible statistical methods to try to disentangle selection effects from treatment effects, using a subtle analysis accounting for displacement,” says Matthew Kahn, the Provost Professor of Economics and Spatial Sciences at the University of Southern California, who was not involved with the research. 

“What really interests me about this paper is that it suggests that there can be a positive spillover effect: that U.S. areas that attract Asian immigrants also gain from improved school quality,” Kahn says. “It’s the first I’ve seen undertaken on this very important hypothesis, which certainly merits additional future research, possibly using school-level and individual-level data.”

Light-activated gel could impact wearables, soft robotics, and more

Thu, 04/16/2026 - 5:10pm

Consider the chief difference between living systems and electronics: The former is generally soft and squishy, while the latter is hard and rigid. Now, in work that could impact human-machine interfaces, biocompatible devices, soft robotics, and more, MIT engineers and colleagues have developed a soft, flexible gel that dramatically changes its conductivity upon the application of light.

Enter the growing field of ionotronics, which involves transferring data through ions, or charged molecules. Electronics does the same, with electrons. But while the latter is well established, ionotronics is still being developed, with one huge exception: living systems. The cells in our bodies communicate with a variety of ions, from potassium to sodium.

Ionotronics, in turn, can provide a bridge between electronics and biological tissues. Potential applications range from soft wearable technology to human-machine interfaces.

“We’ve found a mechanism to dynamically control local ion population in a soft material,” says Thomas J. Wallin, the John F. Elliott Career Development Professor in MIT’s Department of Materials Science and Engineering and leader of the work. “That could allow a system that is self-adaptive to environmental stimuli, in this case light.” In other words, the system could automatically change in response to changes in light, which could allow complex signal processing in soft materials.

An open-access paper about the work was published online recently in Nature Communications.

A growing field

Although others have developed ionotronic materials with high conductivities that allow the quick movement of ions, those conductivities cannot be controlled. “What we’re doing is using light to switch a soft material from insulating to something that is 400 times more conductive,” says Xu Liu, first author of the paper and a former MIT postdoc in materials science and engineering who is now an incoming assistant professor at King’s College London.

Key to the work is a class of materials known as photo-ion generators (PIGs). These can become some 1,000 times more conductive upon the application of light. The MIT team optimized a way to incorporate a PIG into polyurethane rubber by first dissolving a PIG powder into a solvent, and then using a swelling method to get it into the rubber.

Much potential

In the material reported in the current work, the change in conductivity is irreversible. But Liu is confident that future versions could switch back and forth between insulating and conducting states.

She notes that the current material was developed using only one kind of PIG, polymer (the polyurethane rubber), and solvent, but there are many other kinds of all three. So there is great potential for creating even better light-responsive soft materials.

Liu also notes the potential for developing soft materials that respond to other environmental stimuli, such as heat or magnetism. “We’re inspired to do more work in this field by changing the driving force from light to other forms of environmental stimuli,” she says.

“Our work has the potential to lead to the creation of a subfield that we call soft photo-ionotronics,” Liu continues. “We are also very excited about the opportunities from our work to create new soft machines impacting soft wearable technology, human-machine interfaces, robotics, biomedicine, and other fields.”

Additional authors of the paper are Steven M. Adelmund, Shahriar Safaee, and Wenyang Pan of Reality Labs at Meta. 

3 Questions: A running shoe that adapts to the runner

Thu, 04/16/2026 - 11:25am

Granular convection takes place everywhere: candy in a box, sand on the beach, foam in a cushion. Often referred to as the “Brazil nut effect,” granular convection occurs when solid, independent, irregularly shaped particles reorder themselves following agitation. One might think, intuitively, that the larger pieces fall to the bottom, but it is their size, not their density, that determines where they end up: the larger pieces rise to the top.

In the world of competitive running, elite athletes have their footwear individually designed for needs such as foot shape and pressure points. Comfortable and supportive footwear can assist optimal performance. However, most footwear is standardized and doesn’t offer personalized performance.

MIT associate professor of architecture Skylar Tibbits, founder and co-director of the Self-Assembly Lab in the MIT School of Architecture and Planning, along with various MIT colleagues, has been developing tests surrounding the phenomenon of granular convection within the midsole — or middle layer, between the outsole (bottom) and insole (top) — of running shoes to create a shoe that evolves over time to provide an individualized product. As we approach the running of the 130th Boston Marathon — one of the world’s most prominent displays of footwear supporting athletes — Tibbits answers three questions about bead-based technologies as applied to running shoes.

Q: What are the advantages of an adaptive midsole over the current bead-based midsole technology?

A: Currently, the standard midsoles in running shoes are static. They aren’t customized to the shape of our foot or the force we deliver when running or walking. They also don’t change or improve over time as we run in them. Some products — blue jeans, baseball gloves, and hats, for example — get more comfortable as you wear them. We were exploring how this could be taken even further with a running shoe so that you would have the cushion, support, and stiffness where you need it and have it improve these features as you use it so that, over time, the actual performance of the shoe gets better. It’s not a personalized fit; it’s a performance-driven adaptation.

There are three advantages to this technology. The first is that customization is not only for elite athletes. Most elite athletes are already getting gear personalized for their specific needs by their sponsoring brands. Now, customized gear can be available for everyone. Second, customized gear currently does not adapt to an athlete’s performance. But you need your footwear to evolve because your needs as a runner evolve. You need to get the comfort, cushioning, and protection to support your performance.

A third advantage is the manufacturability of this type of shoe. Custom shoes are now made in a factory for the specifications of a single athlete. That doesn’t scale. You can’t produce a manufacturing process where every single person’s shoe is going to be custom-made for them. We’ve shown that every shoe can be the same and mass produced, but, over time, the shoe will evolve to your personal needs. That is a way to get customization without having to change the manufacturing process.

Q: Why the interest in granular systems, and granular convection in particular?

A: We’ve worked on reversible construction techniques with granular jamming over the years, which is at the opposite end of the spectrum. Granular convection promotes the movement of particles; the more they are mixed, the more they separate. Our vision was looking at footwear that adapts with you over time. We thought we could use granular convection as a mechanism for the footwear to evolve.

We put in particles with different stiffnesses, different material properties, and unique sizes, so that over time, we know the softer particles, which are the larger particles, will rise to the top, and the stiffer particles that are smaller will sink to the bottom, towards the outsole. We designed how these particles moved based on the vibration and the impact of walking and running.

We also designed the container. We had three different particle sizes; we conducted tests to try to dial in the right number of steps for it to evolve: about 20,000 steps, roughly the length of a marathon. We could either speed up or slow down that process.
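A toy simulation conveys the reordering Tibbits describes: treat the midsole as a column of particles and let each footfall give larger particles a chance to ratchet past smaller ones above them. The Python sketch below is only a cartoon of granular convection, with invented particle counts and probabilities, and is not the Self-Assembly Lab's design tooling.

    import random

    random.seed(1)
    # Toy column of midsole particles, bottom (index 0) to top; sizes are arbitrary
    # units, and in the design described above the larger particles are also softer.
    column = [random.choice([1, 2, 3]) for _ in range(30)]

    def footfall(column, swap_prob=0.3):
        # One agitation step: a larger particle below a smaller one may swap upward.
        for i in range(len(column) - 1):
            if column[i] > column[i + 1] and random.random() < swap_prob:
                column[i], column[i + 1] = column[i + 1], column[i]

    for _ in range(20_000):
        footfall(column)

    print(column)  # largest (softest) particles end up near the top of the column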

Q: Are there future applications of customization for granular convection? If so, where do you see your research going next?

A: Any products that need cushioning systems that improve over time would benefit from this technology. With custom packaging, you have molded foam that fits around a product — a flat-screen television, for example — that is tossed out after it has been shipped from factory to distributor to customer. I worked with a furniture company that wrapped blankets around chairs for transport, but there were still some chairs that sustained damage. Maybe we could develop a blanket or some kind of material that adapts over the journey so that it creates just the right amount of cushion for the shape and property of that product and, once it’s delivered, its shape could be “released” and then reused. How can we reset this product in a timely manner so it can be used again?

Wheelchairs are another product where we would want seat cushions that can adapt to how a person sits, the force distribution, and the environment in which they are being used, such as a sidewalk or a gravel path. We considered this as it relates to footwear. You might want to reset your shoes because you’re going to be running road races on a given day and trail races another day. How can we empty and refill the midsole with different particles so it can adapt again? More importantly, how can we upgrade or change our shoes without throwing them away? This is exciting future work for us to explore.

A regulatory loophole could delay ozone recovery by years

Thu, 04/16/2026 - 5:00am

Often hailed as the most successful international environmental agreement of all time, the 1987 Montreal Protocol continues to successfully phase out the global production of chemicals that were creating a growing hole in the ozone layer, causing skin cancer and other adverse health effects.

MIT-led studies have since shown the subsequent reduction in ozone-depleting substances is helping stratospheric ozone to recover. (It could return to 1980 levels by as early as 2040, according to some estimates.) But the Montreal Protocol made an exception in its rules for the use of ozone-depleting substances as feedstocks in the production of other materials. That’s because it was thought that only a small amount — just 0.5 percent — of the ozone-depleting substances used for this purpose would leak into the atmosphere.

In recent years, however, scientists have observed more ozone-depleting substances in the atmosphere than expected, and have increased their estimates of leakage from feedstocks.

Now an international group of scientists, including researchers from MIT, has calculated the impact of different feedstock leakage rates on the ozone layer’s fragile recovery. They find that these higher leakage rates, if not addressed under the Montreal Protocol, could delay ozone recovery by about seven years.

“We’ve realized in the last few years that these feedstock chemicals are a bug in the system,” says author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, who was part of the original research team that linked the chemicals to the ozone hole. “Production of ozone-depleting substances has pretty much ceased around the world except for this one use, which is when you have a chemical you convert into something else.”

The paper, which was published in Nature Communications today, is the first to comprehensively quantify the impact of leaked feedstocks, which are currently used to make plastics and nonstick chemicals. They are also used to make substitute chemicals for the ones regulated under the Montreal Protocol. The researchers say it shows the importance of curbing use and preventing leakage of such feedstocks, especially as the production of their end products, like plastic, is projected to grow.

“We’ve gotten to the point where, if we want the protocol to be as successful in the future as it has been in the past, the parties really need to think about how to tighten up the emissions of these industrial processes,” says first author Stefan Reimann of the Swiss Federal Laboratories for Materials Science and Technology.

“To me, it’s only fair, because so many other things have already been completely discontinued. So why should this exemption exist if it’s going to be damaging?” says Solomon.

Joining Reimann on the paper are his colleagues Martin K. Vollmer and Lukas Emmenegger; Luke Western and Susan Solomon of the MIT Center for Sustainability Science and Strategy and the Department of Earth, Atmospheric and Planetary Sciences; David Sherry of Nolan-Sherry and Associates Ltd; Megan Lickley of Georgetown University; Lambert Kuijpers of the A/gent Consultancy b.v.; Stephen A. Montzka and John Daniel of the National Oceanic and Atmospheric Administration; Matthew Rigby of the University of Bristol; Guus J.M. Velders of Utrecht University; Qing Liang of the NASA Goddard Space Flight Center; and Sunyoung Park of Kyungpook National University.

Repairing the ozone

In 1985, scientists discovered a growing hole in the ozone layer over Antarctica that was allowing more of the sun’s harmful ultraviolet radiation to reach Earth’s surface. The following year, researchers including Solomon traveled to Antarctica and discovered the cause of the ozone deterioration: a class of chemicals called chlorofluorocarbons, or CFCs, which were then used in refrigeration, air conditioning, and aerosols.

The revelations led to the Montreal Protocol, an international treaty involving 197 countries and the European Union restricting the use of CFCs. The subsequent decision to exempt ozone-depleting substances used as feedstocks was based partly on industry estimates of how much of those feedstocks leaked.

“It was thought that the emissions of these substances as a feedstock were minor compared to things like refrigerants and foams,” Western says. “It was also believed that leakage from these sources was minor — around half a percent of what went in — because people would essentially be leaking their profits if their feedstocks were released into the atmosphere.”

Unfortunately, some of those assumptions are no longer true. Western and Reimann are part of the Advanced Global Atmospheric Gases Experiment (AGAGE), a global monitoring network co-founded by Ronald Prinn, MIT’s TEPCO Professor of Atmospheric Science. AGAGE monitors emissions of ozone-depleting substances around the world, and in recent years researchers have revised their estimates of feedstock leakage upwards, to about 3.6 percent. For some chemicals, the number was even higher.

In the new paper, the researchers estimated a 3.6 percent feedstock leakage as the baseline for most chemicals. They compared that with a scenario where 0.5 percent of feedstocks are leaked from 2025 onward and a scenario with zero feedstock-related emissions. The researchers also looked at production trends between 2014 and 2024 to project how much of each specific ozone-depleting chemical would be used as feedstock between 2025 and 2100.

The analysis shows that until 2050, total ozone-depleting chemical emissions decrease in all scenarios, as rising feedstock emissions are offset by declines in the uses regulated under the Montreal Protocol. In the scenario with continued 3.6 percent leakage, however, emissions level off around 2045, and total emissions decrease by only 50 percent overall by 2100.

The researchers then evaluated the impact of feedstock-related emissions on stratospheric ozone depletion. In the scenario where feedstock leakage is 0.5 percent, the ozone layer returns to its 1980 state by 2066. In the scenario with zero feedstock leakage, it does so in 2065. But in the baseline scenario, the recovery is delayed by about seven years, to 2073.
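As a rough illustration of the arithmetic behind these scenarios, the sketch below multiplies a hypothetical projection of feedstock use by each leakage rate and totals the resulting emissions through 2100. Only the leakage rates (zero, 0.5 percent, and the 3.6 percent baseline) come from the study; the feedstock-use level and growth rate are placeholders, and the actual paper models each chemical separately and feeds the emissions into an atmospheric model to obtain the recovery dates.

# Hypothetical back-of-the-envelope comparison of the three leakage scenarios.
# Only the leakage rates are taken from the article; base_kt and growth are placeholders.

YEARS = range(2025, 2101)

def projected_feedstock_use(year, base_kt=500.0, growth=0.01):
    """Assumed global feedstock use (kilotonnes per year), growing with plastics demand."""
    return base_kt * (1 + growth) ** (year - 2025)

def cumulative_leaked(leak_rate):
    """Total feedstock-related emissions, 2025-2100, for a given leakage rate."""
    return sum(projected_feedstock_use(y) * leak_rate for y in YEARS)

scenarios = [("zero leakage", 0.0),
             ("0.5% (original assumption)", 0.005),
             ("3.6% (observed baseline)", 0.036)]

for label, rate in scenarios:
    print(f"{label:>27}: {cumulative_leaked(rate):7.0f} kt leaked by 2100")

The sevenfold difference between the 0.5 percent and 3.6 percent cases is what drives the difference between a 2066 and a 2073 recovery in the study's full model.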

“This paper sends an important message that these emissions are too high and we have to find a way to reduce them,” Reimann says. “Either that means no longer using these substances as feedstocks, swapping out chemicals, or reducing the leakage emissions when they are used.”

A global response

Solomon is confident industries will be able to adjust to the latest findings.

“There are a lot of innovators in the chemical industry,” Solomon says. “They make new chemicals and improve chemicals for a living. It’s true they can perhaps get too entrenched with certain chemicals, but it doesn’t happen that often. Actually, they’re usually quite willing to consider alternatives. There are thousands of other chemicals that could be used instead, so why not switch? That’s been the attitude.”

Solomon says the fact that AGAGE can detect the impact of feedstock emissions is a testament to the progress the world has made in reducing emissions from other sources up to this point. She believes raising awareness of the feedstock problem is the first step.

“This isn’t the first time that the AGAGE Network has made measurements that have allowed the world to see we need to do a little better here or there,” Western says. “Often, it’s just a mistake. Sometimes all it takes is making people more aware of these things to tighten up some processes.”

Parties to the Montreal Protocol meet every year and split into working groups on different topics. Feedstock emissions are already one of those topics, so participants will review the evidence together and, if warranted, typically release a statement on mitigation strategies.

“We wanted to raise the warning flag that something is wrong here,” Reimann says. “We could reduce the period of ozone depletion by years. It might not sound like a long time, but if you could count the skin cancer cases you’d avoid in that time, it would seem quite significant.”

The work was supported, in part, by the U.S. National Science Foundation, the U.S. National Aeronautics and Space Administration (NASA), the Swiss Federal Office for the Environment, the VoLo Foundation, the United Kingdom Natural Environment Research Council, and the Korea Meteorological Administration Research and Development Program.

Youth may increase vulnerability to a carcinogen found in contaminated water and some drugs

Thu, 04/16/2026 - 12:00am

A new study from MIT suggests that a carcinogen that has been found in medications and in drinking water contaminated by chemical plants may have a much more severe impact on children than on adults.

In a study of mice, the researchers found that juveniles exposed to drinking water containing this compound, known as NDMA, showed dramatically higher rates of DNA damage and cancer than adults.

The findings may help to explain an epidemiological association between childhood cancer and prenatal exposure to NDMA in people living near a contaminated site in Wilmington, Massachusetts, the researchers say. The study also suggests that it is critical to evaluate the impact of potential carcinogens across all ages.

“We really hope that groups that do safety testing will change their paradigm and start looking at young animals, so that we can catch potential carcinogens before people are exposed,” says Bevin Engelward, an MIT professor of biological engineering. “As a solution to cancer, cancer prevention is clearly much better than cancer treatment, so we hope we can spot dangerous chemicals before people are exposed, and therefore prevent extensive cancer risk.”

MIT postdoc Lindsay Volk is the lead author of the paper. Engelward is the senior author of the study, which appears in Nature Communications.

From DNA damage to cancer

NDMA (N-Nitrosodimethylamine) can be generated as a byproduct of many industrial chemical processes, and it is also found in cigarette smoke and processed meats. In recent years, NDMA has been detected in some formulations of the drugs valsartan, ranitidine, and metformin. It was also found in drinking water in Wilmington, Massachusetts, in the 1990s, as a result of contamination from the Olin Chemical site.

In 2021, a study from the Massachusetts Department of Health suggested a link between that water contamination and an elevated incidence of childhood cancer in Wilmington. Between 1990 and 2000, 22 Wilmington children were diagnosed with cancer. The contaminated wells were closed in 2003.

Also in 2021, Engelward and others at MIT published a study on the mechanism of how NDMA can lead to cancer. In the new Nature Communications paper, Engelward and her colleagues set out to see if they could determine why the compound appears to affect children more than adults.

Most studies that evaluate potential carcinogens are performed in mice that are at least 4 to 6 weeks old, and often older. For this study, the researchers studied two groups of mice — one 3 weeks old (juvenile), and one 6 months old (adult). Each group was given drinking water with low levels of NDMA, about five parts per million, for two weeks.

Inside the body, NDMA is metabolized by a liver enzyme called CYP2E1. This produces toxic metabolites that can damage DNA by adding a small chemical group known as a methyl group to DNA bases, creating lesions known as adducts.

When the researchers examined the livers of the mice, they found that juveniles and adults showed similar levels of DNA adducts. However, there were dramatic differences in what happened after that initial damage. In juvenile mice, DNA adducts led to significant accumulation of double-stranded DNA breaks, which occur when cells try to repair adducts. These breaks produce mutations that eventually lead to the development of liver cancer.

In the adult mice, the researchers saw essentially no double-stranded breaks and significantly fewer mutations compared to juveniles. Furthermore, the adult livers did not develop severe pathology such as tumors, even though they experienced the same initial level of DNA adducts.

“The initial structural changes to the DNA had very different consequences depending on age,” Engelward says. “The double-stranded breaks were exclusively observed in the young.”

Further experiments revealed that these differences stem from differences in the rates of cell proliferation. Cells in the juvenile liver divide rapidly, giving them more opportunity to turn DNA adducts into mutations, while cells of the adult liver rarely divide.
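One way to picture this proliferation effect is as a race between repair and replication: an adduct becomes a permanent mutation only if the cell copies its DNA before the lesion is repaired. The sketch below is a hypothetical toy model of that race; the repair and division rates are invented for illustration and are not measurements from the study.

import random

# Toy "race between repair and replication" model of the age effect described above.
# Rates are illustrative assumptions; the study measured adducts, breaks, and
# mutations directly in juvenile and adult mice.

def fraction_of_adducts_fixed(division_rate, repair_rate=0.5, n_adducts=100_000):
    """An adduct becomes a permanent mutation only if the cell divides
    (replicating its DNA) before the adduct is repaired."""
    fixed = 0
    for _ in range(n_adducts):
        t_repair = random.expovariate(repair_rate)      # waiting time until repair
        t_division = random.expovariate(division_rate)  # waiting time until next division
        if t_division < t_repair:
            fixed += 1
    return fixed / n_adducts

print("juvenile liver (fast division):", fraction_of_adducts_fixed(division_rate=0.3))
print("adult liver (rare division):   ", fraction_of_adducts_fixed(division_rate=0.01))

With a higher division rate, far more lesions are copied before they can be repaired, which mirrors why the juvenile liver accumulated double-strand breaks and mutations while the adult liver did not.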

“This really emphasizes the overall problem that we’re trying to highlight in the paper,” Volk says. “With toxicological studies, oftentimes the standard is to use fully grown mice. At that point, they’re already slowing down cell division, so if we are testing the harmful effects of NDMA in adult mice, then we’re completely missing how vulnerable particular groups are, such as younger animals.”

While most of these effects were seen in the liver, because that is where NDMA is metabolized, a few of the mice developed other types of cancer, including lung cancer and lymphoma.

Adult risk is not zero

For most of these studies, the researchers used mice that had two of their DNA repair systems knocked out. This speeds up the mutation process, allowing the researchers to see the effects of NDMA exposure more easily, without needing to study a large population of mice.

However, a small study in mice with normal DNA repair showed that juveniles experienced NDMA-induced double-strand breaks, regenerative proliferation, and large-scale mutations that were completely absent in adults. This occurs because the fast-growing juveniles possess highly active DNA replication machinery that encounters the DNA adducts before the cell has time to repair them.

The researchers also found that if they treated adult mice with thyroid hormone, which stimulates proliferation of liver cells, those cells began accumulating mutations as quickly as the juvenile liver cells. Previous work done in the Engelward laboratory has shown that inflammation can also stimulate cell proliferation-driven vulnerability to DNA damage, so the findings of this study suggest that anything that causes liver inflammation could make the adult liver more vulnerable to damage caused by agents such as NDMA.

“We certainly don’t want to say that adults are completely resistant to NDMA,” Volk says. “Everything impacts your susceptibility to a carcinogen, whether that’s your genetics, your age, your diet, and so forth. In adults, if they have a viral infection, or a high fat diet, or chronic binge alcohol drinking, this can impact proliferation within the liver and potentially make them susceptible to NDMA.”

The researchers are now investigating how a high-fat diet might influence cancer development in mice that also have exposure to NDMA.

This collaborative effort across several MIT labs was funded by the National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program, an NIEHS Core Center Grant, a National Institutes of Health Training Grant, and the Anonymous Fund for Climate Action.
