MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Updated: 15 hours 52 min ago

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
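The best-known such limit is the thermionic “Boltzmann” floor on subthreshold swing: roughly 60 millivolts of gate voltage per tenfold change in current at room temperature. A quick back-of-the-envelope check of that figure (an illustration, not part of the study):

    # Thermionic limit on subthreshold swing for a conventional transistor:
    # SS = (kT/q) * ln(10), about 60 mV per decade of current at room temperature.
    import math

    k = 1.380649e-23   # Boltzmann constant, J/K
    q = 1.602177e-19   # elementary charge, C
    T = 300.0          # room temperature, K

    swing = (k * T / q) * math.log(10)
    print(f"{swing * 1e3:.1f} mV per decade")   # roughly 59.5 mV per decade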

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
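As a rough sense of why that matters for readout, here is a toy comparison of read margins; the numbers are invented for illustration and are not from the paper:

    # Toy read-margin comparison between a few-percent magnetic effect
    # and a factor-of-10 switch. All numbers are illustrative.
    i_low = 1.0                    # arbitrary low-state current
    weak_high = i_low * 1.03       # ~3 percent change, typical of weak magnetic effects
    strong_high = i_low * 10.0     # factor-of-10 switching described here
    read_noise = 0.05 * i_low      # assumed readout noise, 5 percent of baseline

    print("weak effect, margin in noise units:  ", (weak_high - i_low) / read_noise)
    print("factor of 10, margin in noise units: ", (strong_high - i_low) / read_noise)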

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Mapping the ocean with autonomous sensors

15 hours 55 min ago

In late October 2025, Tropical Storm Melissa moved through the Caribbean Sea with moderate winds that didn’t get much attention. But on Oct. 25, aided by a patch of warm ocean, the storm rapidly intensified. By the time it made landfall in Jamaica, it was one of the strongest Atlantic hurricanes on record, uprooting trees, tearing the roofs from buildings, and causing catastrophic flooding and power outages.

Ravi Pappu SM ’95, PhD ’01 blames the surprise on our inability to gather high-quality ocean data.

“The storm intensified because of a small pool of hot water in the Caribbean Ocean that fed it energy,” Pappu explains. “These pools are everywhere. They can be hundreds of kilometers wide and are literally invisible to us. If we knew about that pool, we could say very precisely how the hurricane would intensify and better deal with it.”

Pappu thinks he has a way to solve that problem. He is the founder of Apeiron Labs, a company deploying low-cost autonomous ocean sensors to capture more data, in more places, and at a lower cost than is possible today. The company’s devices roam the ocean up to a quarter mile below the surface and continuously gather data on temperature, acoustics, salinity, and more, providing a real-time look at one of the planet’s last great mysteries. He says the sensors can do for the ocean what small, modular CubeSat satellites did for Earth observation from space.

When the devices are ready to be recharged, trackers make it easy to scoop them from the ocean surface. Pappu envisions the recovery process being done by autonomous boats in the future.

“Humanity needs ocean measurements, and we need them at a scale that has never been attempted before,” Pappu says. “It’s a massively hard problem. In the last century, oceanographers resigned themselves to calling it the century of undersampling. If we are successful, we will have a much more fine-grained understanding of our oceans and how they impact humans. That’s what drives us.”

Homework

Pappu came to MIT after completing a 10-year homework assignment. It started when he was a child in India in the 1980s, when he saw a hologram on the cover of National Geographic for the first time.

“I was so taken by it that I decided I needed to learn how to make those three-dimensional images,” Pappu recalls. “I learned what I could by reading books and papers. I didn’t know who invented the hologram until I read a book about MIT’s Media Lab. The book named the person who invented the rainbow hologram, so I wrote him a letter. I didn’t know his address, so I just wrote on the envelope, ‘Steve Benton, holography researcher, MIT, USA.’”

To Pappu’s surprise, the letter reached Benton, and the former Media Lab professor even wrote back with some further topics he needed to learn about.

Pappu never forgot that. He earned a bachelor’s degree in electrical engineering in India, then earned his master’s degree at Villanova University, taking all the optics classes he could.

“Eventually, about 10 years after I saw my first hologram, I wrote to Steve and I said, ‘I did all these things you asked me, now I want to study with you,’” Pappu says. “That’s how I got into MIT.”

Pappu studied under Benton for the next three years. He also studied under Professor Neil Gershenfeld as part of his PhD. Following graduation, Pappu and four classmates started ThingMagic, a consulting company that eventually sold RFID readers. ThingMagic was acquired in 2010. Pappu returned to MIT for two years as a visiting scientist around the time of the acquisition.

Following that experience, Pappu worked at In-Q-Tel, an organization that invested in ThingMagic and other companies with potential to advance national security. It was there that Pappu realized how badly the world needed large-scale, inexpensive ocean sensing.

“All of the ocean sensing up to that point, and even today, was about making a really expensive thing that cost $20 million, goes to the bottom of the ocean, and stays there for five years,” Pappu says. “We needed things that are cheap and scalable to deploy wherever you need them for as long as you want.”

Pappu officially founded Apeiron Labs in 2022.

“What we’re focused on is figuring out how the ocean works,” Pappu says. “How warm is it? What is the pH? How salty is it? These things vary from place to place every 10 kilometers or so. It varies over time, and it varies by season. If we knew the details of the ocean with the same fidelity we have for the atmosphere, we would be able to tell exactly when and where hurricanes hit. It would mean less uncertainty.”

Apeiron’s ocean-sensing devices are each 3 feet long and about 20 pounds. They’re designed to be dropped off a boat or plane with biodegradable parachutes and stay in the ocean for six months. Each device continuously sends data to the cloud, is controllable through a cloud-based ocean operating system, and is accessible on a mobile phone.

“We lower the carbon footprint and cost of gathering ocean data because everything else needs a diesel ship — and a fully crewed ship costs $100,000 a day,” Pappu says. “By the time you collect the first data in the old model, you’ve already committed to a lot of money in addition to millions of dollars for the sensors.”

The company’s devices currently carry two types of sensors: one that measures salinity, temperature, and depth, and another that uses a hydrophone to listen passively for things like submarines and whales.
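To picture what one of those devices might report, here is a hypothetical telemetry record; the field names, units, and values are invented for this sketch and are not Apeiron’s actual data format:

    # Hypothetical shape of a single reading from a drifting ocean sensor.
    # Field names, units, and values are invented for illustration only.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class OceanReading:
        device_id: str
        timestamp: datetime
        latitude_deg: float
        longitude_deg: float
        depth_m: float           # up to roughly 400 m (about a quarter mile)
        temperature_c: float
        salinity_psu: float      # practical salinity units
        acoustic_rms_db: float   # summary level from the hydrophone

    reading = OceanReading(
        device_id="unit-042",
        timestamp=datetime.now(timezone.utc),
        latitude_deg=17.9, longitude_deg=-76.8,
        depth_m=120.0, temperature_c=28.4,
        salinity_psu=35.1, acoustic_rms_db=74.2,
    )
    print(reading)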

That could be used to detect the low-frequency calls and clicks of endangered whales and other marine species. Currently, fishermen must look for whales manually, with spotters on ships or planes. The data could also be used to improve weather forecasts, monitor noise from offshore energy projects, and track currents.

“Currents are determined by temperature and salinity, so if there’s an oil spill, our data could help determine where that spill is going,” Pappu says. “Or if you’re a fisherman, knowing where the water changes from warm to cold, which is where the fish hang out, is very useful.”

An ocean of possibilities

Apeiron Labs has worked with government defense agencies, including the U.S. Navy, over the last two years. The company has also tested its devices off the coast of California and in Boston Harbor.

“The most important thing is, when we show people our approach and what we’ve demonstrated so far, they are no longer asking, ‘Can it be done?’ they’re asking, ‘What can we do with it?’” Pappu says. “Our customers have spent decades working in the ocean and they understand how novel these capabilities are.”

Of all the possibilities, improved storm forecasting could be the one Pappu is most excited about.

“Our mission is to lower the barriers to ocean data,” Pappu says. “The ocean is a huge determinant of weather, climate, and short-term forecasting. Despite our best efforts to predict the intensity of storms, sudden changes are still the norm, and much of that comes down to a lack of understanding of our oceans. If we were monitoring these things over long periods of time and finer spatial scales, we could see these storms coming much earlier with more certainty.”

MIT student Jack Carson named 2026 Udall Scholar

Thu, 05/07/2026 - 3:50pm

Jack Carson, a second-year undergraduate at MIT majoring in electrical engineering and computer science, has been named a 2026 Udall Scholar, one of up to 65 undergraduates nationally to receive the prestigious $7,500 award. 

The Udall Scholarship honors students who have demonstrated a commitment to the environment, Indigenous health care, or tribal public policy. Carson is only the third MIT student to win this award, and the first to win for tribal policy.

Carson, a member of the Cherokee Nation and resident of Oklahoma, exemplifies the multidisciplinary approach to problem-solving that the Udall Scholarship seeks to honor. His work spans artificial intelligence, biomedical research, Indigenous community development, and ethics.

"Jack is the type of leader the Udall Foundation exists to support," says Kim Benard, associate dean for distinguished fellowships. "He's not only conducting cutting-edge research, but he's actively creating opportunities for Indigenous students to enter tech fields."

At MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Carson works in the Barzilay Lab, developing multiomics models for personalized therapeutic target identification. His work on deep learning and statistical physics has resulted in a sole-author paper published at the International Conference on Machine Learning (ICML).

Carson founded Code.Tulsa, a summer technology program designed to introduce Indigenous high school students to computer science and tech careers. The initiative addresses a significant gap: Indigenous communities remain highly underrepresented in technology fields, despite the potential for tech to advance tribal sovereignty and economic development.

This year, Carson won the Elie Wiesel Prize in Ethics Essay Contest. He is an accomplished musician who has performed at Carnegie Hall and with the National Opera, a motorcycle racer, and a self-described philosopher deeply committed to questions of justice and responsibility.

MIT School of Engineering faculty receive awards in winter 2026

Thu, 05/07/2026 - 12:40pm

Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in winter 2026:

Arup K. Chakraborty, the John M. Deutch (1961) Institute Professor in the departments of Chemical Engineering, Chemistry, and Physics, and the founding director of the Institute for Medical Engineering and Science, and James J. Collins, the Termeer Professor of Medical Engineering and Science in the Department of Biological Engineering and IMES, were named 2026 laureates of the Tel Aviv University International Prize in Biophysics. The prize recognizes outstanding scientists whose work has significantly advanced the understanding of biological systems through physical principles.

Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor in the Department of Electrical Engineering and Computer Science, received the 2025 IEEE Journal of Solid-State Circuits Test of Time Award. The award recognizes an outstanding paper published in the IEEE Journal of Solid-State Circuits at least 10 years prior that has had significant impact on its field.

Charles Harvey, a professor in the Department of Civil and Environmental Engineering; Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor in the Department of Electrical Engineering and Computer Science; John Henry Lienhard, the Abdul Latif Jameel Professor of Water and Mechanical Engineering in the Department of Mechanical Engineering; Frances Ross, the TDK Professor in Materials Science and Engineering; Zoltán Sandor Spakovszky, the T. Wilson (1953) Professor in Aeronautics; and Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Physics and Physics in the Department of Biological Engineering; were elected to the National Academy of Engineering for 2026. One of the highest professional distinctions for engineers, membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”

Michael Howland, the Jeffrey Cheah Career Development Professor and assistant professor in the Department of Civil and Environmental Engineering, received a 2026 Faculty Early Career Development (CAREER) Award from the National Science Foundation. The award supports early-career faculty who have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.

Yoon Kim, associate professor in the Department of Electrical Engineering and Computer Science; Anand Natarajan, an associate professor in the Department of Electrical Engineering and Computer Science; and Mengjia Yan, ITT Career Development Professor in Computer Technology and associate professor in the Department of Electrical Engineering and Computer Science, were named 2026 Sloan Research Fellows. Sloan Research Fellowships support fundamental research conducted by early-career scientists, and they are awarded annually to early-career researchers whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders.

Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor in the Department of Mechanical Engineering, has received a 2026 Young Investigator Award from the Office of Naval Research. The Young Investigator Program seeks to identify and support academic scientists and engineers who are in their first or second full-time tenure-track or tenure-track-equivalent academic appointment, who have received their doctorate or equivalent degree in the past seven years, and who show exceptional promise for doing creative research.

Ellen Roche, the Abby Rockefeller Mauzé Professor and associate department head for research in the Department of Mechanical Engineering and a professor in the Institute for Medical Engineering and Science, received the 2026 Sony Women in Technology Award with Nature. The award recognizes exceptional early- to mid-career women researchers in technology who through their research are driving a positive impact on society and the planet.

Tess Smidt, an associate professor in the Department of EECS, was named co–principal investigator on a National Science Foundation (NSF) AI Research Institute award and also received a 2025 Department of Energy Office of Science Early Career Research Program Award. The NSF AI Materials Institute (AI-MI) aims to propel foundational AI research past the limitations of existing AI algorithms by pursuing materials discovery and conquering knowledge- and data-centric challenges. The DoE Early Career Research Program provides five-year awards to exceptional early career researchers at U.S. academic institutions, DoE National Laboratories, and Office of Science User Facilities to stimulate new research directions in mission critical areas supported by DoE’s Office of Science.

Antonio Torralba, the Delta Electronics Professor and faculty head of AI+D in the Department of EECS, was elected to the 2025 cohort of Association for Computing Machinery Fellows. ACM Fellows, the highest honor bestowed by the professional organization, are registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.

Harry L. Tuller, a professor in the Department of Materials Science and Engineering, received The Senior Scientist Award from the International Society for Solid State Ionics. The Senior Scientist Award, the most prestigious award of the International Society for Solid State Ionics, is presented to a senior solid-state ionics researcher who has made outstanding contributions to the science and engineering of solid-state ionics.

Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in the Department of Electrical Engineering and Computer Science, was named a 2026 fellow of the International Association for Cryptologic Research (IACR). The IACR established its Fellows Program to recognize outstanding IACR members for technical and professional contributions.

Celebrating dorm-to-market social entrepreneurship at MIT

Thu, 05/07/2026 - 11:20am

Over 200 students, alumni, faculty, staff, funders, and community collaborators gathered at the MIT Media Lab on April 15 for the 25th annual IDEAS Social Innovation Incubator Showcase and Awards, hosted by the Priscilla King Gray (PKG) Center for Social Impact.

Since its founding in 2001, the PKG Center’s IDEAS Incubator has launched hundreds of social ventures in over 60 countries, guiding MIT’s technical talent toward urgent social challenges — from energy and climate to health care, education, and economic development. 

“Global and local challenges are increasingly complex and interconnected,” said Lauren Tyger, assistant dean for social innovation at the PKG Center and director of IDEAS. “IDEAS educates technical founders in systems thinking and community-based innovation, helping students develop business models that achieve both measurable social outcomes and financial sustainability.” 

IDEAS alumni celebrated

The event celebrated the many successful social ventures launched by IDEAS alumni with a 25-Year Impact Report and a keynote speech from IDEAS alumnus Bill Thies ’01, ’02, MNG ’02, PhD ’09. 

Thies traced his tuberculosis medication adherence work in India from a low-cost electronic pillbox through multiple iterations that helped shift India’s treatment policies toward patient autonomy. Ultimately, his work led to Nikshay, a national electronic medical records platform now supporting 150 million people, which recently transitioned to full government control. 

“Innovations can open doors for much more important changes than the innovations themselves,” Thies said. Limitations to technical interventions surface important questions, such as “what policies do we want to change, to become more supportive and human-centered? And how can technology be a bridge to that new world we would envision?”

Thinking back on the influence of IDEAS on his own path, Thies reflected: “I always assumed that in IDEAS we were incubating projects. But what I’ve come to realize is that it’s actually the other way around: the projects are incubating us. We are the ones who will ultimately drive the change we hope to see in the world.”

Vision for scaling social entrepreneurship at MIT and catalytic gift announced  

Thies’ message was echoed by Chancellor for Academic Advancement Eric Grimson, who explained how IDEAS aligns with MIT’s strategic initiatives, including MIT’s Generative AI Impact Consortium (MGAIC), Health and Life Sciences Collaborative (MIT HEALS), and the Climate Project, as well as President Sally Kornbluth’s and Provost Anantha Chandrakasan’s recent call to accelerate entrepreneurship. “Many of the current presidential initiatives naturally include an opportunity for social entrepreneurship,” said Grimson, who applauded IDEAS alumni pursuing ventures in climate, health, and AI-powered social enterprises. 

The PKG Center’s director, Alison Badgett, shared the center’s vision for the future of IDEAS. “As MIT’s only student entrepreneurship program focused solely on social impact,” said Badgett, “we recognize the need to both scale social entrepreneurship programming at MIT and to better position our student founders for scale after graduating.” 

Badgett announced a first-in gift of $150,000 from the Morgridge Family Foundation to help realize the center’s vision. The foundation’s gift will enable the PKG Center to develop a robust social impact investor ecosystem at MIT, connecting student- and alumni-led ventures with potential funders and helping more aspiring entrepreneurs see social impact as a viable path. 

This year’s award-winning social ventures

This year’s top $20,000 award winner was Beyond Words, an assistive application for iPhone and Apple Watch that gives nonverbal individuals a layer of support by passively capturing biometrics, audio, and location, and communicating it to caregivers. 

Other award winners were:

  • AyuConnect ($10,000) uses WhatsApp-native, voice-first electronic health records to enhance care access while reducing clinician burnout in India.
  • PEAR ($7,500) offers a hands-on STEM research program for Nigerian and other African students, equipping them with technical skills to solve community problems.
  • CommonGround ($5,000) connects Bostonians to tailored and hyper-local climate actions through an online platform, replacing eco-anxiety with collective resilience.
  • Sehat Screen ($5,000) is an AI-powered cervical cancer screening device for women in Afghanistan and other resource-constrained countries.
  • Breakthrough Health ($2,500) is a care coordination platform that links hepatitis C patients in recovery centers to health care.
  • Sero ($2,500) is a voice-first AI tool that helps rural borrowers in Nepal understand loan contracts and access fair credit in their own language, with no dependency on literacy.

During the event, Shane Kosinski, executive director of the Office of the Vice President for Energy and Climate, announced inaugural Climate Student Innovators awards, funded by the MIT Climate Project. Four IDEAS teams received this award, which will be presented annually.

“The MIT Climate Project is an all-of-MIT initiative with the ambitious goal to make a measurable difference on climate change within a decade. We reach this global impact not by top down mandates, but by testing good ideas where they are needed most and supporting them to succeed,” explained Kosinski. “This vision is also hardwired into the character, history and purpose of PKG IDEAS.”

This year’s IDEAS teams awarded by the MIT Climate Project were:

  • Q’ochas Resilientes ($15,000) co-designs climate-resilient water technology in the Peruvian Andes to uplift ancestral knowledge and support agricultural livelihoods.
  • NECTICA ($15,000) tackles urban flooding in Lagos by empowering women-led cooperatives with a low-tech sorter bin to separate and monetize composite waste.
  • MittiNav ($15,000) designs production and supply-chain systems to scale biochar technology that restores soil and stores carbon.
  • Resilient Grid ($5,000) collects and processes food waste through anaerobic digestion on skid platforms to produce biogas for electricity and heat in Caribbean island nations.

“The Climate Project is thrilled to present the first-ever Climate Student Innovators Awards to these teams,” said Vice President for Energy and Climate Evelyn Wang. “We applaud this year’s IDEAS winners for developing systemic interventions in partnership with affected communities.” 

Several additional teams received $1,000 awards: 

  • 1for1Health is a fertility platform offering education, testing, and insights to expand access and reduce disparities in reproductive health decisions.
  • Ceed CRM brings cutting-edge AI to mission-driven organizations that have been stuck with tools built for sales teams, not social impact.
  • CerviSeal created a medical device that reduces pain, tissue trauma, and risk during cervical manipulation for women undergoing hysteroscopy.
  • FoodLoop connects farms and restaurants through matchmaking, demand forecasting, and forward contracts to strengthen local food systems.
  • Homeroom Hero is an AI tool for teachers that instantly grades short-form assessments, reducing workload and improving student learning without putting tech in front of kids.
  • Gees Health is a noninvasive, at-home hormone monitor that helps women with polycystic ovary syndrome track and manage their health with continuous insights.
  • Illume makes discreet wearables that are a safe way for recovering victims of human trafficking to contact trusted people, building their support network.
  • Longevia is an AI-powered platform that translates complex medical data into personalized, actionable insights for chronic kidney disease patients.
  • Opta is an AI talent refinery upskilling Brazil's low-socioeconomic status students for small and medium business jobs, driving economic mobility.
  • Recover Hospitality scales recovery-informed wellness coaching for hospitality workers through AI-powered motivational interviewing and benefits navigation.

The event closed with Tyger thanking the vast network of alumni, mentors, funders, and campus partners who make IDEAS possible, and the 104 volunteers who supported this year’s incubator challenge. “IDEAS builds more than social enterprises — we’re building the infrastructure and community needed for alumni and their ventures to achieve long-lasting impact. Our vision is a future where MIT entrepreneurship is not only groundbreaking, but fundamentally grounded in social impact.” 

Rethinking how our brains use categories to make sense of the world

Thu, 05/07/2026 - 10:55am

In a new review article, “Categorization is Baked into the Brain,” cognitive scientists Earl K. Miller, Picower Professor of Neuroscience at MIT, and Lisa Feldman Barrett, university distinguished professor at Northeastern University, contend that categorization is part of a predictive process the brain uses to efficiently meet the body’s needs in a fast-paced, otherwise overwhelming sensory world. In that sense, their paper in Nature Reviews Neuroscience challenges decades of dogma about how and why the brain boils down what it sees, hears, smells, tastes, and feels.

Categories are groups of things that are similar enough to be considered functionally equivalent. When you walk through a neighborhood, you’ll naturally experience the furry, four-legged, barking animal ahead of you as a “dog.” In the classic view of cognition, your brain arrives at that categorization by soaking in lots of basic sensory features of the hound — its shape, its size, the sounds it makes, its behavior — and compares that to some prototype “dog” stored in your memory. Hundreds of milliseconds after the first sensory inputs, you can then decide what you might want to do about the dog.

Barrett and Miller argue that that’s wrong. Instead, they propose that your brain comes prepared for sensory patterns with predictions of the motor action plans that are most likely to achieve the needs and goals you bring to the moment. Those prediction signals can be described as a momentary category that the brain constructs to shape the processing of sensory signals. 

From the very start, incoming sensory signals are compressed and abstracted into that category to efficiently select the best predicted plan. If you are in an unfamiliar neighborhood, your brain might construct the category “dog” to avoid being bitten, resulting in: “Back away slowly while saying nice doggie.” If you are on your own block and encounter a familiar dog, your brain might construct a category that has you kneel and open your arms to summon your neighbor’s adorable pup for a satisfying petting.

In either case, the category “dog” arises in the context of your needs and your prediction from a menu of learned action plans for similar situations, not from an intellectual exercise of neutrally regarding sensory inputs, comparing them to a fixed prototype, and then planning from there. If the brain really worked the classically believed way, you’d be on the back foot when the unfamiliar dog lunged at you.

“One of the main things your brain has to do is predict the world,” says Miller, a faculty member of The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “It takes several hundred milliseconds to process things, and meanwhile the world is moving on. Your brain has to anticipate things.”

The most pragmatic and efficient way to survive and thrive in such a world, Barrett says, is to have your needs and potential plans ready for the sensory situation. If your predictions are right, you’re prepared in time. If they are wrong, you adjust and learn from it.

“The stimulus, cognition, response model of the brain is wrong,” says Barrett, a faculty member in Northeastern’s Department of Psychology and co-director of the Interdisciplinary Affective Science Laboratory. “The brain prepares for a response and then perceives a stimulus. A brain is not reactive. It’s predictive. Action planning comes first. Perception comes second, as a function of the action plan.”

Anatomical and functional evidence

Throughout the review, Barrett and Miller ground the provocative proposal in copious anatomical, electrophysiological, and imaging evidence about the brain. They cite numerous experimental results that show how the brain is structured to broadcast memories to create motor plans that flow back toward signals that arrive from the body’s sensory surfaces, actively whittling them down and shaping them to give them meaning.

“The capacity to create similarities from differences — to abstract — is embedded in the architecture of the nervous system, and you can see that by looking at what is connected to what and by observing signal flow,” Barrett says.

For example, as circuits feed signals “forward” from sensory surfaces (such as the retina) to regions of the cerebral cortex that are focused on sensory processing (such as the visual cortex) toward the areas that are important for executive control (the prefrontal cortex) and control of the body (limbic cortex), information passes from many small, barely connected neurons to fewer, bigger, and more well-connected neurons. Such an architecture compresses sensory details into increasingly abstract representations that group many different features into smaller groups of similar features, and in doing so helps to select a predicted action plan from the broader category that’s already there.

“Your brain is a big funnel to take the outside world and turn it into an output,” Miller says.

Moreover, anatomical evidence shows that neurons in the cortex maintain many more connections that provide feedback from memory to control sensory regions than connections that feed sensory information forward. As much as 90 percent of synapses in the visual cortex are “feedback” instead of “feedforward,” Barrett and Miller wrote. In other words, the brain is built to use memory to filter incoming sensory signals, consistent with imposing needs and goals on what would otherwise be a deluge of sights, sounds, and other sensations.

Yet another line of evidence comes from numerous studies in Miller’s own lab showing that, at the broad network level of information flow in the cortex, the brain uses beta-frequency waves, which carry information about goals and plans, to constrain the expression of gamma-frequency waves, which carry information about specific sensory inputs.

Finally, the dominance of “feedback” over “feedforward” signals in the cortical architecture allows for the possibility that sensory signals are made meaningful in terms of predicted plans. When these plans are wrong, the resulting surprise can be integrated for future use.

“In science, there is a special name for that: learning,” Barrett says.
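One way to make that idea concrete is a minimal predictive-processing toy model (our illustration, not the authors’ model), in which a stored prediction filters the incoming signal and only the mismatch, the surprise, is used to update future predictions:

    # Minimal predictive-processing sketch: a prediction from memory filters
    # incoming sensory signals, and the residual surprise drives learning.
    # This illustrates the general idea, not the review's specific model.
    import random

    prediction = 0.0        # the brain's current expectation for a sensory feature
    learning_rate = 0.2

    for step in range(20):
        sensory_input = 1.0 + random.gauss(0, 0.1)   # the world, roughly constant here
        surprise = sensory_input - prediction        # prediction error
        prediction += learning_rate * surprise       # feedback pathway updates the model

    print(f"learned prediction is about {prediction:.2f}")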

Implications for human thought and disease

In the end, Barrett and Miller’s proposal completely changes the idea of categorization, shifting it from being a particular intellectual skill to being a fundamental function for predictively meeting the body’s needs (or, “allostasis”).

“A category may not be a representation that an animal has, but a signal processing event that an animal does, predictively, to constrain the meaning of a high-dimensional ensemble of signals in a particular situation,” the authors wrote. “Categorization renders these signals meaningful — similar to one another and to past allostatic events — in terms of some goal or function.”

Humans, Barrett says, have a relatively massive amount of the neural network architecture to perform these pragmatic abstractions, and therefore can make categorizations that seem outright metaphorical (e.g., a functional similarity between “climbing the career ladder” and climbing a literal physical ladder).

But these processes can also go awry in disease, Barrett and Miller note. Depression can be seen as a disorder in which the brain imposes overly broad categories, such as “threat” or “criticism” on sensory episodes that don’t have to be perceived that way. By contrast, autism can manifest with features of inadequate compression of incoming sensory signals, not generalizing enough to recognize when a situation is similar enough to a prior one to select the appropriate plan.

Funding to support the paper came from the National Institutes of Health, The U.S. Army Research Institute for the Behavioral and Social Sciences, the Office of Naval Research, the Unlikely Collaborators Foundation, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.

Photonics advance could enable compact, high-performance lidar sensors

Thu, 05/07/2026 - 5:00am

Lidar systems use pulses of infrared light to measure distance and map a 3D scene with high resolution, allowing autonomous vehicles to rapidly react to obstacles that appear in their path. But traditional lidar sensors are expensive, bulky systems with many moving parts that degrade over time, limiting how the sensors can be deployed.

A new study from MIT researchers could help to enable next-generation lidar sensors that are compact, durable, and have no moving parts. The key advance is a novel design for a silicon-photonics chip, which is a semiconductor device that manipulates light rather than electricity. 

Typically, such silicon-photonics chip-based systems have a restricted field of view, so a silicon-photonics-based lidar would not be able to scan angles in the periphery. Existing workarounds to this problem increase noise and hamper precision.

To avoid these drawbacks, the MIT researchers designed and demonstrated an array of integrated antennas that minimizes unwanted crosstalk between the antennas. Their innovation allows a lidar chip to scan a wider field of view while maintaining low-noise operation compared to other silicon-photonics-based approaches.

This novel demonstration could fuel the development of advanced lidar sensors for demanding applications like autonomous vehicle navigation, aerial surveying, and construction site monitoring.

“The functionality we demonstrated in this work solves a fundamental problem for integrated optical-phased-array technology, enabling future lidar sensors that can achieve significantly higher performance than we could demonstrate previously,” says Jelena Notaros, the Robert J. Shillman Career Development Associate Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the Research Laboratory of Electronics, and senior author of a paper on this innovation.

She is joined on the paper by lead author and EECS graduate student Henry Crawford-Eng as well as EECS graduate students Andres Garcia Coleto, Benjamin M. Mazur, Daniel M. DeSantis, and Tal Sneh. The research appears today in Nature Communications.

Adjusting an antenna array

Many traditional lidar systems map a scene using a bulky box that spins to send pulses of light in multiple directions. The light bounces off nearby objects and returns to the sensor, providing data that are used to reconstruct the environment. 

Instead, silicon-photonics-based lidar sensors systematically scan an emitted light beam in multiple directions non-mechanically using a system called an integrated optical phased array (OPA).

Key to an OPA is an array of integrated antennas that have tiny perturbations placed periodically along their length. These corrugations allow each antenna to scatter light from an input source up and out of the photonic chip.

By adjusting the phase of light routed to each antenna, the researchers can change the angle at which the light is emitted out of the array. In this way, they can steer the beam with no moving parts.

But if engineers place the antennas too close together, the antennas will couple with each other and the light they emit will get jumbled. To avoid this, scientists typically space the antennas farther apart, but this also has downsides.

If the antennas are spaced too far apart, the array will emit multiple copies of the light beam at different angles. The researchers can only steer the primary beam so far in either direction before it becomes indistinguishable from its neighboring copies.

“This limits our field of view, so the autonomous vehicle now only knows what is in front of it for a certain angular range,” Garcia Coleto explains.

These beam copies, known as grating lobes, can cause false positives by confusing the sensor. They also waste power.
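The numbers behind that tradeoff follow from standard phased-array relations (textbook formulas, not the team’s analysis): the main beam steers to an angle whose sine is set by the phase step between antennas, and extra copies appear whenever that sine plus a multiple of wavelength/spacing still lands between -1 and 1. A short sketch with assumed values:

    # Standard phased-array grating-lobe check, with illustrative values.
    import math

    wavelength = 1.55e-6    # near-infrared lidar wavelength, meters
    spacing = 3.0e-6        # assumed antenna pitch, meters
    steer_deg = 20.0        # intended main-beam angle, degrees

    sin0 = math.sin(math.radians(steer_deg))
    beams = []
    for m in range(-3, 4):
        s = sin0 + m * wavelength / spacing
        if -1.0 <= s <= 1.0:
            beams.append(round(math.degrees(math.asin(s)), 1))

    # More than one entry means grating lobes accompany the main beam.
    print("beams emitted (degrees):", beams)

Grating lobes vanish for all steering angles only when the pitch shrinks toward half a wavelength, which is exactly the dense spacing that ordinarily causes neighboring antennas to couple.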

The MIT researchers solved this problem by designing a set of reduced-crosstalk antennas that can be placed close together without causing a significant coupling effect.

In a standard OPA, all the antennas have the same design, meaning the same arrangement of corrugations. These identical antennas couple very strongly when placed close together.

To address this fundamental roadblock, the MIT researchers designed a set of three antennas with different geometries, varying the width of each antenna and the size and arrangement of corrugations. With varied geometries, each antenna has a different propagation coefficient, which determines how light travels down the antenna.

“Because the antennas have very different propagation coefficients, when we put them close together, essentially each antenna doesn’t ‘see’ the antenna next to it. Therefore, it won’t couple with its neighbor,” Garcia Coleto says. 
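A textbook coupled-mode estimate (not the paper’s design math) shows why a mismatch in propagation coefficients suppresses that coupling: for two parallel waveguides, the maximum fraction of power that can hop across is kappa^2 / (kappa^2 + (delta_beta / 2)^2), so even a modest mismatch delta_beta keeps most of the light in its own antenna. A small sketch with assumed numbers:

    # Two-waveguide coupled-mode estimate of maximum power transfer.
    # kappa is the coupling coefficient; delta_beta is the mismatch in
    # propagation constants. All values are assumed for illustration.
    def max_power_transfer(kappa, delta_beta):
        detuning = delta_beta / 2.0
        return kappa**2 / (kappa**2 + detuning**2)

    kappa = 1.0e3   # 1/m
    for delta_beta in (0.0, 5.0e3, 2.0e4):   # 1/m
        frac = max_power_transfer(kappa, delta_beta)
        print(f"delta_beta = {delta_beta:8.0f} 1/m -> at most {100 * frac:.1f}% of power couples")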

A photonic balancing act

But even though the antennas have different propagation coefficients, the researchers still need them to emit light in the same way. 

They achieved this by carefully designing the antennas to meet three parameters. 

First, each antenna must emit the same amount of light. Second, each antenna must emit a beam at the same angle for the same wavelength of light. Third, the emission angle must change uniformly across the array as the researchers steer it.

“We have this challenge where we require the antennas to have different geometries to reduce the crosstalk, but we need to simultaneously design the antennas to have the same emission characteristics. While it is possible to engineer this, it is extremely difficult because, typically, when antennas are designed with different geometries, they tend to behave differently,” Crawford-Eng says.

The researchers first developed the fundamental electromagnetic theory behind how radiative modes couple. They used that theory as a guide to design and simulate their antennas.

Building on those analyses, they fabricated the OPA with reduced-crosstalk antennas spaced significantly closer than they would be in a traditional OPA, then experimentally tested the system.

While a typical OPA would have coupling of about 100 percent in this experiment, their OPA reduced coupling to about 1 percent while generating a single, precise beam. Using this design, they demonstrated accurate beam steering across a wide field of view without any grating lobes. 

In the future, the researchers plan to further improve their technique to enable an even wider field of view. In addition, they are exploring a new potential solution to wide field-of-view functionality that they discovered while developing the underlying theory.

“This work addresses a longstanding challenge in integrated optical phased arrays: simultaneously achieving both a wide field of view, which requires dense antenna spacing, and high beam quality, which requires low crosstalk between neighboring antennas. The authors solve this problem with an elegant antenna design. Their innovation is an important step forward for chip-scale, solid-state beam-steering technology,” says Joyce Poon, professor of electrical and computer engineering at the University of Toronto and director of the Max Planck Institute of Microstructure Physics, who was not involved with this work.

This research was supported, in part, by the Semiconductor Research Corporation, the National Science Foundation, an MIT MathWorks Fellowship, the U.S. Department of War, and the MIT Rolf G. Locher Endowed Fellowship.

Study: Firms often use automation to control certain workers’ wages

Thu, 05/07/2026 - 12:00am

When we hear about automation and artificial intelligence replacing jobs, it may seem like a tsunami of technology is going to wipe out workers broadly, in the name of greater efficiency. But a study co-authored by an MIT economist shows markedly different dynamics in the U.S. since 1980. 

Rather than implement automation in pursuit of maximal productivity, firms have often used automation to replace employees who specifically receive a “wage premium,” earning higher salaries than other comparable workers. In practice, that means automation has frequently reduced the earnings of non-college-educated workers who had obtained better salaries than most employees with similar qualifications. 

This finding has at least two big implications. For one thing, automation has affected the growth in U.S. income inequality even more than many observers realize. At the same time, automation has yielded a mediocre productivity boost, plausibly due to the focus of firms on controlling wages rather than finding more tech-driven ways to enhance efficiency and long-term growth.

“There has been an inefficient targeting of automation,” says MIT’s Daron Acemoglu, co-author of a published paper detailing the study’s results. “The higher the wage of the worker in a particular industry or occupation or task, the more attractive automation becomes to firms.” In theory, he notes, firms could automate efficiently. But they have not, by emphasizing it as a tool for shedding salaries, which helps their own internal short-term numbers without building an optimal path for growth.

The study estimates that automation is responsible for 52 percent of the growth in income inequality from 1980 to 2016, and that about 10 percentage points derive specifically from firms replacing workers who had been earning a wage premium. This inefficient targeting of certain employees has offset 60-90 percent of the productivity gains from automation during the time period.

“It’s one of the possible reasons productivity improvements have been relatively muted in the U.S., despite the fact that we’ve had an amazing number of new patents, and an amazing number of new technologies,” Acemoglu says. “Then you look at the productivity statistics, and they are fairly pitiful.”

The paper, “Automation and Rent Dissipation: Implications for Wages, Inequality, and Productivity,” appears in the May print issue of the Quarterly Journal of Economics. The authors are Acemoglu, who is an Institute Professor at MIT; and Pascual Restrepo, an associate professor of economics at Yale University.

Inequality implications

Dating back to the 2010s, Acemoglu and Restrepo have collaborated on many studies about automation and its effects on employment, wages, productivity, and firm growth. In general, their findings have suggested that the effects of automation on the workforce after 1980 are more significant than many other scholars have believed. 

To conduct the current study, the researchers used data from many sources, including U.S. Census Bureau statistics, data from the bureau’s American Community Survey, industry numbers, and more. Acemoglu and Restrepo analyzed 500 detailed demographic groups, sorted by five levels of education, as well as gender, age, and ethnic background. The study links this information to an analysis of changes in 49 U.S. industries, for a granular look at the way automation affected the workforce. 

Ultimately, the analysis allowed the scholars to estimate not just the overall amount of jobs erased due to automation, but how much of that consisted of firms very specifically trying to remove the wage premium accruing to some of their workers. 

Among other findings, the study shows that within groups of workers affected by automation, the biggest effects occur for workers in the 70th-95th percentile of the salary range, indicating that higher-earning employees bear much of the brunt of this process. 

And as the analysis indicates, about one-fifth of automation’s contribution to the growth in income inequality is attributable to this sole factor.

“I think that is a big number,” says Acemoglu, who shared the 2024 Nobel Prize in economic sciences with his longtime collaborators Simon Johnson of MIT and James Robinson of the University of Chicago.

He adds: “Automation, of course, is an engine of economic growth and we’re going to use it, but it does create very large inequalities between capital and labor, and between different labor groups, and hence it may have been a much bigger contributor to the increase in inequality in the United States over the last several decades.” 

The productivity puzzle

The study also illuminates a basic choice for firm managers, but one that gets overlooked. Imagine a type of automation — call-center technology, for instance — that might actually be inefficient for a business. Even so, firm managers have incentive to adopt it, reduce wages, and oversee a less productive business with increased net profits.

Writ large, some version of this seems to have been happening to the U.S. economy since 1980: Greater profitability is not the same as increased productivity.

“Those two things are different,” says Acemoglu. “You can reduce costs while reducing productivity.” 

Indeed, the current study by Acemoglu and Restrepo calls to mind an observation by the late MIT economist Robert M. Solow, who in 1987 wrote, “You can see the computer age everywhere but in the productivity statistics.” 

In that vein, Acemoglu observes, “If managers can reduce productivity by 1 percent but increase profits, many of them might be happy with that. It depends on their priorities and values. So the other important implication of our paper is that good automation at the margins is being bundled with not-so-good automation.” 
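A back-of-the-envelope example (our numbers, purely illustrative) shows how that bundling can pencil out for a firm: automation that trims output slightly can still raise profit if it removes a wage premium.

    # Illustrative firm arithmetic: profit rises even as output falls.
    # All figures are invented for this example.
    revenue_before = 100.0
    wages_before = 40.0                            # includes a wage premium
    profit_before = revenue_before - wages_before  # 60.0

    revenue_after = revenue_before * 0.99          # productivity falls 1 percent
    wages_after = wages_before - 5.0               # premium-wage positions automated away
    profit_after = revenue_after - wages_after     # 99.0 - 35.0 = 64.0

    print(f"profit before: {profit_before:.1f}, after: {profit_after:.1f}")
    print(f"output change: {100 * (revenue_after / revenue_before - 1):+.1f} percent")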

To be clear, the study does not necessarily imply that less automation is always better. Certain types of automation can boost productivity and feed a virtuous cycle in which a firm makes more money and hires more workers. 

But currently, Acemoglu believes, the complexities of automation are not yet recognized clearly enough. Perhaps seeing the broad historical pattern of U.S. automation, since 1980, will help people better grasp the tradeoffs involved — and not just economists, but firm managers, workers, and technologists. 

“The important thing is whether it becomes incorporated into people’s thinking and where we land in terms of the overall holistic assessment of automation, in terms of inequality, productivity and labor market effects,” Acemoglu says. “So we hope this study moves the dial there.”

Or, as he concludes, “We could be missing out on potentially even better productivity gains by calibrating the type and extent of automation more carefully, and in a more productivity-enhancing way. It’s all a choice, 100 percent.”

MIT BrainTrust supports neighbors living with brain injuries

Wed, 05/06/2026 - 2:25pm

Since 1998, members of MIT’s BrainTrust club have helped Boston-area residents with brain injuries or other neurological disorders through the club’s buddy program. Members also visit nursing home patients who have neurological issues.

BrainTrust is one of the founding chapters of Synapse National, an organization created by MIT alumna Alissa Totman ’13. Synapse’s goal is to provide social support for individuals living with brain injuries and to educate and inspire student leaders in the field of brain injury.

“Learning directly from individuals who had experienced brain injury during my time in BrainTrust gave me an appreciation of the gaps in resources and opportunities for improvement in brain injury care, which ultimately motivated me to pursue a career in brain injury medicine. My experience in BrainTrust continues to shape my approach to patient care and my professional goal of improving access to specialized care for individuals with brain injury by serving as a consulting provider in the acute care hospital, as well as by training the next generation of leaders in the field,” says Totman.

The club’s president, junior Karie Shen, who is pursuing a double major in biology (Course 7) and brain and cognitive science (Course 9), says, “BrainTrust is a student-run service organization that provides support for individuals with brain injury and other neurological disorders. I joined BrainTrust because it seemed like the perfect intersection of community service and neuroscience, and I care about these two things deeply.”

BrainTrust volunteers participate in training and then are paired with a local buddy who has experienced a brain injury. Members can also spend time on the weekends with patients in nursing homes who have dementia, Alzheimer’s disease, or who have had a stroke.

Shen, along with Elizabeth Zhang, president of the MIT Pre-Med Society, recently developed a program that allows BrainTrust members to visit patients in hospice. “It’s an experience that is deeply valuable for students. We work through a third-party organization called Compassus. Because the pairing process is HIPAA-protected, our role as BrainTrust executive members is to recruit students and connect them with the hospice volunteer coordinator for training. We also provide funding for transportation, generously supported by the UA Community Service Committee,” says Shen.

Shen, who plans to go to medical school and specialize in neurology, neuro-oncology, or geriatric medicine when she completes her degree, finds the experience rewarding and at times difficult, but says it also offers a glimpse into the reality of working with people with brain injuries.

“Visiting the people in hospice or a nursing home is hard. I’ve seen residents cry for no apparent reason that the nurses or I can understand. But I have also come to understand that caring for a patient’s quality of life and dignity is equally important. What I came to realize is that my presence itself mattered. That perspective has shaped how I think about the kind of physician I want to become,” says Shen.

First-year student Jordan Lacsamana heard about the club during Campus Preview Weekend and was immediately interested. Lacsamana, who will major in brain and cognitive sciences, is a volunteer in the Buddy Program and meets with her buddy at least once a month.

“I joined the club because it aligned with my interests academically, but I also wanted to support someone in the Boston community. I’m pre-med, and I’m interested in surgery, possibly neurosurgery or cardiovascular surgery. But I also think it’s nice to have someone outside of MIT to talk with. It’s great to learn more about them and have that one-on-one friendship, which really is the goal,” says Lacsamana.

Lacsamana says she enjoys spending time with Amanda, her buddy, and exploring Boston and Harvard Square, meeting for coffee or meals, and getting as much out of the relationship as Amanda does.

“I see her as a mentor because coming to Boston from Dallas was such a big change, so I’ve also been able to look to her for advice. But I think one of the great things about the program is that you get to learn more about them as an individual, instead of seeing them as just a person with an injury,” says Lacsamana.

“Many of our brain injury buddies simply enjoy being around students, staying connected to what we are learning and doing. Some have been with the club for years, even upwards of a decade, and still keep up with former student members long after they graduate. It is really wonderful to see how BrainTrust has created this web of friendships between people who would otherwise never have met,” says Shen.

“Amanda has stayed in touch with her former buddy since she graduated from MIT and is going to her wedding,” says Lacsamana. “I think it’s a testament to how amazing this program is at forming those connections.”

MIT students who seek real-world opportunities in fields such as cognitive science, health care, medicine, and cognitive/neurological prosthetics, or who want to help a local resident, can join BrainTrust. Email braintrust-exec@mit.edu for more information.

Method for stress-testing cloud computing algorithms helps avoid network failures

Wed, 05/06/2026 - 12:00am

Researchers from MIT and elsewhere have developed a more user-friendly and efficient method to help networking engineers identify potential system failures before they cause major problems, like a cloud service outage that leaves millions of users unable to access applications. 

The technique uncovers hidden blind spots that might cause a shortcut algorithm to fail unexpectedly when it is deployed. 

This new approach can identify worst-case scenarios that an engineer might miss if they use a traditional method that compares an algorithm against a set of human-designed past test cases. It is also less labor-intensive than other verification tools that require engineers to rewrite an algorithm as a complex mathematical formulation each time they want to test it.

Instead of needing a mathematical reformulation, the new method reads the algorithm’s source code directly and automatically searches for worst-case scenarios that lead to the highest level of underperformance.

By helping engineers quickly and easily stress-test a networking algorithm before deployment, the method could catch failure modes that might otherwise only appear in a real outage. The technique could also be used to analyze the risks of deploying AI-generated code.

“We need to have good tools to measure the worst-case scenario performance of our algorithms so we know what could happen before we put them into production. This is an easy-to-use tool that can be plugged into current systems so we can find the best algorithm to use and ensure the worst-case scenarios are identified in advance,” says Pantea Karimi, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this new technique.

She is joined on the paper by senior authors Mohammad Alizadeh, an associate professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Behnaz Arzani, a principal researcher at Microsoft Research; along with Ryan Beckett, Siva Kesava Reddy Karkarla, and Pooria Namyar, researchers at Microsoft Research; and Santiago Segarra, a professor at Rice University. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation. 

Assessing algorithms

In large systems like cloud servers, the tried-and-true algorithms that route data from one place to another are often too computationally intensive to run in a feasible amount of time.

So, engineers and researchers develop suboptimal algorithms called heuristics that can run much faster. However, there could be unexpected but plausible circumstances that will cause a heuristic to underperform or fail when deployed.

A heuristic can route millions of data requests across a cloud network in seconds, but under the wrong conditions — like an unusual traffic pattern or a sudden spike in demand — the shortcut can break down in ways the designer never anticipated.

When these problems occur, a company may have no choice but to drop some requests that can’t be processed. 

The firm could also deliberately allocate more resources in advance to head off a potential disaster, leading to higher overall costs and wasted electricity from underutilization.

“This is really bad for a company because, either way, they are going to lose a lot of money. If this particular scenario hasn’t happened before and was never tested, how would a developer know in advance before it happens?” Karimi says.

Stress-testing heuristics typically involves running a new algorithm in simulation using a set of human-designed test cases and manually comparing the performance with a previous algorithm. But this is time-consuming and can leave blind spots if an engineer doesn’t know to test for certain situations.

Alternatively, engineers could use a verification tool to evaluate the performance of their heuristic more systematically. However, these tools require the engineer to encode the algorithm into a complex, mathematical formula that can take days to flesh out. The process, which doesn’t work for every type of heuristic, must be repeated each time the engineer changes the code.

Instead, the researchers developed a more user-friendly and efficient verification tool, called MetaEase, that analyzes the heuristic’s existing implementation code directly to identify the biggest risks of deploying it.

“This would reduce the friction of using these heuristic analysis tools,” Karimi says.

She began this work during an internship at Microsoft Research, where the team previously developed MetaOpt, a heuristic analyzer that requires engineers to rewrite their algorithms as formal optimization models. MetaEase grew out of the desire to remove that barrier.

Maximizing the gap

MetaEase is driven by two key innovations. First, it uses a technique called symbolic execution to map out the different decision points in the heuristic's code. These are places where the algorithm might behave differently depending on the input.

This technique produces a set of representative starting points, each corresponding to a distinct behavior the heuristic could exhibit.

Second, from these starting points, MetaEase utilizes a guided search to systematically move toward inputs that make the heuristic perform as poorly as possible, compared to the optimal algorithm.

In machine learning, for instance, an input could be a set of user queries to an AI chatbot at a given time.

“In this way, we have exploited every possible heuristic behavior and used special techniques to move in the direction where we think the performance gap is going to increase,” Karimi explains.

In the end, MetaEase identifies the input that maximizes the performance gap between the heuristic and an optimal benchmark.
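To make the idea concrete, here is a minimal sketch of that search loop, written for this article rather than drawn from the team’s code. It pits a toy first-fit bin-packing heuristic against a brute-force optimal baseline and hill-climbs over inputs to widen the gap; MetaEase itself works on real networking heuristics and uses symbolic execution to pick its starting points, which this toy example omits.

import itertools
import random

CAPACITY = 10

def first_fit(items):
    # Heuristic: put each item in the first bin with room (fast, but suboptimal).
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= CAPACITY:
                b.append(item)
                break
        else:
            bins.append([item])
    return len(bins)

def optimal(items):
    # Baseline: smallest feasible bin count by brute force (only viable for tiny inputs).
    for k in range(1, len(items) + 1):
        for assignment in itertools.product(range(k), repeat=len(items)):
            loads = [0] * k
            for item, b in zip(items, assignment):
                loads[b] += item
            if max(loads) <= CAPACITY:
                return k
    return len(items)

def worst_case_search(n_items=5, steps=500, seed=0):
    # Hill-climb over inputs, keeping any change that widens the heuristic-vs-optimal gap.
    rng = random.Random(seed)
    items = [rng.randint(1, CAPACITY) for _ in range(n_items)]
    best_gap = first_fit(items) - optimal(items)
    for _ in range(steps):
        candidate = items[:]
        candidate[rng.randrange(n_items)] = rng.randint(1, CAPACITY)
        gap = first_fit(candidate) - optimal(candidate)
        if gap >= best_gap:
            items, best_gap = candidate, gap
    return items, best_gap

if __name__ == "__main__":
    bad_input, gap = worst_case_search()
    print(f"adversarial input: {bad_input}, extra bins versus optimal: {gap}")

Even this crude random search can surface inputs where the shortcut wastes capacity; the same maximize-the-gap framing, applied to production heuristics and guided by the structure of their code, is what lets a tool flag failure modes before deployment.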

With this information, a heuristic developer could inspect the input to understand what went wrong and incorporate safeguards that will prevent the problem from happening during deployment.

In simulated experiments, MetaEase often identified inputs with larger performance gaps than traditional methods — pinpointing more catastrophic worst-case scenarios. And it did so much more efficiently.

It was also able to analyze a recent networking heuristic that no state-of-the-art method could handle.

In the future, the researchers want to enhance MetaEase so it can process additional types of data, like categorical inputs. They also want to improve the scalability of their method and adapt MetaEase to evaluate more complex heuristics.

“Reasoning about the worst-case performance of deployed heuristics is a hard and longstanding problem. MetaEase makes tangible progress by analyzing heuristics directly from source code, eliminating the need for formal models that have historically limited who can use such analysis tools. I was pleasantly surprised that it handles non-convex and randomized heuristics by combining symbolic execution with gradient-based search in a practical and effective way,” says Ratul Mahajan of the University of Washington Paul G. Allen School of Computer Science and Engineering, who was not involved with this research.

This research was funded, in part, by a Microsoft Research internship and the U.S. National Science Foundation (NSF).

Games people — and machines — play: Untangling strategic reasoning to advance AI

Tue, 05/05/2026 - 5:00pm

Gabriele Farina grew up in a small town in a hilly winemaking region of northern Italy. Neither of his parents had college degrees, and although both were convinced they “didn’t understand math,” Farina says, they bought him the technical books he wanted and didn’t discourage him from attending the science-oriented, rather than the classical, high school.

By around age 14, Farina had focused on an idea that would prove foundational to his career.

“I was fascinated very early by the idea that a machine could make predictions or decisions so much better than humans,” he says. “The fact that human-made mathematics and algorithms could create systems that, in some sense, outperform their creators, all while building on simple building blocks, has always been a major source of awe for me.”

At age 16, Farina wrote code to solve a board game he played with his 13-year-old sister.

“I used it game after game to compute the optimal move and prove to my sister that she had already lost long before either of us could see it ourselves,” Farina says, adding that his sister was less enthralled with his new system.

Now an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), Farina combines concepts from game theory with such tools as machine learning, optimization, and statistics to advance theoretical and algorithmic foundations for decision-making.

Enrolling at Politecnico di Milano for college, Farina studied automation and control engineering. Over time, however, he realized that what activated his interest was not “just applying known techniques, but understanding and extending their foundations,” he says. “I gradually shifted more and more toward theory, while still caring deeply about demonstrating concrete applications of that theory.”

Farina’s advisor at Politecnico di Milano, Nicola Gatti, professor and researcher in computer science and engineering, introduced Farina to research questions in computational game theory and encouraged him to apply for a PhD. At the time, being the first in his immediate family to earn a college degree and living in Italy, where doctoral degrees are handled differently, Farina says he didn’t even know what a PhD was.

Nevertheless, one month after graduating with his undergraduate degree, Farina began a doctoral degree in computer science at Carnegie Mellon University. There, he won distinctions for his research and dissertation, as well as a Facebook Fellowship in Economics and Computation.

As he was finishing his doctorate, Farina worked for a year as a research scientist in Meta’s Fundamental AI Research Labs. One of his major projects was helping to develop Cicero, an AI that was able to beat human players in a game that involves forming alliances, negotiating, and detecting when other players are bluffing.

Farina says, “When we built Cicero, we designed it so that it would not agree to form an alliance if it was not in its interest, and it likewise understood whether a player was likely lying, because for them to do as they proposed would be against their own incentives.”

A 2022 article in the MIT Technology Review said Cicero could represent advancement toward AIs that can solve complex problems requiring compromise.

After his year at Meta, Farina joined the MIT faculty. In 2025, he received the National Science Foundation CAREER Award. His work draws on game theory and its mathematical language for describing what happens when different parties have different objectives, quantifying the “equilibrium” where no one has a reason to change their strategy. It aims to simplify massive, complex real-world scenarios where calculating such an equilibrium directly could take a billion years.

“I research how we can use optimization and algorithms to actually find these stable points efficiently,” he says. “Our work tries to shed new light on the mathematical underpinnings of the theory, better control and predict these complex dynamical systems, and uses these ideas to compute good solutions to large multi-agent interactions.”
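As a toy illustration of what finding such a stable point looks like in code, the sketch below runs regret matching, a standard learning dynamic, on rock-paper-scissors until both players’ average strategies settle near the game’s equilibrium of one-third each. The game, the choice of algorithm, and the code are ours for illustration; Farina’s research targets vastly larger games, often with imperfect information.

import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b] is player 1's payoff when player 1 plays a and player 2 plays b.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def get_strategy(regret_sum):
    # Mix over actions in proportion to positive accumulated regret.
    positive = [max(r, 0.0) for r in regret_sum]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def sample(strategy, rng):
    r, acc = rng.random(), 0.0
    for a, p in enumerate(strategy):
        acc += p
        if r < acc:
            return a
    return ACTIONS - 1

def train(iterations=50_000, seed=0):
    rng = random.Random(seed)
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [get_strategy(regret[p]) for p in (0, 1)]
        moves = [sample(strategies[p], rng) for p in (0, 1)]
        for p in (0, 1):
            for a in range(ACTIONS):
                strategy_sum[p][a] += strategies[p][a]
                # Regret: how much better action a would have done than the action played.
                if p == 0:
                    regret[p][a] += PAYOFF[a][moves[1]] - PAYOFF[moves[0]][moves[1]]
                else:
                    regret[p][a] += PAYOFF[moves[0]][moves[1]] - PAYOFF[moves[0]][a]
    # Time-averaged strategies approach the equilibrium (1/3, 1/3, 1/3) for both players.
    return [[s / sum(strategy_sum[p]) for s in strategy_sum[p]] for p in (0, 1)]

if __name__ == "__main__":
    print(train())

At the equilibrium, neither player can improve by deviating, which is exactly the kind of stability Farina describes.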

Farina is especially interested in settings with “imperfect information,” which means that some agents have information that is unknown to other participants. In such scenarios, information has value, and participants must be strategic about acting on the information they possess so as not to reveal it and reduce its value. An everyday example occurs in the game of poker, where players bluff in order to conceal information about their cards.

According to Farina, “we now live in a world in which machines are far better at bluffing than humans.”

A situation with “massive amounts of imperfect information” has brought Farina back to his board-game beginnings. Stratego is a military strategy game that has inspired research efforts costing millions of dollars to produce systems capable of beating human players. Requiring complex risk calculation and misdirection, or bluffing, it was possibly the only classical game for which major efforts had failed to produce superhuman performance, Farina says.

With new algorithms and training costing less than $10,000, rather than millions, Farina and his research team were able to beat the best player of all time — with 15 wins, four draws, and one loss. Farina says he is thrilled to have produced such results so economically, and he hopes “these new techniques will be incorporated into future pipelines.”

“We have seen constant progress towards constructing algorithms that can reason strategically and make sound decisions despite large action spaces or imperfect information. I am excited about seeing these algorithms incorporated into the broader AI revolution that’s happening around us.”

MIT marks first Robert R. Taylor Day with Tuskegee University

Tue, 05/05/2026 - 4:35pm

On April 10, MIT marked its first official Robert R. Taylor Day with a program centered on the life and work of Robert Robinson Taylor (Class of 1892), the Institute’s first Black graduate and the first academically trained Black architect in the United States.

After graduating from MIT, Taylor joined Tuskegee Institute (now Tuskegee University), where he designed campus buildings, developed a curriculum, and helped establish an approach to architectural education grounded in making and community life — an orientation that continues to shape the relationship between MIT and Tuskegee today. 

Taylor returned to MIT on April 10, 1911, to speak at the 50th anniversary of the Institute’s founding — the date now observed as Robert R. Taylor Day. Reflecting on his education, he credited MIT with the “methods and plans” he carried to Tuskegee Institute. “Certainly the spirit,” he said, was found “in the love of doing things correctly, of putting logical ways of thinking into the humblest task … to build up the immediate community in which the persons live.”

One hundred fifteen years later, at the MIT Museum, students and faculty gathered around Taylor’s original thesis, “A Soldiers Home.” The work was presented alongside archival materials from Taylor’s time at MIT by Jonathan Duval, assistant curator of architecture and design. Rather than framing Taylor as a distant historical figure, the encounter with the work itself — its drawings, assumptions, and ambitions — set the terms for the day, bringing forward not only his accomplishments but the ideas and methods that continue to inform teaching and collaboration today. Attendees then gathered for a lunch-and-learn session including a hybrid panel involving MIT and Tuskegee University faculty. 

“It is so important to continue to develop the MIT-Tuskegee relationship begun by Robert R. Taylor,” says Kwesi Daniels, associate professor and head of the architecture department at Tuskegee University. “MIT students are provided an opportunity to experience the campus Taylor designed and his ethos of social architecture. For the Tuskegee students, they are able to appreciate the foundation Taylor received at MIT. The engagement epitomizes the ‘mind and hand’ philosophy of MIT and the head, hand, heart philosophy of Tuskegee.”

An ongoing exchange

Student and faculty exchanges, launched by the architecture departments at both institutions, have extended these connections in recent years. MIT students travel to Tuskegee for work in historic preservation and community engagement, sampling Daniels’ scanning and drone equipment, while Tuskegee students come to MIT to engage with digital fabrication and entrepreneurship.

For Nicholas de Monchaux, professor and head of the Department of Architecture at MIT, the relationship reflects continuity. “We are not uniting. We’re reuniting,” he says. “This year’s celebration should really be seen as the kickoff of a year of reflecting on Robert Taylor’s legacy and imagining what the day, and his legacy, can become over time.”

The day’s program — the vision for which originally emerged from a suggestion made by MIT literature professor Joshua Bennett during a meeting at Tuskegee with de Monchaux, Daniels, and Tuskegee President Mark Brown — grew into a broader effort among faculty and collaborators across architecture, history, and the humanities. As Bennett put it, “The primary aim of Robert R. Taylor Day is to lift up not only Taylor’s accomplishments, but his ideas — and the fact that his ideas live on in those of us who have inherited his legacy.”

That emphasis is also visible in the dedicated coursework and research that has accompanied the exchange since 2022. In class 4.s12 (Brick x Brick: Drawing a Particular Survey), taught by Carrie Norman, assistant professor in architecture at MIT, students document buildings on the Tuskegee campus through measured drawings and archival interpretation. Working from limited historical material, they reconstruct both form and intent.

“My role has been to structure this work pedagogically,” Norman says, “guiding students in methods of close looking, measured drawing, and archival interpretation.” She describes Taylor’s work as “an ongoing research agenda,” adding that “the broader aim is not only to deepen engagement with Taylor’s legacy, but to build on it through new forms of design research.”

Related work has contributed to a recent exhibition on the Tuskegee Chapel at the National Building Museum, curated by Helen Bechtel of the Yale School of Architecture. Building on research conducted in Norman’s course, students developed large-scale models that form part of the exhibition. New 3D fabrications use a limited set of archival materials to reconstruct the chapel originally designed by Taylor as the first electrified building in Alabama’s Macon County, which was destroyed by fire in 1957.

Looking ahead

Timothy Hyde, professor in the MIT Department of Architecture, has also been involved in the ongoing MIT–Tuskegee collaboration and in efforts to situate Taylor’s work within a broader historical context. He notes that Taylor’s training at MIT helped shape the curriculum he later developed at Tuskegee. “The other influence I would like to mention is the city of Boston itself,” Hyde adds. “Boston was a prosperous city with a wealth of civic architecture that Taylor would have seen and studied.” 

A documentary project on Taylor’s life, supported by the MIT Human Insight Collaborative and led by Hyde and historian Christopher Capozzola, senior associate dean for MIT Open Learning, is currently in development.

For some students, these encounters shape longer trajectories. As an undergraduate at Tuskegee, Myles Sampson participated in the MIT Summer Research Program (MSRP), where he began to connect architecture with a growing interest in computation. He later enrolled in MIT’s Master of Science in Architecture Studies (SMArchS) computation program, working with Professor Larry Sass, who introduced him to robotic fabrication.

“I never looked back,” Sampson says. “Without that hands-on research experience, I would never have looked past contemporary architectural practice.” He is now pursuing a doctorate in computational design at Carnegie Mellon University, focused on the role of automation in architecture and construction.

Sampson contributed significant work to the National Building Museum’s exhibition. His installation, Brick Parable, brings together historical reference and robotic construction. As de Monchaux notes, the project reflects the long arc of Taylor’s legacy: “bricks were fired by students as part of Taylor’s training program … Myles [Sampson]’s piece, made with a robotic assembly of bricks, explores the architectural idea of the chapel in contemporary form.”

For Daniels, the continued circulation of students between the two institutions remains central. Viewing Taylor’s thesis in particular offers a shared point of reference. “Whether the student is from Tuskegee or MIT, they are able to appreciate the quality of work Taylor completed as a student,” he says, “and how he built on that work by creating a college campus, beginning at age 25.”

Across these activities, Taylor’s work is approached not as a fixed legacy, but as a set of methods and commitments that continue to be tested. As Catherine Armwood, dean of Tuskegee University Robert R. Taylor School of Architecture and Construction Science, describes it: “While our students leverage [the design and entrepreneurship program] MITdesignX to turn architectural concepts into social enterprises through advanced fabrication and venture mentorship, MIT students come to Tuskegee for an immersion in historic preservation. By surveying buildings handcrafted by our founding students, they learn a legacy of self-reliance and community impact that can’t be found anywhere else,” Armwood says. “Together, we are bridging technical innovation with deep-rooted heritage to train a new generation of visionary leaders.” 

Astronomers pin down the origins of a planetary odd couple

Tue, 05/05/2026 - 12:00am

Across the Milky Way galaxy, a planetary odd couple is circling a star some 190 light years from Earth. A normally “lonely” hot Jupiter is sharing space with a mini-Neptune, in a rare and unlikely pairing that’s had astronomers puzzled since the system’s discovery in 2020.

Now MIT scientists have caught a glimpse into the atmosphere of the mini-Neptune, which is circling inside the orbit of its Jupiter-sized companion, and discovered clues to explain the origins of this unusual planetary system.

In a study appearing today in Astrophysical Journal Letters, the scientists report on new measurements of the mini-Neptune’s atmosphere, made using NASA’s James Webb Space Telescope (JWST). It is the first time astronomers have measured the composition of a mini-Neptune that resides inside the orbit of a hot Jupiter.

Their measurements reveal that the smaller planet has a “heavy” atmosphere that is rich with water vapor, carbon dioxide, sulfur dioxide, and hints of methane. Such a heavy atmosphere would not have been acquired by the planet if it had formed in its current location, very close to its star.

Instead, the scientists say their findings point to an alternate origin story: Both the mini-Neptune and the hot Jupiter may have formed much farther away, in the colder region of the protoplanetary disk. There, the planets could slowly build up atmospheres of ice and other volatiles. Over time, the planets were likely drawn in toward the star in a gradual process that kept them close, with their atmospheres intact.

The team’s results are the first to show that mini-Neptunes can form beyond a star’s “frost line.” This boundary refers to the minimum distance from a star where the temperature is low enough that water instantly condenses into ice.
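A rough back-of-the-envelope estimate, not a figure from the study, shows where that boundary falls. Assuming a planet with zero albedo and ignoring heating from the gas disk, the equilibrium temperature and the corresponding frost-line distance are

T_{\mathrm{eq}} \approx T_\star \sqrt{\frac{R_\star}{2d}} \qquad\Longrightarrow\qquad d_{\mathrm{frost}} \approx \frac{R_\star}{2}\left(\frac{T_\star}{T_{\mathrm{ice}}}\right)^{2},

where T_ice ≈ 170 kelvins is roughly the temperature at which water vapor condenses in a protoplanetary disk. For a Sun-like star (stellar temperature of about 5,770 kelvins and radius of about 7 x 10^8 meters), this lands near 2.7 astronomical units, far outside the orbits of close-in planets like those around TOI-1130.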

“This is the first time we’ve observed the atmosphere of a planet that is inside the orbit of a hot Jupiter,” says Saugata Barat, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research and the lead author of the study. “This measurement tells us this mini-Neptune indeed formed beyond the frost line, giving confirmation that this formation channel does exist.”

The team consists of astronomers around the world, including Andrew Vanderburg, a visiting assistant professor at MIT, and co-authors from multiple other institutions including the Harvard and Smithsonian Center for Astrophysics, the University of Southern Queensland, the University of Texas at Austin, and Lund University.

A “one-of-a-kind” system

As their name implies, mini-Neptunes are planets that are less massive than Neptune. They are considered to be gas dwarfs, which are made mostly of gas, with an inner, rocky core. Mini-Neptunes are the most common type of planet found in the Milky Way, though, interestingly, no such world exists in our own solar system. Astronomers have observed many such planets circling a wide variety of stars in a range of planetary systems, so mini-Neptunes are generally considered to be garden-variety planets.

But in 2020, Chelsea X. Huang, then a Torres Postdoctoral Fellow at MIT (now on the faculty at the University of Southern Queensland), discovered a mini-Neptune in a rare and puzzling circumstance: The planet appeared to be circling its star with an unlikely companion — a hot Jupiter.

The astronomers made their discovery using NASA’s Transiting Exoplanet Survey Satellite (TESS). They analyzed TESS’ measurements of TOI-1130, a star located 190 light years from Earth, and detected signs of a mini-Neptune and a hot Jupiter, orbiting the star every four and eight days respectively.

“This was a one-of-a-kind system,” says Huang. “Hot Jupiters are ‘lonely,’ meaning they don’t have companion planets inside their orbits. They are so massive, and their gravity is so strong, that whatever is inside their orbit just gets scattered away. But somehow, with this hot Jupiter, an inner companion has survived. And that raises questions about how such a system could form.”

A spot-on snapshot

The 2020 discovery of TOI-1130 and its odd planetary pair inspired Huang, Vanderburg, and their colleagues to take a closer look at the planets, and specifically, their atmospheres, with JWST. In its new study, the team reports its analysis of TOI-1130b — the inner-orbiting mini-Neptune.

Catching the planet at just the right time was their first challenge. Most planets circle their star with a regular, predictable period, like the tick of a clock. But the mini-Neptune and the hot Jupiter were found to be in “mean motion resonance,” meaning that each can affect the other’s motion, pulling and tugging, and slightly varying the time each takes to orbit their star. This made it tricky to predict when JWST could get a clear view.

The team, led by Judith Korth of Lund University, assembled as many past observations of the system as they could, and developed a model to predict when each planet would pass by the star at an angle that JWST could observe.

“It was a challenging prediction, and we had to be spot-on,” Barat says.

In the end, the team was able to catch a direct and detailed snapshot of both planets.

“The beauty of JWST is that it does not observe just in one color, but at different colors, or wavelengths,” Barat explains. “And the specific wavelengths that a planet absorbs can tell you a lot about the composition of its atmosphere.”

From JWST’s measurements, the team found that the planet’s atmosphere absorbed light at wavelengths specific to water, carbon dioxide, sulfur dioxide, and, to a lesser degree, methane. These molecules are heavier than hydrogen and helium, which constitute lighter atmospheres. Astronomers had assumed that, if mini-Neptunes formed very close to their star, they should have light atmospheres.

But the team’s new results counter that assumption and offer a new way that mini-Neptunes could form. Since heavier molecules were found in the atmosphere of TOI-1130b, which resides very close to its star, the scientists say the only possible explanation for its composition is that the planet formed much farther out than its current location.

The planet likely accumulated its heavy atmosphere of water and other volatiles such as carbon dioxide and sulfur dioxide in the icy region beyond the star’s frost line. In this much colder environment, water condenses onto bits of dust to form icy pebbles, which an infant planet can draw into its atmosphere. The ice then vaporizes as the planet slowly migrates in closer to its star.

Barat says the team’s detection of heavy molecules in the atmosphere of TOI-1130b confirms that the planet — and likely its hot Jupiter companion — formed in the outskirts of the system. Through gradual migration, the two planets would be able to stay close together and keep their atmospheres intact.

“This system represents one of the rarest architectures that astronomers have ever found,” Barat says. “The observations of TOI-1130b provide the first hint that such mini-Neptunes that form beyond the water/ice line are indeed present in nature.”

This work was supported, in part, by NASA.

The tech revolution that wasn’t

Tue, 05/05/2026 - 12:00am

In 1960, engineers at India’s Tata Institute of Fundamental Research (TIFR) built what they called an “Automatic Calculator,” the country’s first working computer. It had the same type of ferrite-core memory as IBM’s world-leading machines, and at a glance, appeared to herald a new age of tech advances in India.

Constructed with a fraction of the resources Western computer engineers had, the TIFRAC, as they called it, was a remarkable feat.

“The people working on it had never really seen an actual functioning computer,” says Dwai Banerjee, an associate professor of science, technology, and society, and the author of a new book about computing in India. “You had this ambitious group of engineers building a state-of-the-art machine with very, very limited resources. The fact they could build this is staggering.”

However, the TIFRAC was never even replicated, let alone produced at scale. The visionaries behind it wanted to turn India into an independent computing nation: a place that would produce its own equipment and become an industry power. Instead, the TIFRAC became a technological cul-de-sac, and India’s tech industry took on a very different shape. Instead of exporting equipment, it exports talent, sending skilled engineers and executives around the globe.

Now Banerjee explores those issues in the book, “Computing in the Age of Decolonization: India’s Lost Technological Revolution,” published by Princeton University Press. In it, he examines the country’s pursuit of technological self-sufficiency, and the global forces that prevailed against this vision. As a result, the country is “the world’s leading provider of inexpensive outsourcing and offshoring services, yet enjoys minimal benefits from more profitable advances in research, manufacturing, and development,” Banerjee writes.

“This book is about understanding how the current landscape of technological power came to be and the unequal way in which power is distributed across the world when it comes to anything to do with computing,” Banerjee says. “Basically, the historical conditions of the mid-20th century period are essential to understanding why the world of computing looks the way it does today.”

Computing and the geopolitics of knowledge

When India became a sovereign nation in 1947, many of its leaders believed “rapid technology-driven industrialization was the only way out of centuries of colonial underdevelopment,” as Banerjee writes. Some leapt into action, such as the remarkable nuclear physicist Homi J. Bhabha, who helped establish the TIFR.

Initially, Indian leaders hoped to gain cooperation from the U.S. and international organizations in making technological advances, but quickly ran into Cold War politics. Computing was heavily bound up with defense matters; India was not always fully aligned with U.S. political interests, so the flow of knowledge from the U.S. to India was distinctly limited.

“This is very much an external constraint story,” Banerjee says. “You need blueprints and not just working papers, and that’s what was guarded by the U.S. for a very long time.”

Still, the TIFR research team toiled away at its computing projects until the TIFRAC was up and running — making national headlines.

“The achievement it represents is mind-boggling,” Banerjee emphasizes. “A computer in the U.S. would have cost more to run than this entire institute in India.”

As Banerjee details in the book, the TIFRAC machine was built to grow. Its engineers matched the speed of IBM machines and planned to import larger ferrite-core memory stacks as their workload expanded. But when IBM released the FORTRAN programming language in 1957, it required four times the memory the TIFRAC machine was equipped with. India’s 1958 foreign exchange crisis then shaped the machine’s fate: The World Bank convened a U.S.-led creditor consortium that conditioned rescue loans on the opening of Indian markets to Western capital. Importing larger memory stacks became unaffordable, rendering the TIFRAC obsolete almost as soon as it was completed.

“It’s a geopolitics-of-knowledge question, not that they made a mistake,” Banerjee says of the Indian engineers. “They didn’t know IBM was about to reshape software.”

Exit IBM, enter services

Though IBM’s jump forward after the release of FORTRAN left the TIFRAC project stalled out, Indian advocates for computer manufacturing did not give up their dream. For one thing, they looked around for partnerships and other ways of moving their domestic tech industry forward. And then in 1978, India, uniquely, banned IBM from the country, on account of its business practices.

That might have set the stage for India’s computer manufacturing industry to flourish. But at the same moment, countervailing forces took hold, including a widespread turn toward the private sector as an increasing source of activity, rather than public-private enterprises.

“For a moment you have this imagination come to a sort of fruition,” Banerjee observes. “But by the late 1970s and 1980s, there is a new group of people arguing for quick profits through software services, saying that this route feels less painful than setting up manufacturing, R&D, and firms for a decade or more.”

This turn toward private-sector services rather than government-involved manufacturing ultimately became a decisive factor in shaping India’s tech-sector trajectory. Rather than seeking to make machines domestically, the country became part of the global tech-services sector, while many of its engineers migrated to Silicon Valley and other tech hotspots. Global tech firms used their reach to advance this arrangement, rather than the idea that many countries would develop independent industries. This is not the outcome India’s leaders and technologists once envisioned.

“It still surprises me because of the one thing India did that no other country in the world managed to do, and that’s kick out IBM,” Banerjee says. “The fact that this vision fades is part of changing government ambition.”

Beyond the mavericks

In writing the book, Banerjee has multiple goals. One is simply shedding more light on the rich details of India’s initial computing efforts. Another is contesting the idea that India somehow naturally found a role providing services and exporting talent; that is not what many people once hoped.

Still another motif in Banerjee’s work is that the history of computing too often centers on innovators who are cast as mavericks, shrugging off conventions to upend business and society — whereas the large-scale forces of global capital and geopolitics matter greatly in technological development.

“This book suggests we often overplay those stories of individual genius, because you can be a genius with all the right ideas, but if you don’t have all the institutions supporting you, it means nothing,” Banerjee says.

Other scholars have praised “Computing in the Age of Decolonization.” Matthew L. Jones, a professor of history at Princeton University, has stated that Banerjee’s book is a “scrupulous accounting of ultimately failed Indian efforts to secure technological sovereignty in the wake of independence,” which “joins the best recent accounts of computing worldwide and transforms how we think through diverse national trajectories through the Cold War and beyond.”

For his part, Banerjee hopes a wide variety of readers will be interested in the book — and recognize that the specific case of India and computing can tell us a lot about the challenges of new types of economic growth in many places.

“India stands in for a lot of countries in the mid-20th century that had recently gained formal political independence and were thinking of ways to catch up with the rest of the advanced industrialized world,” Banerjee says. “But the power structures tied to technological and scientific advancement did not disappear. They were replaced by newer structures, including foreign policy with very specific ideas about what different countries should be doing with regard to technology. That’s where the story starts.”

Biologist Joey Davis explores how cells build complex structures

Tue, 05/05/2026 - 12:00am

Ribosomes, the cellular machines that assemble proteins, are made from dozens of proteins and RNA molecules. Putting all of those pieces together is a complex puzzle — one that MIT Associate Professor Joey Davis PhD ’10 revels in trying to solve.

Understanding how these structures form and later break down could help researchers learn more about how disruptions of these fundamental processes can lead to disease. But, as Davis points out, it’s also an interesting biological question.

“Our long-term goal is to really understand how the natural world assembles these huge complexes rapidly and efficiently. It’s a fundamentally interesting question to think about how these things get put together,” he says.

His work has helped reveal that unlike building a house, which happens in a prescribed sequence of steps — pouring the foundation, building the frame, putting on the roof, then doing electrical and plumbing work — ribosomes can be assembled in a more flexible way. Cells can even skip an assembly step and then come back to it later.

“In these natural systems, it seems like the assembly pathways are much more dynamic and flexible,” he says. “It appears that evolution has selected pathways that aren’t strictly ordered in the way we would think about an assembly line, where you always put in one component, then the next, and then the next. We’re excited to understand the selective advantages of such approaches.”

A love of discovery

Davis’ interest in how things are put together developed early in life, inspired by his father, a carpenter who framed houses. During the mid-1980s, the family moved from Colorado to Southern California, where his father worked in construction during a housing boom there.

“I was always interested in building things, which I think probably came from being around my dad and other builders,” Davis says.

As an undergraduate at the University of California at Berkeley, where he majored in computer science and biological engineering, Davis’ interests turned toward smaller scales, in the realm of cells and molecules. During his junior year, he started working in the lab of chemistry professor Michael Marletta, who studies molecular-level biological interactions.

In the lab, Davis investigated how enzymes that contain heme are able to preferentially bind to either oxygen or nitric oxide, two gases that are very similar in structure. That work kindled a love of studying the natural world and pursuing discoveries in fundamental science.

“Being in the Marletta lab and seeing students and postdocs that were really passionate about these problems had a big impact on me,” Davis says. “The goal was to understand the fundamentals of how molecular discrimination works, and the idea of discovery for the sake of discovery was thrilling.”

After graduating from Berkeley, Davis spent another year working in Marletta’s lab, and then a year working odd jobs, before heading to MIT to pursue a PhD in biology. There, he worked with Professor Bob Sauer, now emeritus, who studied the relationship between protein structure and function, with a particular focus on the molecular machines that degrade or remodel proteins.

Davis’ thesis research centered on enzymes called AAA proteases, which remove damaged proteins from cellular membranes and send them to cell organelles that break them down. In addition to studying the structure and function of the proteases, Davis worked on ways to engineer them to tag specific proteins for destruction.

That work led him into synthetic biology, which he used to develop genetic parts that drive production of proteins of interest. Some of those parts ended up being used by the biotech startup Ginkgo Bioworks, where Davis took a job as a senior scientist after graduating.

Working at Ginkgo Bioworks allowed Davis to stay in Boston while his partner finished her PhD. The couple then moved back to California, where Davis worked as a postdoc at Scripps Research, which was home to one of the first direct electron detection cameras for cryo-electron microscopy (cryo-EM). These detectors allow researchers to generate structures with near atomic resolution. At Scripps, Davis began using them to study ribosomes as they were being assembled.

Peering into the ribosome

After joining the MIT faculty in 2017, Davis continued his work on ribosomes and assembled a lab group that includes students from a variety of backgrounds who work together to develop new ways to explore biological phenomena.

“I have a mix of method developers and biologists in the group, and the work from each of them informs each other,” Davis says. “My lab goes back and forth between building sets of tools to answer biological questions, and then as we’re answering those questions, it motivates the next generation of tool development.”

During ribosome assembly, RNA molecules fold themselves into the correct shapes, creating docking sites for proteins to attach. Then, more RNA molecules come in and fold themselves into the structure.

“It’s a beautifully coupled process by which the cell folds hundreds of RNA helices and binds on the order of 50 proteins, and it does it in two minutes from start to finish. E. coli does this 100,000 times per hour, and it’s amazing how rapid and efficient the process is,” Davis says.

Cryo-EM allows scientists to capture this process in minute detail. It can be used to take hundreds of thousands of two-dimensional images of ribosome samples frozen in a thin layer of ice, from different angles. Computer algorithms then piece together these images into a three-dimensional representation of the ribosome.

To gain insight into how ribosomes are assembled, researchers can stall the process at different points and then analyze the resulting structures. In 2021, Davis’s lab developed a new method called CryoDRGN, which uses neural networks to analyze cryo-EM data and generate the full ensemble of structures that were present in the sample.

This work has shown that when certain steps of ribosome assembly are blocked, many different structures result, suggesting that the assembly can occur in a variety of ways.

In future work, Davis aims to dramatically increase the throughput of cryo-EM to generate datasets of protein structures that could help improve the AI-based models that are now used to predict protein structures.

“There are still huge swaths of sequence space that these models are very poor at predicting, but if we could collect data on those sequences en masse, that could potentially serve as key training data for a next-generation protein structure prediction method that could fill out that space,” he says.

Rett syndrome study highlights potential for personalized treatments

Mon, 05/04/2026 - 2:00pm

Although many studies approach the developmental disorder Rett syndrome as a single condition arising from general loss of function in the gene MECP2, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that two different mutations of the gene caused many distinct abnormalities in lab cultures. Moreover, correcting key differences made by each mutation required different treatments.

“Individual mutations matter,” says Mriganka Sur, senior author of the new open-access study in Nature Communications and the Newton Professor in the Picower Institute and the Department of Brain and Cognitive Sciences. “This is an approach to personalizing treatment, even for a single-gene disorder.”

The study employed advanced 3D human brain tissue cultures called “organoids” or “minibrains” derived from skin cells or blood cells donated by Rett syndrome patients with each mutation. Lead author Tatsuya Osaki, a Picower Institute research scientist, says that the organoids’ ability to model the specific consequences of each mutation enabled him to gain mutation-specific insights that haven’t emerged in prior studies, where scientists just knocked out MECP2 overall. The organoids also provided a novel opportunity to understand how each mutation affected different cell types and their interactions.

Distinct effects

More than 800 mutations in MECP2 can cause Rett syndrome, but just eight account for more than 60 percent of cases. Sur and Osaki chose one of these, R306C, which involves a difference of just one DNA base pair (916C>T), because it represents 7-8 percent of Rett syndrome cases. The other mutation they chose, V247X, is much more rare and severe because it cuts off production of the gene’s protein product by a single DNA base deletion (705Gdel), leaving the protein not just errant, but incomplete.

In organoids cultured for three months, each mutation produced some common but also sometimes distinct consequences compared to control organoids with non-mutated MECP2. For many of their experiments, the team used “three-photon” microscopes capable of cellular-level resolution all the way through the organoids’ roughly 1-millimeter thickness, resolving both their structure (via “third-harmonic generation” imaging) and the live activity patterns of their neurons (via calcium fluorescence).

For instance, the scientists observed that the V247X organoids exhibited several structural differences from their controls — they were larger and had different thicknesses of various layers — but the R306C ones were much more like their controls. Organoids harboring either mutation exhibited less-developed axon projections from their neurons, compared to their control comparators.

Looking at properties of neural activity and connectivity in the organoids, the scientists found some similar deficits across both mutations. Both showed reduced spiking activity and synchronicity between neurons compared to their controls.

But when the scientists looked at other properties, the organoids started to diverge from each other. In particular, an indication of the efficiency of their network structure called “small-world propensity” (SWP) was decreased in R306C organoids, and increased in V247X ones, compared to controls. This means that both mutations altered the development of typical network structures for information processing, but in different directions.
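For readers curious what such a network measure involves, the snippet below computes a simplified small-world coefficient: it compares a network’s clustering and shortest-path length against a size-matched random graph, with values above 1 indicating the clustered-yet-shortcut-rich structure that efficient networks share. This is an illustrative stand-in written for this article, run on a synthetic graph; it is not the small-world propensity metric or the analysis code used in the study.

import networkx as nx

def avg_path_length(G):
    # Use the largest connected component if the graph happens to be disconnected.
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(G)

def small_world_coefficient(G, seed=0):
    n, m = G.number_of_nodes(), G.number_of_edges()
    rand = nx.gnm_random_graph(n, m, seed=seed)  # size- and density-matched random benchmark
    C, C_rand = nx.average_clustering(G), nx.average_clustering(rand)
    L, L_rand = avg_path_length(G), avg_path_length(rand)
    # Values well above 1 suggest small-world structure: high clustering with short paths.
    return (C / C_rand) / (L / L_rand)

if __name__ == "__main__":
    # Synthetic stand-in for a measured connectivity network.
    G = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=1)
    print(round(small_world_coefficient(G), 2))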

To ensure that their results were meaningful for Rett syndrome patients, the team collaborated with Charles Nelson at Boston Children’s Hospital, whose team measured EEG in several children with different Rett mutations. Although the sample was small, the researchers measured indications that the SWP property in the EEG readings was altered in the volunteers, much like in the organoids.

Finally, by labeling excitatory neurons to flash in one color and inhibitory neurons to flash in a different color, the scientists were able to see that connectivity between the different neural types differed significantly from controls in the V247X organoids.

Treatment tests

All the testing showed that each mutation caused several changes in organoid structure, activity, and connectivity, and that the deviations were often particular to the specific mutation.

To understand how these differences emerged, and how they might be corrected, Sur and Osaki’s team turned to examining how the cells in each kind of organoid might be expressing their genes differently than controls. Differences in gene expression often lead to alterations of key molecular pathways in cells that can disrupt their activity and function. Analysis with a technique called single cell RNA sequencing indeed yielded hundreds of differences in each organoid type, where some genes were expressed more than in controls while others were underexpressed.

For instance, the analyses revealed that in R306C organoids a gene called HDAC2 was overexpressed. That protein is known for repressing expression of other genes. Meanwhile, in the V247X organoids, the scientists found reduced expression of genes for some receptors of the inhibitory neurotransmitter GABA. These organoids also showed defects in the function of astrocyte cells, which support many aspects of neural function.

Organoids with either mutation also exhibited aberrations in molecular pathways that enable the development of circuit connections between neurons, called synapses.

Given the specific defects they observed, the scientists decided to treat the organoids with a drug that can inhibit HDAC2 activity and another that increases GABA’s efficacy. The HDAC2 inhibitor restored neuronal activity and SWP to normal levels in the R306C organoids, and the GABA “agonist” baclofen restored SWP to control levels in the V247X organoids.

Osaki notes each of the treatment drugs has already been studied in other disease contexts, meaning they are well-understood drugs that could be repurposed.

Now that the researchers have developed an organoid platform for dissecting individual mutations’ consequences, identifying their roots, and testing treatments, they plan to apply it to studying four more mutations, Sur says, comparing all of them against a standardized control organoid.

In addition to Sur, Osaki, and Nelson, the paper’s other authors are Chloe Delepine, Yuma Osako, Devorah Kranz, April Levin, and Michela Fagiolini.

The National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Foundation provided support for the research.

Powering 160,000 hours of discovery at MIT.nano

Mon, 05/04/2026 - 1:50pm

Each year, more than 1,500 researchers rely on over 200 tools and instruments at MIT.nano to pursue experiments that span MIT’s disciplines, collectively generating 160,000 hours of work across 88,000 instances of tool use. Behind this activity is an operational framework that must discreetly coordinate access, maintain fairness, and keep research moving without friction.

Managing such a dynamic environment requires more than a scheduling calendar. An automated reservation system serves as the connective tissue of the facility, balancing demand across diverse user needs while supporting the practical realities of a shared lab space. Researchers arrive at MIT.nano with different workflows, safety requirements, and administrative needs, yet the system must present a seamless experience. Integration with MIT’s broader digital infrastructure, from onboarding and authentication to safety training and billing, ensures that access is both efficient and compliant, reducing barriers so researchers can focus on their work.

A system for the modern era

Over the past three years, during a period of rapid growth in both equipment and facility usage, MIT.nano undertook a transition to a new platform designed to scale with demand while maintaining operational continuity. The effort reflects an ongoing commitment to evolving infrastructure that supports the pace, complexity, and collaborative spirit of modern research.

The importance of robust laboratory management systems has long been recognized at MIT. For decades, researchers in the Microsystems Technology Laboratories (MTL) and the Materials Research Laboratory relied on the CORAL lab management platform to reserve and manage shared instrumentation. Jointly developed by MIT and Stanford University and introduced in 2003, CORAL represented a significant advance over the text-based system it replaced. But by the time MIT.nano adopted CORAL in 2018, active development had slowed, and the platform was beginning to show its age, most visibly through the absence of modern web and mobile interfaces expected by today’s users.

To address these limitations, MIT.nano has transitioned to NEMO, an open-source laboratory management system originally developed at the National Institute of Standards and Technology. NEMO centralizes scheduling, communication, and operational logistics into a single platform that manages tool reservations and user access while supporting facility growth. Its modular architecture and plugin framework allow for extensive customization, enabling the system to evolve alongside the needs of a large, shared research environment.

“Over time, NEMO was replicating core functionalities of CORAL while introducing new features that CORAL simply could not support,” explains Thomas Lohman, senior software and systems manager at MTL and a long-time contributor to CORAL’s development. “The question became whether to continue patching the old system or adopt this new platform that already had a lot of the features we use daily, as well as an active community continually improving it.”

For MIT.nano leadership, modernization was about more than replacing an aging tool. “We needed a system that centralizes everything a facility user depends on — policies, tool documentation, training workflows, and communications — within a user-friendly, mobile-accessible environment,” says Anna Osherov, associate director for Characterization.nano, who led the evaluation and transition effort. “Just as important was making sure the platform enhances the experience for both users and staff.”

Collaborating at MIT and with shared access facilities

MIT.nano collaborated closely with Mathieu Rampant, NEMO project lead and CEO of Atlantis Labs, to adopt the community edition of NEMO, an extended version enriched by contributions from a growing global user base. The open-source model ensures that improvements developed at MIT.nano benefit the broader research community, reinforcing a shared ecosystem of innovation. “The NEMO community is expanding rapidly, and many new features originate directly from facility users and administrators,” says Rampant. “That collaborative model allows improvements to propagate quickly while giving institutions a sense of ownership in the platform’s evolution.”

NEMO introduces modern features long requested by MIT.nano researchers, including mobile access, improved transparency, and streamlined workflows. Facility users can now monitor their own tool usage and consumables, customize notifications, register for training, join real-time equipment waitlists, report issues, and communicate with staff, all through a unified dashboard. What was once distributed across multiple systems is now centralized, reducing friction in day-to-day lab operations.

Launching a new platform at the scale of MIT.nano required careful planning and sustained collaboration. The system needed to support multiple facility types, integrate with existing MIT infrastructure, and accommodate a diverse set of instrumentation workflows. “Features that work well in a typical characterization lab can quickly become a burden in a more chemically active environment like the cleanroom,” explains Jorg Scholvin, associate director of Fab.nano. “Relying on researchers to log in using personal devices and Duo authentication, for example, would be impractical in that setting.”

To address these challenges, MIT.nano collaborated with MIT Information Systems and Technology Associate Vice President Olu Brown and Senior Director for Infrastructure Operations Marco Gomes and their teams to streamline integration between MIT systems and NEMO for cleanroom users. “The availability of modern APIs allowed us to connect very different systems efficiently and deliver a convenient, seamless, and productive experience in the lab,” says Scholvin.
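
As a rough illustration of what this kind of API-level integration can look like (not a documented MIT.nano workflow), a scheduling or access-control script might query a NEMO server over a token-authenticated REST call. The endpoint path, query parameters, field names, and server address below are assumptions made for the sake of the sketch.

# Hypothetical sketch of an API-level integration with a NEMO server: list
# upcoming reservations for one tool. The endpoint path, query parameters,
# field names, and token scheme are assumptions for illustration, not a
# documented MIT.nano or NEMO interface.
import requests

NEMO_URL = "https://nemo.example.edu"    # placeholder server address
API_TOKEN = "replace-with-a-real-token"  # issued by a NEMO administrator

def upcoming_reservations(tool_id: int) -> list[dict]:
    """Return reservation records for one tool (assumed response schema)."""
    response = requests.get(
        f"{NEMO_URL}/api/reservations/",
        params={"tool_id": tool_id, "cancelled": "false"},
        headers={"Authorization": f"Token {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for booking in upcoming_reservations(tool_id=42):
        print(booking.get("user"), booking.get("start"), booking.get("end"))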

The result is a platform that now processes thousands of reservations, communications, and operational actions daily. “We truly value the partnership with MIT.nano and appreciate the collaboration throughout this effort,” says Gomes. “It’s been a great example of teams working together to deliver something meaningful for the research community.”

As one of the largest shared-access facilities deploying NEMO, MIT.nano has played a central role in advancing the platform’s capabilities, both by helping shape its development and by demonstrating a model that is scalable and effective for other facilities and research centers nationwide. Enhancements first created to meet MIT.nano’s needs are now leveraged by other facilities adopting NEMO across the globe. 

It took 40 years for technology to catch up to this zipper design

Mon, 05/04/2026 - 1:45pm

In 1985, the Innovative Design Fund placed an ad in Scientific American offering up to $10,000 to support clever prototypes for clothing, home decor, and textiles. William Freeman PhD ’92, then an electrical engineer at Polaroid and now an MIT professor, saw it and submitted a novel idea: a three-sided zipper. Instead of fastening pants, it’d be like a switch that seamlessly flips chairs, tents, and purses between soft and rigid states, making them easier to pack and put together.

Freeman’s blueprint was much like a regular zipper, except triangular. On each side, he nailed a belt to connect narrow wooden “teeth” together. A slider wrapping around the device could be moved up to fasten the three strips into place, straightening them into a triangular tube. His proposal was rejected, but Freeman patented his prototype and stored it in his garage in the hopes it might come in handy one day.

Nearly 40 years later, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers wanted to revive the project to create items with “tunable stiffness.” Prior approaches to adjustable stiffness either weren’t easily reversible or required manual assembly, so CSAIL built an automated design tool and an adaptable fastener called the “Y-zipper.” The scientists’ software program helps users customize three-sided zippers, which it then builds on its own in a 3D printer using plastics. These devices can be attached or embedded into camping equipment, medical gear, robots, and art installations for more convenient assembly.

“A regular zipper is great for closing up flat objects, like a jacket, but Freeman ideated something more dynamic. Using current fabrication technology, his mechanism can transform more complex items,” says MIT postdoc and CSAIL researcher Jiaji Li, who is a lead author on an open-access paper presenting the project. “We’ve developed a process that builds objects you can rapidly shift from flexible to rigid, and you can be confident they’ll work in the real world.”

Why zippers?

In CSAIL’s software program, users can customize how the fasteners behave once zipped up: they can select the length of each strip, as well as the direction and angle at which the strips will bend. They can also choose from one of four motion “primitives” that determine the zipped-up shape: straight, bent (similar to an arch), coiled (resembling a spring), or twisted (like a screw).
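
To make those choices concrete, here is a minimal sketch of how such a design specification might be represented in code; the class, field, and primitive names are illustrative assumptions, not the CSAIL software’s actual interface.

# Illustrative sketch of a three-sided-zipper design spec, loosely mirroring
# the parameters described above (strip length, bend direction and angle,
# motion primitive). Names are hypothetical, not the CSAIL tool's data model.
from dataclasses import dataclass
from enum import Enum

class MotionPrimitive(Enum):
    STRAIGHT = "straight"  # zips into a straight triangular tube
    BENT = "bent"          # zips into an arch-like curve
    COILED = "coiled"      # zips into a spring-like coil
    TWISTED = "twisted"    # zips into a screw-like twist

@dataclass
class YZipperSpec:
    strip_length_mm: float      # length of each of the three strips
    bend_angle_deg: float       # how sharply the zipped form bends
    bend_direction_deg: float   # direction of the bend around the zipper axis
    primitive: MotionPrimitive  # target zipped-up shape

    def validate(self) -> None:
        if self.strip_length_mm <= 0:
            raise ValueError("strip length must be positive")
        if not 0 <= self.bend_angle_deg <= 180:
            raise ValueError("bend angle must be between 0 and 180 degrees")

# Example: a gently arched segment, in the spirit of the tent arms described below.
tent_arm = YZipperSpec(300.0, 45.0, 0.0, MotionPrimitive.BENT)
tent_arm.validate()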

The Y-zipper that results will appear to “shape-shift” in the real world. When unzipped, it can look like a squid with three sprawling tentacles, and when you close it up, it becomes a more compact structure (like a rod, for instance). This flexibility could be useful when you’re traveling — take pitching a tent, for example. The process can take up to six minutes to do alone, but with the Y-zipper’s help, it can be done in one minute and 20 seconds. You simply attach each arm to a side of the tent, supporting the structure from the top so that the zipper seemingly pops the canopy into place. 

This seamless transition could also unlock more flexible wearables, often useful in medical scenarios. The team wrapped the Y-zipper around a wrist cast, so that a user could loosen it during the day, and zip it up at night to prevent further injuries. In turn, a seemingly stiff device can be made more comfortable, adjusting to a patient’s needs.

The system can also aid users in crafting technology that moves at the push of a button. One can attach a motor to the Y-zipper after fabrication to automate the zipping process, which helps build things like an adaptive robotic quadruped. The robot could potentially change the size of its legs, tightening up into taller limbs and unzipping when it needs to be lower to the ground. Eventually, such rapid adjustments could help the robot explore the uneven terrain of places like canyons or forests. Actuated Y-zippers can also build dynamic art installations — for example, the team created a long, winding flower that “bloomed” thanks to a static motor zipping up the device.

Mastering the material

While Li and his colleagues saw the creative potential of the Y-zipper, it wasn’t yet clear how durable it would be. Could the devices sustain daily use?

The team ran a series of stress tests to find out. First, they evaluated the strength and flexibility of polylactic acid (PLA) and thermoplastic polyurethane (TPU), two plastics commonly used in 3D printing. Using a machine that bent the Y-zippers down, they found that PLA could handle heavier loads, while TPU was more pliable.

In another experiment, CSAIL researchers used an actuator to continuously open and close a Y-zipper to see how long it would take to fail. It survived some 18,000 cycles of zipping and unzipping before finally breaking. The Y-zipper’s secret to durability, according to 3D simulations, is its elastic structure, which helps distribute the stress of heavy loads.

Despite these findings, Li envisions an even more durable three-sided zipper using stronger materials, like metal. They may also make the zippers bigger for larger-scale projects, but that’s not yet possible with their current 3D printing platform.

Li also notes that some applications remain unexplored, like space exploration, where the Y-zipper’s tentacles could be built into a spacecraft to grab nearby rock samples. Likewise, the zippers could be embedded into structures that can be assembled rapidly, helping relief workers quickly set up shelters or medical tents during natural disasters and rescue operations.

“Reimagining an everyday zipper to tackle 3D morphological transitions is a brilliant approach to dynamic assembly,” says Zhejiang University assistant professor Guanyun Wang, who wasn’t involved in the paper. “More importantly, it effectively bridges the gap between soft and rigid states, offering a highly scalable and innovative fabrication approach that will greatly benefit the future design of embodied intelligence.”

Li and Freeman wrote the paper with Tianjin University PhD student Xiang Chang and MIT CSAIL colleagues: PhD student Maxine Perroni-Scharf; undergraduate Dingning Cao; recent visiting researchers Mingming Li (Zhejiang University), Jeremy Mrzyglocki (Technical University of Munich), and Takumi Yamamoto (Keio University); and MIT Associate Professor Stefanie Mueller, who is a CSAIL principal investigator and senior author on the work. Their research was supported, in part, by a postdoctoral research fellowship from Zhejiang University and the MIT-GIST Program.

The researchers’ work was presented at the ACM CHI conference on Human Factors in Computing Systems in April.

How chromatin movement helps control gene expression

Mon, 05/04/2026 - 5:00am

Gene expression is controlled, in part, by the interactions between genes and regulatory elements located along the genome. Those interactions depend on the ability of chromatin — a mix of DNA and proteins — to move around within a crowded space.

In a new study, MIT researchers have measured chromatin movement at timescales ranging from hundreds of microseconds to hours, allowing them to rigorously quantify those dynamics for the first time.

Their analysis revealed that chromatin dynamics fall into two distinct classes: In one, chromatin moves in a constrained way that allows it to primarily contact only neighboring regions of the genome; in the other, chromatin moves more freely and contacts regions that are farther away, but only over longer timescales.

The findings offer insight into how gene expression is regulated, as well as how chromatin segments come together for other processes such as DNA repair, the researchers say.

“Because we were able to look at chromatin dynamics for the first time at these very fast timescales, and also for the first time across the full dynamic range, we were able to observe chromatin motion over a range that just wasn’t possible before,” says Anders Sejr Hansen, an associate professor of biological engineering at MIT and the senior author of the new study, which appears today in Nature Structural and Molecular Biology.

The paper’s lead authors are MIT postdoc Matteo Mazzocca, Domenic Narducci PhD ’25, and Simon Grosse-Holz PhD ’23. Jessica Matthias, chief commercial officer of Abberior Instruments, and Tatiana Karpova, manager of the National Cancer Institute Optical Microscopy Core, are also authors of the paper.

Constrained movement

In textbooks, chromatin is often depicted as a static structure within the cell nucleus, but in reality, it is constantly moving. Those movements are necessary for genes to interact with DNA regulatory sequences such as enhancers, which can be as far as 1 million base pairs away. They also ensure that when DNA breaks occur, the two ends of DNA can encounter each other to be repaired.

“Chromatin dynamics are foundational to all processes in the nucleus, and especially processes that involve two things finding each other. That’s important in DNA repair, gene regulation, recombination, or moving a particular gene to the right compartment of the nucleus,” Hansen says.

The movement of any particular location on the genome, or locus, is constrained by the fact that DNA is a polymer. After moving in any direction, a locus will be pulled back by the DNA on either side of it.

“Chromosomes are polymers. They’re held together by many nucleotides of DNA. Being part of DNA is a little bit like running while holding hands with other people. If a hundred people are holding hands and you, in the middle of the chain, try to run in one direction, you’ll get pulled back,” Hansen says.

This type of behavior is known as subdiffusive movement. Previous studies have yielded conflicting reports on how subdiffusive chromatin is, mainly because the studies were not able to track the movement over a long enough period of time to obtain statistically robust measurements. Because the movements are so small, on the order of nanometers, data needs to be obtained over long dynamic ranges — from milliseconds to hours.

In those earlier studies, researchers used imaging techniques that can track the position of a single molecule over time by comparing images frame by frame. These are useful but can only be used over a small dynamic range because of the limitations of conventional microscopy.

To generate more statistically robust data, the MIT team used MINFLUX — a super-resolution light microscopy technique that can track the movement of tiny objects such as proteins over longer periods of time. This technique was recently developed by Stefan Hell of the Max Planck Institute, a Nobel laureate for his work in super-resolution microscopy. In this study, the MIT team became the first to apply this technique to chromatin in living cells.

“MINFLUX allowed us to get around the limitations of conventional microscopy, letting us measure chromatin movement faster and for a longer period of time than ever before,” Narducci says. “To our knowledge, it’s the first time this technique has been used this way.”

Using MINFLUX, the researchers were able to study cells over timescales that covered four orders of magnitude — from 200 microseconds to 10 seconds. And by combining MINFLUX with two traditional imaging techniques, they could track chromatin movement over seven orders of magnitude across time, from hundreds of microseconds to several hours.
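
For readers who want a feel for what quantifying subdiffusion involves, the sketch below computes a mean squared displacement (MSD) curve from a toy trajectory and fits the anomalous exponent alpha in MSD(t) ~ t^alpha; values below 1 indicate subdiffusion, and the classic Rouse polymer model predicts roughly 0.5. This is a generic illustration, not the analysis pipeline used in the paper.

# Generic illustration of subdiffusion analysis, not the authors' pipeline:
# estimate the anomalous exponent alpha from MSD(t) ~ t**alpha.
import numpy as np

def mean_squared_displacement(positions: np.ndarray, max_lag: int) -> np.ndarray:
    """Time-averaged MSD of an (N, d) trajectory for lags 1..max_lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        displacements = positions[lag:] - positions[:-lag]
        msd[lag - 1] = np.mean(np.sum(displacements**2, axis=1))
    return msd

def anomalous_exponent(msd: np.ndarray, dt: float) -> float:
    """Fit MSD = G * t**alpha by least squares in log-log space; return alpha."""
    lags = dt * np.arange(1, len(msd) + 1)
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

# Toy data: ordinary Brownian steps give alpha close to 1; real chromatin
# trajectories, per the study, come out well below 1 (subdiffusive).
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(scale=5.0, size=(10_000, 2)), axis=0)  # nm
msd_curve = mean_squared_displacement(trajectory, max_lag=100)
print(f"estimated alpha = {anomalous_exponent(msd_curve, dt=2e-4):.2f}")  # 200-microsecond frames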

“Region of influence”

These studies, performed across several different mouse and human cell types, allowed the researchers to identify two distinct classes of chromatin dynamics. In both classes, over short and intermediate timescales (up to 200 seconds), any given locus tends to move only within about 200 nanometers. This suggests that the subdiffusive pull is stronger than had been previously thought.

“One of the main takeaways is that you have this region of influence where a genomic locus has access to other genomic loci, and this is roughly a couple hundred nanometers large,” Grosse-Holz says. “If loci are much closer together than a couple hundred nanometers, they’re effectively in contact all the time. You get a cutoff at a couple hundred nanometers where everything within that region around a given locus can see that locus, and everything outside cannot.”

This constant contact is likely beneficial for DNA repair, as the broken strands remain in close proximity to each other. The findings also suggest that genes and regulatory elements within about 100,000 base pairs of each other don’t need any extra help to make contact; they will do so routinely through their normal movement.

“If they are closer than 100,000 bases, and most regulatory elements are, then those elements are going to find their target gene within a few milliseconds or a few minutes,” Mazzocca says. “These are timescales that are completely consistent with transcription.”

In the other class of chromatin dynamics that the researchers identified, chromatin is able to move over a wider range, but only at longer timescales (a few minutes to hours). This class of chromatin appeared in some types of cells but not others, for reasons that are not yet understood.

“It would be reasonable to assume that the behavior would be more or less the same in all cell types, but that’s not at all what we found,” Hansen says. “It’s very different in different cell types, with no obvious way of categorizing things.”

He adds that the strength of the subdiffusive pull that the researchers found in this study can’t be explained with existing models that have been developed to study chromatin dynamics — the Rouse model and the fractal globule model. This suggests that the models may need to incorporate factors that were previously left out, such as the interactions between chromatin and the crowded nucleoplasm it sits within.

“These findings are significant for two key reasons,” says Luca Giorgetti, a group leader at the Friedrich Miescher Institute for Biomedical Research in Switzerland, who was not involved in the study. “First, they rigorously confirm longstanding but anecdotal observations that chromatin motion is strongly subdiffusive. Second, they demonstrate that this behavior is consistent across multiple cell types and persists across all measured timescales.”

The research was funded, in part, by the National Institutes of Health, a National Science Foundation CAREER Award, a Pew-Stewart Scholar for Cancer Research Award, and the Bridge Project, a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center.

Found Industries aims to strengthen America’s industrial supply chains

Sun, 05/03/2026 - 12:00am

Found Industries has gone through several distinct phases in the four years since it was originally formed as Found Energy. There was the scrappy startup stage, in which the company was primarily housed in the basement of founder Peter Godart ’15, SM ’19, PhD ’21. Then there was the demonstration phase, in which the company worked to productize its technology for transforming aluminum into high-density fuel for industrial operations.

Now, after confronting supply chain vulnerabilities related to critical metals in its aluminum fuel business, the company is launching a new division, Found Metals, to extract the critical metal gallium from mineral refineries — a move that builds on its original technology while addressing a major national security need.

Gallium is a critical material in the defense, semiconductor, and energy sectors. In 2024, China produced 99 percent of the world’s primary supply — market dominance the country takes advantage of through export controls.

Godart’s company developed an electrochemical gallium extraction technology for internal use after realizing how dependent it would be on China for the catalyst material at the center of its aluminum fuel reactors. Now, with support from the U.S. Department of Energy, Found is hoping to use that technology to create a new domestic supply chain for gallium and a host of other important metals.

Found Industries is still committed to its aluminum fuel operations, now under its Found Energy division. It is already running a 100-kilowatt-class demonstration plant and is preparing for industrial pilot deployments next year. But with its expansion, which was announced April 21, the company is also working to meet the moment for critical metals production.

“Gallium is the world’s most critical metal, as it’s 99 percent controlled by China,” Godart says. “When you produce 99 percent of something, you also produce 99 percent of the tools required to extract it. We couldn’t get our hands on some of those tools, so we were forced to come up with a new technology. Now we believe we can deploy this at scale to become one of the first major Western suppliers of these metals.”

From fuel to metals

Godart focused on robotics as an undergraduate in MIT’s Department of Mechanical Engineering and Department of Electrical Engineering and Computer Science. Following graduation, he worked at NASA’s Jet Propulsion Laboratory, where he explored systems for tapping into high-density fuels like aluminum on other planets.

“I had this crazy idea that you could use aluminum, which is already a common construction material for aerospace, as a fuel on other planets,” Godart says. “You don’t need most of the aluminum on a spacecraft once you land on another planet. Aluminum is around 40 times more energy-dense than lithium-ion batteries, and if you have an oxidizer, like water on an icy moon for example, then you can react that aluminum with water and extract energy as heat and hydrogen.”

Luckily for people who might spill water on aluminum while cooking, the metal is normally very stable when exposed to air. In order to tap into aluminum’s stored energy, it needs to undergo a chemical reaction. Godart began exploring catalyst materials to create that reaction at NASA. He continued that work with professor of mechanical engineering Douglas Hart when he returned to MIT in 2017, this time for applications a little closer to home.
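
A rough back-of-envelope check of the energy-density comparison Godart cites, using textbook formation enthalpies for the idealized reaction 2 Al + 3 H2O -> Al2O3 + 3 H2 rather than Found’s own figures, lands in the same ballpark:

# Back-of-envelope estimate of aluminum's fuel value via the aluminum-water
# reaction (2 Al + 3 H2O -> Al2O3 + 3 H2), using textbook formation enthalpies.
# Illustrative only; these are not Found Industries' figures.
MOLAR_MASS_AL = 26.98        # g/mol
HEAT_PER_MOL_AL = 409e3      # J of heat released per mol of Al reacted
H2_PER_MOL_AL = 1.5          # mol of H2 produced per mol of Al
LHV_H2 = 120e6               # J per kg of hydrogen (lower heating value)

mol_al_per_kg = 1000 / MOLAR_MASS_AL
heat_joules = mol_al_per_kg * HEAT_PER_MOL_AL              # about 15 MJ of heat
h2_kg = mol_al_per_kg * H2_PER_MOL_AL * 2.016 / 1000       # about 0.11 kg of H2
h2_energy_joules = h2_kg * LHV_H2                          # about 13 MJ in hydrogen

total_kwh_per_kg = (heat_joules + h2_energy_joules) / 3.6e6
li_ion_kwh_per_kg = 0.20     # typical lithium-ion battery pack, for comparison
print(f"about {total_kwh_per_kg:.1f} kWh per kg of aluminum, "
      f"roughly {total_kwh_per_kg / li_ion_kwh_per_kg:.0f}x a Li-ion pack")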

“If we want to think about moving humanity to other planets, we have some problems to solve here first,” Godart says. “That was the impetus for me to go back to MIT to study using aluminum as a fuel for energy distribution on Earth.”

Around 70 million tons of aluminum are already transported around the globe every year. Godart says that gives aluminum an easier path to scale. During his PhD, he created a process for coating aluminum with a gallium-containing alloy to help tap into aluminum’s embodied energy.

“We found a catalyst that, when mixed with aluminum scraps, enabled aluminum to react with water very rapidly and at orders of magnitude higher power density than what had been possible before,” Godart says. “That meant you could use aluminum as a fuel and get megawatt-scale power from compact reactor systems.”

By the time he finished his PhD in 2021, Godart and his collaborators had developed a system that mixes aluminum fuel with those catalysts to continuously produce electricity at the kilowatt scale through a hydrogen fuel cell.

Godart launched Found Energy in 2022, licensing part of his research from MIT’s Technology Licensing Office and receiving support from MIT’s Venture Mentoring Service. The company received an Activate fellowship, and after quickly outgrowing Godart’s basement, moved into its current 20,000-square-foot facility in Charlestown, Massachusetts.

Today, Found Energy is working with industrial companies that have abundant aluminum scrap.

“When you invent a fuel, you then have to invent the engine,” Godart says. “Our engine is called a catalyzed aluminum water reactor. You feed in aluminum that’s been treated with the catalyst and water, and you get a steam-hydrogen gas mixture. We call that our power stream. We use it to cogenerate industrial heat and electricity. The reaction byproduct is a hydrated aluminum oxide that can be sold into various industries or recycled back into aluminum, which is the long-term vision.”

As Godart worked to build more of the systems, he became concerned about Found’s reliance on Chinese supply chains for its catalyst material. So, in 2024, he developed a new way to extract gallium from Bayer liquor, the industrial process stream used to refine bauxite into alumina for aluminum production. Traditional methods for extracting gallium rely on foreign-controlled organic chemicals or resins to bind and concentrate the gallium.

Found uses a continuous electrochemical process to recover the gallium directly from Bayer liquor and other industrial feedstocks, even at low concentrations.

“We thought of it as a way to future-proof what we were doing,” Godart says. “Necessity was the mother of invention.”

Then, toward the end of 2024, China began restricting the export of critical metals including gallium.

“We realized we had already developed a technique for producing these restricted metals that could be very quickly adapted,” Godart recalls.

Scaling for national security

On April 14, the Department of Energy’s Office of Critical Minerals and Energy Innovation selected Found as part of its $5.4 million program to recover gallium from domestic feedstocks. The company plans to start extracting gallium, along with other critical metals like indium and germanium, by the end of 2027.

Meanwhile, Found is already running a 100-kilowatt-class aluminum fuel demonstration system in Charlestown and is working through orders of several megawatts from large public companies.

“For our fuel technology, the vision is to go as big as possible,” Godart says. “We envision major power plants. Aluminum refineries today, for example, consume hundreds of megawatts of continuous thermal power. That’s what we aim to deliver.”

Godart says he spends most of his time now on gallium extraction, but both branches of the business could make supply chains more secure in the future.

“The big focus now is critical metals, because the government needs this,” Godart says. “We’re also making these metals for ourselves, so we’re vertically integrating our own supply chain, which is table stakes now for companies that deal in physical goods. You need to be able to control your inputs. By focusing on metals, it improves the likelihood of success for our aluminum fuel business.”
