MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Nuno Loureiro, professor and director of MIT’s Plasma Science and Fusion Center, dies at 47
This article may be updated.
Nuno Loureiro, a professor of nuclear science and engineering and of physics at MIT, has died. He was 47.
A lauded theoretical physicist and fusion scientist, and director of the MIT Plasma Science and Fusion Center, Loureiro joined MIT’s faculty in 2016. His research addressed complex problems lurking at the center of fusion vacuum chambers and at the edges of the universe.
Loureiro’s research at MIT advanced scientists’ understanding of plasma behavior, including turbulence, and uncovered the physics behind astronomical phenomena like solar flares. He was the Herman Feshbach (1942) Professor of Physics at MIT and was named director of the Plasma Science and Fusion Center in 2024, though his contributions to fusion science and engineering began far before that.
His research on magnetized plasma dynamics, magnetic field amplification, and confinement and transport in fusion plasmas helped inform the design of fusion devices that could harness the energy of fusing plasmas, bringing the dream of clean, near-limitless fusion power closer to reality.
“Nuno was not only a brilliant scientist, he was a brilliant person,” says Dennis Whyte, the Hitachi America Professor of Engineering, who previously served as the head of the Department of Nuclear Science and Engineering and director of the Plasma Science and Fusion Center. “He shone a bright light as a mentor, friend, teacher, colleague and leader, and was universally admired for his articulate, compassionate manner. His loss is immeasurable to our community at the PSFC, NSE and MIT, and around the entire fusion and plasma research world.”
“Nuno was a champion for plasma physics within the Physics Department, a wonderful and engaging colleague, and an inspiring and caring mentor for graduate students working in plasma science. His recent work on quantum computing algorithms for plasma physics simulations was a particularly exciting new scientific direction,” says Deepto Chakrabarty, the William A. M. Burden Professor in Astrophysics and head of the Department of Physics.
Whether working on fusion or astrophysics research, Loureiro merged fundamental physics with technology and engineering, to maximize impact.
“There are people who are driven by technology and engineering, and others who are driven by fundamental mathematics and physics. We need both,” Loureiro said in 2019. “When we stimulate theoretically inclined minds by framing plasma physics and fusion challenges as beautiful theoretical physics problems, we bring into the game incredibly brilliant students — people who we want to attract to fusion development.”
Loureiro majored in physics at Instituto Superior Técnico (IST) in Portugal and obtained a PhD in physics at Imperial College London in 2005. He conducted postdoctoral work at the Princeton Plasma Physics Laboratory for the next two years before moving to the UKAEA Culham Centre for Fusion Energy in 2007. Loureiro returned to IST in 2009, where he was a researcher at the Institute for Plasmas and Nuclear Fusion until coming to MIT in 2016.
He wasted no time contributing to the intellectual environment at MIT, spending part of his first two years at the Institute working on the vexing problem of plasma turbulence. Plasma is the super-hot state of matter that serves as the fuel for fusion reactors. Loureiro’s lab at PSFC illuminated how plasma behaves inside fusion reactors, which could help prevent material failures and better contain the plasma to harvest electricity.
“Nuno was not only an extraordinary scientist and educator, but also a tremendous colleague, mentor, and friend who cared deeply about his students and his community. His absence will be felt profoundly across NSE and far beyond,” Benoit Forget, the KEPCO Professor and head of the Department of Nuclear Science and Engineering, wrote in an email to the department today.
On other fronts, Loureiro’s work in astrophysics helped reveal fundamental mechanisms of the universe. He put forward the first theory of turbulence in pair plasmas, which differ from regular plasmas and may be abundant in space. The work was driven, in part, by unprecedented observations of a binary neutron star merger in 2018.
As an assistant professor and then a full professor at MIT, Loureiro taught course 22.612 (Intro to Plasma Physics) and course 22.615 (MHD Theory of Fusion Systems), for which he was twice recognized with the Department of Nuclear Science and Engineering’s PAI Outstanding Professor Award.
Loureiro’s research earned him many prominent awards throughout his prolific career, including the National Science Foundation CAREER Award and the American Physical Society Thomas H. Stix Award for Outstanding Early Career Contributions to Plasma Physics Research. He was also an APS Fellow. Earlier this year, he earned the Presidential Early Career Award for Scientists and Engineers.
How cement “breathes in” and stores millions of tons of CO₂ a year
The world’s most common construction material has a secret. Cement, the “glue” that holds concrete together, gradually “breathes in” and stores millions of tons of carbon dioxide (CO2) from the air over the lifetimes of buildings and infrastructure.
A new study from the MIT Concrete Sustainability Hub quantifies this process, carbon uptake, at a national scale for the first time. Using a novel approach, the research team found that the cement in U.S. buildings and infrastructure sequesters over 6.5 million metric tons of CO2 annually. This corresponds to roughly 13 percent of the process emissions — the CO2 released by the underlying chemical reaction — in U.S. cement manufacturing. In Mexico, the cement building stock sequesters about 5 million tons a year.
But how did the team come up with those numbers?
Scientists have known how carbon uptake works for decades. CO2 enters concrete or mortar — the mixture that glues together blocks, brick, and stones — through tiny pores, reacts with the calcium-rich products in cement, and becomes locked into a stable mineral called calcium carbonate, or limestone.
The chemistry is well known, but the magnitude at scale is not. A concrete highway in Dallas sequesters CO2 differently than Mexico City apartments made from concrete masonry units (CMUs), also called concrete blocks or, colloquially, cinder blocks. And a foundation slab buried under the snow in Fairbanks, Alaska, “breathes in” CO2 at a different pace entirely.
As Hessam AzariJafari, lead author and research scientist in the MIT Department of Civil and Environmental Engineering, explains, “Carbon uptake is very sensitive to context. Four major factors drive it: the type of cement used, the product we make with it — concrete, CMUs, or mortar — the geometry of the structure, and the climate and conditions it’s exposed to. Even within the same structure, uptake can vary five-fold between different elements.”
As no two structures sequester CO2 in the same way, estimating uptake nationwide would normally require simulating an array of cement-based elements: slabs, walls, beams, columns, pavements, and more. On top of that, each of those has its own age, geometry, mixture, and exposure condition to account for.
Seeing that this approach would be like trying to count every grain of sand on a beach, the team took a different route. They developed hundreds of archetypes, typical designs that could stand in for different buildings and pieces of infrastructure. It’s a bit like measuring the beach instead by mapping out its shape, depth, and shoreline to estimate how much sand usually sits in a given spot.
With these archetypes in hand, the team modeled how each one sequesters CO2 in different environments and how common each is across every state in the United States and Mexico. In this way, they could estimate not just how much CO2 structures sequester, but why those numbers differ.
Two factors stood out. The first was the “construction trend,” or how the amount of new construction had changed over the previous five years. Because it reflects how quickly cement products are being added to the building stock, it shapes how much cement each state consumes and, therefore, how much of that cement is actively carbonating. The second was the ratio of mortar to concrete, since porous mortars sequester CO2 an order of magnitude faster than denser concrete.
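To make the archetype idea concrete, here is a minimal Python sketch of how such a bottom-up estimate can be assembled: sum, over states and archetypes, the in-service stock of each archetype times its per-unit annual uptake, scaled by a climate factor. The archetype names, rates, stocks, and climate factors below are illustrative placeholders, not values from the MIT study.

```python
# Minimal sketch (not the Hub's actual model) of an archetype-based estimate:
# national uptake = sum over states and archetypes of
# (stock of that archetype) x (its per-unit annual CO2 uptake) x (climate factor).
# All names and numbers below are illustrative only.

ARCHETYPES = {
    # per-unit annual uptake, kg CO2 per m^3 of cementitious material (illustrative)
    "concrete_pavement": 0.8,
    "cmu_wall":          3.5,   # porous blocks carbonate faster
    "mortar_joint":      9.0,   # mortar takes up CO2 ~10x faster than dense concrete
}

# Illustrative stock: state -> archetype -> in-service volume (m^3)
STOCK = {
    "TX": {"concrete_pavement": 4.0e7, "cmu_wall": 6.0e6, "mortar_joint": 2.0e6},
    "AK": {"concrete_pavement": 1.0e6, "cmu_wall": 2.0e5, "mortar_joint": 8.0e4},
}

# Climate scaling per state (humidity and temperature slow or speed carbonation)
CLIMATE_FACTOR = {"TX": 1.0, "AK": 0.6}

def annual_uptake_tons(stock, rates, climate):
    """Return total annual CO2 uptake in metric tons."""
    total_kg = 0.0
    for state, mix in stock.items():
        for archetype, volume_m3 in mix.items():
            total_kg += volume_m3 * rates[archetype] * climate[state]
    return total_kg / 1000.0

if __name__ == "__main__":
    print(f"{annual_uptake_tons(STOCK, ARCHETYPES, CLIMATE_FACTOR):,.0f} t CO2/yr")
```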
In states where mortar use was higher, the fraction of CO2 uptake relative to process emissions was noticeably greater. “We observed something unique about Mexico: Despite using half the cement that the U.S. does, the country has three-quarters of the uptake,” notes AzariJafari. “This is because Mexico makes more use of mortars and lower-strength concrete, and bagged cement mixed on-site. These practices are why their uptake sequesters about a quarter of their cement manufacturing emissions.”
While care must be taken for structural elements that use steel reinforcement, as uptake can accelerate corrosion, it’s possible to enhance the uptake of many elements without negative impacts.
Randolph Kirchain, director of the MIT Concrete Sustainability Hub, principal research scientist in the MIT Materials Research Laboratory, and the senior author of this study, explains: “For instance, increasing the amount of surface area exposed to air accelerates uptake and can be achieved by forgoing painting or tiling, or choosing designs like waffle slabs with a higher surface area-to-volume ratio. Additionally, avoiding unnecessarily strong, less-porous concrete mixtures would speed up uptake while using less cement.”
“There is a real opportunity to refine how carbon uptake from cement is represented in national inventories,” AzariJafari comments. “The buildings around us and the concrete beneath our feet are constantly ‘breathing in’ millions of tons of CO2. Nevertheless, some of the simplified values in widely used reporting frameworks can lead to higher estimates than what we observe empirically. Integrating updated science into international inventories and guidelines such as the Intergovernmental Panel on Climate Change (IPCC) would help ensure that reported numbers reflect the material and temporal realities of the sector.”
By offering the first rigorous, bottom-up estimation of carbon uptake at a national scale, the team’s work provides a more representative picture of cement’s environmental impact. As we work to decarbonize the built environment, understanding what our structures are already doing in the background may be just as important as the innovations we pursue moving forward. The approach developed by MIT researchers could be extended to other countries by combining global building-stock databases with national cement-production statistics. It could also inform the design of structures that safely maximize uptake.
The findings were published Dec. 15 in the Proceedings of the National Academy of Sciences. Joining AzariJafari and Kirchain on the paper are MIT researchers Elizabeth Moore of the Department of Materials Science and Engineering and the MIT Climate Project and former postdocs Ipek Bensu Manav SM ’21, PhD ’24 and Motahareh Rahimi, along with Bruno Huet and Christophe Levy from the Holcim Innovation Center in France.
A new immunotherapy approach could work for many types of cancer
Researchers at MIT and Stanford University have developed a new way to stimulate the immune system to attack tumor cells, using a strategy that could make cancer immunotherapy work for many more patients.
The key to their approach is reversing a “brake” that cancer cells engage to prevent immune cells from launching an attack. This brake is controlled by sugar molecules known as glycans that are found on the surface of cancer cells.
By blocking those glycans with molecules called lectins, the researchers showed they could dramatically boost the immune system’s response to cancer cells. To achieve this, they created multifunctional molecules known as AbLecs, which combine a lectin with a tumor-targeting antibody.
“We created a new kind of protein therapeutic that can block glycan-based immune checkpoints and boost anti-cancer immune responses,” says Jessica Stark, the Underwood-Prescott Career Development Professor in the MIT departments of Biological Engineering and Chemical Engineering. “Because glycans are known to restrain the immune response to cancer in multiple tumor types, we suspect our molecules could offer new and potentially more effective treatment options for many cancer patients.”
Stark, who is also a member of MIT’s Koch Institute for Integrative Cancer Research, is the lead author of the paper. Carolyn Bertozzi, a professor of chemistry at Stanford and director of the Sarafan ChEM-H Institute, is the senior author of the study, which appears today in Nature Biotechnology.
Releasing the brakes
Training the immune system to recognize and destroy tumor cells is a promising approach to treating many types of cancer. One class of immunotherapy drugs known as checkpoint inhibitors stimulate immune cells by blocking an interaction between the proteins PD-1 and PD-L1. This removes a brake that tumor cells use to prevent immune cells like T cells from killing cancer cells.
Drugs targeting the PD-1/PD-L1 checkpoint have been approved to treat several kinds of cancer. In some of these patients, checkpoint inhibitors can lead to long-lasting remission, but for many others, they don’t work at all.
In hopes of generating immune responses in a greater number of patients, researchers are now working on ways to target other immunosuppressive interactions between cancer cells and immune cells. One such interaction occurs between glycans on tumor cells and receptors found on immune cells.
Glycans are found on nearly all living cells, but tumor cells often express glycans that are not found on healthy cells, including glycans that contain a monosaccharide called sialic acid. When sialic acids bind to lectin receptors on immune cells, they switch on an immunosuppressive pathway in those cells. The lectins that bind to sialic acid are known as Siglecs.
“When Siglecs on immune cells bind to sialic acids on cancer cells, it puts the brakes on the immune response. It prevents that immune cell from becoming activated to attack and destroy the cancer cell, just like what happens when PD-1 binds to PD-L1,” Stark says.
Currently, there aren’t any approved therapies that target this Siglec-sialic acid interaction, despite a number of drug development approaches that have been tried. For example, researchers have tried to develop lectins that could bind to sialic acids and prevent them from interacting with immune cells, but so far, this approach hasn’t worked well because lectins don’t bind strongly enough to accumulate on the cancer cell surface in large numbers.
To overcome that, Stark and her colleagues developed a way to deliver larger quantities of lectins by attaching them to antibodies that target cancer cells. Once there, the lectins can bind to sialic acid, preventing sialic acid from interacting with Siglec receptors on immune cells. This lifts the brakes off the immune response, allowing immune cells such as macrophages and natural killer (NK) cells to launch an attack on the tumor.
“This lectin binding domain typically has relatively low affinity, so you can’t use it by itself as a therapeutic. But, when the lectin domain is linked to a high-affinity antibody, you can get it to the cancer cell surface where it can bind and block sialic acids,” Stark says.
A modular system
In this study, the researchers designed an AbLec based on the antibody trastuzumab, which binds to HER2 and is approved as a cancer therapy to treat breast, stomach, and colorectal cancers. To form the AbLec, they replaced one arm of the antibody with a lectin, either Siglec-7 or Siglec-9.
Tests using cells grown in the lab showed that this AbLec rewired immune cells to attack and destroy cancer cells.
The researchers then tested their AbLecs in a mouse model that was engineered to express human Siglec receptors and antibody receptors. These mice were then injected with cancer cells that formed metastases in the lungs. When treated with the AbLec, these mice showed fewer lung metastases than mice treated with trastuzumab alone.
The researchers also showed that they could swap in other tumor-specific antibodies, such as rituximab, which targets CD20, or cetuximab, which targets EGFR. They could also swap in lectins that target other glycans involved in immunosuppression, or antibodies that target checkpoint proteins such as PD-1.
“AbLecs are really plug-and-play. They’re modular,” Stark says. “You can imagine swapping out different decoy receptor domains to target different members of the lectin receptor family, and you can also swap out the antibody arm. This is important because different cancer types express different antigens, which you can address by changing the antibody target.”
Stark, Bertozzi, and others have started a company called Valora Therapeutics, which is now working on developing lead AbLec candidates. They hope to begin clinical trials in the next two to three years.
The research was funded, in part, by a Burroughs Wellcome Fund Career Award at the Scientific Interface, a Society for Immunotherapy of Cancer Steven A. Rosenberg Scholar Award, a V Foundation V Scholar Grant, the National Cancer Institute, the National Institute of General Medical Sciences, a Merck Discovery Biologics SEEDS grant, an American Cancer Society Postdoctoral Fellowship, and a Sarafan ChEM-H Postdocs at the Interface seed grant.
“Robot, make me a chair”
Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail that they don’t lend themselves to brainstorming or rapid prototyping.
In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words.
Their system uses a generative AI model to build a 3D representation of an object’s geometry based on the user’s prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object’s function and geometry.
The system can automatically build the object from a set of prefabricated parts using robotic assembly. It can also iterate on the design based on feedback from the user.
The researchers used this end-to-end system to fabricate furniture, including chairs and shelves, from two types of premade components. The components can be disassembled and reassembled at will, reducing the amount of waste generated through the fabrication process.
They evaluated these designs through a user study and found that more than 90 percent of participants preferred the objects made by the AI-driven system over those produced by alternative approaches.
While this work is an initial demonstration, the framework could be especially useful for rapid prototyping of complex objects like aerospace components and architectural elements. In the longer term, it could be used in homes to fabricate furniture or other objects locally, without the need to have bulky products shipped from a central facility.
“Sooner or later, we want to be able to communicate and talk to a robot and AI system the same way we talk to each other to make things together. Our system is a first step toward enabling that future,” says lead author Alex Kyaw, a graduate student in the MIT departments of Electrical Engineering and Computer Science (EECS) and Architecture.
Kyaw is joined on the paper by Richa Gupta, an MIT architecture graduate student; Faez Ahmed, associate professor of mechanical engineering; Lawrence Sass, professor and chair of the Computation Group in the Department of Architecture; senior author Randall Davis, an EECS professor and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as others at Google DeepMind and Autodesk Research. The paper was recently presented at the Conference on Neural Information Processing Systems.
Generating a multicomponent design
While generative AI models are good at generating 3D representations, known as meshes, from text prompts, most do not produce uniform representations of an object’s geometry that have the component-level details needed for robotic assembly.
Separating these meshes into components is challenging for a model because assigning components depends on the geometry and functionality of the object and its parts.
The researchers tackled these challenges using a vision-language model (VLM), a powerful generative AI model that has been pre-trained to understand images and text. They task the VLM with figuring out how two types of prefabricated parts, structural components and panel components, should fit together to form an object.
“There are many ways we can put panels on a physical object, but the robot needs to see the geometry and reason over that geometry to make a decision about it. By serving as both the eyes and brain of the robot, the VLM enables the robot to do this,” Kyaw says.
A user prompts the system with text, perhaps by typing “make me a chair,” and gives it an AI-generated image of a chair to start.
Then, the VLM reasons about the chair and determines where panel components go on top of structural components, based on the functionality of many example objects it has seen before. For instance, the model can determine that the seat and backrest should have panels to have surfaces for someone sitting and leaning on the chair.
It outputs this information as text, such as “seat” or “backrest.” Each surface of the chair is then labeled with numbers, and the information is fed back to the VLM.
Then the VLM chooses the labels that correspond to the geometric parts of the chair that should receive panels on the 3D mesh to complete the design.
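The pipeline described above amounts to a loop of generate, label, query, and assemble. The following self-contained Python sketch mocks that loop with toy stand-ins; every function here is a hypothetical placeholder rather than the authors' actual system, which uses a text-to-3D generative model, a vision-language model, and a robot arm.

```python
# Minimal, self-contained sketch of the text -> mesh -> labeled-surface -> VLM
# decision loop described above. All functions below are mock stand-ins.

from dataclasses import dataclass

@dataclass
class Surface:
    label: int
    name: str          # semantic role the VLM would infer ("seat", "leg", ...)

def generate_mesh(prompt: str) -> list[Surface]:
    """Mock text-to-3D step: return numbered candidate surfaces for a chair."""
    return [Surface(0, "seat"), Surface(1, "backrest"),
            Surface(2, "left leg"), Surface(3, "right leg")]

def query_vlm(surfaces: list[Surface], constraint: str | None) -> list[int]:
    """Mock VLM: pick surfaces people sit or lean on, honoring user feedback."""
    functional = {"seat", "backrest"}
    if constraint and "not the seat" in constraint:
        functional.discard("seat")
    return [s.label for s in surfaces if s.name in functional]

def design(prompt: str, feedback: str | None = None) -> list[int]:
    surfaces = generate_mesh(prompt)
    panel_ids = query_vlm(surfaces, feedback)   # human-in-the-loop refinement
    return panel_ids                            # handed to the robotic assembler

print(design("make me a chair"))                                               # [0, 1]
print(design("make me a chair", "only panels on the backrest, not the seat"))  # [1]
```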
Human-AI co-design
The user remains in the loop throughout this process and can refine the design by giving the model a new prompt, such as “only use panels on the backrest, not the seat.”
“The design space is very big, so we narrow it down through user feedback. We believe this is the best way to do it because people have different preferences, and building an idealized model for everyone would be impossible,” Kyaw says.
“The human‑in‑the‑loop process allows the users to steer the AI‑generated designs and have a sense of ownership in the final result,” adds Gupta.
Once the 3D mesh is finalized, a robotic assembly system builds the object using prefabricated parts. These reusable parts can be disassembled and reassembled into different configurations.
The researchers compared the results of their method with an algorithm that places panels on all horizontal surfaces that are facing up, and an algorithm that places panels randomly. In a user study, more than 90 percent of individuals preferred the designs made by their system.
They also asked the VLM to explain why it chose to put panels in those areas.
“We learned that the vision language model is able to understand some degree of the functional aspects of a chair, like leaning and sitting, to understand why it is placing panels on the seat and backrest. It isn’t just randomly spitting out these assignments,” Kyaw says.
In the future, the researchers want to enhance their system to handle more complex and nuanced user prompts, such as a table made out of glass and metal. In addition, they want to incorporate additional prefabricated components, such as gears, hinges, or other moving parts, so objects could have more functionality.
“Our hope is to drastically lower the barrier of access to design tools. We have shown that we can use generative AI and robotics to turn ideas into physical objects in a fast, accessible, and sustainable manner,” says Davis.
3 Questions: Using computation to study the world’s best single-celled chemists
Today, out of an estimated 1 trillion species on Earth, 99.999 percent are considered microbial — bacteria, archaea, viruses, and single-celled eukaryotes. For much of our planet’s history, microbes ruled the Earth, able to live and thrive in the most extreme of environments. Researchers have only just begun in the last few decades to contend with the diversity of microbes — it’s estimated that less than 1 percent of known genes have laboratory-validated functions. Computational approaches offer researchers the opportunity to strategically parse this truly astounding amount of information.
An environmental microbiologist and computer scientist by training, new MIT faculty member Yunha Hwang is interested in the novel biology revealed by the most diverse and prolific life form on Earth. In a shared faculty position as the Samuel A. Goldblith Career Development Professor in the Department of Biology and an assistant professor in the Department of Electrical Engineering and Computer Science and the MIT Schwarzman College of Computing, Hwang is exploring the intersection of computation and biology.
Q: What drew you to research microbes in extreme environments, and what are the challenges in studying them?
A: Extreme environments are great places to look for interesting biology. I wanted to be an astronaut growing up, and the closest thing to astrobiology is examining extreme environments on Earth. And the only things that live in those extreme environments are microbes. During a sampling expedition that I took part in off the coast of Mexico, we discovered a colorful microbial mat about 2 kilometers underwater that flourished because the bacteria breathed sulfur instead of oxygen — but none of the microbes I was hoping to study would grow in the lab.
The biggest challenge in studying microbes is that a majority of them cannot be cultivated, which means that the only way to study their biology is through a method called metagenomics. My latest work is genomic language modeling. We’re hoping to develop a computational system so we can probe the organism as much as possible “in silico,” just using sequence data. A genomic language model is technically a large language model, except the language is DNA as opposed to human language. It’s trained in a similar way, just in biological language as opposed to English or French. If our objective is to learn the language of biology, we should leverage the diversity of microbial genomes. Even though we have a lot of data, and even as more samples become available, we’ve just scratched the surface of microbial diversity.
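As a toy illustration of the "DNA as a language" idea, the sketch below learns next-base statistics from a couple of made-up sequences. A real genomic language model is a transformer trained on vast metagenomic collections, but the underlying principle of learning next-token probabilities from sequence data is the same.

```python
# Toy illustration only: a character-level bigram model over nucleotides.
# Real genomic language models are large transformers; this sketch just shows
# the idea of estimating next-token statistics from DNA sequence data.

from collections import Counter, defaultdict

def train_bigram(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    # Normalize to conditional probabilities P(next base | current base)
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

corpus = ["ATGGCGTTAACG", "ATGCCGTTAGCG"]   # stand-in for metagenomic contigs
model = train_bigram(corpus)
print(model["T"])   # e.g. {'G': 0.33, 'T': 0.33, 'A': 0.33}
```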
Q: Given how diverse microbes are and how little we understand about them, how can studying microbes in silico, using genomic language modeling, advance our understanding of the microbial genome?
A: A genome is many millions of letters. A human cannot possibly look at that and make sense of it. We can program a machine, though, to segment data into pieces that are useful. That’s sort of how bioinformatics works with a single genome. But if you’re looking at a gram of soil, which can contain thousands of unique genomes, that’s just too much data to work with — a human and a computer together are necessary in order to grapple with that data.
During my PhD and master’s degree, we were only just discovering new genomes and new lineages that were so different from anything that had been characterized or grown in the lab. These were things that we just called “microbial dark matter.” When there are a lot of uncharacterized things, that’s where machine learning can be really useful, because we’re just looking for patterns — but that’s not the end goal. What we hope to do is to map these patterns to evolutionary relationships between each genome, each microbe, and each instance of life.
Previously, we’ve been thinking about proteins as a standalone entity — that gets us to a decent degree of information because proteins are related by homology, and therefore things that are evolutionarily related might have a similar function.
What is known about microbiology is that proteins are encoded in genomes, and the context in which a protein is embedded — what regions come before and after — is evolutionarily conserved, especially if there is a functional coupling. This makes total sense because when you have three proteins that need to be expressed together because they form a unit, then you might want them located right next to each other.
What I want to do is incorporate more of that genomic context in the way that we search for and annotate proteins and understand protein function, so that we can go beyond sequence or structural similarity to add contextual information to how we understand proteins and hypothesize about their functions.
Q: How can your research be applied to harnessing the functional potential of microbes?
A: Microbes are possibly the world’s best chemists. Leveraging microbial metabolism and biochemistry will lead to more sustainable and more efficient methods for producing new materials, new therapeutics, and new types of polymers.
But it’s not just about efficiency — microbes are doing chemistry we don’t even know how to think about. Understanding how microbes work, and being able to understand their genomic makeup and their functional capacity, will also be really important as we think about how our world and climate are changing. A majority of carbon sequestration and nutrient cycling is undertaken by microbes; if we don’t understand how a given microbe is able to fix nitrogen or carbon, then we will face difficulties in modeling the nutrient fluxes of the Earth.
On the more therapeutic side, infectious diseases are a real and growing threat. Understanding how microbes behave in diverse environments relative to the rest of our microbiome is really important as we think about the future and combating microbial pathogens.
MIT community members elected to the National Academy of Inventors for 2025
The National Academy of Inventors (NAI) has named nine MIT affiliates as members of the 2025 class of NAI Fellows. They include Ahmad Bahai, an MIT professor of the practice in the Department of Electrical Engineering and Computer Science (EECS), and Kripa K. Varanasi, MIT professor in the Department of Mechanical Engineering, as well as seven additional MIT alumni. NAI fellowship is the highest professional distinction awarded solely to inventors.
“NAI Fellows are a driving force within the innovation ecosystem, and their contributions across scientific disciplines are shaping the future of our world,” says Paul R. Sanberg, fellow and president of the National Academy of Inventors. “We are thrilled to welcome this year’s class of fellows to the academy.”
This year’s 169 U.S. fellows represent 127 universities, government agencies, and research institutions across 40 U.S. states. Together, the 2025 class holds more than 5,300 U.S. patents and includes recipients of the Nobel Prize, the National Medal of Science, and the National Medal of Technology and Innovation, as well as members of the national academies of Sciences, Engineering, and Medicine, among others.
Ahmad Bahai is professor of the practice in EECS. He was an adjunct professor at Stanford University from 2017 to 2022 and a professor in residence at the University of California at Berkeley from 2001 to 2010. Bahai has held a number of leadership roles, including director of research labs and chief technology officer of National Semiconductor, technical manager of a research group at Bell Laboratories, and founder of Algorex, a communication and acoustic integrated circuit and system company, which was acquired by National Semiconductor.
Currently, Bahai is the chief technology officer of Texas Instruments and director of Kilby Labs and corporate research, and is a member of the Industrial Advisory Committee of the CHIPS Act. Bahai is an IEEE Fellow and an AIMBE Fellow; he has authored over 80 publications in IEEE/IEE journals and holds more than 40 patents related to systems and circuits.
He holds an MS in electrical engineering from Imperial College London and a doctorate degree in electrical engineering from UC Berkeley.
Kripa K. Varanasi SM ’02, PhD ’04, professor of mechanical engineering, is widely recognized for his significant contributions in the field of interfacial science, thermal fluids, electrochemical systems, advanced materials, and manufacturing. A member of the MIT faculty since 2009, he leads the interdisciplinary Varanasi Research Group, which focuses on understanding physico-chemical and biological phenomena at the interfaces of matter. His group develops innovative surfaces, materials, devices, processes, and associated technologies that improve efficiency and performance across industries, including energy, decarbonization, life sciences, water, agriculture, transportation, and consumer products.
Varanasi has also scaled basic research into practical, market-ready technologies. He has co-founded six companies, including AgZen, Alsym Energy, CoFlo Medical, Dropwise, Infinite Cooling, and LiquiGlide, and his companies have been widely recognized for driving innovation across a range of industries. Throughout his career, Varanasi has been recognized for excellence in research and mentorship. Honors include the National Science Foundation CAREER Award, DARPA Young Faculty Award, SME Outstanding Young Manufacturing Engineer Award, ASME’s Bergles-Rohsenow Heat Transfer Award and Gustus L. Larson Memorial Award, Boston Business Journal’s 40 Under 40, and MIT’s Frank E. Perkins Award for Excellence in Graduate Advising.
Varanasi earned his undergraduate degree in mechanical engineering from the Indian Institute of Technology Madras, and his master’s degree and PhD from MIT. Prior to joining the faculty, he served as lead researcher and project leader at the GE Global Research Center, where he received multiple internal awards for innovation, leadership, and technical excellence. He was recently named faculty director of the Deshpande Center for Technological Innovation.
The seven additional MIT alumni elected to the NAI for 2025 are:
- Robert William Brown PhD ’68 (Physics);
- André DeHon ’90, SM ’93, PhD ’96 (Electrical Engineering and Computer Science);
- Shanhui Fan PhD ’97 (Physics);
- Jun O. Liu PhD ’90 (Chemistry);
- Marios-Christos Papaefthymiou SM ’90, PhD ’93 (Electrical Engineering and Computer Science);
- Darryll J. Pines SM ’88, PhD ’92 (Mechanical Engineering); and
- Yasha Yi PhD ’04 (Physics).
The NAI Fellows program was founded in 2012 and has grown to include 2,253 distinguished researchers and innovators, who hold over 86,000 U.S. patents and 20,000 licensed technologies. Collectively, NAI Fellows’ innovations have generated an estimated $3.8 trillion in revenue and 1.4 million jobs.
The 2025 class will be honored and presented with their medals by a senior official of the United States Patent and Trademark Office at the NAI 15th Annual Conference on June 4, 2026, in Los Angeles.
Working to eliminate barriers to adopting nuclear energy
What if there were a way to solve one of the most significant obstacles to the use of nuclear energy — the disposal of high-level nuclear waste (HLW)? Dauren Sarsenbayev, a third-year doctoral student in the MIT Department of Nuclear Science and Engineering (NSE), is addressing the challenge as part of his research.
Sarsenbayev focuses on one of the primary problems related to HLW: decay heat released by radioactive waste. The basic premise of his solution is to extract the heat from spent fuel, which simultaneously takes care of two objectives: gaining more energy from an existing carbon-free resource while decreasing the challenges associated with storage and handling of HLW. “The value of carbon-free energy continues to rise each year, and we want to extract as much of it as possible,” Sarsenbayev explains.
While the safe management and disposal of HLW has seen significant progress, there can be more creative ways to manage or take advantage of the waste. Such a move would be especially important for the public’s acceptance of nuclear energy. “We’re reframing the problem of nuclear waste, transforming it from a liability to an energy source,” Sarsenbayev says.
The nuances of nuclear
Sarsenbayev had to do a bit of reframing himself in how he perceived nuclear energy. Growing up in Almaty, the largest city in Kazakhstan, he saw the collective trauma of Soviet nuclear testing loom large over the public consciousness. Not only does the country, once a part of the Soviet Union, carry the scars of nuclear weapons testing, it is also the world’s largest producer of uranium. Such a legacy is hard to escape.
At the same time, Sarsenbayev saw his native Almaty choking under heavy smog every winter, due to the burning of fossil fuels for heat. Determined to do his part to accelerate the process of decarbonization, Sarsenbayev gravitated to undergraduate studies in environmental engineering at Kazakh-German University. It was during this time that Sarsenbayev realized practically every energy source, even the promising renewable ones, came with challenges, and decided nuclear was the way to go for its reliable, low-carbon power. “I was exposed to air pollution from childhood; the horizon would be just black. The biggest incentive for me with nuclear power was that as long as we did it properly, people could breathe cleaner air,” Sarsenbayev says.
Studying transport of radionuclides
Part of “doing nuclear properly” involves studying — and reliably predicting — the long-term behavior of radionuclides in geological repositories.
Sarsenbayev discovered an interest in studying nuclear waste management during an internship at Lawrence Berkeley National Laboratory as a junior undergraduate student.
While at Berkeley, Sarsenbayev focused on modeling the transport of radionuclides from the nuclear waste repository’s barrier system to the surrounding host rock. He discovered how to use the tools of the trade to predict long-term behavior. “As an undergrad, I was really fascinated by how far in the future something could be predicted. It’s kind of like foreseeing what future generations will encounter,” Sarsenbayev says.
The timing of the Berkeley internship was fortuitous. It was at the laboratory that he worked with Haruko Murakami Wainwright, who was herself getting started at MIT NSE. (Wainwright is the Mitsui Career Development Professor in Contemporary Technology, and an assistant professor of NSE and of civil and environmental engineering).
Looking to pursue graduate studies in the field of nuclear waste management, Sarsenbayev followed Wainwright to MIT, where he has further researched the modeling of radionuclide transport. He is the first author on a paper that details mechanisms to increase the robustness of models describing the transport of radionuclides. The work captures the complexity of interactions between engineered barrier components, including cement-based materials and clay barriers, the typical medium proposed for the storage and disposal of spent nuclear fuel.
Sarsenbayev is pleased with the results of the model’s prediction, which closely mirrors experiments conducted at the Mont Terri research site in Switzerland, famous for studies in the interactions between cement and clay. “I was fortunate to work with Doctor Carl Steefel and Professor Christophe Tournassat, leading experts in computational geochemistry,” he says.
Real-life transport mechanisms involve many physical and chemical processes, the complexities of which increase the size of the computational model dramatically. Reactive transport modeling — which combines the simulation of fluid flow, chemical reactions, and the transport of substances through subsurface media — has evolved significantly over the past few decades. However, running accurate simulations comes with trade-offs: The software can require days to weeks of computing time on high-performance clusters running in parallel.
To arrive at results faster by saving on computing time, Sarsenbayev is developing a framework that integrates AI-based “surrogate models,” which train on simulated data and approximate the physical systems. The AI algorithms make predictions of radionuclide behavior faster and less computationally intensive than the traditional equivalent.
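The sketch below illustrates the surrogate-model idea under stated assumptions: a toy stand-in function plays the role of the expensive reactive-transport simulator, and a generic regressor is fit to its input/output pairs. The parameters and formula are illustrative, not Sarsenbayev's framework.

```python
# Minimal sketch of a data-driven surrogate: fit a fast regressor to
# input/output pairs produced by an expensive simulator, then use it for
# cheap predictions. The "simulator" below is a toy stand-in, not a
# geochemical code, and the variable ranges are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_simulation(params):
    """Stand-in for a days-long reactive-transport run.
    params = (porosity, diffusivity, time in years); returns a breakthrough
    concentration. The formula is purely illustrative."""
    porosity, diffusivity, years = params
    return np.exp(-porosity * 5.0) * np.tanh(diffusivity * years / 100.0)

# Generate training data from a modest number of "full" simulations
X = rng.uniform([0.05, 1e-3, 10], [0.4, 1e-1, 10000], size=(200, 3))
y = np.array([expensive_simulation(p) for p in X])

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# The surrogate now answers "what-if" queries in milliseconds
query = np.array([[0.2, 0.05, 5000.0]])
print(surrogate.predict(query))
```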
Doctoral research focus
Sarsenbayev is using his modeling expertise in his primary doctoral work as well — in evaluating the potential of spent nuclear fuel as an anthropogenic geothermal energy source. “In fact, geothermal heat is largely due to the natural decay of radioisotopes in Earth’s crust, so using decay heat from spent fuel is conceptually similar,” he says. A canister of nuclear waste can generate, under conservative assumptions, the energy equivalent of 1,000 square meters (a little under a quarter of an acre) of solar panels.
The heat available from a canister is significant but not enormous: a typical one, depending on how long it was cooled in the spent fuel pool, sits at around 150 degrees Celsius. Extracting heat from this source therefore relies on a binary cycle system. In such a system, heat is extracted indirectly: the canister warms a closed water loop, which in turn transfers that heat to a secondary low-boiling-point fluid that drives a turbine.
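As a rough back-of-the-envelope illustration of what such a low-temperature source can yield, the sketch below bounds one canister's electrical output with the Carnot limit and an assumed organic-Rankine-cycle efficiency fraction. The thermal power and efficiency values are assumptions for illustration, not results from the published model.

```python
# Back-of-the-envelope sketch of the binary (organic Rankine) cycle idea:
# decay heat at ~150 C drives a low-boiling-point working fluid.
# The decay heat and ORC efficiency fraction below are illustrative assumptions.

T_hot_K  = 150 + 273.15     # canister-side temperature
T_cold_K = 25 + 273.15      # heat-rejection temperature
decay_heat_kW = 1.0         # assumed thermal output of one canister (illustrative)

carnot_limit = 1 - T_cold_K / T_hot_K   # ideal efficiency bound
orc_fraction = 0.5                      # rough: real ORCs reach about half of Carnot
electric_kW = decay_heat_kW * carnot_limit * orc_fraction

print(f"Carnot limit: {carnot_limit:.1%}")                       # ~29.5%
print(f"Estimated output: {electric_kW * 1000:.0f} W per canister")
```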
Sarsenbayev’s work develops a conceptual model of a binary-cycle geothermal system powered by heat from high-level radioactive waste. Early modeling results have been published and look promising. While the potential for such energy extraction is at the proof-of-concept stage in modeling, Sarsenbayev is hopeful that it will find success when translated to practice. “Converting a liability into an energy source is what we want, and this solution delivers,” he says.
Despite work being all-consuming — “I’m almost obsessed with and love my work” — Sarsenbayev finds time to write reflective poetry in both Kazakh, his native language, and Russian, which he learned growing up. He’s also enamored of astrophotography, taking pictures of celestial bodies. Finding the right night sky can be a challenge, but the canyons near his home in Almaty are an especially good fit. He goes on photography sessions whenever he visits home for the holidays, and his love for Almaty shines through. “Almaty means 'the place where apples originated.' This part of Central Asia is very beautiful; although we have environmental pollution, this is a place with a rich history,” Sarsenbayev says.
Sarsenbayev is especially keen on finding ways to communicate both the arts and sciences to future generations. “Obviously, you have to be technically rigorous and get the modeling right, but you also have to understand and convey the broader picture of why you’re doing the work, what the end goal is,” he says. Through that lens, the impact of Sarsenbayev’s doctoral work is significant. The end goal? Removing the bottleneck for nuclear energy adoption by producing carbon-free power and ensuring the safe disposal of radioactive waste.
RNA editing study finds many ways for neurons to diversify
All starting from the same DNA, neurons ultimately take on individual characteristics in the brain and body. Differences in which genes they transcribe into RNA help determine which type of neuron they become, and from there, a new MIT study shows, individual cells edit a selection of sites in those RNA transcripts, each at their own widely varying rates.
The new study surveyed the whole landscape of RNA editing in more than 200 individual cells commonly used as models of fundamental neural biology: tonic and phasic motor neurons of the fruit fly. One of the main findings is that most sites were edited at rates between the “all-or-nothing” extremes many scientists have assumed based on more limited studies in mammals, says senior author Troy Littleton, the Menicon Professor in the MIT departments of Biology and Brain and Cognitive Sciences. The resulting dataset and open-access analyses, recently published in eLife, set the table for discoveries about how RNA editing affects neural function and what enzymes implement those edits.
“We have this ‘alphabet’ now for RNA editing in these neurons,” Littleton says. “We know which genes are edited in these neurons, so we can go in and begin to ask questions as to what is that editing doing to the neuron at the most interesting targets.”
Andres Crane PhD ’24, who earned his doctorate in Littleton’s lab based on this work, is the study’s lead author.
From a genome of about 15,000 genes, Littleton and Crane’s team found, the neurons made hundreds of edits in transcripts from hundreds of genes. For example, the team documented “canonical” edits of 316 sites in 210 genes. Canonical means that the edits were made by the well-studied enzyme ADAR, which is also found in mammals, including humans. Of the 316 edits, 175 occurred in regions that encode the contents of proteins. Analysis indeed suggested 60 are likely to significantly alter amino acids. But they also found 141 more editing sites in areas that don’t code for proteins but instead affect their production, which means they could affect protein levels, rather than their contents.
The team also found many “non-canonical” edits that ADAR didn’t make. That’s important, Littleton says, because that information could aid in discovering more enzymes involved in RNA editing, potentially across species. That, in turn, could expand the possibilities for future genetic therapies.
“In the future, if we can begin to understand in flies what the enzymes are that make these other non-canonical edits, it would give us broader coverage for thinking about doing things like repairing human genomes where a mutation has broken a protein of interest,” Littleton says.
Moreover, by looking specifically at fly larvae, the team found many edits that were specific to juveniles, versus adults, suggesting potential significance during development. And because they looked at full gene transcripts of individual neurons, the team was also able to find editing targets that had not been cataloged before.
Widely varying rates
Some of the most heavily edited RNAs came from genes that make critical contributions to neural circuit communication, such as those governing neurotransmitter release and the ion channels that cells use to tune their electrical properties. The study identified 27 sites in 18 genes that were edited more than 90 percent of the time.
Yet neurons sometimes varied quite widely in whether they would edit a site, which suggests that even neurons of the same type can still take on significant degrees of individuality.
“Some neurons displayed ~100 percent editing at certain sites, while others displayed no editing for the same target,” the team wrote in eLife. “Such dramatic differences in editing rate at specific target sites is likely to contribute to the heterogeneous features observed within the same neuronal population.”
On average, any given site was edited about two-thirds of the time, and most sites were edited at rates well between the all-or-nothing extremes.
“The vast majority of editing events we found were somewhere between 20 percent and 70 percent,” Littleton says. “We were seeing mixed ratios of edited and unedited transcripts within a single cell.”
Also, the more a gene was expressed, the less editing it experienced, suggesting that ADAR can keep up with only so many of its editing opportunities.
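As an illustration of how per-site rates like these are tallied, the short sketch below computes the edited fraction at each site from counts of edited versus unedited transcript reads in a single cell. The site names and read counts are invented for the example, not data from the study.

```python
# Minimal sketch: per-site editing rate = edited reads / total reads covering
# that site in one cell. Site names and counts below are made up.

reads_per_site = {
    # site id: (edited reads, unedited reads) in one neuron
    "cpx_site_1": (38, 4),     # heavily edited (~90%)
    "Arc1_site_A": (11, 19),   # intermediate (~37%)
    "chan_site_7": (0, 25),    # unedited in this cell
}

for site, (edited, unedited) in reads_per_site.items():
    total = edited + unedited
    rate = edited / total if total else float("nan")
    print(f"{site}: {rate:.0%} edited ({edited}/{total} reads)")
```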
Potential impacts on function
One of the key questions the data enables scientists to ask is what impact RNA edits have on the function of the cells. In a 2023 study, Littleton’s lab began to tackle this question by looking at just two edits they found in the most heavily edited gene: complexin. Complexin’s protein product restrains release of the neurotransmitter glutamate, making it a key regulator of neural circuit communication. They found that by mixing and matching edits, neurons produced up to eight different versions of the protein with significant effects on their glutamate release and synaptic electrical current. But in the new study, the team reports 13 more edits in complexin that are yet to be studied.
Littleton says he’s intrigued by another key protein, called Arc1, that the study shows experienced a non-canonical edit. Arc is a vitally important gene in “synaptic plasticity,” which is the property neurons have of adjusting the strength or presence of their “synapse” circuit connections in response to nervous system activity. Such neural nimbleness is hypothesized to be the basis of how the brain can responsively encode new information in learning and memory. Notably, Arc1 editing fails to occur in fruit flies that model Alzheimer’s disease.
Littleton says the lab is now working hard to understand how the RNA edits they’ve documented affect function in the fly motor neurons.
In addition to Crane and Littleton, the study’s other authors are Michiko Inouye and Suresh Jetti.
The National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory provided support for the study.
What makes a good proton conductor?
A number of advanced energy technologies — including fuel cells, electrolyzers, and an emerging class of low-power electronics — use protons as the key charge carrier. Whether or not these devices will be widely adopted hinges, in part, on how efficiently they can move protons.
One class of materials known as metal oxides has shown promise in conducting protons at temperatures above 400 degrees Celsius. But researchers have struggled to find the best materials to increase the proton conductivity at lower temperatures and improve efficiency.
Now, MIT researchers have developed a physical model to predict proton mobility across a wide range of metal oxides. In a new paper, the researchers ranked the most important features of metal oxides for facilitating proton conduction, and demonstrated for the first time how much the flexibility of the materials’ oxide ions improves their ability to transfer protons.
The researchers believe their findings can guide scientists and engineers as they develop materials for more efficient energy technologies enabled by protons, which are lighter, smaller, and more abundant than more common charge carriers like lithium ions.
“If you understand the mechanism of a process and what material traits govern that mechanism, then you can tune those traits to improve the speed of that process — in this case, proton conduction,” says Bilge Yildiz, the Breen M. Kerr Professor in the departments of Nuclear Science and Engineering (NSE) and Materials Science and Engineering (DMSE) at MIT and the senior author of a paper describing the work. “For this application, we need to understand these quantitative relations between the proton transfer and the material’s structural, chemical, electronic, and dynamic traits. Establishing these relations can help us screen material databases to find compounds that satisfy those material traits, or even go beyond screening. There could be ways to use generative AI tools to create compounds that optimize for those traits.”
The paper appears in the journal Matter. Joining Yildiz are Heejung W. Chung, the paper’s first author and an MIT PhD student in DMSE; Pjotrs Žguns, a former postdoc in DMSE; and Ju Li, the Carl Richard Soderberg Professor of Power Engineering in NSE and DMSE.
Making protons hop
Protons are already used at scale in electrolyzers for hydrogen production and in fuel cells. They are also expected to be used in promising energy-storage technologies such as proton batteries, which could be water-based and rely on cheaper materials than lithium-ion batteries. A more recent and exciting application is low-energy, brain-inspired computing to emulate synaptic functions in devices for artificial intelligence.
“Proton conductors are important materials in different energy conversion technologies for clean electricity, clean fuels, and clean industrial chemical synthesis,” explains Yildiz. “Inorganic, scalable proton conductors that work at room temperature are also needed for energy-efficient brain-inspired computing.”
Protons, the positively charged nuclei of hydrogen atoms, are different from lithium or sodium ions because they don’t have their own electrons — a proton is just a bare nucleus. Protons therefore prefer to embed into the electron clouds of nearby ions, hopping from one to the next. In metal oxides, a proton embeds into an oxygen ion, forming a covalent bond, and then hops to a nearby oxygen ion through a hydrogen bond. After every hop, the covalent H-O bond rotates to prevent the proton from shuttling back and forth.
All that hopping and rotating got MIT’s researchers thinking that the flexibility of those oxide ion sublattices must be important for conducting protons. Indeed, their previous studies in another class of proton conductors had shown how lattice flexibility impacts proton transport.
For their study, the researchers created a metric to quantify lattice flexibility across materials that they call “O…O fluctuation,” which measures the change in spacing between oxygen ions contributed by phonons at finite temperature. They also created a dataset of other material features that influence proton mobility and set out to quantify how important each one is for facilitating proton conduction.
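As a rough illustration of what such a metric involves — this is not the authors’ code, and the trajectory format and neighbor list below are assumptions made for the example — one could estimate an O…O fluctuation-style quantity from oxygen positions sampled over a finite-temperature simulation:

```python
import numpy as np

def oo_fluctuation(oxygen_positions, neighbor_pairs):
    """Estimate an O...O fluctuation-style metric (illustrative sketch).

    oxygen_positions: array of shape (n_frames, n_oxygen, 3) with oxygen
        coordinates sampled over a finite-temperature trajectory.
    neighbor_pairs: list of (i, j) index pairs of neighboring oxygen ions.

    Returns the mean standard deviation of the O...O separation across the
    listed pairs; larger values indicate a more flexible oxide sublattice.
    """
    stds = []
    for i, j in neighbor_pairs:
        # Distance between the two oxygen ions in every frame
        d = np.linalg.norm(oxygen_positions[:, i] - oxygen_positions[:, j], axis=1)
        stds.append(d.std())
    return float(np.mean(stds))

# Toy example: 100 frames of 4 oxygen ions jittering around fixed sites
rng = np.random.default_rng(0)
sites = np.array([[0.0, 0.0, 0.0], [2.8, 0.0, 0.0], [0.0, 2.8, 0.0], [2.8, 2.8, 0.0]])
traj = sites + 0.05 * rng.normal(size=(100, 4, 3))
print(oo_fluctuation(traj, [(0, 1), (0, 2), (1, 3), (2, 3)]))
```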
“We were trying to better understand how protons move through these inorganic materials so that we can optimize them and improve the efficiency of downstream energy and computing applications,” Chung explains.
The researchers ranked the importance of all seven features they studied, which also included structural and chemical traits of materials, and trained a model on the findings to predict how well materials would conduct protons. The model found that the two most important features in predicting proton transfer barriers are the hydrogen bond length and the oxygen sublattice flexibility characterized by the O…O fluctuation metric. The shorter the hydrogen bond length, the better the material was at transporting protons, which aligned with previous studies of metal oxides. The O…O fluctuation metric was the newly introduced feature, and it ranked second in importance: the more flexible the oxygen ion chains, the better the proton conduction.
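To make that kind of analysis concrete, here is a minimal sketch, with entirely synthetic data and placeholder descriptor names, of how material features could be ranked by their importance for a predicted proton-transfer barrier; it is not the study’s actual model, dataset, or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical dataset: one row per material, columns are candidate descriptors
feature_names = ["H_bond_length", "OO_fluctuation", "O_charge", "lattice_const"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, len(feature_names)))
# Synthetic "barrier" dominated by the first two descriptors, plus noise
y = 0.6 * X[:, 0] - 0.4 * X[:, 1] + 0.05 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Rank descriptors by how much they matter for predicting the barrier
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name:16s} {score:.2f}")
```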
Better proton conductors
The researchers believe their model could be used to estimate proton conduction across a broader range of materials.
“We always have to be cautious about generalizing findings, but the local chemistries and structures we studied have a wide enough spectrum that we think this finding is broadly applicable to a range of inorganic proton conductors,” Yildiz says.
Beyond being used to screen for promising materials, the researchers say their findings could also be used to train generative AI models to create materials optimized for proton transfer. As our understanding of materials improves, that could enable a new class of hyper-efficient clean energy technologies.
“There are very large materials databases generated recently in the field, for example those by Google and Microsoft, that could be screened for these relations we’ve found,” Yildiz says. “If the material compound that satisfies these parameters does not exist, we could also use these parameters to generate new compounds. That would enable increases in the energy efficiency and viability of clean energy conversion and low-power computing devices. For that, we need to figure out how to get more flexible oxide ion sublattices that are percolated. What are the composition and structure metrics that I can use to design the material to have that flexibility? Those are the next steps.”
The research was supported by the U.S. Department of Energy’s Energy Frontier Research Center – Hydrogen in Energy and Information Sciences – and the National Science Foundation’s Graduate Research Fellowship Program.
Deep-learning model predicts how fruit flies form, cell by cell
During early development, tissues and organs begin to bloom through the shifting, splitting, and growing of many thousands of cells.
A team of MIT engineers has now developed a way to predict, minute by minute, how individual cells will fold, divide, and rearrange during a fruit fly’s earliest stage of growth. The new method may one day be applied to predict the development of more complex tissues, organs, and organisms. It could also help scientists identify cell patterns that correspond to early-onset diseases, such as asthma and cancer.
In a study appearing today in the journal Nature Methods, the team presents a new deep-learning model that learns, then predicts, how certain geometric properties of individual cells will change as a fruit fly develops. The model records and tracks properties such as a cell’s position, and whether it is touching a neighboring cell at a given moment.
The team applied the model to videos of developing fruit fly embryos, each of which starts as a cluster of about 5,000 cells. They found the model could predict, with 90 percent accuracy, how each of the 5,000 cells would fold, shift, and rearrange, minute by minute, during the first hour of development, as the embryo morphs from a smooth, uniform shape into more defined structures and features.
“This very initial phase is known as gastrulation, which takes place over roughly one hour, when individual cells are rearranging on a time scale of minutes,” says study author Ming Guo, associate professor of mechanical engineering at MIT. “By accurately modeling this early period, we can start to uncover how local cell interactions give rise to global tissues and organisms.”
The researchers hope to apply the model to predict cell-by-cell development in other species, such as zebrafish and mice. Then, they can begin to identify patterns that are common across species. The team also envisions that the method could be used to discern early patterns of disease, such as in asthma. Lung tissue in people with asthma looks markedly different from healthy lung tissue. How asthma-prone tissue initially develops is an unknown process that the team’s new method could potentially reveal.
“Asthmatic tissues show different cell dynamics when imaged live,” says co-author and MIT graduate student Haiqian Yang. “We envision that our model could capture these subtle dynamical differences and provide a more comprehensive representation of tissue behavior, potentially improving diagnostics or drug-screening assays.”
The study’s co-authors are Markus Buehler, the McAfee Professor of Engineering in MIT’s Department of Civil and Environmental Engineering; George Roy and Tomer Stern of the University of Michigan; and Anh Nguyen and Dapeng Bi of Northeastern University.
Points and foams
Scientists typically model how an embryo develops in one of two ways: as a point cloud, where each point represents an individual cell that moves over time; or as a “foam,” which represents individual cells as bubbles that shift and slide against each other, similar to the bubbles in shaving foam.
Rather than choose between the two approaches, Guo and Yang embraced both.
“There’s a debate about whether to model as a point cloud or a foam,” Yang says. “But both of them are essentially different ways of modeling the same underlying graph, which is an elegant way to represent living tissues. By combining these as one graph, we can highlight more structural information, like how cells are connected to each other as they rearrange over time.”
At the heart of the new model is a “dual-graph” structure that represents a developing embryo as both moving points and bubbles. Through this dual representation, the researchers hoped to capture more detailed geometric properties of individual cells, such as the location of a cell’s nucleus, whether a cell is touching a neighboring cell, and whether it is folding or dividing at a given moment in time.
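A minimal sketch of what such a dual representation might look like in code is shown below, assuming a generic graph library and made-up cell identifiers; the study’s actual data structures are more elaborate, and this is only meant to illustrate how point-like and foam-like information can live in one graph.

```python
import networkx as nx

def build_cell_graph(nuclei, adjacency):
    """Toy dual-style representation of an embryo at a single time point.

    nuclei: dict mapping cell id -> (x, y, z) nucleus position (point-cloud view)
    adjacency: iterable of (cell_a, cell_b) pairs that share an edge (foam view)
    """
    g = nx.Graph()
    for cell_id, pos in nuclei.items():
        g.add_node(cell_id, nucleus=pos)       # point-like attribute per cell
    for a, b in adjacency:
        g.add_edge(a, b, in_contact=True)      # foam-like connectivity between cells
    return g

frame = build_cell_graph(
    nuclei={0: (0.0, 0.0, 0.0), 1: (1.0, 0.1, 0.0), 2: (0.4, 0.9, 0.0)},
    adjacency=[(0, 1), (1, 2)],
)
print(frame.nodes[0]["nucleus"], list(frame.edges))
```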
As a proof of principle, the team trained the new model to “learn” how individual cells change over time during fruit fly gastrulation.
“The overall shape of the fruit fly at this stage is roughly an ellipsoid, but there are gigantic dynamics going on at the surface during gastrulation,” Guo says. “It goes from entirely smooth to forming a number of folds at different angles. And we want to predict all of those dynamics, moment to moment, and cell by cell.”
Where and when
For their new study, the researchers applied the new model to high-quality videos of fruit fly gastrulation taken by their collaborators at the University of Michigan. The videos are one-hour recordings of developing fruit flies, taken at single-cell resolution. What’s more, the videos contain labels of individual cells’ edges and nuclei — data that are incredibly detailed and difficult to come by.
“These videos are of extremely high quality,” Yang says. “This data is very rare, where you get submicron resolution of the whole 3D volume at a pretty fast frame rate.”
The team trained the new model with data from three of four fruit fly embryo videos, such that the model might “learn” how individual cells interact and change as an embryo develops. They then tested the model on an entirely new fruit fly video, and found that it was able to predict with high accuracy how most of the embryo’s 5,000 cells changed from minute to minute.
Specifically, the model could predict properties of individual cells, such as whether they will fold, divide, or continue sharing an edge with a neighboring cell, with about 90 percent accuracy.
“We end up predicting not only whether these things will happen, but also when,” Guo says. “For instance, will this cell detach from this cell seven minutes from now, or eight? We can tell when that will happen.”
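As a back-of-the-envelope illustration of how that kind of per-cell, per-minute score can be tallied, here is a small sketch using synthetic labels rather than the study’s data; the three label classes and the roughly 10 percent error rate are arbitrary choices for the example.

```python
import numpy as np

def per_cell_accuracy(predicted, observed):
    """Fraction of (cell, minute) entries where the predicted label
    (e.g., fold / divide / keep edge) matches what actually happened.

    predicted, observed: integer arrays of shape (n_cells, n_minutes).
    """
    predicted = np.asarray(predicted)
    observed = np.asarray(observed)
    return float((predicted == observed).mean())

# Toy check: 5,000 cells tracked over 60 minutes with mostly correct labels
rng = np.random.default_rng(2)
truth = rng.integers(0, 3, size=(5000, 60))
guess = truth.copy()
flip = rng.random(truth.shape) < 0.1           # corrupt roughly 10% of entries
guess[flip] = (guess[flip] + 1) % 3
print(per_cell_accuracy(guess, truth))         # prints a value near 0.90
```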
The team believes that, in principle, the new model and the dual-graph approach should be able to predict the cell-by-cell development of other multicellular systems, such as more complex species, and even some human tissues and organs. The limiting factor is the availability of high-quality video data.
“From the model perspective, I think it’s ready,” Guo says. “The real bottleneck is the data. If we have good quality data of specific tissues, the model could be directly applied to predict the development of many more structures.”
This work is supported, in part, by the U.S. National Institutes of Health.
Fraternities and sororities at MIT raise funds for local charities
Throughout campus and across the river in Boston and Brookline, MIT hosts a vibrant network of 43 fraternities and sororities, with more than 35 percent of undergraduate students belonging to one of these value-based communities. Each fraternity and sorority is a unique community that not only fosters leadership and builds lifelong friendships, but also takes its role in giving back seriously.
Keeping up a 143-year-long tradition of philanthropy, several fraternities and sororities raised funds for a variety of local charities this fall, including the Breast Cancer Research Foundation, Boston Area Rape Crisis Center, and Dignity Matters of Boston.
With donations still coming in, Liz Jason, associate dean of Fraternities, Sororities and Independent Living Groups (FSILG) at MIT, says, “Philanthropy is a defining tradition within our FSILG community; it’s where values become action. When chapters give back, they strengthen their bonds, uplift others, and demonstrate what it truly means to be part of MIT: using talent, passion, and collective effort to make a real difference.”
To raise money, the fraternities and sororities hosted a variety of fun, clever, and even unique events and challenges over the course of the fall semester.
Sorority Alpha Chi Omega held an event called Walk a Mile in Her Shoes, where participants donned heels for a relay race-style event to raise awareness of gender stereotypes, domestic violence, and sexual assault. They also held a bake sale at the event, with funds going to the Boston Area Rape Crisis Center.
The Interfraternity Council (IFC) hosted a Greek Carnival on Kresge Oval in October to benefit the Boston Area Rape Crisis Center and to raise awareness about sexual violence. They held a variety of games and activities, including a dunk tank, a bake sale, a tug-of-war competition, and other field-day games.
“In my own chapter, Delta Tau Delta, I’ve seen an interest in increasing our philanthropic efforts, and as a member of the IFC Executive Board, I realized we could take the initiative to reduce barriers to entry for all chapters through a single large fundraising event,” says senior Luc Gaitskell.
In mid-November, the MIT Panhellenic Association created an event in which members of the community donated clothing, and then Panhel used the clothing to set up a one-time thrift shop where community members could come buy second-hand clothes at discounted prices. All the money raised was donated to Dignity Matters.
“Service has always been at the heart of what MIT Panhel does,” says senior Sabrina Chen. “We chose to partner with Dignity Matters because their mission of helping individuals stay healthy and regain self-confidence resonates with our commitment to supporting women and advancing equity. Our thrift shop was a perfect way to raise money for the organization while encouraging affordable, sustainable fashion.”
Division of Student Life vice chancellor Suzy Nelson explains, “Our students are committed to a range of causes; their dedication reflects not only their generosity, but also the spirit of engaging the MIT community in giving back through philanthropy.”
Students interested in joining a fraternity, sorority, or an independent living group can find more information on the Division of Student Life website.
MIT HEALS leadership charts a bold path for convergence in health and life sciences
In February, President Sally Kornbluth announced the appointment of Professor Angela Koehler as faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS), with professors Iain Cheeseman and Katharina Ribbeck as associate directors. Since then, the leadership team has moved quickly to shape HEALS into an ambitious, community-wide platform for catalyzing research, translation, and education at MIT and beyond — at a moment when advances in computation, biology, and engineering are redefining what’s possible in health and the life sciences.
Rooted in MIT’s long-standing strengths in foundational discovery, convergence, and translational science, HEALS is designed to foster connections across disciplines — linking life scientists and engineers with clinicians, computational scientists, humanists, operations researchers, and designers. The initiative builds on a simple premise: that solving today’s most pressing challenges in health and life sciences requires bold thinking, deep collaboration, and sustained investment in people.
“HEALS is an opportunity to rethink how we support talent, unlock scientific ideas, and translate them into impact,” says Koehler, the Charles W. and Jennifer C. Johnson Professor in the Department of Biological Engineering and associate director of the Koch Institute for Integrative Cancer Research. “We’re building on MIT’s best traditions — convergence, experimentation, and entrepreneurship — while opening new channels for interdisciplinary research and community building.”
Koehler says her own path has been shaped by that same belief in convergence. Early collaborations between chemists, engineers, and clinicians convinced her that bringing diverse people together — what she calls “induced proximity” — can spark discoveries that wouldn’t emerge in isolation.
A culture of connection
Since stepping into their roles, the HEALS leadership team has focused on building a collaborative ecosystem that enables researchers to take on bold, interdisciplinary challenges in health and life sciences. Rather than creating a new center or department, their approach emphasizes connecting the MIT community across existing boundaries — disciplinary, institutional, and cultural.
“We want to fund science that wouldn’t otherwise happen — projects that bridge gaps, open new doors, and bring researchers together in ways that are genuinely constructive and collaborative,” says Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology, core member of the Whitehead Institute for Biomedical Research, and associate head of the Department of Biology.
That vision is already taking shape through initiatives like the MIT HEALS seed grants, which support bold new collaborations between MIT principal investigators; the MIT–Mass General Brigham Seed Program, which supports joint research between investigators at MIT and clinicians at MGB; and the Biswas Postdoctoral Fellowship Program, designed to bring top early-career researchers to MIT to pursue cross-cutting work in areas such as computational biology, biomedical engineering, and therapeutic discovery.
The leadership team sees these programs not as endpoints, but as starting points for a broader shift in how MIT supports health and life sciences research.
For Cheeseman, whose lab is working to build on its fundamental discoveries about how human cells function in order to impact cancer treatment and rare human disease, HEALS represents a way to connect deep biological discovery with the translational insights emerging from MIT’s engineering and clinical communities. He puts it simply: “To me, this is deeply personal, recognizing the limitations that existed for my own work and hoping to unlock these possibilities for researchers across MIT.”
Training the next generation
Ribbeck, a biologist focused on mucus and microbial ecosystems, sees HEALS as a way to train scientists who are as comfortable discussing patient needs as they are conducting experiments at the bench. She emphasizes that preparing the next generation of researchers means equipping them with fluency in areas like clinical language, regulatory processes, and translational pathways — skills many current investigators lack. “Many PIs, although they do clinical research, may not have dedicated support for taking their findings to the next level — how to design a clinical trial, or what regulatory questions need to be addressed — reflecting a broader structural gap in translational training,” she says.
A central focus for the HEALS leadership team is building new models for training researchers to move fluidly between disciplines, institutions, and methods of translation. Ribbeck and Koehler stress the importance of giving students and postdocs hands-on opportunities that connect research with real-world experience. That means expanding programs like the Undergraduate Research Opportunities Program (UROP), the Advanced UROP (SuperUROP), and the MIT New Engineering Education Transformation, and creating new ways for trainees to engage with industry, clinical partners, and entrepreneurship. Trainees are learning at the intersection of engineering, biology, and medicine — and increasingly across disciplines that span economics, design, the social sciences, and the humanities, where students are already creating collaborations that do not yet have formal pathways.
Koehler, drawing from her leadership at the Deshpande Center for Technological Innovation and the Koch Institute, notes that “if we invest in the people, the solutions to problems will naturally arise.” She envisions HEALS as a platform for induced proximity — not just of disciplines, but of people at different career stages, working together in environments that support both risk-taking and mentorship.
“For me, HEALS builds on what I’ve seen work at MIT — bringing people with different skill sets together to tackle challenges in life sciences and medicine,” she says. “It’s about putting community first and empowering the next generation to lead across disciplines.”
A platform for impact
Looking ahead, the HEALS leadership team envisions the collaborative as a durable platform for advancing health and life sciences at MIT. That includes launching flagship events, supporting high-risk, high-reward ideas, and developing partnerships across the biomedical ecosystem in Boston and beyond. As they see it, MIT is uniquely positioned for this moment: More than three-quarters of the Institute’s faculty work in areas that touch health and life sciences, giving HEALS a rare opportunity to bring that breadth together in new configurations and amplify impact across disciplines.
From the earliest conversations, the leaders have heard a clear message from faculty across MIT — a strong appetite for deeper connection, for working across boundaries, and for tackling urgent societal challenges together. That shared sense of momentum is what gave rise to HEALS, and it now drives the team’s focus on building the structures that can support a community that wants to collaborate at scale.
“Faculty across MIT are already reaching out — looking to connect with clinics, collaborate on new challenges, and co-create solutions,” says Koehler. “That hunger for connection is why HEALS was created. Now we have to build the structures that support it.”
Cheeseman adds that this collaborative model is what makes MIT uniquely positioned to lead. “When you bring together people from different fields who are motivated by impact,” he says, “you create the conditions for discoveries that none of us could achieve alone.”
Enabling small language models to solve complex reasoning tasks
As language models (LMs) improve at tasks like image generation, trivia questions, and simple math, you might think that human-like reasoning is around the corner. In reality, they still trail us by a wide margin on complex tasks. Try playing Sudoku with one, for instance, where you fill in numbers one through nine in such a way that each appears only once across the columns, rows, and sections of a nine-by-nine grid. Your AI opponent will either fail to fill in boxes on its own or do so inefficiently, although it can verify if you’ve filled yours out correctly.
Whether an LM is trying to solve advanced puzzles, design molecules, or write math proofs, the system struggles to answer open-ended requests that have strict rules to follow. The model is better at telling users how to approach these challenges than attempting them itself. Moreover, hands-on problem-solving requires LMs to consider a wide range of options while following constraints. Small LMs can’t do this reliably on their own; large language models (LLMs) sometimes can, particularly if they’re optimized for reasoning tasks, but they take a while to respond, and they use a lot of computing power.
This predicament led researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to develop a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries.
The inner workings of DisCIPL are much like contracting a company for a particular job. You provide a “boss” model with a request, and it carefully considers how to go about doing that project. Then, the LLM relays these instructions and guidelines in a clear way to smaller models. It corrects follower LMs’ outputs where needed — for example, replacing one model’s phrasing that doesn’t fit in a poem with a better option from another.
The LLM communicates with its followers using a language they all understand — that is, a programming language for controlling LMs called “LLaMPPL.” Developed by MIT's Probabilistic Computing Project in 2023, this language allows users to encode specific rules that steer a model toward a desired result. For example, LLaMPPL can be used to produce error-free code by incorporating the rules of a particular programming language within its instructions. Directions like “write eight lines of poetry where each line has exactly eight words” are encoded in LLaMPPL, cueing smaller models to contribute to different parts of the answer.
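The snippet below is a generic illustration of that idea — encoding a rule that is checked as text is generated, with candidate words resampled whenever the rule would be broken. It uses a stand-in “follower” that samples from a tiny vocabulary and an arbitrary rule of our own invention; it does not reproduce the actual LLaMPPL interface.

```python
import random

def generate_with_rule(propose, check, length, max_resamples=100):
    """Minimal sketch of rule-guided decoding (illustrative only; this is
    not the actual LLaMPPL API). At each step, candidate words from a small
    'follower' model are resampled until the partial output still satisfies
    the encoded rule."""
    out = []
    while len(out) < length:
        for _ in range(max_resamples):
            candidate = out + [propose(out)]
            if check(candidate):          # prune continuations that break the rule
                out = candidate
                break
        else:
            raise RuntimeError("could not satisfy the rule within the budget")
    return " ".join(out)

def rule(words):
    # Example rule: words must alternate between short (<= 4 letters) and longer ones
    return all(len(w) <= 4 if i % 2 == 0 else len(w) > 4 for i, w in enumerate(words))

vocab = ["sun", "river", "cold", "morning", "sea", "quiet", "low", "golden"]
print(generate_with_rule(lambda prefix: random.choice(vocab), rule, length=8))
```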
MIT PhD student Gabriel Grand, who is the lead author on a paper presenting this work, says that DisCIPL allows LMs to guide each other toward the best responses, which improves their overall efficiency. “We’re working toward improving LMs’ inference efficiency, particularly on the many modern applications of these models that involve generating outputs subject to constraints,” adds Grand, who is also a CSAIL researcher. “Language models are consuming more energy as people use them more, which means we need models that can provide accurate answers while using minimal computing power.”
“It's really exciting to see new alternatives to standard language model inference,” says University of California at Berkeley Assistant Professor Alane Suhr, who wasn’t involved in the research. “This work invites new approaches to language modeling and LLMs that significantly reduce inference latency via parallelization, require significantly fewer parameters than current LLMs, and even improve task performance over standard serialized inference. The work also presents opportunities to explore transparency, interpretability, and controllability of model outputs, which is still a huge open problem in the deployment of these technologies.”
An underdog story
You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results.
The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. It brainstormed a plan for several “Llama-3.2-1B” models (smaller systems developed by Meta), in which those LMs filled in each word (or token) of the response.
This collective approach competed against three comparable ones: a follower-only baseline powered by Llama-3.2-1B, GPT-4o working on its own, and the industry-leading o1 reasoning system that helps ChatGPT figure out more complex questions, such as coding requests and math problems.
DisCIPL first demonstrated an ability to write sentences and paragraphs that follow explicit rules. The models were given very specific prompts — for example, writing a sentence that has exactly 18 words, where the fourth word must be “Glasgow,” the eighth must be “in,” and the 11th must be “and.” The system was remarkably adept at handling this request, crafting coherent outputs with accuracy similar to o1.
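That kind of constraint can be written down as a simple checker, sketched below; the example sentence is ours, not one from the paper, and the punctuation handling is a simplifying assumption.

```python
def satisfies_prompt(sentence: str) -> bool:
    """Check the example constraint from the writing benchmark:
    exactly 18 words, with 'Glasgow' 4th, 'in' 8th, and 'and' 11th
    (word positions counted from 1)."""
    words = sentence.split()
    return (len(words) == 18
            and words[3].strip(".,") == "Glasgow"
            and words[7].strip(".,") == "in"
            and words[10].strip(".,") == "and")

example = ("We finally reached Glasgow after a drive in heavy rain and "
           "found the old hotel still open late")
print(satisfies_prompt(example))   # True
```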
Faster, cheaper, better
This experiment also revealed that key components of DisCIPL were much cheaper than state-of-the-art systems. For instance, whereas existing reasoning models like OpenAI’s o1 perform reasoning in text, DisCIPL “reasons” by writing Python code, which is more compact. In practice, the researchers found that DisCIPL led to 40.1 percent shorter reasoning and 80.2 percent cost savings over o1.
DisCIPL’s efficiency gains stem partly from using small Llama models as followers, which are 1,000 to 10,000 times cheaper per token than comparable reasoning models. This means that DisCIPL is more “scalable” — the researchers were able to run dozens of Llama models in parallel for a fraction of the cost.
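A hypothetical back-of-the-envelope calculation shows how a short code-based plan plus cheap follower tokens can combine into savings of roughly the reported scale; every token count and per-token price below is made up purely for illustration and is not from the paper or any provider’s price list.

```python
# Back-of-the-envelope comparison (all prices and token counts are hypothetical).
reasoner_tokens, reasoner_price = 2000, 60.0 / 1e6    # long reasoning trace, $/token
planner_tokens, planner_price = 400, 60.0 / 1e6       # short code plan from the leader
follower_tokens, follower_price = 1500, 0.06 / 1e6    # parallel small-model tokens

baseline = reasoner_tokens * reasoner_price
discipl = planner_tokens * planner_price + follower_tokens * follower_price
print(f"baseline ${baseline:.4f}  vs  planner+followers ${discipl:.4f} "
      f"({100 * (1 - discipl / baseline):.0f}% cheaper)")
```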
Those weren’t the only surprising findings, according to CSAIL researchers. Their system also performed well against o1 on real-world tasks, such as making ingredient lists, planning out a travel itinerary, and writing grant proposals with word limits. Meanwhile, GPT-4o struggled with these requests, and on the writing tests it often couldn’t place keywords in the correct parts of sentences. The follower-only baseline essentially finished in last place across the board, as it had difficulty following instructions.
“Over the last several years, we’ve seen some impressive results from approaches that use language models to ‘auto-formalize’ problems in math and robotics by representing them with code,” says senior author Jacob Andreas, who is an MIT electrical engineering and computer science associate professor and CSAIL principal investigator. “What I find most exciting about this paper is the fact that we can now use LMs to auto-formalize text generation itself, enabling the same kinds of efficiency gains and guarantees that we’ve seen in these other domains.”
In the future, the researchers plan on expanding this framework into a more fully-recursive approach, where you can use the same model as both the leader and followers. Grand adds that DisCIPL could be extended to mathematical reasoning tasks, where answers are harder to verify. They also intend to test the system on its ability to meet users’ fuzzy preferences, as opposed to following hard constraints, which can’t be outlined in code so explicitly. Thinking even bigger, the team hopes to use the largest possible models available, although they note that such experiments are computationally expensive.
Grand and Andreas wrote the paper alongside CSAIL principal investigator and MIT Professor Joshua Tenenbaum, as well as MIT Department of Brain and Cognitive Sciences Principal Research Scientist Vikash Mansinghka and Yale University Assistant Professor Alex Lew SM ’20 PhD ’25. CSAIL researchers presented the work at the Conference on Language Modeling in October and IVADO’s “Deploying Autonomous Agents: Lessons, Risks and Real-World Impact” workshop in November.
Their work was supported, in part, by the MIT Quest for Intelligence, Siegel Family Foundation, the MIT-IBM Watson AI Lab, a Sloan Research Fellowship, Intel, the Air Force Office of Scientific Research, the Defense Advanced Research Projects Agency, the Office of Naval Research, and the National Science Foundation.
New MIT program to train military leaders for the AI age
Artificial intelligence can enhance decision-making and enable action with reduced risk and greater precision, making it a critical tool for national security. A new program offered jointly by the MIT departments of Mechanical Engineering (Course 2, MechE) and Electrical Engineering and Computer Science (Course 6, EECS) will provide breadth and depth in technical studies for naval officers, as well as a path for non-naval officers studying at MIT, to grow in their understanding of applied AI for naval and military applications.
“The potential for artificial intelligence is just starting to be fully realized. It’s a tool that dramatically improves speed, efficiency, and decision-making with countless applications,” says Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering. “AI is a force multiplier that can be used for data processing, decision support, unmanned and autonomous systems, cyber defense, logistics and supply chains, energy management, and many other fields.”
The program, called “2N6: Applied Artificial Intelligence Program for Naval Officers,” comprises a two-year master of science degree in mechanical engineering with an accompanying AI certificate awarded by the MIT Schwarzman College of Computing.
“The officers entering this program will learn from the world’s experts, and conduct cutting-edge relevant research, and will exit the program best prepared for their roles as leaders across the U.S. naval enterprise,” says MacLean.
The 2N6 curriculum is application focused, and the content is built to satisfy the U.S. Navy’s sub-specialty code for Applied Artificial Intelligence. Students will learn core AI concepts, as well as applications to special topics, such as decision-making for computational exercises; AI for manufacturing and design, with special emphasis on navy applications; and AI for marine autonomy of surface and underwater vehicles.
“The expanding influence of artificial intelligence is redefining our approach to problem-solving. AI holds the potential to address some of the most pressing issues in nearly every field,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I’m honored that the college can contribute to and support such a vital program that will equip our nation’s naval officers with the technical expertise they need for mission-relevant challenges.”
MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The 2N program will celebrate its 125th year at MIT in 2026.
“In MechE, we are embracing the use of AI to explore new frontiers in research and education, with deep grounding in the fundamentals, design, and scaling of physical systems,” says John Hart, the Class of 1922 Professor and head of MechE. “With the 2N6 program, we’re proud to be at the helm of such an important charge in training the next generation of leaders for the Navy.”
“Breakthroughs in artificial intelligence are reshaping society and advancing human decision-making and creativity,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, head of EECS, and MathWorks Professor. “We are delighted to partner with the Department of Mechanical Engineering in launching this important collaboration with the U.S. Navy. The program will explore not only the forefront of AI advances, but also its effective application in Navy operations.”
2N6 was created following a visit to campus from Admiral Samuel Paparo, commander of the U.S. Indo-Pacific Command, with MIT Provost Anantha Chandrakasan, who was then dean of engineering and chief innovation and strategy officer.
“[Admiral Paparo] was given an overview of some of the cutting-edge work and research that MIT has done and is doing in the field of AI, [and was introduced to the 2N program],” says MacLean. “The admiral made the connection, envisioning an applied AI program similar to 2N.”
2N6 will run as a pilot program for at least two years. The program’s first cohort will comprise only U.S. Navy officers, with plans to expand more broadly.
“We are thrilled to build on the long-standing relationship between MIT and the U.S. Navy with this new program,” says Themis Sapsis, William I. Koch Professor in mechanical engineering and the director of the Center for Ocean Engineering at MIT. “It is specifically designed to train naval officers on the fundamentals and applications of AI, but also involve them in research that has direct impact to the Navy. We believe that 2N6 can model a new paradigm for advanced AI education focused more broadly on supporting national security.”
A better DNA material for genetic medicine
To our immune system, a potentially lifesaving gene therapy can look a lot like a dangerous infection. That’s because most genetic medicine uses viruses or double-stranded DNA to deliver genetic information to target cells. DNA in its traditional double helix form can lead to toxic immune stimulation and be difficult to package into cellular delivery vehicles. As a result, the reach of genetic medicine is limited today.
Kano Therapeutics is taking a different approach to genetic therapies. The company is developing gene-editing technologies using circular single-stranded DNA (cssDNA), a biomolecule that is less toxic than double-stranded DNA and more stable than RNA, and could be delivered more efficiently to many parts of the body to treat genetic diseases, cancers, and more.
The company, which was founded by former MIT postdoc Floris Engelhardt, professor of biological engineering Mark Bathe, and John Vroom MBA ’22, is developing a platform for manufacturing cssDNA of customized lengths and sequences, which could deliver genetic material to fix or replace faulty genes.
“We can work with CRISPR and other gene-editing technologies,” Engelhardt says. “CRISPR finds a location in a genome, binds to it, and cuts at that location. That allows you to edit a gene or stop a gene from functioning. But what if you have a loss-of-function disease where you need to insert a new piece of genetic code? Our approach allows you to replace whole genes or add genetic information.”
Making DNA flexible
Around 2019, Bathe’s lab published research describing ways to engineer the sequence and length of cssDNA molecules, which have been used in labs for decades but have increasingly drawn interest for improving gene therapies. Several pharmaceutical companies immediately reached out.
“Single-stranded DNA is a little like messenger RNA, which can code for any protein in any cell, tumor, or organ,” Bathe says. “It fundamentally encodes for a protein, so it can be used across diseases, including rare diseases that may only affect a few people in the country.”
Engelhardt had also worked on cssDNA as a PhD student in Munich. She met Bathe at a conference.
“We were considering collaborating on research,” Engelhardt recalls. “Then Mark heard I was finishing my PhD and said, ‘Wait a minute. Instead of collaborating, I should hire you.’”
Within 48 hours of submitting her PhD thesis, Engelhardt received an email asking her to apply to Bathe’s lab as a postdoc. She was drawn to the position because she would be focusing on research that had the potential to help patients.
“MIT is very good at creating industry-focused postdocs,” Engelhardt says. “I was inspired by the idea of doing postdoc work with the goal of spinning out a company, as opposed to doing solely academic-focused research.”
Bathe and Engelhardt learned from members of the pharmaceutical industry how single-stranded DNA could help overcome limitations in gene and cell therapies. Although CRISPR-based treatments have recently been approved for a few genetic diseases, CRISPR’s effectiveness has been limited by its potential toxicity and inefficient delivery to specific sites in the body. Also, those treatments can only be administered once because CRISPR often gets labeled as foreign by our immune systems and rejected from the body.
Engelhardt began exploring MIT’s resources to help commercialize her research. She met Vroom through an online “founder speed dating” event at MIT. She also received support from the Venture Mentoring Service, took classes at MIT’s Sloan School of Management, and worked with MIT’s Industrial Liaison Program. Early on, Bathe suggested Engelhardt work with MIT’s Technology License Office, something she says she tells every founder to do the moment they start thinking about commercializing their research.
In 2021, Kano won the $20,000 first place prize at the MIT Sloan Healthcare Innovation Prize (SHIP) to commercialize a new way to design and manufacture single-stranded DNA. Kano uses fermentation to produce its cssDNA less expensively than approaches based on chemical DNA synthesis.
“No one had the ability to access this type of genetic material, and so a lot of our work was around creating the highest-quality, economically scalable process to allow circular single-stranded DNA to be commercially viable,” Engelhardt says.
Engelhardt and Vroom began meeting with investors as soon as Engelhardt finished her postdoc work in 2021. The founders worked to raise money over the next year while Vroom finished his MBA.
Today, Kano’s circular ssDNA can be used to insert entire genes, up to 10,000 nucleotides long, into the body. Kano is planning to partner with pharmaceutical companies to make their gene therapies more targeted and potent. For instance, pharmaceutical partners could use Kano’s platform to join the CD19 and CD20 genes, which are expressed in certain tumor cells, and stipulate that only if both genes bind to a cell receptor do they enter that cell’s genome and make edits.
Overall, Engelhardt says working with circular single-stranded DNA makes Kano’s approach more flexible than platforms like CRISPR.
“We realized working with pharmaceutical companies early on in my postdoc there was a lack of design understanding because of the lack of access to these molecules,” Engelhardt says. “When it comes to gene or cell therapies, people just think of the gene itself, not the flanking sequences or anything else that goes around the gene. Now that the DNA isn’t stuck in a double helix all the time, I can create small, three-dimensional structures — think loops or hairpins — that work, for example, as a binding protein that pulls it into the nucleus. That unlocks a completely new path for DNA because it makes it engineerable — not only on a structural level but also a sequence level.”
Partnering for impact
To facilitate more partnerships, Kano is signing agreements with partners that give it a smaller percentage of eventual drug royalties but allow it to work with many companies at the same time. In a recent collaboration with Merck KGaA, Kano combined its cssDNA platform with the company’s lipid nanoparticle solutions for delivering gene therapies. Kano is also in discussions with other large pharmaceutical companies to jointly bring cancer drugs into the clinic over the next two years.
“That’s exciting because we’ll be implementing our DNA into partners’ drug system, so when they file their new drug and dose their first patients, our DNA is going to be the therapeutic information carrier for efficacy,” Engelhardt says. “As a first-time founder, this is where you want to go. We talk about patient impact all the time, and this is how we’re going to get it.”
Kano is also developing the first databank mapping cssDNA designs to activity, to speed up the development of new treatments.
“Right now, there is no understanding of how to design DNA for these therapies,” Engelhardt says. “Everyone who wants to differentiate needs to come up with a new editing tool, a new delivery tool, and there’s no connecting company that can enable those areas of expertise. When partners come to us, we can say, ‘The gene sequence is all yours.’ But often it’s not just about the sequence. It’s also about the promoter or flanking sequence that allows you to insert your DNA into the genome, or that makes DNA package well into your delivery nanoparticle. At Kano, we’re building the best knowledgebase to use DNA material to treat diseases.”
Making clean energy investments more successful
Governments and companies constantly face decisions about how to allocate finite amounts of money to clean energy technologies that can make a difference to the world’s climate, its economies, and to society as a whole. The process is inherently uncertain, but research has shown that it is possible to predict which technologies are likely to be most successful. Grounding such choices in data can lead to better-informed decisions that are more likely to produce the desired results.
The role of these predictive tools, and the areas where further research is needed, are addressed in a perspective article published Nov. 24 in Nature Energy, by professor Jessika Trancik of MIT’s Sociotechnical Systems Research Center and Institute of Data, Systems, and Society and 13 co-authors from institutions around the world.
She and her co-authors span engineering and social science and share “a common interest in understanding how to best use data and models to inform decisions that influence how technology evolves,” Trancik says. They are interested in “analyzing many evolving technologies — rather than focusing on developing only one particular technology — to understand which ones can deliver.” Their paper is aimed at companies and governments, as well as researchers. “Increasingly, companies have as much agency as governments over these technology portfolio decisions,” she says, “although government policy can still do a lot because it can provide a sort of signal across the market.”
The study looked at three stages of the process, starting with forecasting the actual technological changes that are likely to play important roles in coming years, then looking at how those changes could affect economic, social, and environmental conditions, and finally, how to apply these insights into the actual decision-making processes as they occur.
Forecasting usually falls into one of two categories, data-driven or expert-driven, or combines the two. That provides an estimate of how technologies may be improving, as well as an estimate of the uncertainties in those predictions. Then in the next step, a variety of models are applied that are “very wide ranging,” Trancik says, “different models that cover energy systems, transportation systems, electricity, and also integrated assessment models that look at the impact of technology on the environment and on the economy.”
And then, the third step is “finding structured ways to use the information from predictive models to interact with people that may be using that information to inform their decision-making process,” she says. “In all three of these steps, you need to recognize the vast uncertainty and tease out the predictive aspects. How you deal with uncertainty is really important.”
In the implementation of these decisions, “people may have different objectives, or they may have the same objective but different beliefs about how to get there. And so, part of the research is bringing in this quantitative analysis, these research results, into that process,” Trancik says. And a very important aspect of that third step, she adds, is “recognizing that it’s not just about presenting the model results and saying, ‘here you go, this is the right answer.’ Rather, you have to bring people into the process of designing the studies and interacting with the modeling results.”
She adds that “the role of research is to provide information to, in this case, the decision-making processes. It’s not the role of the researchers to push for one outcome or another, in terms of balancing the trade-offs,” such as between economic, environmental, and social equity concerns. It’s about providing information, not just for the decision-makers themselves, but also for the public who may influence those decisions. “I do think it’s relevant for the public to think about this, and to think about the agency that actually they could have over how technology is evolving.”
In the study, the team highlighted priorities for further research that needs to be done. Those priorities, Trancik says, include “streamlining and validating models, and also streamlining data collection,” because these days “we often have more data than we need, just tons of data,” and yet “there’s often a scarcity of data in certain key areas like technology performance and evolution. How technologies evolve is just so important in influencing our daily lives, yet it’s hard sometimes to access good representative data on what’s actually happening with this technology.” But she sees opportunities for concerted efforts to assemble large, comprehensive data on technology from publicly available sources.
Trancik points out that many models are developed to represent some real-world process, and “it’s very important to test how well that model does against reality,” for example by using the model to “predict” some event whose outcome is already known and then “seeing how far off you are.” That’s easier to do with a more streamlined model, she says.
“It’s tempting to develop a model that includes many, many parameters and lots of different detail. But often what you need to do is only include detail that’s relevant for the particular question you’re asking, and that allows you to make your model simpler.” Sometimes that means you can simplify the decision down to just solving an equation, and other times, “you need to simulate things, but you can still validate the model against real-world data that you have.”
“The scale of energy and climate problems mean there is much more to do,” says Gregory Nemet, faculty chair in business and regulation at the University of Wisconsin at Madison, who was a co-author of the paper. He adds, “while we can’t accurately forecast individual technologies on their own, a variety of methods have been developed that in conjunction can enable decision-makers to make public dollars go much further, and enhance the likelihood that future investments create strong public benefits.”
This work is perhaps particularly relevant now, Trancik says, in helping to address global challenges including climate change and meeting energy demand, which were in focus at the global climate conference COP 30 that just took place in Brazil. “I think with big societal challenges like climate change, always a key question is, ‘how do you make progress with limited time and limited financial resources?’” This research, she stresses, “is all about that. It’s about using data, using knowledge that’s out there, expertise that’s out there, drawing out the relevant parts of all of that, to allow people and society to be more deliberate and successful about how they’re making decisions about investing in technology.”
As with other areas such as epidemiology, where the power of analytical forecasting may be more widely appreciated, she says, “in other areas of technology as well, there’s a lot we can do to anticipate where things are going, how technology is evolving at the global or at the national scale … There are these macro-level trends that you can steer in certain directions, that we actually have more agency over as a society than we might recognize.”
The study included researchers in Massachusetts, Wisconsin, Colorado, Maryland, Maine, California, Austria, Norway, Mexico, Finland, Italy, the U.K., and the Netherlands.
President Tharman Shanmugaratnam of Singapore visits MIT
President Tharman Shanmugaratnam of the Republic of Singapore visited MIT on Tuesday, meeting campus leaders while receiving the Miriam Pozen Prize and delivering a lecture on fiscal policy at the MIT Sloan School of Management.
“We really have to re-orient fiscal policy and develop new fiscal compacts,” said Tharman in his remarks, referring to the budget policy challenges countries face at a time of expanding government debt.
His talk, “The Compacts We Need: Fiscal Choices and Risk-sharing for Sustained Prosperity,” was delivered before a capacity audience of students, faculty, administrators, and staff at MIT’s Samberg Center.
Tharman is a trained economist who for many years ran Singapore’s central bank and has become a notable presence in global policymaking circles. Presenting a crisp summary of global trends, he observed that debt levels in major economies are at or beyond levels once regarded as unsustainable.
“There is no realistic solution to putting government debts back on a sustainable path other than having to make major adjustments to taxes and spending,” he said. However, he emphasized that his remarks were distinctly not “a call for austerity.” Instead, as he outlined, well-considered public investment can reduce the need for additional spending and thus be fiscally sound over time.
For instance, he noted, sound policy approaches can reduce individuals’ health care needs by better providing the conditions in which people stay healthy. Lowering some of these individual burdens and investing in community-building policies can help society both fiscally and by enhancing social solidarity.
“The challenge is to make these adjustments while re-fashioning fiscal policy so that people can see the adjustments — they can see the value in government spending that their taxes are contributing to — and to make adjustments in a way that doesn’t reduce growth,” Tharman said. “You do need growth for solidarity.”
In this sense, he proposed, “We need new fiscal compacts, new retirement compacts, and new global compacts to address the risks that are posed in the minds of individuals, as well as the largest risks” in society. Countries are vulnerable to a variety of shocks, he noted, calling climate change the “defining challenge of our time.” And yet, he added, for all of this, sensible policymaking can encourage people, creating more support for public-minded governance.
“It is that sharing of hopes and aspirations that is at the heart of true solidarity, not the sharing of fears,” Tharman concluded.
Before the lecture, Tharman was greeted by MIT Provost Anantha Chandrakasan, who presented him with a small gift from the MIT Glass Lab, and MIT Sloan Dean Richard Locke. Locke then made welcoming remarks at the event, praising Tharman’s “remarkable leadership in international financial policy, among other things.” After the lecture, Tharman also met with a group of MIT students from Singapore.
The Miriam Pozen Prize is awarded every two years by the MIT Golub Center for Finance and Policy, part of MIT Sloan. The prize, which recognizes extraordinary contributions to financial policy, was created to draw attention to the important research on financial policy conducted at the Golub Center, whose mission is to support research and educational initiatives related to governments’ roles as financial institutions and as regulators of the global financial system. It is named for the mother of MIT Sloan Senior Lecturer Robert C. Pozen, who is also the former executive chairman of MFS Investment Management, and a former vice chairman of Fidelity Investments and president of Fidelity Management and Research Company.
In introductory remarks, Robert Pozen said he was “deeply honored” to present the prize, adding, “It’s very unusual to have someone who is both a brilliant economist and an effective political leader, and that combination is exactly what we’re trying to honor and recognize.”
The previous recipients of the award are Mario Draghi PhD ’77, the former prime minister of Italy and president of the European Central Bank; and the late Stanley Fischer PhD ’69, an influential MIT economist who later became governor of the Bank of Israel, and then vice-chairman of the U.S. Federal Reserve. Draghi received the honor in 2023, and Fischer in 2021.
Tharman was first elected to his current office in 2023. In Singapore, he previously served as, among other roles, deputy prime minister, minister for finance, minister for education, and chairman of the Monetary Authority of Singapore.
Tharman holds a BA in economics from the London School of Economics, an MA in economics from the University of Cambridge, and an MPA from the Harvard Kennedy School at Harvard University.
MIT and Singapore have developed a sustained and productive relationship in research and education over the last quarter-century. The Singapore-MIT Alliance for Research and Technology (SMART), formally launched in 2007, is MIT’s first research center located outside of the United States, featuring work in several interdisciplinary areas of innovation.
The MIT-Singapore program also provides MIT students with research, work, and educational opportunities in Singapore. Additionally, MIT Institute Professor Emeritus Thomas Magnanti, who was present at Tuesday’s event, was the founding president of the Singapore University of Technology and Design, in 2009.
Tuesday’s event also had introductory remarks from Deborah J. Lucas, Sloan Distinguished Professor of Finance at MIT Sloan and director of the MIT Golub Center for Finance and Policy; Peter Fischer, Golub Distinguished Senior Fellow at MIT Sloan and a former under secretary in the U.S. Treasury Department; and Robert C. Merton, School of Management Distinguished Professor of Finance at MIT Sloan.
In her comments, Lucas said that Tharman “personifies the qualities the award was created to honor,” while Fischer cited his emphasis on “the betterment of humankind.”
Merton praised Tharman’s “deep commitment for advancing financial policy in a way that serves both national and global arenas.” He added: “You have always believed that policy is not just about numbers, but about people. And that sound financial [policies] serve the many, not just the few.”
New method improves the reliability of statistical estimations
Let’s say an environmental scientist is studying whether exposure to air pollution is associated with lower birth weights in a particular county.
They might train a machine-learning model to estimate the magnitude of this association, since machine-learning methods are especially good at learning complex relationships.
Standard machine-learning methods excel at making predictions and sometimes provide uncertainties, like confidence intervals, for these predictions. However, they generally don’t provide estimates or confidence intervals when determining whether two variables are related. Other methods have been developed specifically to address this association problem and provide confidence intervals. But, in spatial settings, MIT researchers found these confidence intervals can be completely off the mark.
When variables like air pollution levels or precipitation change across different locations, common methods for generating confidence intervals may claim a high level of confidence when, in fact, the estimation completely failed to capture the actual value. These faulty confidence intervals can mislead the user into trusting a model that failed.
After identifying this shortfall, the researchers developed a new method designed to generate valid confidence intervals for problems involving data that vary across space. In simulations and experiments with real data, their method was the only technique that consistently generated accurate confidence intervals.
This work could help researchers in fields like environmental science, economics, and epidemiology better understand when to trust the results of certain experiments.
“There are so many problems where people are interested in understanding phenomena over space, like weather or forest management. We’ve shown that, for this broad class of problems, there are more appropriate methods that can get us better performance, a better understanding of what is going on, and results that are more trustworthy,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society, an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and senior author of this study.
Broderick is joined on the paper by co-lead authors David R. Burt, a postdoc, and Renato Berlinghieri, an EECS graduate student; and Stephen Bates, an assistant professor in EECS and a member of LIDS. The research was recently presented at the Conference on Neural Information Processing Systems.
Invalid assumptions
Spatial association involves studying how a variable and a certain outcome are related over a geographic area. For instance, one might want to study how tree cover in the United States relates to elevation.
To solve this type of problem, a scientist could gather observational data from many locations and use it to estimate the association at a different location where they do not have data.
The MIT researchers realized that, in this case, existing methods often generate confidence intervals that are completely wrong. A model might say it is 95 percent confident its estimation captures the true relationship between tree cover and elevation, when it didn’t capture that relationship at all.
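This failure mode is easy to reproduce in a toy simulation. The sketch below is for illustration only; the spatially varying slope, the site locations, and the noise levels are invented rather than drawn from the paper. It fits a naive pooled regression to source data clustered far from the target location and checks how often a textbook 95 percent confidence interval actually contains the association at the target.

```python
# Toy simulation (hypothetical setup, not from the paper): a naive 95%
# confidence interval for a spatially varying association can cover the
# true target-site value far less than 95% of the time when the source
# sites lie far from the target site.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def beta_true(s):
    # hypothetical association between x and y, varying smoothly with location s
    return 1.0 + 2.0 * s

n, trials, covered = 200, 2000, 0
s_target = 0.9                      # location where we want the association
for _ in range(trials):
    s = rng.uniform(0.0, 0.5, n)    # source sites cluster far from the target
    x = rng.normal(size=n)
    y = beta_true(s) * x + rng.normal(scale=0.5, size=n)
    res = stats.linregress(x, y)    # naive pooled fit, ignores location
    half = 1.96 * res.stderr        # textbook ~95% interval for the slope
    lo, hi = res.slope - half, res.slope + half
    covered += (lo <= beta_true(s_target) <= hi)

print(f"Nominal 95% interval covers the target association "
      f"{100 * covered / trials:.1f}% of the time")
```

Because the pooled slope reflects the association near the source sites rather than at the target, the interval's empirical coverage collapses toward zero even though it advertises 95 percent confidence.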
After exploring this problem, the researchers determined that the assumptions these confidence interval methods rely on don’t hold up when data vary spatially.
Assumptions are like rules that must be followed to ensure results of a statistical analysis are valid. Common methods for generating confidence intervals operate under various assumptions.
First, they assume that the source data, meaning the observational data gathered to train the model, are independent and identically distributed. This assumption implies that the chance of including one location in the dataset has no bearing on whether another is included. But, for example, U.S. Environmental Protection Agency (EPA) air sensors are placed with other air sensor locations in mind, so their locations are not independent.
Second, existing methods often assume that the model is perfectly correct, but this assumption is never true in practice. Finally, they assume the source data are similar to the target data where one wants to estimate.
But in spatial settings, the source data can be fundamentally different from the target data because the target data are in a different location than where the source data were gathered.
For instance, a scientist might use data from EPA pollution monitors to train a machine-learning model that can predict health outcomes in a rural area where there are no monitors. But the EPA pollution monitors are likely placed in urban areas, where there is more traffic and heavy industry, so the air quality data will be much different than the air quality data in the rural area.
In this case, estimates of association using the urban data suffer from bias because the target data are systematically different from the source data.
A smooth solution
The new method for generating confidence intervals explicitly accounts for this potential bias.
Instead of assuming the source and target data are similar, the researchers assume the data vary smoothly over space.
For instance, with fine particulate air pollution, one wouldn’t expect the pollution level on one city block to be starkly different than the pollution level on the next city block. Instead, pollution levels would smoothly taper off as one moves away from a pollution source.
“For these types of problems, this spatial smoothness assumption is more appropriate. It is a better match for what is actually going on in the data,” Broderick says.
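To see why smoothness helps, consider the toy sketch below. It is not the researchers' estimator; it simply applies a hypothetical Gaussian-kernel weighting of source points by their distance to the target location, so that nearby observations, whose association is similar under the smoothness assumption, dominate the estimate.

```python
# Toy illustration (not the authors' method): under a spatial-smoothness
# assumption, weighting source observations by their distance to the target
# location gives a better slope estimate than pooling everything.
import numpy as np

rng = np.random.default_rng(1)

def beta_true(s):
    return 1.0 + 2.0 * s              # hypothetical association, smooth in s

n = 500
s = rng.uniform(0.0, 1.0, n) ** 2     # sampling is uneven: sites cluster near 0
x = rng.normal(size=n)
y = beta_true(s) * x + rng.normal(scale=0.5, size=n)

s_target, bandwidth = 0.9, 0.1
w = np.exp(-((s - s_target) ** 2) / (2 * bandwidth ** 2))   # Gaussian kernel

pooled_slope = np.sum(x * y) / np.sum(x * x)          # ignores location
local_slope = np.sum(w * x * y) / np.sum(w * x * x)   # emphasizes nearby sites

print(f"true association at target: {beta_true(s_target):.2f}")
print(f"pooled estimate:            {pooled_slope:.2f}")
print(f"locally weighted estimate:  {local_slope:.2f}")
```

The locally weighted estimate tracks the true target association far more closely than the pooled one, which is pulled toward the regions where most observations happen to lie.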
When they compared their method to other common techniques, they found it was the only one that could consistently produce reliable confidence intervals for spatial analyses. In addition, their method remains reliable even when the observational data are distorted by random errors.
In the future, the researchers want to apply this analysis to different types of variables and explore other applications where it could provide more reliable results.
This research was funded, in part, by an MIT Social and Ethical Responsibilities of Computing (SERC) seed grant, the Office of Naval Research, Generali, Microsoft, and the National Science Foundation (NSF).
School of Science welcomed new faculty in 2024
The School of Science welcomed 11 new faculty members in 2024.
Shaoyun Bai researches symplectic topology, the study of even-dimensional spaces whose properties are reflected by two-dimensional surfaces inside them. He is interested in this area’s interaction with other fields, including algebraic geometry, algebraic topology, geometric topology, and dynamics. He has been developing new tool kits for counting problems from moduli spaces, which have been applied to classical questions, including the Arnold conjecture, periodic points of Hamiltonian maps, higher-rank Casson invariants, enumeration of embedded curves, and topology of symplectic fibrations.
Bai completed his undergraduate studies at Tsinghua University in 2017 and earned his PhD in mathematics from Princeton University in 2022, advised by John Pardon. Bai then held visiting positions at MSRI (now known as Simons Laufer Mathematical Sciences Institute) as a McDuff Postdoctoral Fellow and at the Simons Center for Geometry and Physics, and he was a Ritt Assistant Professor at Columbia University. He joined the MIT Department of Mathematics as an assistant professor in 2024.
Abigail Bodner investigates turbulence in the upper ocean using remote sensing measurements, in-situ ocean observations, numerical simulations, climate models, and machine learning. Her research explores how the small-scale physics of turbulence near the ocean surface impacts the large-scale climate.
Bodner earned a BS and MS from Tel Aviv University, studying mathematics as well as geophysics, atmospheric, and planetary sciences. She then went on to Brown University, earning an MS in applied mathematics before completing her PhD studies in 2021 in Earth, environmental, and planetary science. Prior to coming to MIT, Bodner was a Simons Society Junior Fellow at New York University. Bodner joined the Department of Earth, Atmospheric and Planetary Sciences (EAPS) faculty in 2024, with a shared appointment in the Department of Electrical Engineering and Computer Science.
Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity.
Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova, and a master’s degree in mathematics from Université Sorbonne Paris Cité (USPC), then completed a PhD in mathematics at the Institut für Mathematik at the Universität Zürich. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.
Linlin Fan aims to decipher the neural codes underlying learning and memory and to identify their physical basis. Her research focuses on the learning rules of brain circuits, meaning what kinds of activity trigger the encoding and storing of information; how these learning rules are implemented; and how memories can be inferred from mapping neural functional connectivity patterns. To answer these questions, Fan’s group leverages high-precision, all-optical technologies to map and control the electrical activity of neurons within the brain.
Fan earned her PhD at Harvard University after undergraduate studies at Peking University in China. She joined the MIT Department of Brain and Cognitive Sciences as the Samuel A. Goldblith Career Development Professor of Applied Biology, and the Picower Institute for Learning and Memory as an investigator in January 2024. Previously, Fan worked as a postdoc at Stanford University.
Whitney Henry investigates ferroptosis, a type of cell death dependent on iron, to uncover how oxidative stress, metabolism, and immune signaling intersect to shape cell fate decisions. Her research has defined key lipid metabolic and iron homeostatic programs that regulate ferroptosis susceptibility. By uncovering the molecular factors influencing ferroptosis susceptibility, investigating its effects on the tumor microenvironment, and developing innovative methods to manipulate ferroptosis resistance in living organisms, Henry’s lab aims to gain a comprehensive understanding of the therapeutic potential of ferroptosis, especially to target highly metastatic, therapy-resistant cancer cells.
Henry received her bachelor's degree in biology with a minor in chemistry from Grambling State University and her PhD from Harvard University. Following her doctoral studies, she worked at the Whitehead Institute for Biomedical Research and was supported by fellowships from the Jane Coffin Childs Memorial Fund for Medical Research and the Ludwig Center at MIT. Henry joined the MIT faculty in 2024 as an assistant professor in the Department of Biology and a member of the Koch Institute for Integrative Cancer Research, and was recently named the Robert A. Swanson (1969) Career Development Professor of Life Sciences and an HHMI Freeman Hrabowski Scholar.
Gian Michele Innocenti is an experimental physicist who probes new regimes of quantum chromodynamics (QCD) through collisions of ultrarelativistic heavy ions at the Large Hadron Collider. He has developed advanced analysis techniques and data-acquisition strategies that enable novel measurements of open heavy-flavor and jet production in hadronic and ultraperipheral heavy-ion collisions, shedding light on the properties of high-temperature QCD matter and parton dynamics in Lorentz-contracted nuclei. He leads the MIT Pixel𝜑 program, which exploits CMOS MAPS technology to build a high-precision tracking detector for the ePIC experiment at the Electron–Ion Collider.
Innocenti received his PhD in particle and nuclear physics at the University of Turin in Italy in early 2014. He then joined the MIT heavy-ion group in the Laboratory for Nuclear Science in 2014 as a postdoc, followed by a staff research physicist position at CERN in 2018. Innocenti joined the MIT Department of Physics as an assistant professor in January 2024.
Mathematician Christoph Kehle's research interests lie at the intersection of analysis, geometry, and partial differential equations. In particular, he focuses on the Einstein field equations of general relativity and our current understanding of gravitation, which describe how matter and energy shape spacetime. His work addresses the Strong Cosmic Censorship conjecture, singularities in black hole interiors, and the dynamics of extremal black holes.
Prior to joining MIT, Kehle was a junior fellow at ETH Zürich and a member at the Institute for Advanced Study in Princeton. He earned his bachelor’s and master’s degrees at Ludwig Maximilian University and Technical University of Munich, and his PhD in 2020 from the University of Cambridge. Kehle joined the Department of Mathematics as an assistant professor in July 2024.
Aleksandr Logunov is a mathematician specializing in harmonic analysis and geometric analysis. He has developed novel techniques for studying the zeros of solutions to partial differential equations and has resolved several long-standing problems, including Yau’s conjecture, Nadirashvili’s conjecture, and Landis’ conjectures.
Logunov earned his PhD in 2015 from St. Petersburg State University. He then spent two years as a postdoc at Tel Aviv University, followed by a year as a member of the Institute for Advanced Study in Princeton. In 2018, he joined Princeton University as an assistant professor. In 2020, he spent a semester at Tel Aviv University as an IAS Outstanding Fellow, and in 2021, he was appointed full professor at the University of Geneva. Logunov joined MIT as a full professor in the Department of Mathematics in January 2024.
Lyle Nelson is a sedimentary geologist studying the co-evolution of life and surface environments across pivotal transitions in Earth history, especially during significant ecological change — such as extinction events and the emergence of new clades — and during major shifts in ocean chemistry and climate. Studying sedimentary rocks that were tectonically uplifted and are now exposed in mountain belts around the world, Nelson’s group aims to answer questions such as how the reorganization of continents influenced the carbon cycle and climate, what caused ancient ice ages and what their effects were, and what factors drove the evolution of early life forms and the rapid diversification of animals during the Cambrian period.
Nelson earned a bachelor’s degree in earth and planetary sciences from Harvard University in 2015 and then worked as an exploration geologist before completing his PhD at Johns Hopkins University in 2022. Prior to coming to MIT, he was an assistant professor in the Department of Earth Sciences at Carleton University in Ontario, Canada. Nelson joined the EAPS faculty in 2024.
Protein evolution is the process by which proteins change over time through mechanisms such as mutation or natural selection. Biologist Sergey Ovchinnikov uses phylogenetic inference, protein structure prediction/determination, protein design, deep learning, energy-based models, and differentiable programming to tackle evolutionary questions at environmental, organismal, genomic, structural, and molecular scales, with the aim of developing a unified model of protein evolution.
Ovchinnikov received his BS in micro/molecular biology from Portland State University in 2010 and his PhD in molecular and cellular biology from the University of Washington in 2017. He was next a John Harvard Distinguished Science Fellow at Harvard University until 2023. Ovchinnikov joined MIT as an assistant professor of biology in January 2024.
Shu-Heng Shao explores the structural aspects of quantum field theories and lattice systems. Recently, his research has centered on generalized symmetries and anomalies, with a particular focus on a novel type of symmetry without an inverse, referred to as non-invertible symmetries. These new symmetries have been identified in various quantum systems, including the Ising model, Yang-Mills theories, lattice gauge theories, and the Standard Model. They lead to new constraints on renormalization group flows, new conservation laws, and new organizing principles in classifying phases of quantum matter.
Shao obtained his BS in physics from National Taiwan University in 2010, and his PhD in physics from Harvard University in 2016. He was then a five-year long-term member at the Institute for Advanced Study in Princeton before he moved to the Yang Institute for Theoretical Physics at Stony Brook University as an assistant professor in 2021. In 2024, he joined the MIT faculty as an assistant professor of physics.
