MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Book reviews technologies aiming to remove carbon from the atmosphere


Two leading experts in the field of carbon capture and sequestration (CCS) — Howard J. Herzog, a senior research engineer in the MIT Energy Initiative, and Niall Mac Dowell, a professor in energy systems engineering at Imperial College London — explore methods for removing carbon dioxide already in the atmosphere in their new book, “Carbon Removal.” Published in October, the book is part of the Essential Knowledge series from the MIT Press, which consists of volumes “synthesizing specialized subject matter for nonspecialists” and includes Herzog’s 2018 book, “Carbon Capture.”

Burning fossil fuels, as well as other human activities, cause the release of carbon dioxide (CO2) into the atmosphere, where it acts like a blanket that warms the Earth, resulting in climate change. Much attention has focused on mitigation technologies that reduce emissions, but in their book, Herzog and Mac Dowell have turned their attention to “carbon dioxide removal” (CDR), an approach that removes carbon already present in the atmosphere.

In this new volume, the authors explain how CO2 naturally moves into and out of the atmosphere and present a brief history of carbon removal as a concept for dealing with climate change. They also describe the full range of “pathways” that have been proposed for removing CO2 from the atmosphere. Those pathways include engineered systems designed for “direct air capture” (DAC), as well as various “nature-based” approaches that call for planting trees or taking steps to enhance removal by biomass or the oceans. The book offers easily accessible explanations of the fundamental science and engineering behind each approach.

The authors compare the “quality” of the different pathways based on the following metrics:

Accounting. For public acceptance of any carbon-removal strategy, the authors note, the developers need to get the accounting right — and that’s not always easy. “If you’re going to spend money to get CO2 out of the atmosphere, you want to get paid for doing it,” notes Herzog. It can be tricky to measure how much you have removed, because there’s a lot of CO2 going in and out of the atmosphere all the time. Also, if your approach involves, say, burning fossil fuels, you must subtract the amount of CO2 that’s emitted from the total amount you claim to have removed. Then there’s the timing of the removal. With a DAC device, the removal happens right now, and the removed CO2 can be measured. “But if I plant a tree, it’s going to remove CO2 for decades. Is that equivalent to removing it right now?” Herzog queries. How to take that factor into account hasn’t yet been resolved.

Permanence. Different approaches keep the CO2 out of the atmosphere for different durations of time. How long is long enough? As the authors explain, this is one of the biggest issues, especially with nature-based solutions, where events such as wildfires or pestilence or land-use changes can release the stored CO2 back into the atmosphere. How do we deal with that?

Cost. Cost is another key factor. Using a DAC device to remove CO2 costs far more than planting trees, but it yields immediate removal of a measurable amount of CO2 that can then be locked away forever. How does one monetize that trade-off?

Additionality. “You’re doing this project, but would what you’re doing have been done anyway?” asks Herzog. “Is your effort additional to business as usual?” This question comes into play with many of the nature-based approaches involving trees, soils, and so on.

Permitting and governance. These issues are especially important — and complicated — with approaches that involve doing things in the ocean. In addition, Herzog points out that some CCS projects could also achieve carbon removal, but they would have a hard time getting permits to build the pipelines and other needed infrastructure.
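The accounting arithmetic described under the first metric (net removal is gross capture minus the CO2 emitted to achieve it) can be sketched in a few lines; the numbers below are hypothetical:

```python
def net_removal(gross_captured_t: float, process_emissions_t: float) -> float:
    """Net CO2 removal in tonnes: gross capture minus the CO2
    emitted to run the process (e.g., from burning fossil fuels)."""
    return gross_captured_t - process_emissions_t

# Hypothetical DAC plant: captures 100 t of CO2 but emits 30 t running it,
# so only 70 t of removal is creditable.
credit = net_removal(100.0, 30.0)
```

The subtraction is trivial; as the authors note, the hard parts are measuring the quantities and agreeing on how to treat removals spread over decades.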

The authors conclude that none of the CDR strategies now being proposed is a clear winner on all the metrics. However, they stress that carbon removal has the potential to play an important role in meeting our climate change goals — not by replacing our emissions-reduction efforts, but rather by supplementing them. Still, as Herzog and Mac Dowell make clear in their book, many challenges must be addressed to move CDR from today’s speculation to deployment at scale, and the book supports the wider discussion about how to move forward. Indeed, the authors have fulfilled their stated goal: “to provide an objective analysis of the opportunities and challenges for CDR and to separate myth from reality.”

Breaking the old model of education with MIT Open Learning


At an age when many kids prefer to play games on their phones, 11-year-old Vivan Mirchandani wanted to explore physics videos. Little did he know that MIT Open Learning’s free online resources would change the course of his life. 

Now, at 16, Mirchandani is well on his way to a career as a physics scholar — all because he forged his own unconventional educational journey.

Nontraditional education has granted Mirchandani the freedom to pursue topics he’s personally interested in. This year, he wrote a paper on cosmology that proposes a new framework for understanding Einstein’s general theory of relativity. Other projects include expanding on fluid dynamics laws for cats, training an AI model to resemble the consciousness of his late grandmother, and creating his own digital twin. That’s in addition to his regular studies, regional science fairs, Model United Nations delegation, and a TED-Ed Talk.

Mirchandani started down this path between the ages of 10 and 12, when he decided to read books and find online content about physics during the early Covid-19 lockdown in India. He was shocked to find that MIT Open Learning offers free course videos, lecture notes, exams, and other resources from the Institute on sites like MIT OpenCourseWare and the newly launched MIT Learn.

“My first course was 8.01 (Classical Mechanics), and it completely changed how I saw physics,” Mirchandani says. “Physics sounded like elegance. It’s the closest we’ve ever come to having a theory of everything.”

Experiencing “real learning”

Mirchandani discovered MIT Open Learning through OpenCourseWare, which offers free, online, open educational resources from MIT undergraduate and graduate courses. He says MIT Open Learning’s “academically rigorous” content prepares learners to ask questions and think like a scientist.

“Instead of rote memorization, I finally experienced real learning,” Mirchandani says. “OpenCourseWare was a holy grail. Without it, I would still be stuck on the basic concepts.”

Wanting to follow in the footsteps of physicists like Sir Isaac Newton, Albert Einstein, and Stephen Hawking, Mirchandani decided at age 12 he would sacrifice his grade point average to pursue a nontraditional educational path that gave him hands-on experience in science.

“The education system doesn’t prepare you for actual scientific research, it prepares you for exams,” Mirchandani says. “What draws me to MIT Open Learning and OpenCourseWare is it breaks the old model of education. It’s not about sitting in a lecture hall, it’s about access and experimentation.”

With guidance from his physics teacher, Mirchandani built his own curriculum using educational materials on MIT OpenCourseWare to progress from classical physics to computer science to quantum physics. He has completed more than 27 online MIT courses to date.

“The best part of OpenCourseWare is you get to study from the greatest institution in the world, and you don’t have to pay for it,” he says.

Innovating in the real world

6.0001 (Introduction to Computer Science and Programming Using Python) and slides from 2.06 (Fluid Dynamics) gave Mirchandani the foundation to help with the family business, Dynamech Engineers, which sells machinery for commercial snack production. Some of the recent innovations he has assisted with include a zero-oil frying technology that cuts 300 calories per kilogram, a gas-based heat exchange system, and a simplified single machine combining the processes of two separate machines. Using the modeling techniques he learned through MIT OpenCourseWare, Mirchandani designed how these products would work without losing efficiency.

But when you ask Mirchandani which achievement he is most proud of, he’ll say it’s being one of 35 students accepted for the inaugural RSI-India cohort, an academic program for high school students modeled after the Research Science Institute program co-sponsored by MIT and the Center for Excellence in Education. Competing against other Indian students who had perfect scores on their board exams and SATs, he didn’t expect to get in, but the program valued the practical research experience he was able to pursue thanks to the knowledge he gained from his external studies.

“None of it would have happened without MIT OpenCourseWare,” he says. “It’s basically letting curiosity get the better of us. If everybody does that, we’d have a better scientific community.”

Method teaches generative AI models to locate personalized objects


Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for his owner to do while on site.

But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.    

To address this shortcoming, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.

Their method uses carefully prepared video-tracking data in which the same object is tracked across multiple frames. They designed the dataset so the model must focus on contextual clues to identify the personalized object, rather than relying on knowledge it previously memorized.

When given a few example images showing a personalized object, like someone’s pet, the retrained model is better able to identify the location of that same pet in a new image.

Models retrained with their method outperformed state-of-the-art systems at this task. Importantly, their technique leaves the rest of the model’s general abilities intact.

This new approach could help future AI systems track specific objects across time, like a child’s backpack, or localize objects of interest, such as a species of animal in ecological monitoring. It could also aid in the development of AI-driven assistive technologies that help visually impaired users find certain items in a room.

“Ultimately, we want these models to be able to learn from context, just like humans do. If a model can do this well, rather than retraining it for each new task, we could just provide a few examples and it would infer how to perform the task from that context. This is a very powerful ability,” says Jehanzeb Mirza, an MIT postdoc and senior author of a paper on this technique.

Mirza is joined on the paper by co-lead authors Sivan Doveh, a graduate student at Weizmann Institute of Science; and Nimrod Shabtay, a researcher at IBM Research; James Glass, a senior research scientist and the head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and others. The work will be presented at the International Conference on Computer Vision.

An unexpected shortcoming

Researchers have found that large language models (LLMs) can excel at learning from context. If they feed an LLM a few examples of a task, like addition problems, it can learn to answer new addition problems based on the context that has been provided.
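The few-shot setup described here is just a formatted prompt; a minimal sketch of what such a context might look like (the format is illustrative, not tied to any particular model's API):

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: worked examples followed by a new query.
    The model must infer the task (here, addition) from the context alone."""
    lines = [f"Q: {a} + {b} = ?\nA: {a + b}" for a, b in examples]
    lines.append(f"Q: {query[0]} + {query[1]} = ?\nA:")
    return "\n".join(lines)

# Two solved examples, then a new problem for the model to complete.
prompt = few_shot_prompt([(2, 3), (10, 7)], (4, 5))
```

An LLM given this prompt typically continues with the answer to the final question, even though it was never explicitly told the task is addition.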

A vision-language model (VLM) is essentially an LLM with a visual component connected to it, so the MIT researchers thought it would inherit the LLM’s in-context learning capabilities. But this is not the case.

“The research community has not been able to find a black-and-white answer to this particular problem yet. The bottleneck could arise from the fact that some visual information is lost in the process of merging the two components together, but we just don’t know,” Mirza says.

The researchers set out to improve VLMs’ ability to do in-context localization, which involves finding a specific object in a new image. They focused on the data used to retrain existing VLMs for a new task, a process called fine-tuning.

Typical fine-tuning data are gathered from random sources and depict collections of everyday objects. One image might contain cars parked on a street, while another includes a bouquet of flowers.

“There is no real coherence in these data, so the model never learns to recognize the same object in multiple images,” he says.

To fix this problem, the researchers developed a new dataset by curating samples from existing video-tracking data. These data are video clips showing the same object moving through a scene, like a tiger walking across a grassland.

They cut frames from these videos and structured the dataset so each input would consist of multiple images showing the same object in different contexts, with example questions and answers about its location.

“By using multiple images of the same object in different contexts, we encourage the model to consistently localize that object of interest by focusing on the context,” Mirza explains.
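A minimal sketch of how one such training sample might be assembled from a tracking clip; all field names and the bounding-box format here are invented for illustration, not the authors' actual schema:

```python
# Several frames of the same tracked object serve as context, plus a
# question/answer pair about its location in the final frame.
def make_sample(frames, boxes, object_name):
    """frames: image paths; boxes: one (x, y, w, h) per frame from the tracker."""
    context = [
        {"image": f, "answer": f"{object_name} is at {b}"}
        for f, b in zip(frames[:-1], boxes[:-1])
    ]
    return {
        "context": context,
        "question": f"Where is {object_name} in this image?",
        "target_image": frames[-1],
        "target_box": boxes[-1],
    }

sample = make_sample(
    ["frame_001.jpg", "frame_040.jpg", "frame_080.jpg"],
    [(12, 30, 50, 40), (90, 28, 52, 41), (160, 31, 49, 39)],
    "the tiger",
)
```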

Forcing the focus

But the researchers found that VLMs tend to cheat. Instead of answering based on context clues, they will identify the object using knowledge gained during pretraining.

For instance, since the model already learned that an image of a tiger and the label “tiger” are correlated, it could identify the tiger crossing the grassland based on this pretrained knowledge, instead of inferring from context.

To solve this problem, the researchers used pseudo-names rather than actual object category names in the dataset. In this case, they changed the name of the tiger to “Charlie.”

“It took us a while to figure out how to prevent the model from cheating. But we changed the game for the model. The model does not know that ‘Charlie’ can be a tiger, so it is forced to look at the context,” he says.
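A sketch of the pseudo-naming idea, assuming a simple category-to-alias mapping; the name list and data layout are invented for illustration:

```python
import random

# Swap real category labels for arbitrary names so the model cannot fall
# back on pretrained label knowledge and must use visual context instead.
PSEUDO_NAMES = ["Charlie", "Max", "Luna", "Rex"]

def anonymize(samples, seed=0):
    """Assign each object category a fixed pseudo-name and rewrite captions."""
    rng = random.Random(seed)
    mapping = {}
    out = []
    for s in samples:
        cat = s["category"]
        if cat not in mapping:
            mapping[cat] = rng.choice(PSEUDO_NAMES)
        out.append({**s, "caption": s["caption"].replace(cat, mapping[cat])})
    return out, mapping

samples = [{"category": "tiger", "caption": "the tiger crosses the grassland"}]
anon, mapping = anonymize(samples)
```

Because the mapping is consistent within a sample's frames, the model can still link the frames together; it just cannot shortcut via the word "tiger."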

The researchers also faced challenges in finding the best way to prepare the data. If the frames are too close together, the background would not change enough to provide data diversity.
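Enforcing that diversity amounts to sampling frames with a minimum temporal stride; a minimal sketch, with a hypothetical stride:

```python
def pick_spaced_frames(num_frames: int, min_stride: int, k: int):
    """Pick up to k frame indices at least min_stride apart, so the
    background changes enough between sampled frames."""
    return list(range(0, num_frames, min_stride))[:k]

# Hypothetical 300-frame clip, sampled at least 60 frames apart:
indices = pick_spaced_frames(300, 60, 4)  # [0, 60, 120, 180]
```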

In the end, fine-tuning VLMs with this new dataset improved accuracy at personalized localization by about 12 percent on average. When they included the dataset with pseudo-names, the performance gains reached 21 percent.

As model size increases, their technique leads to greater performance gains.

In the future, the researchers want to study possible reasons VLMs don’t inherit in-context learning capabilities from their base LLMs. In addition, they plan to explore additional mechanisms to improve the performance of a VLM without the need to retrain it with new data.

“This work reframes few-shot personalized object localization — adapting on the fly to the same object across new scenes — as an instruction-tuning problem and uses video-tracking sequences to teach VLMs to localize based on visual context rather than class priors. It also introduces the first benchmark for this setting with solid gains across open and proprietary VLMs. Given the immense significance of quick, instance-specific grounding — often without finetuning — for users of real-world workflows (such as robotics, augmented reality assistants, creative tools, etc.), the practical, data-centric recipe offered by this work can help enhance the widespread adoption of vision-language foundation models,” says Saurav Jha, a postdoc at the Mila-Quebec Artificial Intelligence Institute, who was not involved with this work.

Additional co-authors are Wei Lin, a research associate at Johannes Kepler University; Eli Schwartz, a research scientist at IBM Research; Hilde Kuehne, professor of computer science at Tuebingen AI Center and an affiliated professor at the MIT-IBM Watson AI Lab; Raja Giryes, an associate professor at Tel Aviv University; Rogerio Feris, a principal scientist and manager at the MIT-IBM Watson AI Lab; Leonid Karlinsky, a principal research scientist at IBM Research; Assaf Arbelle, a senior research scientist at IBM Research; and Shimon Ullman, the Samy and Ruth Cohn Professor of Computer Science at the Weizmann Institute of Science.

This research was funded, in part, by the MIT-IBM Watson AI Lab.

MIT-Toyota collaboration powers driver assistance in millions of vehicles

Wed, 10/15/2025 - 3:35pm

A decade-plus collaboration between MIT’s AgeLab and the Toyota Motor Corporation has been recognized as a key contributor to advancements in automotive safety and human-machine interaction. Through the AgeLab at the MIT Center for Transportation and Logistics (CTL), researchers have collected and analyzed vast real-world driving datasets that have helped inform Toyota’s vehicle design and safety systems.

Toyota recently marked the completion of its 100th project through the Collaborative Safety Research Center (CSRC), celebrating MIT’s role in shaping technologies that enhance driver-assistance features and continue to forge the path for automated mobility. A key foundation for the 100th project is CSRC’s ongoing support for MIT CTL’s Advanced Vehicle Technology (AVT) Consortium.

Real-world data, real-world impact

“AVT was conceptualized over a decade ago as an academic-industry partnership to promote shared investment in real-world, naturalistic data collection, analysis, and collaboration — efforts aimed at advancing safer, more convenient, and more comfortable automobility,” says Bryan Reimer, founder and co-director of AVT. “Since its founding, AVT has drawn together over 25 organizations — including vehicle manufacturers, suppliers, insurers, and consumer research groups — to invest in understanding how automotive technologies function, how they influence driver behavior, and where further innovation is needed. This work has enabled stakeholders like Toyota to make more-informed decisions in product development and deployment.”

“CSRC’s 100th project marks a significant milestone in our collaboration,” Reimer adds. “We deeply value CSRC’s sustained investment, and commend the organization’s commitment to global industry impact and the open dissemination of research to advance societal benefit.”

“Toyota, through its Collaborative Safety Research Center, is proud to be a founding member of the AVT Consortium,” says Jason Hallman, senior manager of Toyota CSRC. “Since 2011, CSRC has collaborated with researchers such as AVT and MIT AgeLab on projects that help inform future products and policy, and to promote a future safe mobility society for all. The AVT specifically has helped us to study the real-world use of several vehicle technologies now available.”

Among these technologies are lane-centering assistance and adaptive cruise control — widely used technologies that benefit from an understanding of how drivers interact with automation. “AVT uniquely combines vehicle and driver data to help inform future products and highlight the interplay between the performance of these features and the drivers using them,” says Josh Domeyer, principal scientist at CSRC.

Influencing global standards and Olympic-scale innovation

Insights from MIT’s pedestrian-driver interaction research with CSRC also helped shape Toyota’s automated vehicle communication systems. “These data helped develop our foundational understanding that drivers and pedestrians use their movements to communicate during routine traffic encounters,” says Domeyer. “This concept informed the deployment of Toyota’s e-Palette at the Tokyo Olympics, and it has been captured as a best practice in an ISO standard for automated driving system communication.”

The AVT Consortium's naturalistic driving datasets continue to serve as a foundation for behavioral safety strategies. From identifying moments of distraction to understanding how drivers multitask behind the wheel, the work is guiding subtle but impactful design considerations.

“By studying the natural behaviors of drivers and their contexts in the AVT datasets, we hope to identify new ways to encourage safe habits that align with customer preferences,” Domeyer says. “These can include subtle nudges, or modifications to existing vehicle features, or even communication and education partnerships outside of Toyota that reinforce these safe driving habits.”

Professor Yossi Sheffi, director of MIT CTL, comments, “This partnership exemplifies the impact of MIT collaborative research on industry to make real, practical innovation possible.” 

A model for industry-academic collaboration

Founded in 2015, the AVT Consortium brings together automotive manufacturers, suppliers, and insurers to accelerate research in driver behavior, safety, and the transition toward automated systems. The consortium’s interdisciplinary approach — integrating engineering, human factors, and data science — has helped generate one of the world’s most unique and actionable real-world driving datasets.

As Toyota celebrates its research milestone, MIT reflects on a partnership that exemplifies the power of industry-academic collaboration to shape safer, smarter mobility.

MIT engineers solve the sticky-cell problem in bioreactors and other industries

Wed, 10/15/2025 - 2:00pm

To help mitigate climate change, companies are using bioreactors to grow algae and other microorganisms that are hundreds of times more efficient at absorbing CO2 than trees. Meanwhile, in the pharmaceutical industry, cell culture is used to manufacture biologic drugs and other advanced treatments, including lifesaving gene and cell therapies.

Both processes are hampered by cells’ tendency to stick to surfaces, which leads to a huge amount of waste and downtime for cleaning. A similar problem slows down biofuel production, interferes with biosensors and implants, and makes the food and beverage industry less efficient.

Now, MIT researchers have developed an approach for detaching cells from surfaces on demand, using electrochemically generated bubbles. In an open-access paper published in Science Advances, the researchers demonstrated their approach in a lab prototype and showed it could work across a range of cells and surfaces without harming the cells.

“We wanted to develop a technology that could be high-throughput and plug-and-play, and that would allow cells to attach and detach on demand to improve the workflow in these industrial processes,” says Professor Kripa Varanasi, senior author of the study. “This is a fundamental issue with cells, and we’ve solved it with a process that can scale. It lends itself to many different applications.”

Joining Varanasi on the study are co-first authors Bert Vandereydt, a PhD student in mechanical engineering, and former postdoc Baptiste Blanc.

Solving a sticky problem

The researchers began with a mission.

“We’ve been working on figuring out how we can efficiently capture CO2 across different sources and convert it into valuable products for various end markets,” Varanasi says. “That’s where this photobioreactor and cell detachment comes into the picture.”

Photobioreactors are used to grow carbon-absorbing algae cells by creating tightly controlled environments involving water and sunlight. They feature long, winding tubes with clear surfaces to let in the light algae need to grow. When algae stick to those surfaces, they block out the light, requiring cleaning.

“You have to shut down and clean up the entire reactor as frequently as every two weeks,” Varanasi says. “It’s a huge operational challenge.”

The researchers realized other industries face a similar problem due to many cells’ natural adhesion, or stickiness. Each industry has its own solution for cell adhesion depending on how important it is that the cells survive. Some people scrape the surfaces clean, while others use special coatings that are toxic to cells.

In the pharmaceutical and biotech industries, cell detachment is typically carried out using enzymes. However, this method poses several challenges — it can damage cell membranes, is time-consuming, and requires large amounts of consumables, resulting in millions of liters of biowaste.

To create a better solution, the researchers began by studying other efforts to clear surfaces with bubbles, which mainly involved spraying bubbles onto surfaces and had been largely ineffective.

“We realized we needed the bubbles to form on the surfaces where we don’t want these cells to stick, so when the bubbles detach it creates a local fluid flow that creates shear stress at the interface and removes the cells,” Varanasi explains.

Electric currents generate bubbles by splitting water into hydrogen and oxygen. But previous attempts at using electricity to detach cells were hampered because the cell culture mediums contain sodium chloride, which turns into bleach when combined with an electric current. The bleach damages the cells, making it impractical for many applications.
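The rate at which electrolysis generates gas follows from Faraday's law; a back-of-the-envelope sketch using the standard constant and a hypothetical cell current:

```python
FARADAY = 96485.0  # Faraday constant, coulombs per mole of electrons

def hydrogen_rate_mol_per_s(current_a: float) -> float:
    """Moles of H2 generated per second at the cathode for a given current.
    Each H2 molecule takes two electrons: 2 H+ + 2 e- -> H2."""
    return current_a / (2 * FARADAY)

# Hypothetical 10 mA cell current, roughly 5.2e-8 mol of H2 per second.
rate = hydrogen_rate_mol_per_s(0.010)
```

The current thus sets the bubble-generation rate directly, which is why the researchers could tune detachment by adjusting current density.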

“The culprit is the anode — that’s where the sodium chloride turns to bleach,” Vandereydt explains. “We figured if we could separate that electrode from the rest of the system, we could prevent bleach from being generated.”

To make a better system, the researchers built a 3-square-inch glass surface and deposited a gold electrode on top of it. The layer of gold is so thin it doesn’t block out light. To keep the other electrode separate, the researchers integrated a special membrane that only allows protons to pass through. The setup allowed the researchers to send a current through without generating bleach.

To test their setup, they allowed algae cells from a concentrated solution to stick to the surfaces. When they applied a voltage, the bubbles separated the cells from the surfaces without harming them.

The researchers also studied the interaction between the bubbles and cells, finding that the higher the current density, the more bubbles were created and the more algae was removed. They developed a model for understanding how much current would be needed to remove algae in different settings and matched it with results from experiments involving algae as well as ovarian cancer and bone cells.
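The trend the team measured — more current, more bubbles, more removal — can be illustrated with a toy calculation. The sketch below is not the published model: Faraday’s law does fix the electrolytic gas-production rate, but the logistic removal curve and its parameters (`j_critical`, `sharpness`) are purely illustrative assumptions.

```python
import math

# Toy model (not the authors' published model): Faraday's law converts an
# applied current density into a gas-production rate, and an assumed logistic
# curve maps current density to the fraction of cells removed once
# bubble-induced shear exceeds a critical adhesion threshold.

F = 96485.0  # Faraday constant, C/mol

def gas_flux(current_density, electrons_per_molecule=2):
    """Mol of H2 produced per m^2 per second at the cathode (Faraday's law)."""
    return current_density / (electrons_per_molecule * F)

def removal_fraction(current_density, j_critical=50.0, sharpness=0.1):
    """Assumed logistic dependence of removal on current density (A/m^2).
    j_critical and sharpness are illustrative fitting parameters."""
    return 1.0 / (1.0 + math.exp(-sharpness * (current_density - j_critical)))

for j in (10.0, 50.0, 200.0):
    print(f"j = {j:5.0f} A/m^2: H2 flux = {gas_flux(j):.2e} mol/m^2/s, "
          f"predicted removal = {removal_fraction(j):.0%}")
```

In practice the curve’s parameters would have to be fit to detachment experiments like the ones described above, since cell adhesion strength varies widely between algae and mammalian cells.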

“Mammalian cells are orders of magnitude more sensitive than algae cells, but even with those cells, we were able to detach them with no impact to the viability of the cell,” Vandereydt says.

Getting to scale

The researchers say their system could represent a breakthrough in applications where bleach or other chemicals would harm cells. That includes pharmaceutical and food production.

“If we can keep these systems running without fouling and other problems, then we can make them much more economical,” Varanasi says.

For cell culture plates used in the pharmaceutical industry, the team envisions a version of their system in which an electrode is robotically moved from one culture plate to the next to detach cells as they’re grown. It could also be coiled around algae harvesting systems.

“This has general applicability because it doesn’t rely on any specific biological or chemical treatments, but on a physical force that is system-agnostic,” Varanasi says. “It’s also highly scalable to a lot of different processes, including particle removal.”

Varanasi cautions there is much work to be done to scale up the system. But he hopes it can one day make algae and other cell harvesting more efficient.

“The burning problem of our time is to somehow capture CO2 in a way that’s economically feasible,” Varanasi says. “These photobioreactors could be used for that, but we have to overcome the cell adhesion problem.”

The work was supported, in part, by Eni S.p.A. through the MIT Energy Initiative, the Belgian American Educational Foundation Fellowship, and the Maria Zambrano Fellowship.

Blending neuroscience, AI, and music to create mental health innovations

Wed, 10/15/2025 - 1:20pm

Computational neuroscientist and singer/songwriter Kimaya (Kimy) Lecamwasam, who also plays electric bass and guitar, says music has been a core part of her life for as long as she can remember. She grew up in a musical family and played in bands all through high school.

“For most of my life, writing and playing music was the clearest way I had to express myself,” says Lecamwasam. “I was a really shy and anxious kid, and I struggled with speaking up for myself. Over time, composing and performing music became central to both how I communicated and to how I managed my own mental health.”

Along with equipping her with valuable skills and experiences, she credits her passion for music as the catalyst for her interest in neuroscience.

“I got to see firsthand not only the ways that audiences reacted to music, but also how much value music had for musicians,” she says. “That close connection between making music and feeling well is what first pushed me to ask why music has such a powerful hold on us, and eventually led me to study the science behind it.”

Lecamwasam earned a bachelor’s degree in 2021 from Wellesley College, where she studied neuroscience — specifically in the Systems and Computational Neuroscience track — and also music. During her first semester, she took a class in songwriting that she says made her more aware of the connections between music and emotions. While studying at Wellesley, she participated in the MIT Undergraduate Research Opportunities Program for three years. Working in the Department of Brain and Cognitive Sciences lab of Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, she focused primarily on classifying consciousness in anesthetized patients and training brain-computer interface-enabled prosthetics using reinforcement learning.

“I still had a really deep love for music, which I was pursuing in parallel to all of my neuroscience work, but I really wanted to try to find a way to combine both of those things in grad school,” says Lecamwasam. Brown recommended that she look into the graduate programs at the MIT Media Lab within the Program in Media Arts and Sciences (MAS), which turned out to be an ideal fit.

“One thing I really love about where I am is that I get to be both an artist and a scientist,” says Lecamwasam. “That was something that was important to me when I was picking a graduate program. I wanted to make sure that I was going to be able to do work that was really rigorous, validated, and important, but also get to do cool, creative explorations and actually put the research that I was doing into practice in different ways.”

Exploring the physical, mental, and emotional impacts of music

Informed by her years of neuroscience research as an undergraduate and her passion for music, Lecamwasam focused her graduate research on translating the emotional potency of music into scalable, non-pharmacological mental health tools. Her master’s thesis focused on “pharmamusicology,” looking at how music might positively affect the physiology and psychology of those with anxiety.

The overarching theme of Lecamwasam’s research is exploring the various impacts of music and affective computing — physically, mentally, and emotionally. Now in the third year of her doctoral program in the Opera of the Future group, she is investigating the impact of large-scale live music and concert experiences on the mental health and well-being of both audience members and performers. She is also working to clinically validate music listening, composition, and performance as health interventions, in combination with psychotherapy and pharmaceutical treatments.

Her recent work, in collaboration with Professor Anna Huang’s Human-AI Resonance Lab, assesses the emotional resonance of AI-generated music compared to human-composed music; the aim is to identify more ethical applications of emotion-sensitive music generation and recommendation that preserve human creativity and agency, and can also be used as health interventions. She has co-led a wellness and music workshop at the Wellbeing Summit in Bilbao, Spain, and has presented her work at the 2023 CHI Conference on Human Factors in Computing Systems in Hamburg, Germany, and the 2024 Audio Mostly conference in Milan, Italy.

Lecamwasam has collaborated with organizations near and far to implement real-world applications of her research. She worked with Carnegie Hall's Weill Music Institute on its Well-Being Concerts and is currently partnering on a study assessing the impact of lullaby writing on perinatal health with the North Shore Lullaby Project in Massachusetts, an offshoot of Carnegie Hall’s Lullaby Project. Her main international collaboration is with a company called Myndstream, working on projects comparing the emotional resonance of AI-generated music to human-composed music and thinking of clinical and real-world applications. She is also working on a project with the companies PixMob and Empatica (an MIT Media Lab spinoff), centered on assessing the impact of interactive lighting and large-scale live music experiences on emotional resonance in stadium and arena settings.

Building community

“Kimy combines a deep love for — and sophisticated knowledge of — music with scientific curiosity and rigor in ways that represent the Media Lab/MAS spirit at its best,” says Professor Tod Machover, Lecamwasam’s research advisor, Media Lab faculty director, and director of the Opera of the Future group. “She has long believed that music is one of the most powerful and effective ways to create personalized interventions to help stabilize emotional distress and promote empathy and connection. It is this same desire to establish sane, safe, and sustaining environments for work and play that has led Kimy to become one of the most effective and devoted community-builders at the lab.”

Lecamwasam has participated in the SOS (Students Offering Support) program in MAS for a few years, which assists students from a variety of life experiences and backgrounds during the process of applying to the Program in Media Arts and Sciences. She will soon be the first MAS peer mentor as part of a new initiative through which she will establish and coordinate programs including a “buddy system,” pairing incoming master’s students with PhD students as a way to help them transition into graduate student life at MIT. She is also part of the Media Lab’s Studcom, a student-run organization that promotes, facilitates, and creates experiences meant to bring the community together.

“I think everything that I have gotten to do has been so supported by the friends I’ve made in my lab and department, as well as across departments,” says Lecamwasam. “I think everyone is just really excited about the work that they do and so supportive of one another. It makes it so that even when things are challenging or difficult, I’m motivated to do this work and be a part of this community.”

Why some quantum materials stall while others scale

Wed, 10/15/2025 - 12:00am

People tend to think of quantum materials — whose properties arise from quantum mechanical effects — as exotic curiosities. But some quantum materials have become a ubiquitous part of our computer hard drives, TV screens, and medical devices. Still, the vast majority of quantum materials never accomplish much outside of the lab.

What makes certain quantum materials commercial successes and others commercially irrelevant? If researchers knew, they could direct their efforts toward more promising materials — a big deal since they may spend years studying a single material.

Now, MIT researchers have developed a system for evaluating the scale-up potential of quantum materials. Their framework combines a material’s quantum behavior with its cost, supply chain resilience, environmental footprint, and other factors. The researchers used their framework to evaluate over 16,000 materials, finding that the materials with the highest quantum fluctuations in the positions of their electrons also tend to be more expensive and environmentally damaging. The researchers also identified a set of materials that achieve a balance between quantum functionality and sustainability for further study.

The team hopes their approach will help guide the development of more commercially viable quantum materials that could be used for next generation microelectronics, energy harvesting applications, medical diagnostics, and more.

“People studying quantum materials are very focused on their properties and quantum mechanics,” says Mingda Li, associate professor of nuclear science and engineering and the senior author of the work. “For some reason, they have a natural resistance during fundamental materials research to thinking about the costs and other factors. Some told me they think those factors are too ‘soft’ or not related to science. But I think within 10 years, people will routinely be thinking about cost and environmental impact at every stage of development.”

The paper appears in Materials Today. Joining Li on the paper are co-first authors and PhD students Artittaya Boonkird, Mouyang Cheng, and Abhijatmedhi Chotrattanapituk, along with PhD students Denisse Cordova Carrizales and Ryotaro Okabe; former graduate research assistants Thanh Nguyen and Nathan Drucker; postdoc Manasi Mandal; Instructor Ellan Spero of the Department of Materials Science and Engineering (DMSE); Professor Christine Ortiz of DMSE; Professor Liang Fu of the Department of Physics; Professor Tomas Palacios of the Department of Electrical Engineering and Computer Science (EECS); Associate Professor Farnaz Niroui of EECS; Assistant Professor Jingjie Yeo of Cornell University; and PhD student Vsevolod Belosevich and Assistant Professor Qiong Ma of Boston College.

Materials with impact

Cheng and Boonkird say that materials science researchers often gravitate toward quantum materials with the most exotic quantum properties rather than the ones most likely to be used in products that change the world.

“Researchers don’t always think about the costs or environmental impacts of the materials they study,” Cheng says. “But those factors can make them impossible to do anything with.”

Li and his collaborators wanted to help researchers focus on quantum materials with more potential to be adopted by industry. For this study, they developed methods for evaluating factors like the materials’ price and environmental impact using their elements and common practices for mining and processing those elements. At the same time, they quantified the materials’ level of “quantumness” using an AI model created by the same group last year, based on a concept proposed by MIT professor of physics Liang Fu, termed quantum weight.

“For a long time, it’s been unclear how to quantify the quantumness of a material,” Fu says. “Quantum weight is very useful for this purpose. Basically, the higher the quantum weight of a material, the more quantum it is.”

The researchers focused on a class of quantum materials with exotic electronic properties known as topological materials, eventually assigning over 16,000 materials scores on environmental impact, price, import resilience, and more.
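A toy sketch of this kind of multi-criteria screening might look like the following. This is not the authors’ actual pipeline, and every element price, impact score, and quantum weight below is a made-up placeholder; the point is only the pattern of composition-based scoring followed by a Pareto-style filter.

```python
# Toy multi-criteria screen (not the authors' actual pipeline): score each
# material from its elemental composition, then keep the Pareto-optimal set
# balancing high quantum weight against low cost and low environmental impact.
# All numbers here are illustrative placeholders, not real data.

ELEMENT_COST = {"Bi": 6.0, "Se": 25.0, "Te": 60.0, "Si": 2.0}   # $/kg, illustrative
ELEMENT_IMPACT = {"Bi": 3.0, "Se": 7.0, "Te": 9.0, "Si": 1.0}   # arbitrary units

def material_scores(formula, quantum_weight):
    """Composition-weighted cost and impact, plus a given quantum weight."""
    n = sum(formula.values())
    return {
        "qw": quantum_weight,
        "cost": sum(ELEMENT_COST[el] * k for el, k in formula.items()) / n,
        "impact": sum(ELEMENT_IMPACT[el] * k for el, k in formula.items()) / n,
    }

def pareto_front(materials):
    """Drop any material dominated on (high qw, low cost, low impact)."""
    def dominates(a, b):
        return (a["qw"] >= b["qw"] and a["cost"] <= b["cost"]
                and a["impact"] <= b["impact"] and a != b)
    return {name: s for name, s in materials.items()
            if not any(dominates(other, s) for other in materials.values())}

candidates = {
    "Bi2Se3": material_scores({"Bi": 2, "Se": 3}, quantum_weight=8.0),
    "Bi2Te3": material_scores({"Bi": 2, "Te": 3}, quantum_weight=8.5),
    "Si":     material_scores({"Si": 1}, quantum_weight=1.0),
    "Bi2Se3 (defective)": material_scores({"Bi": 2, "Se": 3}, quantum_weight=7.0),
}
print(sorted(pareto_front(candidates)))  # → ['Bi2Se3', 'Bi2Te3', 'Si']
```

The defective sample is dominated (same composition, lower quantum weight) and is dropped, while the remaining candidates each represent a different trade-off between quantumness, cost, and impact.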

For the first time, the researchers found a strong correlation between a material’s quantum weight and how expensive and environmentally damaging it is.

“That’s useful information because the industry really wants something very low-cost,” Spero says. “We know what we should be looking for: high quantum weight, low-cost materials. Very few materials being developed meet those criteria, and that likely explains why they don’t scale to industry.”

The researchers identified 200 environmentally sustainable materials and further refined the list down to 31 material candidates that achieved an optimal balance of quantum functionality and high-potential impact.

The researchers also found that several widely studied materials exhibit high environmental impact scores, indicating they will be hard to scale sustainably. “Considering the scalability of manufacturing and environmental availability and impact is critical to ensuring practical adoption of these materials in emerging technologies,” says Niroui.

Guiding research

Many of the topological materials evaluated in the paper have never been synthesized, which limited the accuracy of the study’s environmental and cost predictions. But the authors are already working with companies to study some of the promising materials identified in the paper.

“We talked with people at semiconductor companies that said some of these materials were really interesting to them, and our chemist collaborators also identified some materials they find really interesting through this work,” Palacios says. “Now we want to experimentally study these cheaper topological materials to understand their performance better.”

“Solar cells have an efficiency limit of 34 percent, but many topological materials have a theoretical limit of 89 percent. Plus, you can harvest energy across all electromagnetic bands, including our body heat,” Fu says. “If we could reach those limits, you could easily charge your cell phone using body heat. These are performances that have been demonstrated in labs, but could never scale up. That’s the kind of thing we’re trying to push forward.”

This work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.

Earthquake damage at deeper depths occurs long after initial activity

Tue, 10/14/2025 - 5:00pm

Earthquakes often bring to mind images of destruction, of the Earth breaking open and altering landscapes. But after an earthquake, the surrounding area undergoes a period of post-seismic deformation, in which regions that didn’t break experience new stress as a result of the sudden change in their surroundings. Once the area has adjusted to this new stress, it reaches a state of recovery.

Geologists have often thought that this recovery period was a smooth, continuous process. But MIT research published recently in Science has found evidence that while healing occurs quickly at shallow depths — roughly above 10 km — deeper depths recover more slowly, if at all.

“If you were to look before and after in the shallow crust, you wouldn’t see any permanent change. But there’s this very permanent change that persists in the mid-crust,” says Jared Bryan, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author on the paper.

The paper’s other authors include EAPS Professor William Frank and Pascal Audet from the University of Ottawa.

Everything but the quakes

In order to assemble a full understanding of how the crust behaves before, during, and after an earthquake sequence, the researchers looked at seismic data from the 2019 Ridgecrest earthquakes in California. This immature fault zone experienced the largest earthquake in the state in 20 years, and tens of thousands of aftershocks over the following year. They then removed seismic data created by the sequence and only looked at waves generated by other seismic activity around the world to see how their paths through the Earth changed before and after the sequence.

“One person’s signal is another person’s noise,” says Bryan. They also used general ambient noise from sources like ocean waves and traffic that are also picked up by seismometers. Then, using a technique called a receiver function, they were able to see the speed of the waves as they traveled and how it changed due to conditions in the Earth such as rock density and porosity, much in the same way we use sonar to see how acoustic waves change when they interact with objects. With all this information, they were able to construct basic maps of the Earth around the Ridgecrest fault zone before and after the sequence.

What they found was that the shallow crust, extending about 10 km into the Earth, recovered over the course of a few months. In contrast, deeper depths in the mid-crust didn’t experience immediate damage, but rather changed over the same timescale as shallow depths recovered.

“What was surprising is that the healing in the shallow crust was so quick, and then you have this complementary accumulation occurring, not at the time of the earthquake, but instead over the post-seismic phase,” says Bryan.

Balancing the energy budget

Understanding how recovery plays out at different depths is crucial for determining how energy is spent during different parts of the seismic process, which includes activities such as the release of energy as waves, the creation of new fractures, or energy being stored elastically in the surrounding areas. Altogether, this is collectively known as the energy budget, and it is a useful tool for understanding how damage accumulates and recovers over time.

What remains unclear is the timescale over which deeper depths recover, if at all. The paper presents two possible scenarios: one in which the deep crust recovers over a much longer timescale than the researchers observed, and one in which it never recovers at all.

“Either of those are not what we expected,” says Frank. “And both of them are interesting.”

Further research will require more observations to build out a more detailed picture to see at what depth the change becomes more pronounced. In addition, Bryan wants to look at other areas, such as more mature faults that experience higher levels of seismic activity, to see if it changes the results.

“We’ll let you know in 1,000 years whether it’s recovered,” says Bryan.

Darcy McRose and Mehtaab Sawhney ’20, PhD ’24 named 2025 Packard Fellows for Science and Engineering

Tue, 10/14/2025 - 4:51pm

The David and Lucile Packard Foundation has announced that two MIT affiliates have been named 2025 Packard Fellows for Science and Engineering. Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor in the MIT Department of Civil and Environmental Engineering, has been honored, along with Mehtaab Sawhney ’20, PhD ’24, a graduate of the Department of Mathematics who is now at Columbia University.

The honorees are among 20 junior faculty recognized as some of the nation’s most innovative early-career scientists and engineers. Each Packard Fellow receives an unrestricted research grant of $875,000 over five years to support their pursuit of pioneering research and bold new ideas.

“I’m incredibly grateful and honored to be awarded a Packard Fellowship,” says McRose. “It will allow us to continue our work exploring how small molecules control microbial communities in soils and on plant roots, with much-appreciated flexibility to follow our imagination wherever it leads us.”

McRose and her lab study secondary metabolites — small organic molecules that microbes and plants release into soils. Often known as antibiotics, these compounds do far more than fight infections; they can help unlock soil nutrients, shape microbial communities around plant roots, and influence soil fertility.

“Antibiotics made by soil microorganisms are widely used in medicine, but we know surprisingly little about what they do in nature,” explains McRose. “Just as healthy microbiomes support human health, plant microbiomes support plant health, and secondary metabolites can help to regulate the microbial community, suppressing pathogens and promoting beneficial microbes.” 

Her lab integrates techniques from genetics, chemistry, and geosciences to investigate how these molecules shape interactions between microbes and plants in soil — one of Earth’s most complex and least-understood environments. By using secondary metabolites as experimental tools, McRose aims to uncover the molecular mechanisms that govern processes like soil fertility and nutrient cycling that are foundational to sustainable agriculture and ecosystem health.

Studying antibiotics in the environments where they evolved could also yield new strategies for combating soil-borne pathogens and improving crop resilience. “Soil is a true scientific frontier,” McRose says. “Studying these environments has the potential to reveal fascinating, fundamental insights into microbial life — many of which we can’t even imagine yet.”

A native of California, McRose earned her bachelor’s and master’s degrees from Stanford University, followed by a PhD in geosciences from Princeton University. Her graduate thesis focused on how bacteria acquire trace metals from the environment. Her postdoctoral research on secondary metabolites at Caltech was supported by multiple fellowships, including the Simons Foundation Marine Microbial Ecology Postdoctoral Fellowship, the L’Oréal USA For Women in Science Fellowship, and a Division Fellowship from Biology and Biological Engineering at Caltech.

McRose joined the MIT faculty in 2022. In 2025, she was named a Sloan Foundation Research Fellow in Earth System Science and awarded the Maseeh Excellence in Teaching Award.

Past Packard Fellows have gone on to earn the highest honors, including Nobel Prizes in chemistry and physics, the Fields Medal, Alan T. Waterman Awards, Breakthrough Prizes, Kavli Prizes, and elections to the National Academies of Sciences, Engineering, and Medicine. Each year, the foundation reviews 100 nominations for consideration from 50 invited institutions. The Packard Fellowships Advisory Panel, a group of 12 internationally recognized scientists and engineers, evaluates the nominations and recommends 20 fellows for approval by the Packard Foundation Board of Trustees.

Engineering next-generation fertilizers

Tue, 10/14/2025 - 4:50pm

Born in Palermo, Sicily, Giorgio Rizzo spent his childhood curious about the natural world. “I have always been fascinated by nature and how plants and animals can adapt and survive in extreme environments,” he says. “Their highly tuned biochemistry, and their incredible ability to create some of the most complex and beautiful structures in chemistry that we still can’t even achieve in our laboratories.”

As an undergraduate student, he watched as a researcher mounted a towering chromatography column layered with colorful plant chemicals in a laboratory. When the researcher switched on a UV light, the colors turned into fluorescent shades of blue, green, red and pink. “I realized in that exact moment that I wanted to be the same person, separating new unknown compounds from a rare plant with potential pharmaceutical properties,” he recalls.

These experiences set him on a path from a master’s degree in organic chemistry to his current work as a postdoc in the MIT Department of Civil and Environmental Engineering, where he focuses on developing sustainable fertilizers and studying how rare earth elements can boost plant resilience, with the aim of reducing agriculture’s environmental impact.

In the lab of MIT Professor Benedetto Marelli, Rizzo studies plant responses to environmental stressors, such as heat, drought, and prolonged UV irradiation. This includes developing new fertilizers that can be applied as seed coating to help plants grow stronger and enhance their resistance.

“We are working on new formulations of fertilizers that aim to reduce the huge environmental impact of classical practices in agriculture based on NPK inorganic fertilizers,” Rizzo explains. Although such fertilizers are fundamental to crop yields, their tendency to accumulate in soil is detrimental to soil health and the microbiome living in it. In addition, producing NPK (nitrogen, phosphorus, and potassium) fertilizers is one of the most energy-consuming and polluting chemical processes in the world.

“It is mandatory to reshape our conception of fertilizers and try to rely, at least in part, on alternative products that are safer, cheaper, and more sustainable,” he says.

Recently, Rizzo was awarded a Kavanaugh Fellowship, a program that gives MIT graduate students and postdocs entrepreneurial training and resources to bring their research from the lab to the market. “This prestigious fellowship will help me build a concrete product for a company, adding more value to our research,” he says.

Rizzo hopes their work will help farmers increase their crop yields without compromising soil quality or plant health. A major barrier to adopting new fertilizers is cost, as many farmers rely heavily on each growing season’s output and cannot risk investing in products that may underperform compared to traditional NPK fertilizers. The fertilizers being developed in the Marelli Lab address this challenge by using chitin and chitosan, abundant natural materials that make them far less expensive to produce, which Rizzo hopes will encourage farmers to try them.

“Through the Kavanaugh Fellowship, I will spend this year trying to bring the technology outside the lab to impact the world and meet the need for farmers to support their prosperity,” he says.

Mentorship has been a defining part of his postdoc experience. Rizzo describes Professor Benedetto Marelli as “an incredible mentor” who values his research interests and supports him through every stage of his work. The lab spans a wide range of projects — from plant growth enhancement and precision chemical delivery to wastewater treatment, vaccine development for fish, and advanced biochemical processes. “My colleagues created a stimulating environment with different research topics,” he notes. He is also grateful for the work he does with international institutions, which has helped him build a network of researchers and academics around the world.

Rizzo enjoys the opportunity to mentor students in the lab and appreciates their curiosity and willingness to learn. “It is one of the greatest qualities you can have as a scientist because you must be driven by curiosity to discover the unexpected,” he says.

He describes MIT as a “dynamic and stimulating experience,” but also acknowledges how overwhelming it can be. “You will feel like a small fish in a big ocean,” he says. “But that is exactly what MIT is: an ocean full of opportunities and challenges that are waiting to be solved.”

Beyond his professional work, Rizzo enjoys nature and the arts. An avid reader, he balances his scientific work with literature and history. “I never read about science-related topics — I read about it a lot already for my job,” he says. “I like classic literature, novels, essays, history of nations, and biographies. Often you can find me wandering in museums’ art collections.” Classical art, Renaissance, and Pre-Raphaelites are his favorite artistic currents.

Looking ahead, Rizzo hopes to shift his professional pathway toward startups or companies focused on agrotechnical improvement. His immediate goal is to contribute to initiatives where research has a direct, tangible impact on everyday life.

“I want to pursue the option of being part of a spinout process that would enable my research to have a direct impact in everyday life and help solve agricultural issues,” he adds.

Optimizing food subsidies: Applying digital platforms to maximize nutrition

Tue, 10/14/2025 - 3:40pm

Oct. 16 is World Food Day, a global campaign to celebrate the founding of the Food and Agriculture Organization 80 years ago, and to work toward a healthy, sustainable, food-secure future. More than 670 million people in the world are facing hunger. Millions of others are facing rising obesity rates and struggle to get healthy food for proper nutrition. 

World Food Day calls on not only world governments, but also business, academia, the media, and youth to take action to promote resilient food systems and combat hunger. This year, the Abdul Latif Jameel Water and Food Systems Laboratory (J-WAFS) is spotlighting an MIT researcher who is working toward this goal by studying food and water systems in the Global South.

J-WAFS seed grants provide funding to early-stage research projects that are distinct from prior work. In an 11th round of seed grant funding in 2025, 10 MIT faculty members received support to carry out their cutting-edge water and food research. Ali Aouad PhD ’17, assistant professor of operations management at the MIT Sloan School of Management, was one of those grantees. “I had searched before joining MIT what kind of research centers and initiatives were available that tried to coalesce research on food systems,” Aouad says. “And so, I was very excited about J-WAFS.”

Aouad gathered more information about J-WAFS at the new faculty orientation session in August 2024, where he spoke to J-WAFS staff and learned about the program’s grant opportunities for water and food research. Later that fall semester, he attended a few J-WAFS seminars on agricultural economics and water resource management. That’s when Aouad knew that his project was perfectly aligned with the J-WAFS mission of securing humankind’s water and food.

Aouad’s seed project focuses on food subsidies. With a background in operations research and an interest in digital platforms, much of his work has centered on aligning supply-side operations with heterogeneous customer preferences. Past projects include ones on retail and matching systems. “I started thinking that these types of demand-driven approaches may be also very relevant to important social challenges, particularly as they relate to food security,” Aouad says. Before starting his PhD at MIT, Aouad worked on projects that looked at subsidies for smallholder farmers in low- and middle-income countries. “I think in the back of my mind, I’ve always been fascinated by trying to solve these issues,” he notes.

His seed grant project, “Optimal subsidy design: Application to food assistance programs,” aims to leverage data on preferences and purchasing habits from local grocery stores in India to inform food assistance policy and optimize the design of subsidies. Typical data collection systems, like point-of-sale terminals, are not as readily available in India’s local groceries, making this kind of data on low-income shoppers hard to come by. “Mom-and-pop stores are extremely important last-mile operators when it comes to nutrition,” he explains.

For this project, the research team gave local grocers point-of-sale scanners to track purchasing habits. “We aim to develop an algorithm that converts these transactions into some sort of ‘revelation’ of the individuals’ latent preferences,” says Aouad. “As such, we can model and optimize the food assistance programs — how much variety and flexibility is offered, taking into account the expected demand uptake.” He continues, “Now, of course, our ability to answer detailed design questions [across various products and prices] depends on the quality of our inference from the data, and so this is where we need more sophisticated and robust algorithms.”
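The article doesn’t specify which algorithm the team will use, but one standard way to turn transaction data into latent preferences is a multinomial logit model, in which relative product utilities can be recovered from observed choice shares. A minimal sketch with simulated transactions (the products, utilities, and sample sizes here are hypothetical, not the project’s data):

```python
import numpy as np

def infer_utilities(choices, n_products):
    """Recover relative product utilities from observed choice shares,
    assuming a multinomial logit model (utility of product 0 fixed at 0)."""
    shares = np.bincount(choices, minlength=n_products) / len(choices)
    return np.log(shares) - np.log(shares[0])

# simulated transactions: product 1 is chosen most often
rng = np.random.default_rng(1)
true_utilities = np.array([0.0, 1.0, -0.5])
probs = np.exp(true_utilities) / np.exp(true_utilities).sum()
choices = rng.choice(3, size=5000, p=probs)

estimated = infer_utilities(choices, 3)
print(np.round(estimated, 2))  # close to [0.0, 1.0, -0.5]
```

Inverting choice shares like this only works with enough transactions per product, which is one reason richer point-of-sale data makes more detailed subsidy-design questions tractable.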

Following the data collection and model development, the ultimate goal of this research is to inform policy surrounding food assistance programs through an “optimization approach.” Aouad describes the complexities of using optimization to guide policy. “Policies are often informed by domain expertise, legacy systems, or political deliberation. A lot of researchers build rigorous evidence to inform food policy, but it’s fair to say that the kind of approach that I’m proposing in this research is not something that is commonly used. I see an opportunity for bringing a new approach and methodological tradition to a problem that has been central for policy for many decades.” 

The overall health of consumers is the reason food assistance programs exist, yet measuring long-term nutritional impacts and shifts in purchase behavior is difficult. Aouad notes that in past research, the short-term effects of food assistance interventions can be significant, but these effects are often short-lived. “This is a fascinating question that I don’t think we will be able to address within the space of interventions that we will be considering. However, I think it is something I would like to capture in the research, and maybe develop hypotheses for future work around how we can shift nutrition-related behaviors in the long run.”

While his project develops a new methodology to calibrate food assistance programs, large-scale applications are not promised. “A lot of what drives subsidy mechanisms and food assistance programs is also, quite frankly, how easy it is and how cost-effective it is to implement these policies in the first place,” comments Aouad. Cost and infrastructure barriers are unavoidable in this kind of policy research, as is the challenge of sustaining these programs over time. Aouad’s effort will provide insights into customer preferences and subsidy optimization in a pilot setup, but replicating this approach at full scale may be costly. Aouad hopes to be able to gather proxy information from customers that would both feed into the model and provide insight into a more cost-effective way to collect data for large-scale implementation.

There is still much work to be done to ensure food security for all, whether it’s advances in agriculture, food-assistance programs, or ways to boost adequate nutrition. As the 2026 seed grant deadline approaches, J-WAFS will continue its mission of supporting MIT faculty as they pursue innovative projects that have practical and real impacts on water and food system challenges.

Checking the quality of materials just got easier with a new AI tool

Tue, 10/14/2025 - 11:00am

Manufacturing better batteries, faster electronics, and more effective pharmaceuticals depends on the discovery of new materials and the verification of their quality. Artificial intelligence is helping with the former, with tools that comb through catalogs of materials to quickly tag promising candidates.

But once a material is made, verifying its quality still involves scanning it with specialized instruments to validate its performance — an expensive and time-consuming step that can hold up the development and distribution of new technologies.

Now, a new AI tool developed by MIT engineers could help clear the quality-control bottleneck, offering a faster and cheaper option for certain materials-driven industries.

In a study appearing today in the journal Matter, the researchers present “SpectroGen,” a generative AI tool that turbocharges scanning capabilities by serving as a virtual spectrometer. The tool takes in “spectra,” or measurements of a material in one scanning modality, such as infrared, and generates what that material’s spectra would look like if it were scanned in an entirely different modality, such as X-ray. The AI-generated spectral results match, with 99 percent accuracy, the results obtained from physically scanning the material with the new instrument.

Certain spectroscopic modalities reveal specific properties in a material: Infrared reveals a material’s molecular groups, while X-ray diffraction visualizes the material’s crystal structures, and Raman scattering illuminates a material’s molecular vibrations. Each of these properties is essential in gauging a material’s quality and typically requires tedious workflows on multiple expensive and distinct instruments to measure.

With SpectroGen, the researchers envision that a diversity of measurements can be made using a single, cheaper physical instrument. For instance, a manufacturing line could carry out quality control of materials by scanning them with a single infrared camera. Those infrared spectra could then be fed into SpectroGen to automatically generate the material’s X-ray spectra, without the factory having to house and operate a separate, often more expensive X-ray-scanning laboratory.

The new AI tool generates spectra in less than one minute, a thousand times faster than traditional approaches, which can take several hours to days to measure and validate.

“We think that you don’t have to do the physical measurements in all the modalities you need, but perhaps just in a single, simple, and cheap modality,” says study co-author Loza Tadesse, assistant professor of mechanical engineering at MIT. “Then you can use SpectroGen to generate the rest. And this could improve productivity, efficiency, and quality of manufacturing.”

The study’s lead author is former MIT postdoc Yanmin Zhu.

Beyond bonds

Tadesse’s interdisciplinary group at MIT pioneers technologies that advance human and planetary health, developing innovations for applications ranging from rapid disease diagnostics to sustainable agriculture.

“Diagnosing diseases, and material analysis in general, usually involves scanning samples and collecting spectra in different modalities, with different instruments that are bulky and expensive and that you might not all find in one lab,” Tadesse says. “So, we were brainstorming about how to miniaturize all this equipment and how to streamline the experimental pipeline.”

Zhu noted the increasing use of generative AI tools for discovering new materials and drug candidates, and wondered whether AI could also be harnessed to generate spectral data. In other words, could AI act as a virtual spectrometer?

A spectrometer probes a material’s properties by sending light of a certain wavelength into the material. That light causes molecular bonds in the material to vibrate in ways that scatter the light back out to the instrument, where the light is recorded as a pattern of waves, or spectra, that can then be read as a signature of the material’s structure.

For AI to generate spectral data, the conventional approach would involve training an algorithm to recognize connections between physical atoms and features in a material, and the spectra they produce. Given the complexity of molecular structures within just one material, Tadesse says such an approach can quickly become intractable.

“Doing this even for just one material is impossible,” she says. “So, we thought, is there another way to interpret spectra?”

The team found an answer with math. They realized that a spectral pattern, which is a sequence of waveforms, can be represented mathematically. For instance, a spectrum that contains a series of bell curves is known as a “Gaussian” distribution, which is associated with a certain mathematical expression, while a series of narrower waves, known as a “Lorentzian” distribution, is described by a separate, distinct expression. And as it turns out, for most materials, infrared spectra characteristically contain more Lorentzian waveforms, Raman spectra are more Gaussian, and X-ray spectra are a mix of the two.
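The two lineshapes can be written down in a few lines. This sketch (the peak positions and widths are arbitrary, chosen only for illustration) shows the key qualitative difference: a Lorentzian’s tails fall off far more slowly than a Gaussian’s:

```python
import numpy as np

def gaussian(x, center, width):
    """Bell-shaped peak: intensity falls off as exp(-x**2)."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def lorentzian(x, center, width):
    """Sharper core with heavy tails: intensity falls off as 1/x**2."""
    return width ** 2 / ((x - center) ** 2 + width ** 2)

x = np.linspace(-10, 10, 2001)
g, l = gaussian(x, 0, 1.0), lorentzian(x, 0, 1.0)

# Both peak at 1 at the center, but far from the peak the Lorentzian
# retains visible intensity while the Gaussian is essentially zero.
print(g[0], l[0])
```

Treating a spectrum as a sum of such curves is what lets a model reason about it as pure math, independent of the chemistry that produced it.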

Tadesse and Zhu worked this mathematical interpretation of spectral data into an algorithm that they then incorporated into a generative AI model.

“It’s a physics-savvy generative AI that understands what spectra are,” Tadesse says. “And the key novelty is, we interpreted spectra not as how it comes about from chemicals and bonds, but that it is actually math — curves and graphs, which an AI tool can understand and interpret.”

Data co-pilot

The team demonstrated their SpectroGen AI tool on a large, publicly available dataset of over 6,000 mineral samples. Each sample includes information on the mineral’s properties, such as its elemental composition and crystal structure. Many samples in the dataset also include spectral data in different modalities, such as X-ray, Raman, and infrared. Of these samples, the team fed several hundred to SpectroGen, in a process that trained the AI tool, a neural network, to learn correlations between a mineral’s different spectral modalities. This training enabled SpectroGen to take in spectra of a material in one modality, such as infrared, and generate what spectra in a totally different modality, such as X-ray, should look like.

Once they trained the AI tool, the researchers fed SpectroGen spectra from a mineral in the dataset that was not included in the training process. They asked the tool to generate spectra in a different modality, based on these “new” spectra. The AI-generated spectra, they found, were a close match to the mineral’s real spectra, which were originally recorded by a physical instrument. The researchers carried out similar tests with a number of other minerals and found that the AI tool quickly generated spectra with 99 percent correlation.
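The article reports the match as a 99 percent correlation without naming the exact metric; one common choice for comparing a generated spectrum against a measured one is the Pearson correlation on a shared wavelength grid. A toy sketch with synthetic spectra (the peak shapes and noise level are made up for illustration):

```python
import numpy as np

def spectral_correlation(measured, generated):
    """Pearson correlation between two spectra on the same wavelength grid."""
    m = (measured - measured.mean()) / measured.std()
    g = (generated - generated.mean()) / generated.std()
    return float(np.mean(m * g))

# synthetic "measured" spectrum with two peaks, and a noisy "generated" copy
x = np.linspace(0, 10, 500)
measured = np.exp(-((x - 4) ** 2)) + 0.5 * np.exp(-((x - 7) ** 2) / 0.5)
generated = measured + np.random.default_rng(0).normal(0, 0.01, x.size)

print(round(spectral_correlation(measured, generated), 3))
```

A correlation near 1.0 means the generated curve tracks the measured one peak for peak, which is the kind of agreement the team reports.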

“We can feed spectral data into the network and can get another totally different kind of spectral data, with very high accuracy, in less than a minute,” Zhu says.

The team says that SpectroGen can generate spectra for any type of mineral. In a manufacturing setting, for instance, mineral-based materials that are used to make semiconductors and battery technologies could first be quickly scanned by an infrared laser. The spectra from this infrared scanning could be fed into SpectroGen, which would then generate the corresponding X-ray spectra, which operators or a multiagent AI platform could check to assess the material’s quality.

“I think of it as having an agent or co-pilot, supporting researchers, technicians, pipelines and industry,” Tadesse says. “We plan to customize this for different industries’ needs.”

The team is exploring ways to adapt the AI tool for disease diagnostics, and for agricultural monitoring through an upcoming project funded by Google. Tadesse is also advancing the technology to the field through a new startup and envisions making SpectroGen available for a wide range of sectors, from pharmaceuticals to semiconductors to defense.

Helping scientists run complex data analyses without writing code

Tue, 10/14/2025 - 10:15am

As costs for diagnostic and sequencing technologies have plummeted in recent years, researchers have collected an unprecedented amount of data around disease and biology. Unfortunately, scientists hoping to go from data to new cures often require help from someone with experience in software engineering.

Now, Watershed Bio is helping scientists and bioinformaticians run experiments and get insights with a platform that lets users analyze complex datasets regardless of their computational skills. The cloud-based platform provides workflow templates and a customizable interface to help users explore and share data of all types, including whole-genome sequencing, transcriptomics, proteomics, metabolomics, high-content imaging, protein folding, and more.

“Scientists want to learn about the software and data science parts of the field, but they don’t want to become software engineers writing code just to understand their data,” co-founder and CEO Jonathan Wang ’13, SM ’15, says. “With Watershed, they don’t have to.”

Watershed is being used by large and small research teams across industry and academia to drive discovery and decision-making. When new advanced analytic techniques are described in scientific journals, they can be added to Watershed’s platform immediately as templates, making cutting-edge tools more accessible and collaborative for researchers of all backgrounds.

“The data in biology is growing exponentially, and the sequencing technologies generating this data are only getting better and cheaper,” Wang says. “Coming from MIT, this issue was right in my wheelhouse: It’s a tough technical problem. It’s also a meaningful problem because these people are working to treat diseases. They know all this data has value, but they struggle to use it. We want to help them unlock more insights faster.”

No code discovery

Wang expected to major in biology at MIT, but he quickly got excited by computer science and the possibility of building solutions that scale to millions of people. He ended up earning both his bachelor’s and master’s degrees from the Department of Electrical Engineering and Computer Science (EECS). Wang also interned in a biology lab at MIT, where he was surprised by how slow and labor-intensive experiments were.

“I saw the difference between biology and computer science, where you had these dynamic environments [in computer science] that let you get feedback immediately,” Wang says. “Even as a single person writing code, you have so much at your fingertips to play with.”

While working on machine learning and high-performance computing at MIT, Wang also co-founded a high-frequency trading firm with some classmates. His team hired researchers with PhD backgrounds in areas like math and physics to develop new trading strategies, but they quickly saw a bottleneck in their process.

“Things were moving slowly because the researchers were used to building prototypes,” Wang says. “These were small approximations of models they could run locally on their machines. To put those approaches into production, they needed engineers to make them work in a high-throughput way on a computing cluster. But the engineers didn’t understand the nature of the research, so there was a lot of back and forth. It meant ideas you thought could have been implemented in a day took weeks.”

To solve the problem, Wang’s team developed a software layer that made building production-ready models as easy as building prototypes on a laptop. Then, a few years after graduating from MIT, Wang noticed technologies like DNA sequencing had become cheap and ubiquitous.

“The bottleneck wasn’t sequencing anymore, so people said, ‘Let’s sequence everything,’” Wang recalls. “The limiting factor became computation. People didn’t know what to do with all the data being generated. Biologists were waiting for data scientists and bioinformaticians to help them, but those people didn’t always understand the biology at a deep enough level.”

The situation looked familiar to Wang.

“It was exactly like what we saw in finance, where researchers were trying to work with engineers, but the engineers never fully understood, and you had all this inefficiency with people waiting on the engineers,” Wang says. “Meanwhile, I learned the biologists are hungry to run these experiments, but there is such a big gap they felt they had to become a software engineer or just focus on the science.”

Wang officially founded Watershed in 2019 with physician Mark Kalinich ’13, a former classmate at MIT who is no longer involved in day-to-day operations of the company.

Wang has since heard from biotech and pharmaceutical executives about the growing complexity of biology research. Unlocking new insights increasingly involves analyzing data from entire genomes, population studies, RNA sequencing, mass spectrometry, and more. Developing personalized treatments or selecting patient populations for a clinical study can also require huge datasets, and there are new ways to analyze data being published in scientific journals all the time.

Today, companies can run large-scale analyses on Watershed without having to set up their own servers or cloud computing accounts. Researchers can use ready-made templates that work with all the most common data types to accelerate their work. Popular AI-based tools like AlphaFold and Geneformer are also available, and Watershed’s platform makes sharing workflows and digging deeper into results easy.

“The platform hits a sweet spot of usability and customizability for people of all backgrounds,” Wang says. “No science is ever truly the same. I avoid the word product because that implies you deploy something and then you just run it at scale forever. Research isn’t like that. Research is about coming up with an idea, testing it, and using the outcome to come up with another idea. The faster you can design, implement, and execute experiments, the faster you can move on to the next one.”

Accelerating biology

Wang believes Watershed is helping biologists keep up with the latest advances in biology and accelerating scientific discovery in the process.

“If you can help scientists unlock insights not a little bit faster, but 10 or 20 times faster, it can really make a difference,” Wang says.

Watershed is being used by researchers in academia and in companies of all sizes. Executives at biotech and pharmaceutical companies also use Watershed to make decisions about new experiments and drug candidates.

“We’ve seen success in all those areas, and the common thread is people understanding research but not being an expert in computer science or software engineering,” Wang says. “It’s exciting to see this industry develop. For me, it’s great being from MIT and now to be back in Kendall Square where Watershed is based. This is where so much of the cutting-edge progress is happening. We’re trying to do our part to enable the future of biology.”

New MIT initiative seeks to transform rare brain disorders research

Tue, 10/14/2025 - 9:00am

More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.

Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT’s McGovern Institute for Brain Research, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.

“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”

Building new coalitions

Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented, since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.

Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”

Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.

RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and drive a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.

MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.

These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”

“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long has the rare brain disorders community been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions, and to do so at a moment when it’s needed more than ever.”

Geologists discover the first evidence of 4.5-billion-year-old “proto Earth”

Tue, 10/14/2025 - 5:00am

Scientists at MIT and elsewhere have discovered extremely rare remnants of “proto Earth,” which formed about 4.5 billion years ago, before a colossal collision irreversibly altered the primitive planet’s composition and produced the Earth as we know today. Their findings, reported today in the journal Nature Geoscience, will help scientists piece together the primordial starting ingredients that forged the early Earth and the rest of the solar system.

Billions of years ago, the early solar system was a swirling disk of gas and dust that eventually clumped and accumulated to form the earliest meteorites, which in turn merged to form the proto Earth and its neighboring planets.

In this earliest phase, Earth was likely rocky and bubbling with lava. Then, less than 100 million years later, a Mars-sized body slammed into the infant planet in a singular “giant impact” event that completely scrambled and melted the planet’s interior, effectively resetting its chemistry. Whatever original material the proto Earth was made from was thought to have been altogether transformed.

But the MIT team’s findings suggest otherwise. The researchers have identified a chemical signature in ancient rocks that is distinct from most other materials found in the Earth today. The signature is in the form of a subtle imbalance in potassium isotopes discovered in samples of very old and very deep rocks. The team determined that the potassium imbalance could not have been produced by any previous large impacts or geological processes occurring in the Earth presently.

The most likely explanation for the samples’ chemical composition is that they must be leftover material from the proto Earth that somehow remained unchanged, even as most of the early planet was impacted and transformed.

“This is maybe the first direct evidence that we’ve preserved the proto Earth materials,” says Nicole Nie, the Paul M. Cook Career Development Assistant Professor of Earth and Planetary Sciences at MIT. “We see a piece of the very ancient Earth, even before the giant impact. This is amazing because we would expect this very early signature to be slowly erased through Earth’s evolution.”

The study’s other authors include Da Wang of Chengdu University of Technology in China, Steven Shirey and Richard Carlson of the Carnegie Institution for Science in Washington, Bradley Peters of ETH Zürich in Switzerland, and James Day of Scripps Institution of Oceanography in California.

A curious anomaly

In 2023, Nie and her colleagues analyzed many of the major meteorites that have been collected from sites around the world and carefully studied. Before impacting the Earth, these meteorites likely formed at various times and locations throughout the solar system, and therefore represent the solar system’s changing conditions over time. When the researchers compared the chemical compositions of these meteorite samples to Earth, they identified among them a “potassium isotopic anomaly.”

Isotopes are slightly different versions of an element that have the same number of protons but a different number of neutrons. The element potassium can exist in one of three naturally occurring isotopes, with mass numbers (protons plus neutrons) of 39, 40, and 41, respectively. Wherever potassium has been found on Earth, it exists in a characteristic combination of isotopes, with potassium-39 and potassium-41 being overwhelmingly dominant. Potassium-40 is present, but at a vanishingly small percentage in comparison.
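For scale, potassium’s standard terrestrial abundances, together with the per-mil (parts-per-thousand) notation geochemists commonly use for isotopic imbalances, can be sketched as follows. The sample deficit below is a made-up number for illustration, not the study’s measurement:

```python
# Standard terrestrial potassium isotope abundances (atom percent).
K_ABUNDANCE = {"K-39": 93.2581, "K-40": 0.0117, "K-41": 6.7302}

def per_mil_anomaly(sample_ratio, standard_ratio):
    """Deviation of a sample isotope ratio from a standard,
    expressed in parts per thousand (per mil)."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# reference 40K/39K ratio for modern terrestrial potassium
standard = K_ABUNDANCE["K-40"] / K_ABUNDANCE["K-39"]

# hypothetical sample carrying a slight potassium-40 deficit
sample = standard * 0.999
print(round(per_mil_anomaly(sample, standard), 3))  # -1.0
```

Because potassium-40 is already so scarce, resolving a deficit of this size demands the kind of high-precision mass spectrometry described below.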

Nie and her colleagues discovered that the meteorites they studied showed balances of potassium isotopes different from those of most materials on Earth. This potassium anomaly suggested that any material exhibiting a similar anomaly likely predates Earth’s present composition. In other words, any potassium imbalance would be a strong sign of material from the proto Earth, before the giant impact reset the planet’s chemical composition.

“In that work, we found that different meteorites have different potassium isotopic signatures, and that means potassium can be used as a tracer of Earth’s building blocks,” Nie explains.

“Built different”

In the current study, the team looked for signs of potassium anomalies not in meteorites, but within the Earth. Their samples include rocks, in powder form, from Greenland and Canada, where some of the oldest preserved rocks are found. They also analyzed lava deposits collected from Hawaii, where volcanoes have brought up some of the Earth’s earliest, deepest materials from the mantle (the planet’s thickest layer of rock that separates the crust from the core).

“If this potassium signature is preserved, we would want to look for it in deep time and deep Earth,” Nie says.

The team first dissolved the various powder samples in acid, then carefully isolated any potassium from the rest of the sample and used a special mass spectrometer to measure the ratio of each of potassium’s three isotopes. Remarkably, they identified in the samples an isotopic signature that was different from what’s been found in most materials on Earth.

Specifically, they identified a deficit in the potassium-40 isotope. In most materials on Earth, this isotope is already an insignificant fraction compared to potassium’s other two isotopes. But the researchers were able to discern that their samples contained an even smaller percentage of potassium-40. Detecting this tiny deficit is like spotting a single grain of brown sand in a bucket full of yellow sand.

The team found that, indeed, the samples exhibited the potassium-40 deficit, showing that the materials “were built different,” says Nie, compared to most of what we see on Earth today.

But could the samples be rare remnants of the proto Earth? To answer this, the researchers assumed that this might be the case. They reasoned that if the proto Earth were originally made from such potassium-40-deficient materials, then most of this material would have undergone chemical changes — from the giant impact and subsequent, smaller meteorite impacts — that ultimately resulted in the materials with more potassium-40 that we see today. 

The team used compositional data from every known meteorite and carried out simulations of how the samples’ potassium-40 deficit would change following impacts by these meteorites and by the giant impact. They also simulated geological processes that the Earth experienced over time, such as the heating and mixing of the mantle. In the end, their simulations produced a composition with a slightly higher fraction of potassium-40 compared to the samples from Canada, Greenland, and Hawaii. More importantly, the simulated compositions matched those of most modern-day materials.
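A drastically simplified version of such a mixing calculation tracks how each endmember’s isotope ratio contributes in proportion to its mass fraction and its potassium concentration. All numbers below are hypothetical, chosen only to show the mechanics, not the study’s values:

```python
import numpy as np

def mix_ratio(ratios, k_concentrations, fractions):
    """Isotope ratio of a mixture: each endmember contributes in proportion
    to its mass fraction times its potassium concentration."""
    fractions = np.asarray(fractions, dtype=float)
    k = np.asarray(k_concentrations, dtype=float)
    r = np.asarray(ratios, dtype=float)
    weights = fractions * k
    return float(np.sum(weights * r) / np.sum(weights))

# hypothetical endmembers: a proto-Earth remnant with a potassium-40 deficit,
# and later-added meteoritic material with the modern ratio
proto_ratio, modern_ratio = 0.999, 1.000   # normalized 40K/39K
mixed = mix_ratio([proto_ratio, modern_ratio],
                  k_concentrations=[1.0, 1.0],
                  fractions=[0.2, 0.8])
print(round(mixed, 4))  # 0.9998
```

Sweeping the fractions and endmember compositions in a model like this is one way to test whether repeated impacts could nudge a potassium-40-deficient starting material toward the composition seen in most modern rocks.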

The work suggests that materials with a potassium-40 deficit are likely leftover original material from the proto Earth.

Curiously, the samples’ signature isn’t a precise match with any other meteorite in geologists’ collections. While the meteorites in the team’s previous work showed potassium anomalies, they aren’t exactly the deficit seen in the proto Earth samples. This means that whatever meteorites and materials originally formed the proto Earth have yet to be discovered.

“Scientists have been trying to understand Earth’s original chemical composition by combining the compositions of different groups of meteorites,” Nie says. “But our study shows that the current meteorite inventory is not complete, and there is much more to learn about where our planet came from.”

This work was supported, in part, by NASA and MIT.

A new system can dial expression of synthetic genes up or down

Mon, 10/13/2025 - 5:00am

For decades, synthetic biologists have been developing gene circuits that can be transferred into cells for applications such as reprogramming a stem cell into a neuron or generating a protein that could help treat a disease such as fragile X syndrome.

These gene circuits are typically delivered into cells by carriers such as nonpathogenic viruses. However, it has been difficult to ensure that these cells end up producing the correct amount of the protein encoded by the synthetic gene.

To overcome that obstacle, MIT engineers have designed a new control mechanism that allows them to establish a desired protein level, or set point, for any gene circuit. This approach also allows them to edit the set point after the circuit is delivered.

“This is a really stable and multifunctional tool. The tool is very modular, so there are a lot of transgenes you could control with this system,” says Katie Galloway, an assistant professor of chemical engineering at MIT and the senior author of the new study.

Using this strategy, the researchers showed that they could induce cells to generate consistent levels of target proteins. In one application that they demonstrated, they converted mouse embryonic fibroblasts to motor neurons by delivering high levels of a gene that promotes that conversion.

MIT graduate student Sneha Kabaria is the lead author of the paper, which appears today in Nature Biotechnology. Other authors include Yunbeen Bae ’24; MIT graduate students Mary Ehmann, Brittany Lende-Dorn, Emma Peterman, and Kasey Love; Adam Beitz PhD ’25; and former MIT postdoc Deon Ploessl.

Dialing up gene expression

Synthetic gene circuits are engineered to include not only the gene of interest, but also a promoter region. At this site, transcription factors and other regulators can bind, turning on the expression of the synthetic gene.

However, it’s not always possible to get all of the cells in a population to express the desired gene at a uniform level. One reason is that some cells may take up just one copy of the circuit, while others receive many more. Additionally, cells have natural variation in how much protein they produce.

That has made reprogramming cells challenging because it’s difficult to ensure that every cell in a population of skin cells, for example, will produce enough of the necessary transcription factors to successfully transition into a new cell identity, such as a neuron or induced pluripotent stem cell.

In the new paper, the researchers devised a way to control gene expression levels by changing the distance between the synthetic gene and its promoter. They found that when there was a longer DNA “spacer” between the promoter region and the gene, the gene would be expressed at a lower level. That extra distance, they showed, makes it less likely that transcription factors bound to the promoter will effectively turn on gene transcription.

Then, to create set points that could be edited, the researchers incorporated sites within the spacer that can be excised by an enzyme called Cre recombinase. As parts of the spacer are cut out, it helps bring the transcription factors closer to the gene of interest, which turns up gene expression.

The researchers showed they could create spacers with multiple excision points, each targeted by a different recombinase. This allowed them to create a system called DIAL, which they could use to establish “high,” “med,” “low,” and “off” set points for gene expression.

After the DNA segment carrying the gene and its promoter is delivered into cells, recombinases can be added to the cells, allowing the set point to be edited at any time.
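The logic of those editable set points can be sketched as follows (a hypothetical toy model, not the published construct: the segment lengths, thresholds, and the mapping to levels are invented, and the “off” state is omitted for simplicity): each recombinase excises its own spacer segment, and a shorter remaining spacer corresponds to a higher expression set point.

```python
# Hypothetical sketch of the DIAL idea (names and numbers invented):
# expression is inversely tied to the spacer length between the promoter
# and the gene; recombinases excise spacer segments, raising the set
# point. Illustrative only, not the published design.

SPACER_SEGMENTS = {"cre": 400, "flp": 300}  # bp removable by each recombinase
BASE_SPACER = 200                            # bp that always remains

def spacer_length(applied_recombinases):
    """Spacer length after excising the segments targeted by the
    recombinases that have been applied."""
    remaining = sum(bp for name, bp in SPACER_SEGMENTS.items()
                    if name not in applied_recombinases)
    return BASE_SPACER + remaining

def set_point(applied_recombinases):
    """Map the remaining spacer length onto a coarse expression level."""
    length = spacer_length(applied_recombinases)
    if length >= 900:
        return "low"
    if length >= 500:
        return "med"
    return "high"

print(set_point(set()))           # no excisions: longest spacer, lowest level
print(set_point({"cre"}))         # one excision: intermediate level
print(set_point({"cre", "flp"}))  # all excised: shortest spacer, highest level
```

Because excision is irreversible, each recombinase addition moves the set point one way, which matches the article's description of editing the set point after delivery by adding recombinases.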

The researchers demonstrated their system in mouse and human cells by delivering the gene for different fluorescent proteins and functional genes, and showed that they could get uniform expression across a population of cells at the target level.

“We achieved uniform and stable control. This is very exciting for us because lack of uniform, stable control has been one of the things that's been limiting our ability to build reliable systems in synthetic biology. When there are too many variables that affect your system, and then you add in normal biological variation, it’s very hard to build stable systems,” Galloway says.

Reprogramming cells

To demonstrate potential applications of the DIAL system, the researchers then used it to deliver different levels of the gene HRasG12V to mouse embryonic fibroblasts. This HRas variant has previously been shown to increase the rate of conversion of fibroblasts to neurons. The MIT team found that in cells that received a higher dose of the gene, a larger percentage of them were able to successfully transform into neurons.

Using this system, researchers now hope to perform more systematic studies of different transcription factors that can induce cells to transition to different cell types. Such studies could reveal how different levels of those factors affect the success rate, and whether changing transcription factor levels might alter the cell type that is generated.

In ongoing work, the researchers have shown that DIAL can be combined with a system they previously developed, known as ComMAND, that uses a feedforward loop to help prevent cells from overexpressing a therapeutic gene.

Using these systems together, it could be possible to tailor gene therapies to produce specific, consistent protein levels in the target cells of individual patients, the researchers say.

“This is something we’re excited about because both DIAL and ComMAND are highly modular, so you could not only have a well-controlled gene therapy that’s somewhat general for a population, but you could, in theory, tailor it for any given person or any given cell type,” Galloway says.

The research was funded, in part, by the National Institute of General Medical Sciences, the National Science Foundation, and the Institute for Collaborative Biotechnologies.

MIT releases financials and endowment figures for 2025

Fri, 10/10/2025 - 4:00pm

The Massachusetts Institute of Technology Investment Management Company (MITIMCo) announced today that MIT’s unitized pool of endowment and other MIT funds generated an investment return of 14.8 percent during the fiscal year ending June 30, 2025, as measured using valuations received within one month of fiscal year end. At the end of the fiscal year, MIT’s endowment funds totaled $27.4 billion, excluding pledges. Over the 10 years ending June 30, 2025, MIT generated an annualized return of 10.7 percent.
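As a quick check on what the reported 10-year figure implies, the compounding arithmetic is straightforward: a 10.7 percent annualized return over 10 years corresponds to a cumulative growth factor of (1 + 0.107) raised to the tenth power, roughly a 2.8x increase before accounting for gifts and spending.

```python
# Compounding implied by the reported figures: 10.7 percent annualized
# over 10 years gives the cumulative growth factor below. This is just
# the standard geometric-return calculation, not MITIMCo's methodology.
annualized = 0.107
years = 10
growth_factor = (1 + annualized) ** years
print(round(growth_factor, 2))  # ≈ 2.76
```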

The endowment is the bedrock of MIT’s finances, made possible by gifts from alumni and friends for more than a century. The use of the endowment is governed by a state law that requires MIT to maintain each endowed gift as a permanent fund, preserve its purchasing power, and spend it as directed by its original donor. Most of the endowment’s funds are restricted and must be used for a specific purpose. MIT uses the bulk of the income these endowed gifts generate to support financial aid, research, and education.

The endowment supports 50 percent of undergraduate tuition, helping to enable the Institute’s need-blind undergraduate admissions policy, which ensures that an MIT education is accessible to all qualified candidates regardless of financial resources. MIT works closely with all families of undergraduates who qualify for financial aid to develop an individual affordability plan tailored to their financial circumstances. In 2024-25, the average need-based MIT undergraduate scholarship was $62,127. Fifty-seven percent of MIT undergraduates received need-based financial aid, and 39 percent of MIT undergraduate students received scholarship funding from MIT and other sources sufficient to cover the total cost of tuition.

Effective in fiscal 2026, MIT enhanced undergraduate financial aid, ensuring that all families with incomes below $200,000 and typical assets have tuition fully covered by scholarships, and that families with incomes below $100,000 and typical assets pay nothing at all for their students’ MIT education. Eighty-eight percent of seniors who graduated in academic year 2025 graduated with no debt.

MITIMCo is a unit of MIT, created to manage and oversee the investment of the Institute’s endowment, retirement, and operating funds.

MIT’s Report of the Treasurer for fiscal year 2025, which details the Institute’s annual financial performance, was made available publicly today.

Ray Kurzweil ’70 reinforces his optimism in tech progress

Fri, 10/10/2025 - 12:00am

Innovator, futurist, and author Ray Kurzweil ’70 emphasized his optimism about artificial intelligence, and technological progress generally, in a lecture on Wednesday while accepting MIT’s Robert A. Muh Alumni Award from the School of Humanities, Arts, and Social Sciences (SHASS).

Kurzweil offered his signature high-profile forecasts about how AI and computing will entirely blend with human functionality, and proposed that AI will lead to monumental gains in longevity, medicine, and other realms of life.

“People do not appreciate that the rate of progress is accelerating,” Kurzweil said, forecasting “incredible breakthroughs” over the next two decades.

Kurzweil delivered his lecture, titled “Reinventing Intelligence,” in the Thomas Tull Concert Hall of the Edward and Joyce Linde Music Building, which opened earlier in 2025 on the MIT campus.

The Muh Award was founded and endowed by Robert A. Muh ’59 and his wife Berit, and is one of the leading alumni honors granted by SHASS and MIT. Muh, a life member emeritus of the MIT Corporation, established the award, which is granted every two years for “extraordinary contributions” by alumni in the humanities, arts, and social sciences.

Robert and Berit Muh were both present at the lecture, along with their daughter Carrie Muh ’96, ’97, SM ’97.

Agustín Rayo, dean of SHASS, offered introductory remarks, calling Kurzweil “one of the most prolific thinkers of our time.” Rayo added that Kurzweil “has built his life and career on the belief that ideas change the world, and change it for the better.”

Kurzweil has been an innovator in language recognition technologies, developing advances and founding companies that have served people who are blind or low-vision, and helped in music creation. He is also a best-selling author who has heralded advances in computing capabilities, and even the merging of humans and machines.

The initial segment of Kurzweil’s lecture was autobiographical in focus, reflecting on his family and early years. The families of both of Kurzweil’s parents fled the Nazis in Europe, seeking refuge in the U.S., with the belief that people could create a brighter future for themselves.

“My parents taught me the power of ideas can really change the world,” Kurzweil said.

Showing an early interest in how things worked, Kurzweil had decided to become an inventor by about the age of 7, he recalled. He also described his mother as being tremendously encouraging to him as a child. The two would take walks together, and the young Kurzweil would talk about all the things he imagined inventing.

“I would tell her my ideas and no matter how fantastical they were, she believed them,” he said. “Now other parents might have simply chuckled … but she actually believed my ideas, and that actually gave me my confidence, and I think confidence is important in succeeding.”

He became interested in computing by the early 1960s and majored in both computer science and literature as an MIT undergraduate.

Kurzweil has a long-running association with MIT extending far beyond his undergraduate studies. He served as a member of the MIT Corporation from 2005 to 2012 and was the 2001 recipient of the $500,000 Lemelson-MIT Prize, an award for innovation, for his development of reading technology.

“MIT has played a major role in my personal and professional life over the years,” Kurzweil said, calling himself “truly honored to receive this award.” Addressing Muh, he added: “Your longstanding commitment to our alma mater is inspiring.”

After graduating from MIT, Kurzweil launched a successful career developing innovative computing products, including one that recognized text across all fonts and could produce an audio reading. He also developed leading-edge music synthesizers, among many other advances.

In a corresponding part of his career, Kurzweil has become an energetic author, whose best-known books include “The Age of Intelligent Machines” (1990), “The Age of Spiritual Machines” (1999), “The Singularity Is Near” (2005), and “The Singularity Is Nearer” (2024), among many others.

Kurzweil was recently named chief AI officer of Beyond Imagination, a robotics firm he co-founded; he has also held a position at Google in recent years, working on natural language technologies.

In his remarks, Kurzweil underscored his view that, as exemplified and enabled by the growth of computing power over time, technological innovation moves at an exponential pace.

“People don’t really think about exponential growth; they think about linear growth,” Kurzweil said.

This concept, he said, makes him confident that a string of innovations will continue at remarkable speed.

“One of the bigger transformations we’re going to see from AI in the near term is health and medicine,” Kurzweil said, forecasting that human medical trials will be replaced by simulated “digital trials.”

Kurzweil also believes computing and AI advances can lead to so many medical advances that they will soon produce a drastic improvement in human longevity.

“These incredible breakthroughs are going to lead to what we’ll call longevity escape velocity,” Kurzweil said. “By roughly 2032 when you live through a year, you’ll get back an entire year from scientific progress, and beyond that point you’ll get back more than a year for every year you live, so you’ll be going back into time as far as your health is concerned.” He did offer that these advances will “start” with people who are the most diligent about their health.

Kurzweil also outlined one of his best-known forecasts, that AI and people will be combined. “As we move forward, the lines between humans and technology will blur, until we are … one and the same,” Kurzweil said. “This is how we learn to merge with AI. In the 2030s, robots the size of molecules will go into our brains, noninvasively, through the capillaries, and will connect our brains directly to the cloud. Think of it like having a phone, but in your brain.”

“By 2045, once we have fully merged with AI, our intelligence will no longer be constrained … it will expand a millionfold,” he said. “This is what we call the singularity.”

To be sure, Kurzweil acknowledged, “Technology has always been a double-edged sword,” given that a drone can deliver either medical supplies or weaponry. “Threats of AI are real, must be taken seriously, [and] I think we are doing that,” he said. In any case, he added, we have “a moral imperative to realize the promise of new technologies while controlling the peril.” He concluded: “We are not doomed to fail to control any of these risks.” 

Gene-Wei Li named associate head of the Department of Biology

Thu, 10/09/2025 - 5:00pm

Associate Professor Gene-Wei Li has accepted the position of associate head of the MIT Department of Biology, starting in the 2025-26 academic year. 

Li, who has been a member of the department since 2015, brings a history of departmental leadership, service, and research and teaching excellence to his new role. He has received many awards, including a Sloan Research Fellowship (2016), an NSF CAREER Award (2019), Pew and Searle scholarships, and MIT’s Committed to Caring Award (2020). In 2024, he was appointed as a Howard Hughes Medical Institute (HHMI) Investigator.

“I am grateful to Gene-Wei for joining the leadership team,” says department head Amy E. Keating, the Jay A. Stein (1968) Professor of Biology and professor of biological engineering. “Gene will be a key leader in our educational initiatives, both digital and residential, and will be a critical part of keeping our department strong and forward-looking.” 

A great environment to do science

Li says he was inspired to take on the role in part because of the way MIT Biology facilitates career development during every stage — from undergraduate and graduate students to postdocs and junior faculty members, as he was when he started in the department as an assistant professor just 10 years ago. 

“I think we all benefit a lot from our environment, and I think this is a great environment to do science and educate people, and to create a new generation of scientists,” he says. “I want us to keep doing well, and I’m glad to have the opportunity to contribute to this effort.” 

As part of his portfolio as associate department head, Li will continue in the role of scientific director of the Koch Biology Building, Building 68. In the last year, the previous scientific director, Stephen Bell, Uncas and Helen Whitaker Professor of Biology and HHMI Investigator, has continued to provide support and ensured a steady ramp-up, transitioning Li into his new duties. The building, which opened its doors in 1994, is in need of a slate of updates and repairs. 

Although Li will be managing more administrative duties, he has provided a stable foundation for his lab to continue its interdisciplinary work on the quantitative biology of gene expression, parsing the mechanisms by which cells control the levels of their proteins and how this enables cells to perform their functions. His recent work includes developing a method that leverages the AI tool AlphaFold to predict whether protein fragments can recapitulate the native interactions of their full-length counterparts.  

“I’m still very heavily involved, and we have a lab environment where everyone helps each other. It’s a team, and so that helps elevate everyone,” he says. “It’s the same with the whole building: nobody is working by themselves, so the science and administrative parts come together really nicely.” 

Teaching for the future

Li is considering how the department can continue to be a global leader in biological sciences while navigating the uncertainty surrounding academia and funding, as well as the likelihood of reduced staff support and tightening budgets.

“The question is: How do you maintain excellence?” Li says. “That involves recruiting great people and giving them the resources that they need, and that’s going to be a priority within the limitations that we have to work with.” 

Li will also be serving as faculty advisor for the MIT Biology Teaching and Learning Group, headed by Mary Ellen Wiltrout, and will serve on the Department of Biology Digital Learning Committee and the new Open Learning Biology Advisory Committee. In the latter role, Li will represent the department and work with new faculty member and HHMI Investigator Ron Vale on Institute-level online learning initiatives. Li will also chair the Biology Academic Planning Committee, which will help develop a longer-term outlook on faculty teaching assignments and course offerings. 

Li is looking forward to hearing from faculty and students about the way the Institute teaches, and how it could be improved, both for the students on campus and for the online learners from across the world. 

“There are a lot of things that are changing; what are the core fundamentals that the students need to know, what should we teach them, and how should we teach them?” 

Although the commitment to teaching remains unchanged, there may be big transitions on the horizon. With two young children in school, Li is all too aware that the way that students learn today is very different from what he grew up with, and also very different from how students were learning just five or 10 years ago — writing essays on a computer, researching online, using AI tools, and absorbing information from media like short-form YouTube videos. 

“There’s a lot of appeal to a shorter format, but it’s very different from the lecture-based teaching style that has worked for a long time,” Li says. “I think a challenge we should and will face is figuring out the best way to communicate the core fundamentals, and adapting our teaching styles to the next generation of students.” 

Ultimately, Li is excited about balancing his research goals along with joining the department’s leadership team, and knows he can look to his fellow researchers in Building 68 and beyond for support.

“I’m privileged to be working with a great group of colleagues who are all invested in these efforts,” Li says. “Different people may have different ways of doing things, but we all share the same mission.” 
