MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Lincoln Laboratory laser communications terminal launches on historic Artemis II moon mission
In 1969, Apollo 11 astronaut Neil Armstrong stepped onto the moon’s surface — a momentous engineering and science feat marked by his iconic words: "That’s one small step for man, one giant leap for mankind." Now, NASA is making history again.
With the successful launch of NASA’s Artemis II mission yesterday, four astronauts are set to become the first humans to travel to the moon in more than 50 years. In 2022, the uncrewed Artemis I mission demonstrated that NASA’s new Orion spacecraft could travel farther into space than ever before and return safely to Earth. Building on that success, the 10-day Artemis II mission will pave the way for future Artemis missions, which aim to land astronauts on the moon to prepare for a lasting lunar presence, and eventually human missions to Mars.
As it orbits the moon, the Orion spacecraft will carry an optical (laser) communications system developed at MIT Lincoln Laboratory in collaboration with NASA Goddard Space Flight Center. Called the Orion Artemis II Optical Communications System (O2O), the system is capable of higher-bandwidth data transmissions from space compared to traditional radio-frequency (RF) systems. During the Artemis II mission, O2O will use laser beams to send high-resolution video and images of the lunar surface down to Earth.
"Space-based communications has always been a big challenge," says lead systems engineer Farzana Khatri, a senior staff member in the laboratory’s Optical and Quantum Communications Group. "RF communications have served their purpose well. However, the RF spectrum is highly congested now, and RF does not scale well to longer distances across space. Laser communication [lasercom] is a solution that could solve this problem, and the laboratory is an expert in the field, which was really pioneered here."
Artemis II is historic not only for renewing human exploration beyond Earth, but also for being the first crewed lunar flight to demonstrate lasercom technologies, which are poised to revolutionize how spacecraft communicate. Lincoln Laboratory has been developing such technologies for more than two decades, and NASA has been infusing them into its missions to meet the growing demands of long-distance and data-intensive space exploration.
"The Orion spacecraft collects a huge amount of data during the first day of a mission, and typically these data sit on the spacecraft until it splashes down and can take months to be offloaded," Khatri says. "With an optical link running at the highest rate, we should be able to get all the data down to Earth within a few hours for immediate analysis. Furthermore, astronauts will be able to communicate in real-time over the optical link to stay in touch with Earth during their journey, inspiring the public and the next generation of deep-space explorers, much like the Apollo 11 astronauts who first landed on the moon 57 years ago."
At the heart of O2O is the laboratory-developed Modular, Agile, Scalable Optical Terminal (MAScOT). About the size of a house cat, MAScOT features a 4-inch telescope mounted on a two-axis pivoted support (gimbal) with fixed backend optics. The gimbal precisely points the telescope toward the desired data recipient or sender and tracks the laser beam that carries communications signals in both directions. Underneath the gimbal, in a separate assembly, are the backend optics, which contain light-focusing lenses, tracking sensors, fast-steering mirrors, and other components to finely point the laser beam.
MAScOT made its debut in space as part of the laboratory’s Integrated Laser Communications Relay Demonstration (LCRD) LEO User Modem and Amplifier Terminal (ILLUMA-T), which launched to the International Space Station in November 2023. Over the following six months, the laboratory team performed experiments to test and characterize the system's basic functionality, performance, and utility for human crews and user applications. Initially, the team checked whether the ILLUMA-T-to-LCRD optical link was operating at the intended data rates in both directions: 622 Mbps down and 51 Mbps up. In fact, even higher data rates were achieved: 1.2 Gbps down and 155 Mbps up. MAScOT’s lasercom terminal architecture, which was recognized with a 2025 R&D 100 Award, is now being used for Artemis II and will support future space missions.
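The practical payoff of those data rates is easy to see with a back-of-the-envelope transfer-time calculation. The 1 TB payload and the 25 Mbps radio-frequency rate below are hypothetical illustrations, not figures from the mission:

```python
def transfer_time_hours(data_gigabytes: float, rate_mbps: float) -> float:
    """Time to downlink a payload at a given line rate (ignoring overhead)."""
    bits = data_gigabytes * 8e9          # gigabytes -> bits
    seconds = bits / (rate_mbps * 1e6)   # bits / (bits per second)
    return seconds / 3600

# Hypothetical 1 TB of mission data:
# at the demonstrated 1.2 Gbps optical downlink vs. an assumed 25 Mbps RF link
optical_hours = transfer_time_hours(1000, 1200)   # roughly 1.9 hours
rf_hours = transfer_time_hours(1000, 25)          # roughly 89 hours
```

The two-order-of-magnitude gap in rate is what turns a multi-day (or post-splashdown) offload into the "few hours" Khatri describes.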
"Our success with ILLUMA-T laid the foundation for streaming HD [high-definition] video to and from the moon," says co-principal investigator Jade Wang, an assistant leader of the Optical and Quantum Communications Group. "You can imagine the Artemis astronauts using videoconferencing to connect with physicians, coordinate mission activities, and livestream their lunar trips."
A dedicated operations team from Lincoln Laboratory is following the 10-day Artemis II mission from ground stations in Houston, Texas, and White Sands, New Mexico, and even as far as an experimental ground station in Australia, which allows for a better view of the spacecraft from the Southern Hemisphere. Leading up to the launch, the operations team had been making monthly trips to the Houston and White Sands ground stations to perform maintenance and simulations of various stages of the Artemis mission — from prelaunch to launch to the journey to the moon and back to the splashdown at the end of the mission.
"Doing these monthly simulations is important so we all stay fresh and engaged, especially when there is a launch delay," says Khatri, who adds that team members have had the opportunity to meet and speak with the four astronauts several times during these trips.
Lessons learned throughout the Artemis II mission will pave the way for humans to return to the lunar surface and beyond, eventually to Mars. Through the Artemis program, NASA will travel farther into space and explore more of the moon while creating an enduring presence in deep space and a legacy for future generations.
O2O is funded by the Space Communication and Navigation (SCaN) program at NASA Headquarters in Washington. O2O was developed by a team of engineers from NASA’s Goddard Space Flight Center and Lincoln Laboratory. This partnership has led to multiple lasercom missions, such as the 2013 Lunar Laser Communication Demonstration (LLCD), the 2021 LCRD, the 2022 TeraByte Infrared Delivery (TBIRD), and the 2023 ILLUMA-T.
MIT researchers measure traffic emissions, to the block, in real time
In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.
The new method produces much more detailed data than some other common approaches, which use intermittent samples of vehicle emissions. The researchers say it is also more practical and scales up better than some studies that have aimed for very granular emissions data from a small number of automobiles at once. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.
“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”
Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”
The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.
“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.
The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.
The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.
Manhattan measurements
To conduct the study, the researchers used images from 331 cameras already in use at Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could place 93 percent of vehicles in the correct category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major reason for stop-and-go driving patterns, which strongly affect urban emissions but are often omitted in conventional inventories.
The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.
“We just need to input all emission-related information based on existing urban data sources, and we can estimate the traffic emissions,” Hu says.
The researchers also evaluated how emissions would change under different scenarios in which traffic patterns or vehicle types shift.
For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hours were spread over a longer period, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages, finding that the rougher estimates deviated from the fine-grained results by anywhere from −49 percent to +25 percent. That underscores how seemingly small simplifications can introduce large errors into emissions estimates.
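The bottom-up accounting behind such estimates can be sketched in a few lines. The emission factors, segment length, and vehicle counts below are invented for illustration and are not the study's calibrated values:

```python
# Per-class emission factors in grams of CO2 per vehicle-kilometer
# (illustrative values, not the study's calibrated rates).
EMISSION_G_PER_KM = {"car": 192.0, "bus": 822.0, "truck": 680.0}

def segment_emissions(counts_by_class: dict, length_km: float) -> float:
    """Total grams of CO2 on one road segment for one time slice."""
    return sum(n * EMISSION_G_PER_KM[cls] * length_km
               for cls, n in counts_by_class.items())

# Observed hourly counts on a hypothetical 0.5 km block.
baseline = {"car": 400, "bus": 6, "truck": 30}
base_g = segment_emissions(baseline, 0.5)

# Scenario: 20% of car trips shift to buses (assuming 40 riders per bus).
shifted_cars = int(400 * 0.2)
scenario = {"car": 400 - shifted_cars,
            "bus": 6 + shifted_cars // 40,
            "truck": 30}
scen_g = segment_emissions(scenario, 0.5)
change_pct = 100 * (scen_g - base_g) / base_g   # negative means a reduction
```

Summing such per-segment, per-hour totals across the network, with camera-derived counts and phone-derived flows as inputs, is what yields a citywide map at single-road, single-hour resolution.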
Major emissions drop
On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.
To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent, accompanied by a larger drop in emissions of 16 to 22 percent.
This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.
“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know if a new policy converts into real-world impact.”
There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.
“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.
The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.
It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.
Evaluating the ethics of autonomous systems
Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.
But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?
To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.
The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences.
The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.
“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.
Evaluating ethics
In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.
Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.
Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.
Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.
“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.
Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.
For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.
These ethical criteria may not be well-specified, so they can’t be measured analytically.
The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.
SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.
“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.
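The two-level decomposition can be illustrated with a toy power-grid example. The metrics, weights, and scoring functions here are invented stand-ins, not the paper's actual models:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    cost: float          # objective: dollars per MWh delivered
    reliability: float   # objective: fraction of demand served
    outage_gap: float    # share of outages falling on low-income areas

def objective_score(s: Scenario) -> float:
    """Level 1: measurable engineering metrics only."""
    return s.reliability - 0.001 * s.cost

def subjective_score(s: Scenario, fairness_weight: float) -> float:
    """Level 2: builds on the objective evaluation, then penalizes the
    ethically salient outcome a stakeholder group cares about."""
    return objective_score(s) - fairness_weight * s.outage_gap

cheap_unfair = Scenario(cost=40.0, reliability=0.99, outage_gap=0.8)
costly_fair = Scenario(cost=55.0, reliability=0.99, outage_gap=0.1)

# On objective metrics alone the cheap design wins; once a stakeholder's
# fairness weight is large enough, the preference flips to the fair design.
```

Because the subjective layer only reranks scenarios the objective layer has already scored, a new stakeholder group changes the fairness weight, not the underlying engineering evaluation, which is the source of the "fewer evaluations" Parashar describes.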
Encoding subjectivity
To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.
The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.
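The pairwise-judgment step amounts to prompt construction plus a single model call. The sketch below only builds the prompt; the wording, scenarios, and preferences are hypothetical, and the actual call could go to any chat-completion LLM:

```python
def build_judge_prompt(preferences: str, scenario_a: str, scenario_b: str) -> str:
    """Encode a stakeholder group's values as instructions, then ask the
    model to pick the preferred scenario, acting as a human proxy."""
    return (
        "You are evaluating power-distribution plans on behalf of a "
        f"stakeholder group with these priorities: {preferences}\n\n"
        f"Scenario A: {scenario_a}\n"
        f"Scenario B: {scenario_b}\n\n"
        "Answer with exactly 'A' or 'B', choosing the scenario that "
        "better reflects the group's priorities."
    )

prompt = build_judge_prompt(
    preferences="minimize outages in rural areas, even at higher cost",
    scenario_a="saves 8% on cost; rural outage risk doubles at peak load",
    scenario_b="costs 5% more; outage risk is uniform across the grid",
)
# The returned 'A'/'B' label is treated as one pairwise preference judgment.
```

Constraining the answer to a single label keeps the proxy's output machine-checkable and consistent across thousands of comparisons.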
“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.
SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.
In the end, SEED-SET intelligently selects the most representative scenarios that either meet or are not aligned with objective metrics and ethical criteria. In this way, users can analyze the performance of the AI system and adjust its strategy.
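The simulate-then-select loop can be sketched as greedy active selection. The acquisition rule here, favoring untested scenarios where objective and subjective evaluations disagree most, is a plausible stand-in and not the paper's actual criterion:

```python
def pick_next(candidates, objective_fn, subjective_fn, tested):
    """Greedy acquisition: prefer untested scenarios where objective and
    subjective scores disagree most -- likely ethical trouble spots."""
    remaining = [c for c in candidates if c not in tested]
    return max(remaining,
               key=lambda c: abs(objective_fn(c) - subjective_fn(c)))

# Toy scenarios scored in [0, 1]: (objective merit, stakeholder approval).
scores = {"s1": (0.9, 0.85), "s2": (0.95, 0.3), "s3": (0.5, 0.55)}
nxt = pick_next(scores, lambda c: scores[c][0],
                lambda c: scores[c][1], tested=set())
# "s2" is selected: technically strong but ethically contested, so it is
# the most informative case to simulate and show to stakeholders next.
```

Each simulated result then updates the scores, so the search keeps steering toward the representative alignment and misalignment cases rather than spending evaluations at random.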
For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.
To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.
The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.
“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.
To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.
In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.
This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.
Preview tool helps makers visualize 3D-printed objects
Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.
But previews generated by most 3D-printing software focus on function rather than aesthetics. A printed object may end up with a different color, texture, or shading than the user expected, resulting in multiple reprints that waste time, effort, and material.
To help users envision how a fabricated object will look, researchers from MIT and elsewhere developed an easy-to-use preview tool that puts appearance first.
Users upload a screenshot of the object from their 3D-printing software, along with a single image of the print material. From these inputs, the system automatically generates a rendering of how the fabricated object is likely to look.
The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It considers not only the color of the material, but also gloss, translucency, and how nuances of the fabrication process affect the object’s appearance.
Such aesthetics-focused previews could be especially useful in areas like dentistry, by helping clinicians ensure temporary crowns and bridges match the appearance of a patient’s teeth, or in architecture, to aid designers in assessing the visual impact of models.
“3D printing can be a very wasteful process. Some studies estimate that as much as a third of the material used goes straight to the landfill, often from prototypes the user ends up discarding. To make 3D printing more sustainable, we want to reduce the number of tries it takes to get the prototype you want. The user shouldn’t have to try out every printing material they have before they settle on a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.
She is joined on the paper by Faraz Faruqi, a fellow EECS graduate student; Raul Hernandez, an MIT undergraduate; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and Mechanical Engineering at MIT, and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.
Accurate aesthetics
The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, print material filament is melted and then squirted through a nozzle to fabricate an object one layer at a time.
Generating accurate aesthetic previews is challenging because the melting and extrusion process can change the appearance of a material, as can the height of each deposited layer and the path the nozzle follows during fabrication.
VisiPrint uses two AI models that work together to overcome those challenges.
The VisiPrint preview is based on two inputs: a screenshot of the digital design from a user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.
From these inputs, a computer vision model extracts features from the material sample that are important for the object’s appearance.
It feeds those features to a generative AI model that computes the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.
The key to the researchers’ approach is a special conditioning method. This involves carefully adjusting the inner workings of the model to guide it, so it follows the slicing pattern and obeys the constraints of the 3D-printing process.
Their conditioning method utilizes a depth map that preserves the shape and shading of the object, along with a map of the edges that reflects the internal contours and structural boundaries.
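The balance between the two maps can be illustrated by blending them into one conditioning tensor before it is handed to a guided image generator. The weighting scheme and channel layout below are assumptions for illustration, not VisiPrint's actual conditioning method:

```python
import numpy as np

def build_conditioning(depth: np.ndarray, edges: np.ndarray,
                       depth_weight: float = 0.6) -> np.ndarray:
    """Stack a normalized depth map (shape and shading) and an edge map
    (slicing contours) into one conditioning tensor for a generator."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    e = edges.astype(np.float32).clip(0, 1)
    return np.stack([depth_weight * d, (1 - depth_weight) * e], axis=0)

depth = np.linspace(0, 10, 16).reshape(4, 4)   # toy depth map
edges = np.eye(4)                               # toy contour map
cond = build_conditioning(depth, edges)
# cond has shape (2, 4, 4): channel 0 carries geometry, channel 1 the
# layer contours; the weight sets how strongly each constrains the output.
```

Tilting the weight too far toward either channel is the failure mode the researchers describe: geometry that drifts, or a slicing pattern the generator ignores.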
“If you don’t have the right balance of these two things, you could end up with bad geometry or an incorrect slicing pattern. We had to be careful to combine them in the right way,” Perroni-Scharf says.
A user-focused system
The team also produced an easy-to-use interface where one can upload the required images and evaluate the preview.
The VisiPrint interface enables more advanced makers to adjust multiple settings, such as the influence of certain colors on the final appearance.
In the end, the aesthetic preview is intended to complement the functional preview generated by slicer software, since VisiPrint does not estimate printability, mechanical feasibility, or likelihood of failure.
To evaluate VisiPrint, the researchers conducted a user study that asked participants to compare the system to other approaches. Nearly all participants said it provided better overall appearance as well as more textural similarity with printed objects.
In addition, the VisiPrint preview process took about a minute on average, which was more than twice as fast as any competing method.
“VisiPrint really shined when compared to other AI interfaces. If you give a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it had no direct conditioning,” she says.
In the future, the researchers want to address artifacts that can occur when model previews have extremely fine details. They also want to add features that allow users to optimize parts of the printing process beyond color of the material.
“It is important to think about the way that we fabricate objects. We need to continue striving to develop methods that reduce waste. To that end, this marriage of AI with the physical making process is an exciting area of future work,” Perroni-Scharf says.
“‘What you see is what you get’ has been the main thing that made desktop publishing ‘happen’ in the 1980s, as it allowed users to get what they wanted at first try. It is time to get WYSIWYG for 3D printing as well. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved with this work.
This research was funded, in part, by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.
Two physicists and a curious host walk into a studio…
This March on The Curiosity Desk, GBH’s daily science show with host Edgar B. Herwick III, MIT scientists dropped by to address the questions: “How close are we to observing the dark universe?” (Thursday, March 12 episode) and “Is Earth prepared for asteroids?” (Thursday, March 26 episode).
Up first, Prof. Nergis Mavalvala, dean of the MIT School of Science, and Prof. Salvatore Vitale joined the host live in studio to talk about the science behind the Laser Interferometer Gravitational-wave Observatory (LIGO) and how LIGO has provided the ability to observe the universe in ways that have never been done before.
In addition to learning something new, Mavalvala explained, experimenting delivers an added dose of excitement: “pushing the technology, the precision of the instrument, requires you to be very inventive. There’s almost nothing in these experiments that you can go buy off a shelf. Everything you’re designing, everything is from scratch. You’re meeting very stringent requirements.”
Herwick likened how they might tweak or tinker with the experiment to souping up a car engine, and the LIGO scientists nodded – adding that in the most complex experiments, each bite-sized part on its own works well, and it’s the interfaces between them that scientists must get right.
While there, the two long-time colleagues also took a detour to explain how, in physics, experimentalists benefit from the work of theorists and vice versa. Mavalvala, whose work focuses on building the world’s most precise instruments to study physical phenomena, described the synergy between ideas that come from theory (work that Vitale does) and how you measure. (No, they assured Herwick, they don’t get into a lot of fights.)
In fact, it’s fantastic to have people from both worlds at MIT, said Vitale. Mavalvala agreed. “One of the things that’s really important about theory in science is that ultimately, in physics especially, it’s a bunch of math. And the important thing that you have to ask is, ‘does nature really behave that way?’ And how do you answer that question? You have to go out and measure. You have to go observe nature,” said Mavalvala.
As scientists fine-tune the gravitational wave detectors, they will inform what data are collected, what astrophysical objects they might find or hope to find – and the search for certain fainter, farther away, or more exotic objects can inform what enhancements they prioritize.
But what if I’m not interested in any of that? asked Herwick. Why should I care?
“To me, it falls in the category of for the betterment of humankind. You never know what is going to be useful. A lot of fundamental research was very far at the beginning from what turned out to be fundamental applications,” said Vitale, adding, “What they do on the instrument side has already now very important applications.”
Mavalvala was unequivocal, underscoring how pursuing curiosity is put to good use:
“When you’re making instruments that achieve that kind of precision, you’re inventing new technologies. [With LIGO] We’ve invented vibration isolation technologies to keep our mirrors really still. We’ve invented lasers that are quieter than any that were ever made before. We’ve invented photonic techniques that are allowing us to make applications even to far off things like quantum computing.
“So, this is one of the beauties of fundamental discovery science. A, you’ll discover something. But B you’ll be doing two things: you’ll be inventing the technologies of the future, and you’ll be training the generations of scientists who may go off to do completely different things, but this is what inspires them.”
Watch the full conversation below and on YouTube:
Planetary defense
Turning to objects beyond Earth – specifically, asteroids – Associate Professor Julien de Wit, along with research scientists Artem Burdanov and Saverio Cambioni, joined Herwick at the Curiosity Desk later in the month. They talked about their ongoing research to identify smaller asteroids (about the size of a school bus) using the James Webb Space Telescope and why planetary defense goes beyond thinking about the massive asteroids featured in movies like Armageddon. Notably, a lot of technology on Earth depends on satellites, and asteroids pose the biggest threat to satellites.
“Dinosaurs didn’t need to care about an asteroid hitting the moon. Humanity a century ago didn’t care. Now, if [an asteroid] hits the moon, a lot of debris will be expelled and all those particles – big and small – they will affect the fleet of satellites around Earth. That’s a big potential problem, so we need to take that into account in our future,” said Burdanov.
There’s also a potential upside to being better able to detect and potentially “capture” asteroids, explained de Wit, all of it enabled by new instruments. “It’s really an asteroid revolution going on… Our situational awareness of what’s out there is really about to change dramatically.”
He explains that one dream is to mine asteroids themselves for material to build or power next generation technologies or stations in space. “The way to reliably move into space is to use resources from space. We can’t just move stuff to build a full city. We use stuff from space.”
Echoing the sentiments expressed earlier in the month by MIT’s dean of science, the trio of asteroid explorers also described how the pursuits of planetary scientists can lead to unexpected rewards along the way. “We are swimming in an era that is data rich, and so what we do in our group and at MIT is mine that data to reveal the universe like never before,” says de Wit. “Revealing new populations of asteroids, new populations of planets, and making sense of our universe like we have never done.”
Watch the full conversation below and on the GBH YouTube channel:
Tune in to the Curiosity Desk some Thursdays to hear from MIT researchers as they visit Herwick and the production team.
Building the blocks of life
Billions of years ago, simple organic molecules drifted across Earth's primordial landscape — nothing more than basic chemical compounds. But as natural forces shaped the planet over hundreds of millions of years, these molecules began to interact and bond in increasingly complex ways. Along the way, something spectacular emerged: life.
“Life is, to some degree, magical,” says computational biologist Sergei Kotelnikov. Simple organic compounds congregate into polymers, which assemble into living cells and ultimately organisms — the whole being greater than the sum of its parts.
“You can write formulas on how a molecule behaves,” he says, referring to the world of quantum mechanics. “But yet somehow, a few orders of magnitude above, on a bigger scale, it gives rise to such a mystery.”
Kotelnikov builds models to analyze and predict the structure of these biomolecules, particularly proteins, the fundamental building blocks of every organism. This year, he joined MIT as part of the School of Science Dean’s Postdoctoral Fellowship to work with the Keating Lab, where researchers focus on protein structure, function, and interaction. Using machine learning, his goal is to develop new methods in protein modeling with potential applications that span from medicine to agriculture.
A hunger for problems to solve
Kotelnikov grew up in Abakan, Russia, a small city sitting right in the center of Eurasia. As a child, one of his favorite pastimes was playing with Lego bricks.
“It encouraged me to build new things, rather than just following instructions,” he says. “You can do anything.”
Kotelnikov’s father, whose background lies in engineering and economics, would often challenge him with math problems.
“Your brain — you can feel some kind of expansion of understanding how things work, and that’s a very satisfactory feeling,” Kotelnikov says.
This itch to solve problems led him to join science Olympiad competitions, and later, a science-focused public boarding school located near the Russian Academy of Sciences, where he often encountered scientists.
“It was like a candy shop,” he recalls, describing the period as a life-changing experience.
In 2012, Kotelnikov began his bachelor of science in physics and applied mathematics at the Moscow Institute of Physics and Technology — considered one of the leading STEM universities in Russia, and globally — and continued there for his master’s degree. It was there that biology came into the picture.
During a course on statistical physics, Kotelnikov was first introduced to the idea of the “emergence of complexity.” He became fascinated by this “mysterious and attractive manifestation of biology … this evolution that sharpens the physical phenomenon” to create, drive, and shape life as we know it today. By the time he completed his master’s degree, he realized he had only scratched the surface of the field of computational biology.
In 2018, he began his PhD at Stony Brook University in New York, working with Dima Kozakov, who is recognized as one of the world’s leaders in predicting protein interactions and complex structures.
Studying the architecture of life
Proteins act like the bricks that construct an organism, underpinning almost every cellular process from tissue repair to hormone production. Like pieces of a Lego tower, their structures and interactions determine the functions that they carry out in a body.
However, diseases arise when proteins are folded, curled, twisted, or connected in unusual ways. To develop medical interventions, scientists break down the tower and examine each individual piece to find the culprit and correct its shape and pairing. With limited experimental data on protein structures and interactions currently available, simulations developed by computational biologists like Kotelnikov provide crucial insights that inform fundamental understanding and applications like drug discovery.
With the guidance of Kozakov at Stony Brook’s Laufer Center for Physical and Quantitative Biology, Kotelnikov carried over his understanding of physics to create modeling methods that are more effective, efficient, reliable, and generalizable. Among them, he developed a new way of predicting the protein complex structures mediated by proteolysis-targeting chimeras, or PROTACs, a new class of molecules that can trigger the breakdown of specific proteins previously considered undruggable, such as those found in cancer.
PROTACs have been challenging to model, in part because they are composed of proteins that don’t naturally interact with each other, and because the linker that connects them is flexible. Imagine trying to guess the overall shape of a bendy Lego piece attached to two other pieces of different irregular, unmatched shapes. To efficiently find all possible configurations, Kotelnikov’s method conceptually cuts the linker into two halves and models each separately, then reformulates the problem and calculates it using a powerful algorithm called the fast Fourier transform.
“It’s kind of like applied math judo that you sometimes need to do in order to make certain intractable computations tractable,” he says.
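The FFT trick described above is a general one: rather than scoring each relative placement of two objects one at a time, a cross-correlation computed in Fourier space scores every shift at once. As a rough, hypothetical 1D sketch of that idea (not the authors’ actual code, and far simpler than the full rotational and translational search used in real docking methods):

```python
import numpy as np

# Hypothetical 1D illustration of FFT-based correlation, the core trick
# behind FFT docking methods: score ALL relative shifts of two "grids"
# in one transform instead of one shift at a time.
rng = np.random.default_rng(0)
n = 64
a = rng.random(n)  # stand-in for one molecular grid
b = rng.random(n)  # stand-in for the other

# Direct evaluation: one score per circular shift, O(n^2) total work.
direct = np.array([np.sum(a * np.roll(b, s)) for s in range(n)])

# FFT evaluation: the same n scores from three transforms, O(n log n).
via_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

assert np.allclose(direct, via_fft)
```

The direct loop costs O(n^2) evaluations, while the FFT route costs O(n log n); that gap is what makes exhaustive searches over 3D grids tractable.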
Kotelnikov’s state-of-the-art methods have been instrumental to his team’s top performance in numerous international challenges including the Critical Assessment of protein Structure Prediction (CASP) competition — the same contest in which the Nobel Prize-winning AlphaFold system for protein 3D structure prediction was presented.
Physics and machine learning
At MIT, Kotelnikov is working with Amy Keating, the Jay A. Stein (1968) Professor of Biology, biology department head, and professor of biological engineering, to study protein structure, function, and interactions.
A recognized leader in the field, Keating employs both computational and experimental methods to study proteins and their interactions, as well as how these can impact disease. By infusing machine learning with physics, Kotelnikov aims to advance modeling methods that can vastly inform applications such as cancer immunology and crop protection.
“Kotelnikov stands to gain a lot from working closely with wet lab researchers who are doing the experiments that will complement and test his predictions, and my lab will benefit from his experience developing and applying advanced computational analyses,” says Keating.
Kotelnikov is also planning to work with professors Tommi Jaakkola and Tess Smidt in MIT’s Department of Electrical Engineering and Computer Science to explore a field called geometric deep learning. In particular, he aims to integrate physical and geometric knowledge about biomolecules into neural network architectures and learning procedures. This approach can significantly reduce the amount of data needed for learning, and improve the generalizability of resulting models.
Beyond the two departments, Kotelnikov is also excited to see how the diversity and interdisciplinary mix of MIT’s community will help him come up with ideas.
“When you’re building a model, you’re entering this imaginary world of assumptions and simplifications and it might feel challenging because of this disconnect with reality,” Kotelnikov says. “Being able to efficiently communicate with experimentalists is of high value.”
Tomás Palacios named director of the Institute for Soldier Nanotechnologies
Tomás Palacios, the Clarence J. LeBel Professor of Electrical Engineering at MIT, has been appointed director of the MIT Institute for Soldier Nanotechnologies (ISN). Palacios assumed the role on Feb. 4, and will continue to serve as the director of the MIT Microsystems Technology Laboratories (MTL).
Founded in 2002, ISN is a U.S. Army-sponsored University Affiliated Research Center focused on advancing fundamental science and engineering to enable next-generation capabilities for protection, survivability, sensing, and system performance. ISN brings together researchers from across MIT to address challenges at the intersection of materials, devices, and systems. In collaboration with industry, MIT Lincoln Laboratory, the U.S. Army, and other U.S. military services, ISN works to transition promising technologies for both commercial and defense applications.
As director, Palacios will oversee ISN’s research portfolio, facilities, and strategic partnerships, working closely with the ISN leadership team, MIT administration, U.S. Army, and other research sponsors to guide the institute’s next phase of research and collaboration.
“Tomás Palacios brings exceptional energy, vision, and leadership to the Institute for Soldier Nanotechnologies,” says Ian A. Waitz, MIT’s vice president for research, who announced the appointment in a recent letter. “As director of Microsystems Technology Laboratories, he has demonstrated a rare ability to build strong research communities and partnerships across academia, industry, and government. I am confident he will guide ISN’s next phase with momentum, scientific excellence, and a deep sense of service to MIT and the nation.”
Palacios brings deep leadership experience within MIT and across national research collaborations. As director of MTL, he leads one of MIT’s flagship interdisciplinary research laboratories supporting work in micro- and nano-scale materials, devices, and systems. He is a member of the MIT.nano Leadership Council and, since 2023, has served as associate director of the multi-university SUPeRior Energy-efficient Materials and dEvices (SUPREME) Center, a Semiconductor Research Corp. JUMP 2.0 program focused on next-generation energy-efficient semiconductor technologies. Palacios is also the co-founder of several technology companies, including Vertical Semiconductor, Finwave Semiconductor, and CDimension, Inc.
“MIT’s motto, ‘mens et manus’ — ‘mind and hand’ — reminds us that fundamental research and real-world impact must go hand-in-hand,” says Palacios. “At ISN, our mission is to help protect and empower those who defend our nation. That responsibility demands urgency, creativity, and deep collaboration. I look forward to building on ISN’s strong partnership with the U.S. Army, industry, and colleagues across MIT to push the frontiers of nanotechnology and translate discovery into meaningful impact at the speed of relevance.”
Palacios is internationally recognized for his work on wide-bandgap semiconductors, nanoelectronics, and advanced electronic materials. An IEEE Fellow, his research spans fundamental device physics through system-level integration, with applications in high-power and high-frequency electronics, sensing, and energy systems. He is widely recognized for his research contributions, as well as for his leadership in education and mentoring.
Palacios succeeds John Joannopoulos, who served as ISN director from 2006 until his death in August 2025. During his nearly two decades of ISN leadership, Joannopoulos strengthened ISN’s interdisciplinary culture, devoting significant effort to fostering collaborations among ISN-funded principal investigators, building partnerships that extend across MIT and beyond to the Army research community. Joannopoulos, an extraordinary researcher and a generous mentor, was also a co-founder of companies such as WiTricity and OmniGuide, helping to translate many of ISN’s foundational scientific discoveries into commercial technologies. Raúl Radovitzky, ISN’s associate director, served as interim director during the search for a new director, providing continuity to ISN’s research programs, facilities, and partnerships.
“It is an honor to serve as director of the Institute for Soldier Nanotechnologies at such an important moment in time,” says Palacios. “ISN has built an extraordinary foundation of interdisciplinary excellence under Professor John Joannopoulos’ leadership and, more recently, Prof. Radovitzky’s. I look forward to working with the ISN community to advance breakthrough research at the intersection of materials, devices, and systems — research that not only strengthens national security, but also translates into technologies that benefit society more broadly.”
Turning muscles into motors gives static organs new life
What if a technology could reanimate parts of the body that have lost their connection to the brain — like a bladder that can no longer empty due to a spinal cord injury, or intestines that can’t push food forward due to Crohn’s disease? What if this technology could also send sensations such as hunger or touch back to the brain?
New MIT research offers a glimpse into this future. In an open-access study published today in Nature Communications, the researchers introduce a novel myoneural actuator (MNA) that reprograms living muscles into fatigue-resistant, computer-controlled motors that can be implanted inside the body to restore movement in organs.
“We’ve built an interface that leverages natural pathways used by the nervous system so that we can seamlessly control organs in the body, while also enabling the transmission of sensory feedback to the brain,” says Hugh Herr, senior author of the study, a professor of media arts and sciences at the MIT Media Lab, co-director of the K. Lisa Yang Center for Bionics, and an associate member of the McGovern Institute for Brain Research at MIT. The study was co-led by Herr’s postdoc Guillermo Herrera-Arcos and former postdoc Hyungeun Song.
By repurposing existing muscle in the body, the researchers have developed the first “living” implant that uses rewired sensory nerves to revive paralyzed organs — which may present a new genre of medicine, where a person’s own tissue becomes the hardware.
Rewiring the brain-body interface
Many scientists have toiled to restore function in paralyzed organs, but it’s extremely challenging to design a technology that both communicates with the nervous system and doesn't fatigue over time. Some have tried to insert miniaturized actuators — small machines that can power bionic limbs — into the body. However, Herrera-Arcos says, “it’s hard to make actuators at the centimeter level, and they aren’t very efficient.” Others have focused on creating muscle tissue in the lab, but building muscles cell by cell is time-intensive and far from ready for human use.
Herr’s team tried something different.
“We engineered existing muscles to become an actuator, or motor, that reinstates motion in organs,” says Song.
To do this, the researchers had to navigate the delicate dynamics within the nervous system. The actuator would have to interface with the nervous system to work properly, but it must also somehow evade the brain’s control. “You don’t want the brain to consciously control the muscle actuator because you want the actuator to automatically control an organ, like the heart,” explains Herrera-Arcos. Establishing a computer-controlled muscle to move organs could ensure automatic function and also bypass damaged brain pathways.
Incorporating motor neurons into the actuator may help generate movement, but these neurons are directly controlled by the brain. “Sensory neurons, however, are wired to receive, not to command,” explains Song. “We thought we could leverage this dynamic and reroute motor signals through sensory fibers, making a computer — rather than the brain — the muscle’s new command center.”
To achieve this, sensory nerves would need to fuse fluidly with muscle, and scientists had not yet determined if this was possible. Remarkably, when the team replaced motor nerves in rodent muscle with sensory ones, “the sensory nerves re-innervated the muscles and formed functional synapses. It’s a tremendous discovery,” says Herrera-Arcos.
Sensory neurons not only enabled the use of a digital controller, but also helped curb muscle fatigue — increasing fatigue resistance in rodent muscle by 260 percent compared to native muscles. That’s because muscle fatigue depends largely on the diameter of the axons, or cable-like projections that innervate muscles. Motor neuron axons vary greatly in size, and when a motor nerve is electrically stimulated, the largest axons fire first — exhausting the muscle quickly. However, sensory axons are all nearly the same size, so the signal is broadcast more evenly across muscle fibers, avoiding fatigue, explains Herrera-Arcos.
Designing a biohybrid system
The researchers combined all of these elements into the fatigue-resistant, biohybrid MNA. By wrapping their actuator around a paralyzed intestine in a rodent, the researchers reinstated the organ’s squeezing motion. They also successfully controlled rodent calf muscles in an experiment designed to mimic residual muscle in human lower-limb amputations. Importantly, the MNA system transmitted sensory signals to the brain. “This suggests that our technology could seamlessly link organs to the brain. For example, we might be able to make a paralyzed stomach relay hunger,” explains Song.
Bringing their MNA to the clinic will require further testing in larger animal models, and eventually, humans. But if it passes the regulatory gauntlet, their system could pave a smoother and safer path toward reviving static organs. Implanting MNAs would require a surgery that is already commonplace in the clinic, the researchers say, and their system might be simpler and safer to implement than mechanical devices or organ transplants that introduce foreign material into the body.
The team is hopeful that their new technology could improve the lives of millions living with organ dysfunctions. “Today’s solutions are mostly synthetic: pacemakers and other mechanical assist devices. A living muscle actuator implanted alongside a weakened organ would be part of the body itself. That is a category of medicine different from anything seen in clinic,” explains Herrera-Arcos.
Song says that skin is of special interest. “Hypothetically, we could wrap MNAs around skin grafts to relay tactile feedback, such as strain or tension, which is currently missing for users of prostheses.” Their technology could augment virtual reality systems, too. “The idea is that, if we couple the MNA system to skin and muscles, a person could feel what their virtual avatar is touching even though their real body isn’t moving,” says Song.
“Our research is on the brink of giving new life to various parts and extensions of the body,” adds Herrera-Arcos. “It’s exciting to think that our system could enhance human potential in ways that once only belonged to the realm of science fiction.”
This research was funded, in part, by the Yang Tan Collective at MIT, K. Lisa Yang Center for Bionics at MIT, Nakos Family Bionics Research Fund at MIT, and the Carl and Ruth Shapiro Foundation.
Climate change may produce “fast-food” phytoplankton
We are what we eat. And in the ocean, most life-forms source their food from phytoplankton. These microscopic, plant-like algae are the primary food source for krill, sea snails, some small fish, and jellyfish, which in turn feed larger marine animals that are prey for the ocean’s top predators, including humans.
Now MIT scientists are finding that phytoplankton's composition, and the basic diet of the ocean, will shift significantly with climate change.
In an open-access study appearing today in the journal Nature Climate Change, the team reports that as sea surface temperatures rise over the next century, phytoplankton in polar regions will adapt to be less rich in proteins, heavier in carbohydrates, and lower in nutrients overall.
The conclusions are based on results from the team’s new model, which simulates the composition of phytoplankton in response to changes in ocean temperature, circulation, and sea ice coverage. In a scenario in which humans continue to emit greenhouse gases through the year 2100, the team found that changing ocean conditions, particularly in the polar regions, will shift phytoplankton’s balance of proteins to carbohydrates and lipids by approximately 20 percent. The researchers analyzed observations from the past several decades, and already have found a signature of this change in the real world.
“We’re moving in the poles toward a sort of fast-food ocean,” says lead author and MIT postdoc Shlomit Sharoni. “Based on this prediction, the nutritional composition of the surface ocean will look very different by the end of the century.”
The study’s MIT co-authors are Mick Follows, Stephanie Dutkiewicz, and Oliver Jahn; along with Keisuke Inomura of the University of Rhode Island; Zoe Finkel, Andrew Irwin, and Mohammad Amirian of Dalhousie University in Halifax, Canada; and Erwan Monier of the University of California at Davis.
Nutritional information
Phytoplankton drift through the upper, sun-lit layers of the ocean. Like plants on land, the marine microalgae are photosynthetic. Their growth depends on light from the sun, carbon dioxide from the atmosphere, and nutrients such as nitrogen and iron that well up from the deep ocean.
When studying how phytoplankton will respond to climate change, scientists have primarily focused on how rising ocean temperatures will affect phytoplankton populations. Whether and how the plankton’s composition will change is less well-understood.
“There’s been an awareness that the nutritional value of phytoplankton can shift with climate change,” says Sharoni. “But there has been very little work in directly addressing that question.”
She and her colleagues set out to understand how ocean conditions influence phytoplankton macromolecular composition. Macromolecules are large molecules that are essential for life. The main types of macromolecules include proteins, lipids, carbohydrates, and nucleic acids (the building blocks of DNA and RNA). Every form of life, including phytoplankton, is composed of a balance of macromolecules that helps it to survive in its particular environment.
“Nearly all the material in a living organism is in these broad molecular forms, each having a particular physiological function, depending on the circumstances that the organism finds itself in,” says Follows, a professor in the Department of Earth, Atmospheric and Planetary Sciences.
An unbalanced diet
In their new study, the researchers first looked at how today’s ocean conditions influence phytoplankton’s macromolecular composition. The team used data from lab experiments carried out by their collaborators at Dalhousie. These experiments revealed ways in which phytoplankton’s balance of macromolecules, such as proteins to carbohydrates, shifted in response to changes in water temperature and the availability of light and nutrients.
With these lab-based data, the group developed a quantitative model that simulates how plankton in the lab would readjust its balance of proteins to carbohydrates under different light and nutrient conditions. Sharoni and Inomura then paired this new model with an established model of ocean circulation and dynamics developed previously at MIT. With this modeling combination, they simulated how phytoplankton composition shifts in response to ocean conditions in different parts of the world and under different climate scenarios.
The team first modeled today’s climate conditions. Consistent with observations, their model predicts that a little more than half of the average phytoplankton cell today is composed of proteins. The rest is a mix of carbohydrates and lipids.
Interestingly, in polar regions, phytoplankton are slightly more protein-rich. At the poles, the cover of sea ice limits the amount of sunlight phytoplankton can absorb. The researchers surmise that phytoplankton may have adapted by making more light-harvesting proteins to help the organisms efficiently absorb the weak sunlight.
However, when they modeled a future climate change scenario, the team found a significant shift in phytoplankton composition. They simulated a scenario in which humans continue to emit greenhouse gases through the year 2100. In this scenario, sea surface temperatures will rise by 3 degrees Celsius, substantially reducing sea ice coverage. Warmer temperatures will also limit the ocean’s circulation, as well as the amount of nutrients that can circulate up from the deep ocean.
Under these conditions, the model predicts that phytoplankton populations in polar regions will grow significantly, consistent with earlier studies. Uniquely, this model predicts that phytoplankton in polar regions will shift from a protein-rich to a carb- and lipid-heavy composition. They found that plankton will not need as much light-harvesting protein, since less sea ice will make sunlight more easily available for the organisms to absorb. Total protein levels in these polar phytoplankton will decline by up to 30 percent, with a corresponding increase in the contribution of carbs and lipids.
It’s unclear what impact a larger population of carb- and lipid-heavy phytoplankton may have on the rest of the marine food web. While some organisms may be stressed by a reduction in protein, others that make lipid stores to survive through the winter might thrive.
The team also simulated phytoplankton in subtropical, lower-latitude regions. In these ocean areas, it’s expected that phytoplankton populations will decline by 50 percent. And the team’s modeling shows that their composition will also shift.
With warmer temperatures, the ocean’s circulation will slow down, limiting the amount of nutrients that can upwell from the deep ocean. In response, subtropical phytoplankton may have to find ways to live at deeper depths, to strike a balance between getting enough sunlight and nutrients. Under these conditions, the organisms will likely shift to a slightly more protein-rich composition, making use of the same photosynthetic proteins that their polar counterparts will require less of.
On balance, given the projected changes in phytoplankton populations with climate change, their average composition around the world will shift to a more carb-heavy, low-nutrient composition.
The researchers went a step further and found that their modeling agrees with the small set of available phytoplankton field samples that other scientists previously collected from Arctic and Antarctic regions. These samples show that phytoplankton composition has become more carb- and lipid-heavy over the past few decades, as the team’s model predicts under climate warming.
“In these regions, you can already see climate change, because sea ice is already melting,” Sharoni explains. “And our model shows that proteins in polar plankton have been declining, while carbs and lipids are increasing.”
“It turns out that climate change is accelerated in the Arctic, and we have data showing that the composition of phytoplankton has already responded,” Follows adds. “The main message is: The caloric content at the base of the marine food web is already changing. And it’s not a clear story as to how this change will transmit through the food web.”
This work was supported, in part, by the Simons Foundation.
MIT researchers use AI to uncover atomic defects in materials
In biology, defects are generally bad. But in materials science, defects can be intentionally tuned to give materials useful new properties. Today, atomic-scale defects are carefully introduced during the manufacturing process of products like steel, semiconductors, and solar cells to help improve strength, control electrical conductivity, optimize performance, and more.
But even as defects have become a powerful tool, accurately measuring different types of defects and their concentrations in finished products has been challenging, especially without cutting open or damaging the final material. Without knowing what defects are in their materials, engineers risk making products that perform poorly or have unintended properties.
Now, MIT researchers have built an AI model capable of classifying and quantifying certain defects using data from a noninvasive neutron-scattering technique. The model, which was trained on 2,000 different semiconductor materials, can detect up to six kinds of point defects in a material simultaneously, something that would be impossible using conventional techniques alone.
“Existing techniques can’t accurately characterize defects in a universal and quantitative way without destroying the material,” says lead author Mouyang Cheng, a PhD candidate in the Department of Materials Science and Engineering. “For conventional techniques without machine learning, detecting six different defects is unthinkable. It’s something you can’t do any other way.”
The researchers say the model is a step toward harnessing defects more precisely in products like semiconductors, microelectronics, solar cells, and battery materials.
“Right now, detecting defects is like the saying about seeing an elephant: Each technique can only see part of it,” says senior author and associate professor of nuclear science and engineering Mingda Li. “Some see the nose, others the trunk or ears. But it is extremely hard to see the full elephant. We need better ways of getting the full picture of defects, because we have to understand them to make materials more useful.”
Joining Cheng and Li on the paper are postdoc Chu-Liang Fu, undergraduate researcher Bowen Yu, master’s student Eunbi Rha, PhD student Abhijatmedhi Chotrattanapituk ’21, and Oak Ridge National Laboratory staff members Douglas L. Abernathy PhD ’93 and Yongqiang Cheng. The paper appears today in the journal Matter.
Detecting defects
Manufacturers have gotten good at tuning defects in their materials, but measuring precise quantities of defects in finished products is still largely a guessing game.
“Engineers have many ways to introduce defects, like through doping, but they still struggle with basic questions like what kind of defect they’ve created and in what concentration,” Fu says. “Sometimes they also have unwanted defects, like oxidation. They don’t always know if they introduced some unwanted defects or impurity during synthesis. It’s a longstanding challenge.”
The result is that there are often multiple defects in each material. Unfortunately, each method for understanding defects has its limits. Techniques like X-ray diffraction and positron annihilation characterize only some types of defects. Raman spectroscopy can discern the type of defect but can’t directly infer the concentration. Another technique, transmission electron microscopy, requires cutting thin slices of samples for imaging.
In a few previous papers, Li and collaborators applied machine learning to experimental spectroscopy data to characterize crystalline materials. For the new paper, they wanted to apply that technique to defects.
For their experiment, the researchers built a computational database of 2,000 semiconductor materials. They made sample pairs of each material, with one doped to introduce defects and one left pristine, then used a neutron-scattering technique that measures the different vibrational frequencies of atoms in solid materials. They trained a machine-learning model on the results.
“That built a foundational model that covers 56 elements in the periodic table,” Cheng says. “The model leverages the multihead attention mechanism, just like what ChatGPT is using. It similarly extracts the difference in the data between materials with and without defects and outputs a prediction of what dopants were used and in what concentrations.”
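The article names the architecture only at a high level: a multihead-attention model that reads the difference between spectra of doped and pristine samples. As an illustration of the attention mechanism involved, here is a minimal NumPy forward pass over a binned "difference spectrum"; the shapes, bin counts, and random weights are hypothetical stand-ins, not the researchers' trained model.

```python
import numpy as np

def multihead_attention(x, num_heads, rng):
    """Minimal scaled dot-product multihead attention (forward pass only).

    x: (seq_len, d_model) array -- e.g. a vibrational spectrum split
    into seq_len frequency bins, each embedded in d_model dimensions.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    # Random projections stand in for trained query/key/value weights.
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = np.empty_like(x)
    for h in range(num_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(d_head)
        # Numerically stable softmax over each row of attention scores.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, sl] = weights @ v[:, sl]
    return out

rng = np.random.default_rng(0)
# Hypothetical difference spectrum: doped minus pristine, 32 bins x 16 features.
diff_spectrum = rng.standard_normal((32, 16))
attended = multihead_attention(diff_spectrum, num_heads=4, rng=rng)
print(attended.shape)  # (32, 16)
```

In a full model, the attended features would feed a prediction head that outputs dopant identities and concentrations; this sketch shows only the attention step that lets every frequency bin weigh information from every other bin.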
The researchers fine-tuned their model, verified it on experimental data, and showed it could measure defect concentrations in an alloy commonly used in electronics and in a separate superconductor material.
The researchers also doped the materials multiple times to introduce multiple point defects and test the limits of the model, ultimately finding it can make predictions about up to six defects in materials simultaneously, with defect concentrations as low as 0.2 percent.
“We were really surprised it worked that well,” Cheng says. “It’s very challenging to decode the mixed signals from two different types of defects — let alone six.”
A model approach
Typically, manufacturers of things like semiconductors run invasive tests on a small percentage of products as they come off the manufacturing line, a slow process that limits their ability to detect every defect.
“Right now, people largely estimate the quantities of defects in their materials,” Yu says. “It is a painstaking experience to check the estimates by using each individual technique, which only offers local information in a single grain anyway. It creates misunderstandings about what defects people think they have in their material.”
The results were exciting for the researchers, but they note that their technique, which measures vibrational frequencies with neutrons, would be difficult for companies to quickly deploy in their own quality-control processes.
“This method is very powerful, but its availability is limited,” Rha says. “Vibrational spectra is a simple idea, but in certain setups it’s very complicated. There are some simpler experimental setups based on other approaches, like Raman spectroscopy, that could be more quickly adopted.”
Li says companies have already expressed interest in the approach and asked when it will work with Raman spectroscopy, a widely used technique that measures the scattering of light. Li says the researchers’ next step is training a similar model based on Raman spectroscopy data. They also plan to expand their approach to detect features that are larger than point defects, like grains and dislocations.
For now, though, the researchers believe their study demonstrates the inherent advantage of AI techniques for interpreting defect data.
“To the human eye, these defect signals would look essentially the same,” Li says. “But the pattern recognition of AI is good enough to discern different signals and get to the ground truth. Defects are this double-edged sword. There are many good defects, but if there are too many, performance can degrade. This opens up a new paradigm in defect science.”
The work was supported, in part, by the Department of Energy and the National Science Foundation.
Leading with rigor, kindness, and care
Professor Sara Prescott embodies the kind of mentorship every graduate student hopes to find: grounded in scientific rigor, guided by kindness, and defined by a deep commitment to well-being. Her approach reflects a simple but powerful belief that transformative mentorship is not only about advancing research, but about cultivating confidence, belonging, and resilience in the next generation of scholars.
A member of the 2025–27 Committed to Caring cohort, Prescott exemplifies the program’s spirit, which honors faculty who go above and beyond in nurturing both the intellectual and personal development of MIT’s graduate students.
Prescott is the Pfizer Inc. - Gerald D. Laubach Career Development Professor in the MIT departments of Biology and Brain and Cognitive Sciences, and an investigator at the Picower Institute for Learning and Memory. Her research addresses fundamental questions in body-brain communication, with a focus on lung biology, early-life adversity, women’s health, and the impacts of climate change on respiratory health.
A culture of compassion
Prescott’s mentoring philosophy begins with a focus on professional sustainability. “We cannot be effective scientists if we are unhappy or unhealthy outside of the lab,” she says.
She pushes back against what she sees as an unhelpful narrative in academia. “There’s this idea that you must choose between a successful PhD or having a personal life. This is a false dichotomy, and a problematic attitude.” Instead, she reminds her mentees that “graduate school is a marathon, not a sprint,” encouraging them to place importance not only on their research, but also on their mental and physical well-being.
This set of values shines through within her lab climate as a whole. Students describe support for flexible scheduling and mental health leave, a willingness to reimburse meals during late-night lab sessions, and encouragement during stretches of experimental failure. Beyond these practical supports, nominators also shared stories of Prescott engaging in the smaller details: prioritizing connection for her students, celebrating their milestones, organizing lab retreats, and fostering a culture where people feel valued beyond their productivity.
Students recognize Prescott as a safe haven within the often complex and challenging world of research. Joining Prescott’s lab was a turning point for one student who was recovering from a damaging prior mentorship experience. They arrived uncertain, struggling to trust faculty and questioning whether they belonged in science at all. Prescott met them with empathy and professionalism, offering patience and trust not just in their work, but in them as a person. They describe steady support that, over time, helped them “fall back in love with science” and envision a future they had nearly abandoned.
Prescott draws inspiration from the mentorship she received early in her career. As a trainee, she had mentors who helped her believe that she could succeed. Now in a mentoring role herself, she does her best to pass this sense of confidence on to her advisees.
She is intentional about creating space where students can grow without fear. From their very first meetings, one nominator wrote, Prescott emphasized that “graduate school is a place for learning and curiosity.” They never felt judged for not knowing something; instead, they were encouraged to ask questions, share ideas, and take intellectual risks. That environment, the student explained, allowed them to grow into their scientific identity with confidence.
Prescott reinforces this message often. Success, she tells students, grows from effort, learning, and persistence, rather than from fixed traits. When working with students, she does her best to reframe failure as part of the process, emphasizing its importance within the scientific journey. Through these avenues, she cultivates a lab culture where students are challenged to think boldly while feeling genuinely supported, and where they are seen not only as researchers, but as whole people.
Advocacy beyond the bench
Prescott’s commitment to caring extends well beyond day-to-day lab work. Her nominators relate that she actively supports her students’ professional development, encouraging them to pursue writing projects, certificates, internships, leadership roles, and community engagement.
Nominators also highlight Prescott’s focus on supporting underserved communities within the field as a whole. Students highlight her involvement with Graduate Women in Biology (GwiBio), where she volunteered as a speaker for the “Glass Shards” series. Her talk “Failure as the Path to Success,” in which she candidly shared pivots and setbacks in her own career, was described as one of the organization’s most impactful sessions.
Her dedication to inclusion is equally evident in her mentorship of scholars whose role in her lab is more temporary. She welcomes international visiting scholars, temporary lab techs, and undergraduate interns in the MIT Summer Research Program. When one intern encountered barriers at their home institution, Prescott ensured they had a continued research home in her lab at MIT. These additional resources allowed them to complete their undergraduate thesis and graduate on time from their university.
Prescott says that she views mentorship as an evolving practice, regularly soliciting feedback from her students. Effective leadership, in her view, grows from mutual trust and open communication.
For many nominators, Prescott’s impact extends beyond their careers. “She has taught me what positive and supportive mentoring relationships look like,” one student reflected. “When I think about the type of mentor I want to be, I hope I can emulate the ways in which she supports and guides her students to develop their scientific independence and confidence.”
In lifting up the people behind the science as thoughtfully as the science itself, Sara Prescott demonstrates that the most enduring legacy of a mentor is not only the discoveries from their lab, but the composure and courage their advisees carry forward.
MIT hackathon tackles real-world challenges in Ukraine
During this year’s Independent Activities Period (IAP), students, researchers, and collaborators across seven time zones came together to tackle urgent technical challenges facing Ukraine as the full-scale war enters its fourth year.
A four-week hackathon, Build for Ukraine 2.0, brought MIT students and Ukrainian collaborators into a shared innovation environment where power outages, air-raid alerts, and subzero temperatures were part of the daily reality of teamwork.
The event was co-led by the MIT-Ukraine Program, MIT Edgerton Center, and MIT Lincoln Laboratory Beaver Works, with support from Mission Innovation X, MathWorks, and MIT.nano.
Designed and taught as an IAP subject EC.S01/EC.S11 (Build for Ukraine 2026), the hackathon paired technically diverse participants with Ukrainian organizations seeking near-term solutions to problems arising directly from wartime conditions.
“It’s not every working group that has to reschedule team meetings because some members are in Ukraine and just had a blackout,” says Hosea Siu ’14, SM ’15, PhD ’18, one of the lead organizers. “This class is unusual — in the most meaningful ways.”
A collaborative class built for real-world urgency
Build for Ukraine centered on co-design and rapid prototyping with in-country partners. Organizers spent the fall gathering challenge statements from stakeholders in Ukraine, Taiwan, the United Kingdom, Spain, and across the United States. The goal: identify problems where a small, interdisciplinary team could make measurable progress in one month.
The participant pool reflected MIT’s open IAP structure. First-year undergraduates worked alongside senior engineers, international researchers, and Ukrainian colleagues participating remotely despite frequent blackouts. Many joined meetings from darkened apartments in Kyiv, Kharkiv, and Cherkasy — often relying on unstable heaters and backup battery packs. One participant excused himself from a design review due to an air-raid alert.
“These groups developed what I call ‘quantum entanglement,’” says Svetlana Boriskina, a principal research scientist at MIT and director of the Multifunctional Metamaterials Laboratory in the Department of Mechanical Engineering. “They were sharing data in real time across continents, while experiencing the war’s impacts directly and indirectly.”
Setting the foundation: briefings and technical overviews
The first week introduced participants to the geopolitical, technical, and humanitarian landscape that would frame their work. Topics included:
- War context and co-design practices. Boriskina and Elizabeth Wood, faculty director of the MIT-Ukraine Program and professor of history at MIT, outlined current conditions in Ukraine. Student mentor Natalie Dean ’26 (vice president of MIT’s Assistive Technology Club) led a session on co-design — emphasizing partnership with, not for, Ukrainian collaborators.
- Extreme-environment engineering. Boriskina introduced two possible technical tracks proposed by her collaborators at Kharkiv Institute of Physics and Technology: radiation-hardened materials and self-powered sensors for extreme environments, and acoustic analysis for monitoring supercritical water cooling systems in nuclear reactors. One team, later known as HotPot, adopted the latter challenge.
- AI, Open Source Intelligence, and disinformation. Phil Tinn ’16, a research scientist at SINTEF and an affiliate of the MIT-Ukraine Program, along with specialists from IN2, described how disinformation narratives travel across platforms, from Telegram to global social media. Cambridge University researcher Jon Roozenbeek discussed early threat-signal detection using pricing fluctuations in fake SMS verifications. Ukrainian partners presented on large language model bias propagation, bot detection, and media-anomaly analysis — groundwork for the eventual VibeTracking team.
- Explosive ordnance disposal. Experts from MineSight and the U.S. Army National Guard detailed the scale of landmine contamination in Ukraine — by some estimates affecting a third of the country. These sessions inspired Clearview Interface, which worked on improving visual feedback for de-mining tools.
- Drone detection. Engineers from Skyfall and MIT’s student community introduced acoustic, radiofrequency (RF), and fiber-optic-tether detection methods for drones — leading to two separate teams: Birdwatch (acoustic detection) and Hrobachki (RF detection).
Five teams, seven time zones, and one month of development
Nearly 90 people joined the project through Discord, and by the end of week one, five core teams had formed. Roles blurred: Undergraduates mentored professionals; Ukrainian engineers supplied real-time operational data; and faculty offered rapid problem-solving guidance. Each team completed a Preliminary Design Review, Critical Design Review, and final presentation to an audience of more than 80 people, online and in person.
Despite the compressed timeline, the teams delivered promising prototypes and analyses with potential real-world application.
Team highlights
Clearview Interface — Visualizing metal-detector data for safer de-mining
Two undergraduates from Olin College developed a method for converting complex metal-detector audio signals — often an overwhelming sequence of indistinguishable beeps — into intuitive visual information. Their approach could help de-miners identify object types more quickly and accurately, enhancing both safety and mapping. The team reverse-engineered commercial detector outputs and produced a preliminary interface they plan to refine this spring.
HotPot — Acoustic monitoring for nuclear-reactor cooling systems
This team of seven (five at MIT and two from the Kharkiv Institute of Physics and Technology) worked to detect transitions from water to supercritical states inside steam pipes — a critical safety parameter in nuclear facilities that have remained in operation during wartime. Combining physics simulations, hardware engineering, and acoustics, the group analyzed data from Ukrainian partners and proposed a model capable of identifying supercritical conditions via remote monitoring.
Birdwatch — Acoustic detection of fiber-optic-controlled drones
With drones frequently used along the front and often tethered to fiber-optic control lines that evade RF detection, the Birdwatch team built an audio-based detection system using a network of cameras and microphones. They trained their model on drone signatures recorded across MIT’s campus and integrated early detections into a decision-support tool to help operators interpret and act on the alerts.
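The article doesn't detail Birdwatch's model, but acoustic detection of this kind typically starts from spectral features. As a toy illustration, assuming a characteristic rotor-hum frequency band, a crude detector can simply compare the signal's energy inside that band against its total energy; the 180 Hz band and thresholds below are hypothetical, and a trained classifier would replace this rule.

```python
import numpy as np

def band_energy_detector(signal, sample_rate, band, threshold):
    """Flag a signal whose spectral energy fraction in a target band
    exceeds a threshold -- a crude stand-in for a learned acoustic model."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    ratio = spectrum[in_band].sum() / spectrum.sum()
    return ratio > threshold

sr = 8000
t = np.arange(sr) / sr
drone = 0.5 * np.sin(2 * np.pi * 180 * t)  # hypothetical rotor hum near 180 Hz
noise = 0.05 * np.random.default_rng(1).standard_normal(sr)

print(band_energy_detector(drone + noise, sr, band=(150, 250), threshold=0.5))  # True
print(band_energy_detector(noise, sr, band=(150, 250), threshold=0.5))          # False
```

Real drone signatures vary with motor load and distance, which is why the team trained on recorded signatures rather than a fixed band; the sketch only conveys the underlying idea of listening in frequency space.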
Hrobachki — Radiofrequency localization for long-range drones
Two MIT students, along with collaborators at Kenyon College, Olin College, and a partner in Cherkasy, Ukraine, focused on RF detection for drones operating beyond front-line distances. They established nodes at MIT, Olin, and the town of Milton, Massachusetts, demonstrating the feasibility of distributed RF sensing for aerial threat identification.
VibeTracking — Following the movement of disinformation narratives
The smallest team — a master’s student in Lviv supported by several advisors — collaborated with IN2 to build a large-language-model pipeline that classifies and groups narratives across platforms such as Telegram and X. Their system demonstrated the likely propagation path of a specific narrative, illustrating how early-stage disinformation can be identified before it reaches mainstream channels.
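VibeTracking's pipeline uses a large language model, but the grouping step it performs — assigning each post to a cluster of similar narratives — can be sketched with a much simpler stand-in. The following toy example, with made-up posts and a bag-of-words similarity in place of LLM embeddings, shows the greedy clustering idea only.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_narratives(posts, threshold=0.4):
    """Greedy single-pass clustering: each post joins the first existing
    group whose centroid it resembles, else it starts a new group."""
    vecs = [Counter(p.lower().split()) for p in posts]
    groups = []  # list of (centroid Counter, member indices)
    for i, v in enumerate(vecs):
        for centroid, members in groups:
            if cosine(v, centroid) >= threshold:
                centroid.update(v)  # fold the post into the centroid
                members.append(i)
                break
        else:
            groups.append((Counter(v), [i]))
    return [members for _, members in groups]

posts = [
    "claim about secret labs spreads on telegram",
    "secret labs claim repeated on social media",
    "weather forecast for kyiv tomorrow",
]
print(group_narratives(posts))  # [[0, 1], [2]]
```

An LLM-based pipeline would additionally label each cluster with a narrative summary and track timestamps across platforms to reconstruct a propagation path, which is the part this sketch omits.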
Resilience, connection, and next steps
On the final day of presentations, specialists from Ukrainian universities, industry partners, and MIT-affiliated programs filled the room and populated the Zoom call. Their response was enthusiastic, not only because of what the teams produced in four weeks, but because of the collaborative networks formed under difficult conditions.
“The most important outcome is the community that emerged,” Boriskina says. “These teams built tools — but they also built relationships that will carry this work forward.”
Organizers expect several projects to continue this spring through research internships, Undergraduate Research Opportunity Program projects, and follow-on collaborations with Ukrainian institutions.
Students interested in joining ongoing Build for Ukraine projects can email the MIT-Ukraine Program. To support MIT-Ukraine initiatives, contact Svitlana Krasynska.
Seeing sounds
As one of the first students in MIT’s new Music Technology and Computation Graduate Program, Mariano Salcedo ’25 is researching the intersection between artificial intelligence and music visuals.
Specifically, his graduate research focuses on neural cellular automata (NCA), which merges classical cellular automata with machine learning techniques to grow images that can regenerate.
When paired with a stimulus like music, these images can “show” sounds in action.
“This approach enables anyone to create music-driven visuals while leveraging the expressive and sometimes unpredictable dynamics of self-organized systems,” Salcedo says. Through the web interface Salcedo has designed, users can adjust the relationship between the music’s energy and the NCA system to create unique visual performances using any music audio stream.
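A neural cellular automaton replaces a hand-written update rule with a learned one, but the coupling to music that Salcedo describes — letting the audio's energy drive how fast the image evolves — can be illustrated with a classical automaton. In this sketch, Conway's Game of Life stands in for the learned NCA update (an assumption for illustration only), and a scalar "energy" sets each cell's probability of updating per frame.

```python
import numpy as np

def life_step(grid):
    """One Game of Life update -- a classical cellular automaton step."""
    # Count the 8 neighbors of every cell with wraparound (toroidal) edges.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n == 3) | (grid & (n == 2))).astype(np.uint8)

def music_driven_step(grid, energy, rng):
    """Blend a CA update with the current frame in proportion to the
    music's energy: loud passages evolve the image quickly, quiet ones
    nearly freeze it."""
    updated = life_step(grid)
    mask = rng.random(grid.shape) < energy  # update each cell with prob = energy
    return np.where(mask, updated, grid)

rng = np.random.default_rng(42)
grid = (rng.random((16, 16)) < 0.3).astype(np.uint8)
quiet = music_driven_step(grid, energy=0.0, rng=rng)  # silence: nothing changes
loud = music_driven_step(grid, energy=1.0, rng=rng)   # full volume: full CA step
print((quiet == grid).all())  # True
```

In an actual NCA, the update rule is a small neural network trained so that the pattern regenerates when damaged, and the audio energy would modulate that learned dynamic rather than a fixed rule.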
“I want the visuals to complement and elevate the listening experience,” he says.
Last year Salcedo, the Alex Rigopulos (1992) Fellow in Music Technology and Computation, earned a BS in artificial intelligence and decision making from MIT, where he explored signal processing in machine learning and how a classical understanding of signals can inform how we understand AI. Now he’s one of five master’s students in the Music Technology and Computation Graduate Program’s inaugural cohort.
The program, directed by professor of the practice in music technology Eran Egozy ’93, MNG ’95, is a collaboration between MIT Music and Theater Arts in the School of Humanities, Arts, and Social Sciences, and the School of Engineering. It invites practitioners to study, discover, and develop new computational approaches to music. It also includes a speaker series that exposes students and the broader MIT community to music industry professionals, artists, technologists, and other researchers.
Rigopulos ’92, SM ’94, is a video game designer, musician, and former CEO of Harmonix Music Systems, a company he co-founded with Egozy in 1995. Harmonix is now a part of Epic Games, where Rigopulos is the director of game development for music.
“MIT is where I was first able to pursue my passion for music technology decades ago, and that experience was the springboard for a long and fulfilling career,” says Rigopulos. “So, when MIT launched an advanced degree program in music technology, I was thrilled to fund a fellowship to help propel this exciting new program.”
Egozy is enthusiastic about Salcedo’s work and his commitment to further exploring its possibilities. “He is a beautiful example of a multidisciplinary researcher who thinks deeply about how to best use technology to enhance and expand human creativity,” he says.
Salcedo has been selected to deliver the student address at the 2026 Advanced Degree Ceremony for the School of Humanities, Arts, and Social Sciences. “It’s an honor and it’s daunting,” he says. “It feels like a huge responsibility,” though one he’s eager to embrace. His selection also pleases Egozy. “I am super excited that Mariano was chosen to deliver this year’s keynote,” he enthuses.
Changing gears
Growing up in Mexico and Texas, Mariano Salcedo couldn’t readily indulge his passion for creating music. “There are no bands in Mexican public schools,” he says. While some families could pay for instruments and lessons, others like Salcedo’s were less fortunate.
“I’ve always loved music,” he continues. “I was a listener.”
Salcedo began his MIT journey as a mechanical engineering student, applying to MIT through the Questbridge program. “I heard if you like engineering and science that attending MIT would be a great choice,” he recalls. “Nerds are welcomed and embraced.” While he dutifully worked toward completing his MechE curriculum, music and technology came calling after a chance encounter with an LLM.
“I was introduced to an LLM chatbot and was blown away,” he recalls. “This was something that was speaking to me. I was both awed and frightened.” After his encounter with the chatbot, Salcedo switched his major from mechanical engineering to artificial intelligence and decision making.
“I basically started over after being two-thirds of the way through the MechE curriculum,” he says. He learned about the possibilities of AI but also confronted some of the challenges bedeviling researchers and developers, including the technology’s raw power, the difficulty of ensuring its responsible use, human bias, limited access for people from underrepresented groups, and a lack of diversity among developers. He decided he might be able to change that picture.
“I thought one more person in the field could make a difference,” he says.
While completing his undergraduate studies, Salcedo’s love of music resurfaced. “I began DJ’ing at MIT and was hooked,” he says. While he hadn’t learned to play a traditional instrument, he discovered he could create engaging soundscapes with technology. “I bought a digital audio workstation to help me make music,” he continues.
Egozy and Salcedo met in 2024 while Salcedo completed an Undergraduate Research Opportunities Program rotation as a game developer in Egozy’s lab. “He was incredibly curious and has grown tremendously over a very short time period,” Egozy says. Egozy became an informal, though important, mentor to Salcedo. “He brings great energy and thoughtfulness to his work, and to supporting others in the [music technology and computation graduate] program,” Egozy notes.
Salcedo also took a class with Egozy, 21M.385/21M.585/6.4450 (Interactive Music Systems), which further fed his appetite for the creativity he craved while also allowing him to indulge his fascination with music’s possibilities. By taking advantage of courses in the HASS curriculum, he further developed his understanding of music theory and related technologies.
“I took a class with professor Leslie Tilley, 21M.240 (Critically Thinking in Music), which helped establish a valuable framework for understanding music making,” he says, “while a class like 6.3000 (Signal Processing) helped me connect intuition with science.”
Working across disciplines
While Salcedo is passionate about his music and his research, he’s also invested in building relationships with his fellow students. He’s a member of the fraternity Sigma Nu, where he says he “found a home and community.” He also took a MISTI trip to Chile in summer 2023, where he conducted music technology research. Salcedo praises the culture of camaraderie at MIT and is grateful for its influence on his work as a scholar. “MIT has taught me how to learn,” he says.
Professors encouraged him to present his research and findings. He presented his work — Artificial Dancing Intelligence: Neural Cellular Automata for Visual Performance of Music — at the Association for the Advancement of Artificial Intelligence conference in Singapore in January 2026.
Salcedo believes his research can potentially move beyond music visualization. “What if we could improve the ways we model self-organized systems?” he asks. “That is, systems like multicellular organisms, flocks of birds, or societies that interact locally but exhibit interesting behaviors.” Any system, Salcedo says, where the whole is more than the sum of its parts.
Developing the technology used to design his application can potentially help answer important ethical questions regarding AI’s continued expansion and growth. The path to his work’s development is both daunting and lonely, but those challenges feed his work ethic.
“It’s intimidating to pursue this path when the academy is currently focused on LLMs,” he says. “But it’s also important to explain and explore the base technology before digging into more nuanced work, which can help audiences understand it better.” Knowing that he has the support of his professors helps Salcedo maintain excitement for his ideas. “They only ask that we ground our interests in research,” he says.
His investigations are shaping his work as a musician. “My music has gotten more interesting because of the classes I’m taking,” he says. He’s also interested in understanding whose music the academy and the world hears, examining biases toward Western music in the canon and exploring how to reduce biases related to which kinds of music are valued.
“The work we do as technologists is far less objective than we’re led to believe,” he says.
Salcedo is especially grateful for the support he’s received during his time at MIT. “Program faculty encourage a variety of pursuits,” he says, “and ask us to advance our individual aims rather than focusing on theirs.” During his time in the graduate program, he notes with enthusiasm how often he’s been challenged to pursue his ideas.
Ultimately, Salcedo wants people to experience the joy he feels working at the intersection of the humanities and the sciences. Music and technology impact nearly everyone. Inviting audiences into his laboratory as participants in the creative and research processes offers the same kind of satisfaction he gets from crafting a great beat or solving for a thorny technical challenge. Helping audiences understand his work’s value fuels his drive to succeed.
“I want users to feel movement and explore sounds and their impact more fully,” he says.
MIT engineers design proteins by their motion, not just their shape
Proteins are far more than nutrients we track on a food label. Present in every cell of our bodies, they work like nature’s molecular machines. They walk, stretch, bend, and flex to do their jobs: pumping blood, fighting disease, building tissue, and performing countless other tasks too small for the eye to see. Their power doesn’t come from shape alone, but from how they move.
In recent years, artificial intelligence has allowed scientists to design entirely new protein structures not found in nature, tailored for specific functions such as binding to viruses or mimicking the mechanical properties of silk for sustainable materials. But designing for structure alone is like building a car body without any control over how the engine performs. The subtle vibrations, shifts, and mechanical dynamics of a protein are just as critical to its functions as its form.
Now, MIT engineers have taken a major step toward closing that gap with the development of an AI model known as VibeGen. If vibe coding lets programmers describe what they want and then AI generates the software, VibeGen does the same for living molecules: specify the vibe — the pattern of motion you want — and the model writes the protein.
The new model allows scientists to target how a protein flexes, vibrates, and shifts between shapes in response to its environment, opening a new frontier in the design of molecular mechanics. VibeGen builds on a series of advances from the Buehler lab in agentic AI for science — systems in which multiple AI models collaborate autonomously to solve problems too complex for any single model.
“The essence of life at fundamental molecular levels lies not just in structure, but in movement,” says Markus Buehler, the Jerry McAfee Professor of Engineering in the departments of Civil and Environmental Engineering and Mechanical Engineering. “Everything from protein folding to the deformation of materials under stress follows the fundamental laws of physics.”
Buehler and his former postdoc, Bo Ni, identified a critical need for what they call physics-aware AI: systems capable of reasoning about motion, not just snapshots of molecular structure. “AI must go beyond analyzing static forms to understanding how structure and motion are fundamentally intertwined,” Buehler adds.
The new approach, described in a paper published March 24 in the journal Matter, uses generative AI to create proteins with tailor-made dynamics.
Training AI to think about motion
The revolution in AI-driven protein science has been, overwhelmingly, a revolution in structure. Tools like AlphaFold solved the decades-old problem of predicting a protein’s three-dimensional shape. Existing generative models learned to design new shapes from scratch. But in focusing on the folded snapshot — the protein frozen in place — the field largely set aside the property that makes proteins work: their motion. “Structure prediction was such a grand challenge that it absorbed the field’s attention,” Buehler says. “But a protein’s shape is just one frame of a much longer film, and the design space extends through space and time, where structure sits on a much broader manifold.” Scientists could design a protein with a particular architecture. They couldn’t yet specify how that protein would move, flex, or vibrate once it was built.
VibeGen does something no protein design tool has done before. It inverts the traditional problem. Rather than asking, “What shape will this sequence produce?” it asks, “What sequence will make a protein move in exactly this way?”
To build VibeGen, Buehler and Ni turned to a class of AI diffusion models, the same underlying technology that powers AI image generators capable of creating realistic pictures from pure noise. In VibeGen’s case, the model starts with a random sequence of amino acids and refines it, step by step, until it converges on a sequence predicted to vibrate and flex in a targeted way.
The system works through two cooperating agents that design and challenge each other. A “designer” proposes candidate sequences aimed at a target motion profile. A “predictor” evaluates those candidates, asking whether they’ll actually move the way the designer intended. The two models iterate back and forth like an internal dialogue, until the design stabilizes into something that meets the goal. By specifying this vibrational fingerprint as the design input, VibeGen inverts the usual logic: dynamics becomes the blueprint, and structure follows.
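The designer/predictor loop can be sketched in miniature. In this toy version (not VibeGen’s actual architecture), a fixed random linear map stands in for the learned predictor, and the “designer” refines a random starting sequence toward a target motion profile with noisy corrective steps whose noise is annealed away, loosely mimicking a diffusion model’s denoising iterations. The embedding, the map, and the step sizes are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, N_MODES = 64, 8

# Stand-in "predictor": maps a candidate sequence embedding to a
# vibrational profile. In the real system this is a learned model;
# a fixed random linear map is a hypothetical placeholder.
W = rng.normal(size=(N_MODES, SEQ_LEN)) / np.sqrt(SEQ_LEN)

def predict_dynamics(seq):
    return W @ seq

def designer_step(seq, target, step=0.1, noise=0.02):
    """One refinement step: nudge the sequence toward the target
    dynamics (gradient of the squared error), plus injected noise,
    loosely mimicking a diffusion model's denoising iterations."""
    err = predict_dynamics(seq) - target
    grad = W.T @ err                      # gradient of 0.5 * ||err||^2
    return seq - step * grad + noise * rng.normal(size=seq.shape)

target = rng.normal(size=N_MODES)         # desired motion profile
seq = rng.normal(size=SEQ_LEN)            # start from pure noise

for t in range(200):
    noise = 0.02 * (1 - t / 200)          # anneal the noise to zero
    seq = designer_step(seq, target, noise=noise)

final_err = np.linalg.norm(predict_dynamics(seq) - target)
print(f"residual mismatch to target dynamics: {final_err:.4f}")
```

Here convergence is guaranteed because the stand-in predictor is linear; in the real system the predictor is itself a learned model, and its critique of each candidate is what keeps the designer honest.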
“It’s a collaborative system,” Ni says. “The designer proposes, the predictor critiques, and the design improves through that tension.”
Most sequences VibeGen produces are entirely de novo, not borrowed from nature, not a variation on something evolution already made. To confirm the designs actually work, the team ran detailed physics-based molecular simulations, and the proteins behaved exactly as intended, flexing and vibrating in the patterns VibeGen had targeted.
One of the study’s most striking findings is that many different protein sequences and folds can satisfy the same vibrational target — a property the researchers call functional degeneracy. Where evolution converged on one solution, VibeGen reveals an entire family of alternatives: proteins with different structures and sequences that nonetheless move in the same way. “It suggests that nature explored only a fraction of what’s possible,” Buehler says. “For any given dynamic behavior, there may be a large, untapped space of viable designs.”
A new frontier in molecular engineering
Controlling protein dynamics could have wide-ranging applications. In medicine, proteins that can change shape on cue hold enormous potential. Many therapeutic proteins work by binding to a target molecule — a virus, a cancer cell, a misfiring receptor. How well they bind often depends not just on their shape, but on how flexibly they can adapt to their target. A protein that is engineered with motion could grip more precisely, reduce unintended interactions, and ultimately become a safer, more effective drug.
In materials science, an area of Buehler’s research, mechanical properties at the molecular scale affect a material’s performance. Biological materials like silk and collagen get their strength and resilience from the coordinated motion of their molecular building blocks. Designing proteins that are stiffer, more flexible, or tuned to vibrate in a certain way could lead to new sustainable fibers, impact-resistant materials, or biodegradable alternatives to petroleum-based plastics.
Buehler envisions further possibilities: structural materials for buildings or vehicles incorporating protein-based components that heal themselves after mechanical stress, or that adjust in response to heavy load.
By enabling researchers to specify motion as a direct design parameter, VibeGen treats proteins less like static shapes and more like programmable mechanical devices. The advance bridges artificial intelligence, medicine, synthetic biology, and materials engineering — toward a future in which molecular machines can be designed with the same precision and intentionality as bridges, engines, or microchips.
“VibeGen can venture into uncharted territory, proposing protein designs beyond the repertoire of evolution, tailored purely to our specifications. It’s as if we’ve invented a new creative engine that designs molecular machines on demand,” Buehler adds.
The researchers plan to refine the model further and validate their designs in the lab. They also hope to integrate motion-aware design with other AI tools, building toward systems that can design proteins to be not just dynamic, but multifunctional: machines that sense their environment, respond to signals, and adapt in real time.
The word “vibe” comes from vibration, and Buehler sees the connection as more than wordplay. “We’ve turned ‘vibe’ into a metaphor, a feeling, something subjective,” he says. “But for a protein, the vibe is the physics. It is the actual pattern of motion that determines what the molecule can do, the very machinery of life.”
The research was supported by the U.S. Department of Agriculture, the MIT-IBM Watson AI Lab, and MIT’s Generative AI Initiative.
G. Anthony Grant named a 2025-26 NACDA Athletics Director of the Year
The National Association of Collegiate Directors of Athletics (NACDA) has announced that G. Anthony Grant, MIT’s director of athletics and head of the MIT Department of Athletics, Physical Education, and Recreation, is among 28 winners of the 2025-26 NACDA Athletics Director of the Year (ADOY) Award.
The ADOY Award highlights the efforts of athletics directors at all levels for their commitment and positive contributions to student-athletes, campuses, and their surrounding communities. Grant is currently in his sixth year at MIT, leading one of the most comprehensive Division III athletics programs in the country. In his role, he directs a department featuring 33 intercollegiate teams, including four Division I rowing programs, while providing opportunities for over 800 student-athletes.
MIT achieved remarkable success under Grant's leadership during the 2024-25 academic year, winning four NCAA championships. Women's swimming and diving captured the first national title in program history, while the women's cross country and track and field program swept all three NCAA championships in 2024-25, a historic first for an NCAA Division III women's program and the first MIT women's titles in cross country, as well as women's indoor and outdoor track and field.
The year also saw MIT crown 13 individual national champions, with 158 student-athletes earning All-American honors, 166 named All-Region, 227 named All-Conference, and 24 named CSC Academic All-America. Multiple head and assistant coaches claimed national, regional, and conference recognition. Nine teams claimed conference titles, while MIT earned seven NCAA/national top-10 finishes, as men's indoor track and field (7th), men's swimming and diving (9th), and men's lightweight crew joined the four national title winners.
Although Grant began his tenure at MIT just weeks before the start of the Covid-19 pandemic, the program has continued to excel and grow under his leadership. The Engineers have won six NCAA team national championships, finishing in the top seven of the NACDA LEARFIELD Directors' Cup standings every year since MIT returned to play following the pandemic. Most recently, MIT finished sixth in the final LEARFIELD Directors' Cup standings for the 2024-25 academic year, marking the 10th time the Engineers finished in the top 10, while MIT captured the NEWMAC Women's Presidents Cup for the 10th straight season and 11th time overall in 2024-25.
Grant was instrumental in negotiating a rebranding effort that transitioned team uniforms and other apparel to Nike, working in conjunction with BSN Sports as the official apparel provider. He also expanded fundraising, with a record-breaking year for annual gifts in 2022, and has overseen several key initiatives, including a $5 million renovation of the varsity athletics Sports Performance Center, which reopened in 2024-25. Most recently, Grant announced a state-of-the-art facility upgrade and turf renovation of the Fran O'Brien Baseball Field and Briggs Softball Field, with work currently underway.
In addition to the on- and off-field accomplishments of MIT's student-athletes and coaches, Grant has intentionally strengthened department culture by focusing on MIT's mission and shared values and behaviors, which were re-branded in 2020 under his leadership. Grant embodies an open-door leadership style, creating an environment where staff at all levels feel comfortable engaging with him. He values feedback and open communication, and fosters a supportive, respectful, and inclusive environment. He actively supports employee initiatives and has worked with student-athlete leaders to enhance the Student-Athlete Advisory Committee to improve real-time feedback collection and engagement at meetings.
Grant came to MIT from Metropolitan State University of Denver, where he also served as the director of athletics. Prior to MSU Denver, Grant served as the interim director of athletics at Millersville University in Pennsylvania, where he also worked as associate director of athletics for seven years. In addition, Grant has served as the athletic academic coordinator at the University of Iowa.
He earned his master's degree from Temple University in sport and recreation, along with a PhD in health and sport studies with a specialization in athletic administration from the University of Iowa. His leadership extends beyond MIT, as he is also involved with the National Association of Collegiate Directors of Athletics, the National Association of Division III Athletics Administrators (NADIIIAA), and the Minority Opportunities Athletic Association. Most recently, he was named to the NADIIIAA Board of Directors for 2025-26.
The ADOY Award program is in its 28th year and has recognized a total of 633 deserving athletics directors to date. The award spans seven divisions (NCAA FBS, FCS, Division I-AAA, II, III, NAIA/Other Four-Year Institutions and Junior College/Community Colleges). Winners will be recognized in conjunction with the 61st Annual NACDA and Affiliates Convention at Mandalay Bay Resort in Las Vegas, Nevada, at the beginning of the Association-Wide Featured Session on Tuesday, June 9.
Implantable islet cells could control diabetes without insulin injections
Most diabetes patients must carefully monitor their blood sugar levels and inject insulin multiple times per day to help keep their blood sugar from getting too high.
As a possible alternative to those injections, MIT researchers are developing an implantable device that contains insulin-producing cells. The device encapsulates the cells, protecting them from immune rejection, and it also carries an on-board oxygen generator to keep the cells healthy.
This device, the researchers hope, could offer a way to achieve long-term control of type 1 diabetes. In a new study, they showed that these encapsulated pancreatic islet cells could survive in the body for at least 90 days. In mice that received the implants, the cells remained functional and produced enough insulin to control the animals’ blood sugar levels.
“Islet cell therapy can be a transformative treatment for patients. However, current methods also require immune suppression, which for some people can be really debilitating,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Our goal is to find a way to give patients the benefit of cell therapy without the need for immune suppression.”
Anderson is the senior author of the study, which appears today in the journal Device. Former MIT research scientist Siddharth Krishnan, who is now an assistant professor of electrical engineering at Stanford University, and former MIT postdoc Matthew Bochenek are the lead authors of the paper. Robert Langer, the David H. Koch Institute Professor at MIT, is also a co-author.
Insulin on demand
Islet cell transplantation has already been used successfully to treat diabetes in patients. Those islet cells typically come from human cadavers or, more recently, can be generated from stem cells. In either case, patients must take immunosuppressive drugs to prevent their immune system from rejecting the transplanted cells.
Another way to prevent immune rejection is to encapsulate cells in a protective device. However, this raises new challenges, as the coating that surrounds the cells can prevent them from receiving enough oxygen.
In a 2023 study, Anderson and his colleagues reported an islet-encapsulation device that also carries an on-board oxygen generator. This generator consists of a proton-exchange membrane that can split water vapor (found abundantly in the body) into hydrogen and oxygen. The hydrogen diffuses harmlessly away, while oxygen goes into a storage chamber that feeds the islet cells through a thin, oxygen-permeable membrane.
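For a rough sense of scale, Faraday's law ties an electrolysis current to its oxygen output: splitting water transfers four electrons per O2 molecule (2 H2O → O2 + 4 H+ + 4 e−). The 1 mA current below is a hypothetical figure chosen for illustration, not a number from the paper:

```python
# Back-of-the-envelope oxygen yield from an electrolytic generator.
F = 96485.0            # Faraday constant, C per mol of electrons
current_A = 1e-3       # assumed cell current: 1 mA (hypothetical)

# 4 electrons are needed per O2 molecule produced
mol_O2_per_s = current_A / (4 * F)

# Convert to microliters of gas per hour (ideal gas, ~22.4 L/mol at STP)
uL_O2_per_hour = mol_O2_per_s * 3600 * 22.4e6
print(f"{uL_O2_per_hour:.1f} uL O2 per hour at 1 mA")
```

Even a milliamp-scale current yields hundreds of microliters of oxygen per hour, which is why a wirelessly powered, low-current device can plausibly keep a small chamber of cells supplied.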
Cells encapsulated within this device, they found, could produce insulin for up to a month after being implanted in mice.
“A month is a good timeframe in that it shows basic proof-of-concept. But from a translational standpoint, it’s important to show that you can go quite a bit longer than that,” Krishnan says.
In the new study, the researchers increased the lifespan of the devices by making them more waterproof and more resilient to cracking. They also improved the device electronics to deliver more power to the oxygen generator. The implant is powered wirelessly by an external antenna placed on the skin, which transfers energy to the device. By optimizing the circuitry, the researchers were able to increase the amount of power reaching the oxygen-generating system.
The additional power allowed the device to produce more oxygen, helping the encapsulated cells to survive and function more effectively. As a result, the cells were able to generate much more insulin over time.
Protein factories
In studies in rats and mice, the researchers showed that the new device could function for at least 90 days after being implanted under the skin. During this time, donor islet cells were able to produce enough insulin to keep the animals’ blood sugar levels within a healthy range.
The researchers saw similar results with islet cells derived from induced pluripotent stem cells, which could one day provide an indefinite supply that could be used for any patient who needs them. These islets didn’t fully reverse diabetes, but they did achieve some control of blood sugar levels.
“We’re hoping that in the future, if we can give the cells a little bit longer to fully mature, that they’ll secrete even more insulin to better regulate diabetes in the animals,” Bochenek says.
The researchers now plan to study whether they can get the devices to last for even longer in the body — up to two years, or longer.
“Long-term survival of the islets is an important goal,” Anderson says. “The cells, if they’re in the right environment, seem to be able to survive for a long time. We are excited by the duration we’ve already achieved, and we will be working to extend their function as long as possible.”
The researchers are also exploring the possibility of using this approach to deliver cells that could produce other useful proteins, such as antibodies, enzymes, or clotting factors.
“We think that these technologies could provide a long-term way to treat human disease by making drugs in the body instead of outside of the body,” Anderson says. “There are many protein therapies where patients must receive repeated, lengthy infusions. We think it may be possible to create a device that could continuously create protein therapeutics on demand and as needed by the patient.”
The research was funded, in part, by Breakthrough T1D, the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, and a Koch Institute Support (core) Grant from the National Cancer Institute.
Study reveals why some cancer therapies don’t work for all patients
Drugs that block enzymes called tyrosine kinases are among the most effective targeted therapies for cancer. However, they typically work for only 40 to 80 percent of the patients who would be expected to respond to them.
In a new study, MIT researchers have figured out why those drugs don’t work in all cases: Many of these tumors have turned on a backup survival pathway that helps them keep growing when the targeted pathway is knocked out.
“This seems to be hardwired into the cells and seems to be providing activation of a critical survival pathway in cancer cells,” says Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering at MIT. “This pathway allows the cells to be resistant to a wide variety of therapies, including chemotherapies.”
Additionally, the researchers found that they could kill those drug-resistant cancer cells by treating with both a tyrosine kinase inhibitor and a drug that targets the backup pathway. Clinical trials are now underway to test one such combination in lung cancer patients.
White is the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Cameron Flower PhD ’24, who is now a postdoc at Dana-Farber Cancer Institute and Boston Children’s Hospital, is the paper’s lead author.
Tumor survival
Tyrosine kinases are involved in many signaling pathways that allow cells to receive input from the external environment and convert it into a response such as growing or dividing. There are about 90 types of these kinases in human cells, and many of them are overactive in cancer cells.
“These kinases are very important for regulating cell growth and mitosis, and pushing the cell from a nondividing state to a dividing state depends on the activity of a lot of different tyrosine kinases,” Flower says. “We see a lot of mutations and overexpression of these kinases in cancer cells.”
These cancer-associated kinases include EGFR and BCR-ABL. Many cancer drugs targeting these kinases, including imatinib (Gleevec), have been approved to treat leukemia and other cancers. However, these drugs are not effective for all of the patients whose tumors overexpress tyrosine kinases — a phenomenon that has puzzled cancer researchers.
That lower-than-expected success rate motivated the MIT team to look into these drugs and try to figure out why some tumors do not respond to them.
For this study, the researchers examined six different cancer cell lines, which originally came from lung cancer patients. They chose two cell lines with EGFR mutations, two with mutations in a tyrosine kinase called MET and two with mutations in a tyrosine kinase called ALK. Each pair included one line that responded well to the tyrosine kinase inhibitor targeting the overactive pathway and one line that did not.
Using a technique called phosphoproteomics, the researchers were able to analyze the signaling pathways that were active in each of the cells, before and after treatment. Phosphoproteomics is used to identify proteins that have had a phosphate group added to them by a kinase. This process, known as phosphorylation, can activate or deactivate the target protein.
The researchers’ analysis revealed that the drugs were working as intended in all of the cancer cells. Even in resistant cells, the drugs did knock out signaling by their target kinase. However, in the cells that were resistant, an alternative network was already turned on, which helped the cells survive in spite of the treatment.
“Even before the therapy begins, the cells are in a state that intrinsically is resistant to the drug,” Flower says.
This survival network consists of signaling pathways that are regulated by another group of kinases, known as SRC family kinases. Activation of this network appears to help cancer cells proliferate and possibly to migrate to new locations in the body. In addition to lung cancer, researchers from White’s lab have also found SRC family kinases activated in melanoma cells, where they also play a role in drug resistance, and in glioblastoma, a type of brain cancer.
“As inhibitors for SRC kinases are also drugs, the work suggests that combining inhibitors of driver oncogenes with SRC inhibitors could increase the number of patients who would benefit. This strategy merits testing in new clinical trials,” says Benjamin Neel, a professor of medicine at NYU Grossman School of Medicine, who was not involved in the study.
These findings might also explain why some patients who initially respond to tyrosine kinase inhibitors end up having their tumors recur later; the cells may end up activating this same survival pathway, but not until sometime after the initial treatment.
Combating resistance
The researchers also found that treating the resistant cells with both a tyrosine kinase inhibitor and a drug that inhibits SRC family kinases led to much greater cell death rates. By coincidence, a clinical trial testing the combination of a tyrosine kinase inhibitor called osimertinib and an SRC inhibitor is now underway in patients with lung cancer. The MIT team now hopes to work with the same drug company to run a similar trial in pancreatic cancer patients.
The researchers also showed that they could use phosphoproteomics to analyze patient biopsy samples to see which cells already have the SRC pathways turned on.
“We are really excited to watch these clinical trials and to see how well patients do on these combinations. And I really think there’s a future for using tyrosine phosphoproteomics to guide this clinical decision-making,” White says.
This therapy might also be useful for patients whose tumors are originally susceptible to tyrosine kinase inhibitors but then later become resistant by turning on SRC pathways.
“Among the sensitive cells, some of them are able to upregulate this survival pathway and survive, which might be the residual disease that’s still there after treatment,” White says. “One of the interesting avenues here is, could we improve therapy for almost everybody, regardless of whether their tumors have intrinsic or adaptive resistance?”
The research was funded by the National Institutes of Health and the MIT Center for Precision Cancer Medicine.
“Near-misses” in particle accelerators can illuminate new physics, study finds
Particle accelerators reveal the heart of nuclear matter by smashing together atoms at close to the speed of light. The high-energy collisions produce a shower of subatomic fragments that scientists can then study to reconstruct the core building blocks of matter.
An MIT-led team has now used the world’s most powerful particle accelerator to discover new properties of matter, through particles’ “near-misses.” The approach has turned the particle accelerator into a new kind of microscope — and led to the discovery of new behavior in the forces that hold matter together.
In a study appearing this week in the journal Physical Review Letters, the team reports results from the Large Hadron Collider (LHC) — a massive underground, ring-shaped accelerator in Geneva, Switzerland. Rather than focus on the accelerator’s particle collisions, the MIT team searched for instances when particles barely glanced by each other.
When particles travel at close to the speed of light, the electromagnetic fields surrounding them flatten into a pancake shape. When two particles pass close but don’t collide, these pancaked energy fields produce extremely high-energy photons. Occasionally, a photon from one particle can ping off another particle, like an intense, quantum-sized pinprick of light.
The MIT team was able to pick out such near-miss pinpricks, or what scientists call “photonuclear interactions,” from the LHC’s particle-collision data. They found that when some photons pinged off a particle, they kicked out a type of subatomic particle, known as a D0 meson, that the scientists could measure for the first time.
D0 mesons are subatomic particles that contain a charm quark, a rare type of quark not normally found in ordinary nuclear matter. Quarks are the fundamental building blocks of all matter, and are bound together by gluons: massless particles that act as the invisible glue, or “strong force,” holding matter together. The rare charm quarks can be created only in high-energy interactions. As such, they provide an especially clean, unambiguous probe of the quarks and gluons inside a nucleus.
Through their measurements of D0 mesons, the researchers could estimate how tightly gluons are packed and, essentially, how strong the strong force is within a particle’s nucleus.
“Our result gives an indication that when nuclear matter is squeezed together, then gluons start behaving in a funny way,” says lead author Gian Michele Innocenti, an assistant professor of physics at MIT. “We need to know how these gluons behave in these extreme conditions because gluons keep the universe together. And at this point, photonuclear interactions are the best way we have to study gluon behavior.”
The study’s co-authors include members of the CMS Collaboration, a global consortium of physicists who operate and maintain the Compact Muon Solenoid (CMS) experiment, one of the largest detectors at the LHC and the one used to collect the study’s data.
Bringing a “background” into focus
With each run, the Large Hadron Collider fires off needle-thin beams of particles in opposite directions around a 27-kilometer-long underground ring. When the beams cross paths, particles can collide. If the collisions happen to take place in a region of the ring where the CMS detector is set up, the detector can record the collisions, and scientists can then analyze the aftermath to reconstruct the fragments that make up the original particles.
Since the LHC began operations in 2008, the focus has been overwhelmingly on the detection and analysis of “head-on” collisions. Physicists have known that the accelerated particle beams would also produce photonuclear interactions — near-miss events where a particle might collide not with another particle, but with another particle’s surrounding cloud of photons. But such light-nucleus interactions were thought to be simply noise.
“These photonuclear events were considered a background that people wanted to cancel,” Innocenti says. “But now people want to use it as a signal because a collision between a photon and a nucleus can essentially be like a super-high-accuracy microscope for nuclear matter.”
When a photon pings off a particle, the abundance, direction, and energy of the produced D0 meson relates directly to the energy and density of the gluons in the nucleus. If scientists can detect and measure this photon interaction, it would be like using an extremely small and powerful flashlight to illuminate the nuclear structures. But until now, it was assumed that photonuclear interactions would be impossible to pick out amid the various physics processes that can occur in such collisions.
“People didn’t think it was possible to remove the huge mess of all these other collisions, to zoom in on single photons hitting single nuclei producing a D0 meson,” Innocenti says. “We had to devise a system to recognize those very rare photonuclear interactions while data was being taken of particle collisions.”
Illuminating charm
For their new study, Innocenti and his colleagues first simulated what a photonuclear interaction would look like amid a shower of other particle collisions. In particular, they simulated a scenario in which a photon pings off a nucleus and produces a D0 meson. Although these events are rare, D0 mesons are among the most abundant particles that contain a charm quark. The team reasoned that if they could detect signs of a charm quark in D0 mesons that are produced in a photonuclear interaction, it could give valuable information about the gluons that hold the nucleus together.
With their simulations, the researchers then developed an algorithm to detect photonuclear interactions. They implemented the algorithm at the CMS detector to search for signals in real-time during the LHC’s particle-colliding runs.
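In spirit, such a selection looks for events that are conspicuously quiet: ultraperipheral photon-nucleus events leave few tracks and an empty region on one side of the detector, unlike busy head-on collisions. The toy filter below illustrates that logic; the event model and thresholds are invented for illustration and are not CMS’s actual trigger criteria:

```python
import random
random.seed(1)

def make_event(photonuclear):
    """Toy event model: photonuclear (near-miss) events have few
    tracks and a quiet forward region; head-on collisions are busy."""
    if photonuclear:
        return {"n_tracks": random.randint(2, 8),
                "fwd_energy": random.uniform(0.0, 0.5)}
    return {"n_tracks": random.randint(50, 500),
            "fwd_energy": random.uniform(5.0, 100.0)}

def select(event, max_tracks=10, max_fwd=1.0):
    """Keep only quiet events: low multiplicity AND an empty
    forward region (a rapidity-gap-style signature)."""
    return event["n_tracks"] <= max_tracks and event["fwd_energy"] <= max_fwd

# Simulate a stream where 1 in 100 events is photonuclear
events = [make_event(photonuclear=(i % 100 == 0)) for i in range(10_000)]
kept = [e for e in events if select(e)]
print(f"kept {len(kept)} of {len(events)} events")
```

The real analysis faces the same shape of problem at vastly larger scale: tens of billions of collisions filtered in real time down to a few hundred candidate photonuclear events.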
“We had to collect tens of billions of collisions in order to extract a few hundred of these rare instances where a photon hits a nucleus and produces one of these exotic D0 meson particles,” Innocenti explains.
From this enormous dataset, the team identified a clean sample of these rare events by exploiting CMS’s advanced detector capabilities to select near-miss events and reconstruct the properties of the D0 mesons.
Through this process, the team detected instances of D0 meson production and then worked back to calculate properties of the particles’ charm quarks and the gluons that would have held them together in the original nucleus.
“We are constraining what happens to gluons when they are squeezed in very large ions that are traveling very fast,” Innocenti says. “So far, our data confirms what people expect in terms of high-density nuclear matter. In reality, this is the first time we’ve shown this kind of measurement is feasible.”
The team is working to improve the measurement’s accuracy in order to provide a clearer picture of how quarks and gluons are arranged inside a nucleus.
“Gluons are a very strong force that keeps the universe together,” Innocenti says. “The description of the strong force is at the basis of everything we see in nature. Now we have a way to either fully confirm, or show deviations from, that description.”
This work was supported, in part, by the U.S. Department of Energy, including support from a DOE Early Career Research Program award, and it builds on the contributions of a large MIT team of graduate students, undergraduate researchers, scientists, and postdocs.
AI system learns to keep warehouse robot traffic running smoothly
Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns.
To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly. Their method learns which robots should go first at each moment, based on how congestion is forming, and adapts to prioritize robots that are about to get stuck. In this way, the system can reroute robots in advance to avoid bottlenecks.
The hybrid system utilizes deep reinforcement learning, a powerful artificial intelligence method for solving complex problems, to figure out which robots should be prioritized. Then, a fast and reliable planning algorithm feeds instructions to the robots, enabling them to respond rapidly in constantly changing conditions.
In simulations inspired by actual e-commerce warehouse layouts, this new approach achieved about a 25 percent gain in throughput over other methods. Importantly, the system can quickly adapt to new environments with different quantities of robots or varied warehouse layouts.
“There are a lot of decision-making problems in manufacturing and logistics where companies rely on algorithms designed by human experts. But we have shown that, with the power of deep reinforcement learning, we can achieve super-human performance. This is a very promising approach, because in these giant warehouses even a 2 or 3 percent increase in throughput can have a huge impact,” says Han Zheng, a graduate student in the Laboratory for Information and Decision Systems (LIDS) at MIT and lead author of a paper on this new approach.
Zheng is joined on the paper by Yining Ma, a LIDS postdoc; Brandon Araki and Jingkai Chen of Symbotic; and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in the Journal of Artificial Intelligence Research.
Rerouting robots
Simultaneously coordinating hundreds of robots in an e-commerce warehouse is no easy task.
The problem is especially complicated because the warehouse is a dynamic environment, and robots continually receive new tasks after reaching their goals. They need to be rapidly redirected as they leave and enter the warehouse floor.
Companies often leverage algorithms written by human experts to determine where and when robots should move to maximize the number of packages they can handle.
But if there is congestion or a collision, a firm may have no choice but to shut down the entire warehouse for hours to manually sort the problem out.
“In this setting, we don’t have an exact prediction of the future. We only know what the future might hold, in terms of the packages that come in or the distribution of future orders. The planning system needs to be adaptive to these changes as the warehouse operations go on,” Zheng says.
The MIT researchers achieved this adaptability using machine learning. They began by designing a neural network model to take observations of the warehouse environment and decide how to prioritize the robots. They trained this model using deep reinforcement learning, a trial-and-error method in which the model learns to control robots in simulations that mimic actual warehouses. The model is rewarded for making decisions that increase overall throughput while avoiding conflicts.
Over time, the neural network learns to coordinate many robots efficiently.
“By interacting with simulations inspired by real warehouse layouts, our system receives feedback that we use to make its decision-making more intelligent. The trained neural network can then adapt to warehouses with different layouts,” Zheng explains.
The model is designed to capture the long-term constraints and obstacles in each robot’s path, while also considering dynamic interactions between robots as they move through the warehouse.
By predicting current and future robot interactions, the model plans to avoid congestion before it happens.
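The reward-driven training idea can be sketched with a toy policy-gradient (REINFORCE) example. This is a hypothetical illustration, not the paper’s architecture: a single learned weight stands in for the neural network, each robot has one feature standing in for “how close it is to getting stuck,” and the reward is simplified to whether the most-blocked robot received top priority:

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def train_priority_policy(episodes=3000, lr=0.1, seed=0):
    """REINFORCE on a toy prioritization task.
    Policy: score each robot as w * feature, pick who goes first via softmax.
    Reward: 1 if the robot with the largest feature was chosen, else 0."""
    rng = random.Random(seed)
    w = 0.0  # single learned weight
    for _ in range(episodes):
        feats = [rng.random() for _ in range(4)]
        probs = softmax([w * f for f in feats])
        # sample which robot goes first from the policy distribution
        r, acc, choice = rng.random(), 0.0, len(feats) - 1
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                choice = i
                break
        reward = 1.0 if feats[choice] == max(feats) else 0.0
        # policy-gradient step: d log pi / d w = f_chosen - E[f] under the policy
        grad = feats[choice] - sum(p * f for p, f in zip(probs, feats))
        w += lr * reward * grad
    return w
```

After training, the weight turns positive, so the greedy policy ranks the most-blocked robot first; the real system replaces the scalar weight with a deep network over rich warehouse observations and the toy reward with simulated throughput.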
After the neural network decides which robots should receive priority, the system employs a tried-and-true planning algorithm to tell each robot how to move from one point to another. This efficient algorithm helps the robots react quickly in the changing warehouse environment.
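One classical planner that fits this priority-then-plan pattern is prioritized space-time A*, a standard multi-agent path-finding technique. The article does not say which planner the team uses, so this is a generic sketch: robots are planned one at a time in learned-priority order, and each planned path reserves (cell, time) slots that lower-priority robots must route around.

```python
from heapq import heappush, heappop

def plan_in_priority_order(grid, robots, priorities, horizon=50):
    """Plan robots one at a time, highest learned priority first.
    `grid` is a set of free (x, y) cells; `robots` maps id -> (start, goal).
    Each planned path reserves its (x, y, t) slots so later robots avoid them.
    Simplifications: no edge-swap check; a robot vacates the grid at its goal."""
    reserved, paths = set(), {}
    for rid in sorted(robots, key=lambda r: -priorities[r]):
        start, goal = robots[rid]
        path = space_time_a_star(grid, start, goal, reserved, horizon)
        if path is None:
            return None  # a full system would replan or adjust priorities
        for t, cell in enumerate(path):
            reserved.add((*cell, t))
        paths[rid] = path
    return paths

def space_time_a_star(grid, start, goal, reserved, horizon):
    """A* over (cell, time) states; waiting in place is a legal move."""
    def h(c):  # Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_q = [(h(start), 0, start, (start,))]
    seen = set()
    while open_q:
        f, t, cell, path = heappop(open_q)
        if cell == goal:
            return list(path)
        if (cell, t) in seen or t >= horizon:
            continue
        seen.add((cell, t))
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt in grid and (*nxt, t + 1) not in reserved:
                heappush(open_q, (t + 1 + h(nxt), t + 1, nxt, path + (nxt,)))
    return None
```

On a 3-by-3 grid with two crossing robots, the high-priority robot drives straight through the shared center cell while the low-priority robot waits one step and then follows, which is exactly the congestion-avoiding behavior the learned priorities are meant to exploit.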
This combination of methods is key.
“This hybrid approach builds on my group’s work on how to achieve the best of both worlds between machine learning and classical optimization methods. Pure machine-learning methods still struggle to solve complex optimization problems, and yet it is extremely time- and labor-intensive for human experts to design effective methods. But together, using expert-designed methods the right way can tremendously simplify the machine learning task,” says Wu.
Overcoming complexity
Once the researchers trained the neural network, they tested the system in simulated warehouses different from those it had seen during training. Because industrial simulators were too inefficient for this complex problem, the researchers designed their own environments to mimic what happens in actual warehouses.
On average, their hybrid learning-based approach achieved 25 percent greater throughput than traditional algorithms as well as a random search method, in terms of number of packages delivered per robot. Their approach could also generate feasible robot path plans that overcame congestion caused by traditional methods.
“Especially when the density of robots in the warehouse goes up, the complexity scales exponentially, and these traditional methods quickly start to break down. In these environments, our method is much more efficient,” Zheng says.
While their system is still far from real-world deployment, these demonstrations highlight the feasibility and benefits of a machine learning-guided approach to warehouse automation.
In the future, the researchers want to include task assignments in the problem formulation, since determining which robot will complete each task impacts congestion. They also plan to scale up their system to larger warehouses with thousands of robots.
This research was funded by Symbotic.
