MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
How to create “humble” AI
Artificial intelligence holds promise for helping doctors diagnose patients and personalize treatment options. However, an international group of scientists led by MIT cautions that AI systems, as currently designed, carry the risk of steering doctors in the wrong direction because they may overconfidently make incorrect decisions.
One way to prevent these mistakes is to program AI systems to be more “humble,” according to the researchers. Such systems would reveal when they are not confident in their diagnoses or recommendations and would encourage users to gather additional information when the diagnosis is uncertain.
“We’re now using AI as an oracle, but we can use AI as a coach. We could use AI as a true co-pilot. That would not only increase our ability to retrieve information but increase our agency to be able to connect the dots,” says Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science, a physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School.
Celi and his colleagues have created a framework that they say can guide AI developers in designing systems that display curiosity and humility. This new approach could allow doctors and AI systems to work as partners, the researchers say, and help prevent AI from exerting too much influence over doctors’ decisions.
Celi is the senior author of the study, which appears today in BMJ Health and Care Informatics. The paper’s lead author is Sebastián Andrés Cajas Ordoñez, a researcher at MIT Critical Data, a global consortium led by the Laboratory for Computational Physiology within the MIT Institute for Medical Engineering and Science.
Instilling human values
Overconfident AI systems can lead to errors in medical settings, according to the MIT team. Previous studies have found that ICU physicians defer to AI systems that they perceive as reliable even when their own intuition goes against the AI suggestion. Physicians and patients alike are more likely to accept incorrect AI recommendations when they are perceived as authoritative.
In place of systems that offer overconfident but potentially incorrect advice, health care facilities should have access to AI systems that work more collaboratively with clinicians, the researchers say.
“We are trying to include humans in these human-AI systems, so that we are facilitating humans to collectively reflect and reimagine, instead of having isolated AI agents that do everything. We want humans to become more creative through the usage of AI,” Cajas Ordoñez says.
To create such a system, the consortium designed a framework that includes several computational modules that can be incorporated into existing AI systems. The first of these modules requires an AI model to evaluate its own certainty when making diagnostic predictions. Developed by consortium members Janan Arslan and Kurt Benke of the University of Melbourne, the Epistemic Virtue Score acts as a self-awareness check, ensuring the system’s confidence is appropriately tempered by the inherent uncertainty and complexity of each clinical scenario.
With that self-awareness in place, the model can tailor its response to the situation. If the system detects that its confidence exceeds what the available evidence supports, it can pause and flag the mismatch, requesting specific tests or history that would resolve the uncertainty, or recommending specialist consultation. The goal is an AI that not only provides answers but also signals when those answers should be treated with caution.
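The gating behavior described above can be sketched in a few lines of Python. This is an illustrative assumption, not the paper's actual Epistemic Virtue Score: the function name, inputs, and threshold are hypothetical stand-ins for whatever calibration the real module performs.

```python
def triage_prediction(confidence: float, evidence_strength: float,
                      threshold: float = 0.15) -> dict:
    """Hypothetical confidence gate: flag predictions whose stated
    confidence outruns the strength of the supporting evidence.

    confidence        -- the model's self-reported certainty (0-1)
    evidence_strength -- an assumed score for how well the available
                         clinical data supports that certainty (0-1)
    """
    gap = confidence - evidence_strength
    if gap > threshold:
        # Confidence exceeds what the evidence supports: pause and defer.
        return {"action": "defer",
                "note": "request more tests, history, or a consult"}
    return {"action": "report",
            "note": "confidence is supported by the evidence"}
```

In this sketch, a high-confidence prediction backed by weak evidence (`triage_prediction(0.95, 0.6)`) returns a "defer" action, while a well-supported one passes through; the real framework would derive both quantities from the clinical scenario rather than take them as inputs.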
“It’s like having a co-pilot that would tell you that you need to seek a fresh pair of eyes to be able to understand this complex patient better,” Celi says.
Celi and his colleagues have previously developed large-scale databases that can be used to train AI systems, including the Medical Information Mart for Intensive Care (MIMIC) database from Beth Israel Deaconess Medical Center. His team is now working on implementing the new framework into AI systems based on MIMIC and introducing it to clinicians in the Beth Israel Lahey Health system.
This approach could also be implemented in AI systems that are used to analyze X-ray images or to determine the best treatment options for patients in the emergency room, among others, the researchers say.
Toward more inclusive AI
This study is part of a larger effort by Celi and his colleagues to create AI systems that are designed by and for the people who are ultimately going to be most impacted by these tools. Many AI models, such as MIMIC, are trained on publicly available data from the United States, which can lead to the introduction of biases toward a certain way of thinking about medical issues, and exclusion of others.
Bringing in more viewpoints is critical to overcoming these potential biases, says Celi, emphasizing that each member of the global consortium brings a distinct perspective to a broader, collective understanding.
Another problem with existing AI systems used for diagnostics is that they are usually trained on electronic health records, which weren’t originally intended for that purpose. This means that the data lack much of the context that would be useful in making diagnoses and treatment recommendations. Additionally, many patients never get included in those datasets because of lack of access, such as people who live in rural areas.
At data workshops hosted by MIT Critical Data, groups of data scientists, health care professionals, social scientists, patients, and others work together on designing new AI systems. Before beginning, everyone is prompted to think about whether the data they’re using captures all the drivers of whatever they aim to predict, ensuring they don’t inadvertently encode existing structural inequities into their models.
“We make them question the dataset. Are they confident about their training data and validation data? Do they think that there are patients that were excluded, unintentionally or intentionally, and how will that affect the model itself?” he says. “Of course, we cannot stop or even delay the development of AI, not just in health care, but in every sector. But, we must be more deliberate and thoughtful in how we do this.”
The research was funded by the Boston-Korea Innovative Research Project through the Korea Health Industry Development Institute.
A complicated future for a methane-cleansing molecule
Methane is a powerful greenhouse gas that is second only to carbon dioxide in driving up global temperatures. But it doesn’t linger in the atmosphere for long thanks to molecules called hydroxyl radicals, which are known as the “atmosphere’s detergent” for their ability to break down methane. As the planet warms, however, it’s unclear how the air-cleaning agents will respond.
MIT scientists are now shedding some light on this. The team has developed a new model to study different processes that control how levels of hydroxyl radical will shift with warming temperatures.
They find that the picture is complicated. As temperatures increase, so too will water vapor in the atmosphere, which will in turn boost the molecule’s concentrations. But rising temperatures will also increase “biogenic volatile organic compound emissions” — gases that are naturally released by some plants and trees. These natural emissions can reduce hydroxyl radical and dampen water vapor’s boosting effect.
Specifically, the team finds that if the planet’s average temperatures rise by 2 degrees Celsius, the accompanying rise in water vapor will increase hydroxyl radical levels by about 9 percent. But the corresponding increase in biogenic emissions would in turn bring down hydroxyl radical levels by 6 percent. The final accounting could mean a small boost, of about 3 percent, in the atmosphere’s ability to break down methane and other chemical compounds as the planet warms.
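The accounting above combines two competing effects. As a rough back-of-the-envelope check (a simple additive sketch; the study's model resolves the underlying chemistry rather than adding percentages):

```python
# Competing effects on hydroxyl radical (OH) under 2 C of warming,
# as reported in the study (percentages treated additively here).
water_vapor_boost = 0.09   # +9% OH from increased water vapor
biogenic_drawdown = -0.06  # -6% OH from increased biogenic VOC emissions

net_change = water_vapor_boost + biogenic_drawdown
print(f"Net change in OH: {net_change:+.0%}")  # prints "Net change in OH: +3%"
```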
“Hydroxyl radicals are important in determining the lifetime of methane and other reactive greenhouse gases, as well as gases that affect public health, including ozone and certain other air pollutants,” says study author Qindan Zhu, who led the work as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
“There’s a whole range of environmental reasons why we want to understand what’s going on with this molecule,” adds Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in EAPS. “We want to make sure it’s around to chemically remove all these gases and pollutants.”
Fiore and Zhu’s new study appears today in the Journal of Advances in Modeling Earth Systems (JAMES). The study’s MIT co-authors include Jian Guan and Paolo Giani, along with Robert Pincus, Nicole Neumann, George Milly, and Clare Singer of Lamont-Doherty Earth Observatory and the Columbia Climate School, and Brian Medeiros at the National Center for Atmospheric Research.
A natural neutralizer
The hydroxyl radical, known chemically as OH, is made up of one oxygen atom and one hydrogen atom, along with an unpaired electron. This configuration makes the molecule extremely reactive. Like a chemical vacuum cleaner, OH easily pulls an electron or hydrogen atom away from other molecules, breaking them down into weaker, more water-soluble forms. In this way, OH reduces a vast range of chemicals, including some air pollutants, pathogens, and ozone. And changes in OH are a powerful lever on methane.
“For methane, the reaction with OH is considered the most important loss pathway,” Zhu says. “About 90 percent of the methane that’s removed from the atmosphere is due to the reaction with OH.”
Indeed, it’s thanks to reactions with hydroxyl radical that methane can only stick around in the atmosphere for about a decade — far shorter than carbon dioxide, which can linger for 1,000 years or longer. But even as OH breaks down methane already in the atmosphere, more methane continues to accumulate. Rising methane concentrations, in addition to human-derived emissions of carbon dioxide, are driving global warming, and it’s unclear how OH’s methane-clearing power will keep up.
“The questions we’re exploring here are: What are the main processes that control OH concentrations? And how will OH respond to climate change?” Fiore says.
An aquaplanet’s air
For their study, the researchers developed a new model to simulate levels of OH in the atmosphere under the current global climate, compared to a future, warmer climate. Their model, dubbed “AquaChem,” extends a simplified model from a suite of tools developed by the Community Earth System Model (CESM) project. The model the team chose to build on represents the Earth as a simplified “aquaplanet,” with an entirely ocean-covered surface.
Aquaplanet models allow scientists to study detailed interactions in the atmosphere in response to changes in surface temperatures, without having to also spend computing time and energy on simulating complex dynamics between the land, water, and polar ice caps.
To the aquaplanet model, Zhu added an atmospheric chemistry component that simulates detailed chemical reactions in the atmosphere consistent with the applied surface temperatures. The chemical reactions that she modeled represent those that are known to affect OH concentrations.
OH is primarily produced when ozone interacts with sunlight in the presence of water vapor. For instance, scientists have found that OH levels can vary depending on certain anthropogenic and natural emissions, all of which Zhu incorporated separately and together into the AquaChem model in order to isolate the impact of each process on OH.
The emissions in particular include carbon monoxide, methane, nitrogen oxides, and volatile organic compounds (VOCs), some of which are emitted through human practices, and others that are given off by natural processes. Among the naturally derived VOCs are “biogenic” emissions — gases, such as isoprene, that some plants and trees emit through tiny pores called stomata during transpiration.
Into the AquaChem model, Zhu plugged in data that were available for each type of emissions from the year 2000 — a year that is generally considered to represent the current climate in a simplified form. She set the aquaplanet’s sea surface temperatures to the zonal annual mean of that year, and found that the model accurately reproduced the major sensitivities of OH chemistry to the underlying chemical processing as simulated in a more complex chemistry-climate model.
Then, Zhu ran the model under a second, globally warming scenario. She set the planet’s sea surface temperatures to warm by 2 degrees Celsius (a warming that is likely to occur unless global anthropogenic carbon emissions are mitigated). The team looked at how this warming would affect the various types of emissions and chemical processes, and how these changes would ultimately affect levels of OH in the atmosphere.
In the end, they found the two biggest drivers of OH levels were rising water vapor and biogenic emissions. They found that global warming would increase the amount of water vapor in the atmosphere, which in turn would boost production of OH by 9 percent. However, this same degree of warming would also increase biogenic emissions such as isoprene, which reacts with and breaks down OH, bringing down its levels by 6 percent.
The team recognizes that there are many other factors that affect the response of isoprene emissions to surface warming. Rising CO2, not considered in this study, may dampen this temperature-driven response. Of all the factors that can shift OH levels under global warming, the researchers caution that biogenic emissions are the most uncertain, even though they appear to have a large influence. Going forward, the scientists plan to update AquaChem to continue studying how biogenic emissions, as well as other processes and climate scenarios, could sway OH concentrations.
“We know that changes in atmospheric OH, even of a few percent, can actually matter for interpreting how methane might accumulate in the atmosphere,” Zhu says. “Understanding future trends of OH will allow us to determine future trends of methane.”
This work was supported, in part, by Spark Climate Solutions and the National Oceanic and Atmospheric Administration.
Advancing international trade research and finding community
The sense of support and community was palpable when Sojun Park, a postdoc at the MIT Center for International Studies (CIS), delivered a recent presentation on “The Global Diffusion of AI Technologies and Its Political Drivers.” The event, part of the CIS Global Research and Policy Seminar, filled the venue with audience members from across MIT.
“My work is directly connected to what CIS faculty have previously done on international trade and security,” Park said afterwards. “If I hadn’t received a postdoctoral fellowship and come to MIT, I wouldn’t have been able to think through the security implications of my intellectual property research. I’ve been tremendously motivated by these scholars.”
Park’s time at CIS has been both grounding and transformative, offering him a scholarly home that has shaped his research and helped broaden his intellectual horizons.
Pursuing interdisciplinary research and connections
Before pursuing a tenure-track position, Park set his sights on conducting research at MIT. When he came across a public posting about the CIS Postdoctoral Associate Program, he took a chance and applied.
“My own research is interdisciplinary, and I knew that I could really benefit from the interdisciplinary environment at MIT, and specifically at CIS, where faculty are coming not only from political science, but also affiliated with the Department of Economics and MIT Sloan [School of Management],” he says.
Park was thrilled to receive the paid fellowship, which offers an academic year at MIT and dedicated office space at CIS. At MIT, he is free to use his time toward his own research, and has found value in pursuing topics that are of interest to the CIS community — whether it’s AI or global governance. He’s published prolifically along the way, including two articles in the Review of International Organizations and the Review of International Political Economy.
He’s also continued to work on his forthcoming book, “From Privilege to Prosperity: Knowledge Diffusion and the Global Governance of Intellectual Property,” which examines how technologies can be transferred legitimately across borders. “By 'legitimately,' I am asking under what circumstances would firms volunteer to share their technologies? I’m interested in institutions and institutional environments that allow large businesses to share their technologies with smaller businesses based in the developing world that may not possess the ability to come up with their own technologies,” he explains.
During the spring 2026 semester, he is collaborating with the center’s Undergraduate Fellows Program. This program enables postdocs to work on their research projects with MIT undergraduates. Park is working with two CIS undergraduate fellows to develop a new dataset examining international trade in green technologies. This opportunity reconnects Park to his early academic experiences in South Korea that set him on the path to MIT.
Path to MIT
“Students in South Korea are trained to be problem-solvers,” explains Park, who was born and raised in Seoul. The country’s rigorous college entrance exams reward those who can answer the most questions quickly and accurately in a limited amount of time.
While taking a test in high school, Park stumbled over a question that he couldn’t answer, regardless of how much time he spent concentrating on it. He handed in the exam, but took the problem home and spent hours puzzling over it — he just couldn’t let it go. “In hindsight, I see this as the moment I decided that I wanted to become a scholar,” Park says.
While majoring in international studies and economics (statistics) at Korea University, he had the opportunity to participate in a semester-long exchange program at the University of Texas at Austin. There, Park enrolled in a political science course on game theory that explored how individual state actors’ decisions influenced one another’s choices and outcomes in trade, conflict, and diplomacy. The instructor used the ongoing war between North and South Korea as a case study, demonstrating the unique circumstances for escalation or de-escalation depending upon how the key actors made choices along the way.
“I saw for the first time how quantitative methods could be applied to international relations and political economy,” Park says — and he knew that his next step was going to be graduate work in the United States. He began a joint MA and PhD program in political science at Princeton University the following year, supported by a Fulbright Fellowship.
Park’s 2025 dissertation examined the global governance of intellectual property rights — and it was timely. He began his PhD program in 2018, “the point at which the U.S. and China trade war had just begun.” During the pandemic, he was moved by the ongoing debates regarding vaccine inequality. “I realized then that intellectual property was at the center of these global economic challenges.” With little political science research on the topic, he “set out to create a systemic framework” to study it.
Simultaneously, he served as a teaching assistant in undergraduate courses in statistical analysis and realized that he deeply enjoyed the experience of teaching and interacting with students. It was a very different experience from his own college years.
“In South Korea, it’s common for the learning environment to be one in which the professor just delivers lectures, but I found that in the United States’ higher education system, the classroom is truly interactive. I learned something from each of my students.” Soon, Park was certain that he not only wanted to build a career in academic research, but also a future that heavily incorporated teaching and mentoring students.
Before graduating, he spent a year at Georgetown University as a predoctoral fellow affiliated with the Mortara Center for International Studies. This experience enabled him to explore the policy implications of his research and engage with policymakers in Washington — skills he will draw on in his new position.
Lasting lessons from CIS
Park recently accepted a position as assistant professor at the National University of Singapore. Beginning fall 2026, he will be teaching graduate students affiliated with the school of public policy — most of whom will have career experience as practitioners in the public or private sectors.
He’ll take many lessons from MIT to his new academic home, he says. “Based on what I learned in the United States, I’ll make the learning environment in the graduate courses I teach much more interactive and collaborative.”
At CIS, Mihaela Papa, director of research and principal research scientist, and Evan Lieberman, the center’s director and professor of political science, connected Park to associated faculty whose research interests were related to his own. “Meeting with all of these scholars whose research relates in some way to intellectual property rights made me think about how my own interests can expand to other topics,” Park explains.
But the biggest takeaway of all is that he learned how to share his own research with scholars who study unfamiliar topics, to exchange ideas and discover commonality. “I’ll never stop using the communication skills that I got here at MIT," Park says.
Investigating Antarctic ice shelf melting with global navigation satellite systems
Global navigation satellite systems (GNSS), which include GPS, are traditionally used for positioning, timing, and mapping information. In an open-access study published Feb. 27 in Geophysical Research Letters, MIT Haystack Observatory scientists report using existing GNSS satellites, in conjunction with 13 stations installed on the Ross Ice Shelf (RIS) in Antarctica, to measure atmospheric turbulence above the ice shelf that may have contributed to an unusually extensive surface melting event in January 2016.
The RIS is a large, floating ice structure that fringes the western coast of Antarctica, buttressing the continental ice sheet. Normally, the RIS melts from underneath as warmer ocean water flows into its underwater cavity; in January 2016, warm, humid air caused an unusual melting event on the top side of the shelf. Tracking RIS stability is crucial, given that the shelf regulates the amount of ice discharged into the ocean from Antarctica and thus significantly affects globally rising sea levels.
Understanding atmospheric conditions above the RIS helps to explain its surface melting events, but it is challenging to monitor these in situ due to dangerous conditions and the remote location.
Haystack scientists determined that a network of GNSS stations on the ice can be used to track atmospheric conditions above each station and across the network; water vapor in the lower atmosphere induces a delay in the GNSS signal that can be slightly different between stations, and changes over time. These spatial and temporal variations of water vapor allow researchers to track weather over the RIS and can be used to infer the strength (also called “rockiness”) of atmospheric turbulence.
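As a minimal sketch of the idea, the spread of water-vapor-induced signal delays across the station network can serve as a proxy for turbulence strength. Everything here is a hypothetical illustration: the function, the delay values, and the use of a simple standard deviation stand in for the study's actual analysis, which also exploits how delays change over time.

```python
import statistics

def turbulence_index(wet_delays_mm: list[float]) -> float:
    """Illustrative proxy for atmospheric 'rockiness': the spread of
    zenith wet delays across GNSS stations at a single epoch.
    Larger station-to-station spread suggests rougher, better-mixed air."""
    return statistics.pstdev(wet_delays_mm)  # millimeters of signal delay

# Hypothetical wet delays (mm) at a handful of stations, two scenarios
calm   = [110.0, 111.5, 109.8, 110.7]
stormy = [104.0, 118.5, 97.2, 121.3]

print(turbulence_index(stormy) / turbulence_index(calm))  # ratio >> 1
```

A ratio well above 1 would indicate the kind of elevated turbulence the team reported during the melting event, when variability reached roughly four times the usual level.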
During the unusual RIS surface melting event, the GNSS station data indicated turbulence at a level four times greater than usual. This novel application of the GNSS network systems to measure atmospheric conditions allows scientists to monitor distant, life-threatening locations remotely.
“In January 2016, Antarctica experienced a significant widespread summer melting, driven by the warm air intrusion from the Southern Ocean. Our study showed that atmospheric turbulence may have helped mix the air mass and aggravated the surface melting,” says Haystack Research Scientist Dhiman Mondal. “We can use a GNSS network as an atmospheric turbulence sensor and monitor the health of the ice sheets where meteorological measurements are sparse.”
MIT Haystack Observatory also recently developed and tested an instrument, the seismogeodetic ice penetrator, which will contribute to monitoring the atmospheric turbulence in Antarctica. Haystack scientists also plan to use this method of GNSS systems to monitor ice melt above the Greenland Ice Sheet.
Pedro Elosegui, head of the Haystack geodesy department, says, “The colossal Antarctic ice shelves, such as the RIS, are (generally) thinning and retreating. They lose mass by calving icebergs — some rather spectacularly, by collapsing — and by basal melting due to the interaction of warm and salty ocean waters. We found that the RIS can also lose mass to surface melting caused by warm and humid air from the Ross Sea, which brought about enhanced atmospheric turbulence and may have further strengthened the melting.”
3 Questions: Communicating about climate, in audio and beyond
Since her first journalism fellowship covering energy and the environment at the NPR station in Harrisburg, Pennsylvania, Madison Goldberg has been drawn to science communication and audio storytelling. Now, after reporting on topics from solar storms to sewer systems to cryptography, she’s bringing her passions to MIT as the new host of the Institute’s climate change podcast.
Launched in 2019 as TILclimate, the show began its eighth season this year with a new name: Ask MIT Climate. But the podcast’s mission remains the same: teaming up with scientists and subject matter experts to bring listeners clear, accessible information on climate change topics in 15 minutes or less.
In this interview, Goldberg talks about her path to science communication, the ideas she thinks it’s important for climate communicators to convey, and what makes MIT an exciting place to share knowledge with the world.
Q: Did you always know that you wanted to be a science communicator?
A: I didn’t! My first love in science was astronomy. I grew up looking at the stars a lot, and I was very lucky to do an internship in high school at UC Santa Cruz with a professor in their astronomy department. Space kind of puts everything in the biggest possible perspective, and for me, that’s a very calming thing.
And then in college, I wanted to do something closer to home, so to speak. I found that Earth science was very exciting to learn about, because pretty much all the sciences are somehow involved. You know, you’ve got chemistry, biology, physics ... everything all rolled into one. Also, I still got to tap into a lot of what I loved about astronomy, in terms of exploring deep time and big scales. And I was very motivated by a lot of the problems in Earth and climate science, because they tie so closely to people’s lives.
I expected to continue with research, but I discovered that what was especially compelling to me was learning about this stuff and then talking to people about it. And in my senior year of college I learned that science communication, and science journalism, was a field that you could be in.
I took a science podcasting course that year — which I still can’t believe even existed — and I got my first taste of interviewing people and working in audio, which was just incredible. I had loved podcasts for so long, and so the medium felt really familiar.
Q: What is important for science communicators to convey about climate change?
A: One of the ideas that I try to always keep in mind, and that I think is really important to convey, is that climate change affects every single aspect of our lives. And we need to communicate about it accordingly.
I think it’s crucial to consider the ways climate change intertwines with all these other realms of people’s experiences; it affects where we live, it affects what we eat, it affects the economy, it affects our health. Approaching it in isolation doesn’t seem to be the most productive framework. As communicators, we have a responsibility to listen and learn and talk about all these many and varied ways that climate change shows up in people’s lives.
This idea of things intertwining also reminds me of a really central theme in Ask MIT Climate: that working towards climate solutions not only allows us to avoid the worst impacts of climate change, but it can also help make people’s lives better in other ways. And we get to think expansively about the future we want to build.
Q: What makes MIT an exciting place to be engaged in climate communication?
A: The folks that I've talked to at MIT are just so kind and generous with their time. And these people are so busy! They have so much on their plates, and yet, somehow, even when I have a million follow-up questions, extremely prominent researchers will hop on a Zoom or exchange emails to answer them. I feel so lucky to be part of this community.
Related to what I mentioned earlier, I also appreciate the interdisciplinary climate work that happens at MIT. Tackling climate change is a generational challenge, and it requires inputs from all kinds of fields. And at MIT we have, for example, the Climate Project, the Climate Policy Center, the Center for Sustainability Science and Strategy, the Living Climate Futures Lab — all of these ways to approach the issue and bring folks into the conversation who have different expertise, experiences, and perspectives. I think it’s really special to be at MIT, to see that happen in real-time, and to see students, faculty, and staff working to bridge across subject matter boundaries.
Above all, I’ve been shown such generosity, and I’m so grateful. I feel like I can never express enough gratitude for the people inside and outside of MIT who have spoken to me about their work and about their lives. All I can hope to do is to communicate that information faithfully. Because I think there’s a huge number of people who are curious about climate change and what we can do about it, and who want to learn.
Stamping high-res imagery onto everyday items to “reprogram” their appearance
Imagine a world where you could change the designs you see on bags, shirts, and walls whenever you want. Typical clothes would become customizable fashion pieces, while your humble abode could turn into a smart home. That’s the vision of scientists like MIT electrical engineering and computer science PhD student Yunyi Zhu ’20, MEng ’21: technology that can “reprogram” the appearance of personal accessories, home decor, and office items.
At MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), she’s created clever hardware that can add, say, artwork to a sweater, then swap in a new illustration later. To do this, she coats items with an invisible ink called photochromic dye, which transforms into different colors when exposed to intense light. Her colleagues previously built a device called “PhotoChromeleon” that used a projector to activate this ink, but the system wasn’t portable, so Zhu built the LED-based tool “PortaChrome” to reprogram lower-resolution imagery on the go.
Zhu and her team now have the best of both worlds: a portable device called “ChromoLCD” that programs clear pictures onto T-shirts, tables, and whiteboards. It looks like a small printer on the outside, but inside, ChromoLCD combines the sharpness of liquid-crystal displays (LCDs) with the precision lighting of LEDs. The collective powers of these lights help users stamp designs onto flat surfaces (like walls) and soft ones (like clothes) after they’ve been coated with photochromic dye.
ChromoLCD can embed a digital rose onto a hoodie, for example. Once you’ve painted photochromic ink onto the surface you’d like to redesign, you upload your picture to the device via Bluetooth or USB port. Users can select and preview their designs from ChromoLCD’s display menu, then stamp the device onto their item. Within about 15 minutes, you’ll have a personalized piece, and if you’d like to change it, you can program a new design onto your object.
“We see ChromoLCD as a bridge between consumers and photochromic dyes,” says Zhu, who is also co-lead author on a paper presenting this work. “It’s basically a stamp, and it’s very easy to use. There are no alignment requirements, no 3D object texture creation. You just upload the image you’d like to put on your bag, place it on there, and then you’d have a personalized accessory.”
ChromoLCD showed it could add a personalized touch to accessories such as a handbag by stamping on colorful drawings of things like fish and flowers. It also embedded an augmented reality (AR) tag (much like a QR code) on a tiled kitchen counter, which linked to a cooking tutorial a user could watch while preparing a meal. The tool even reprogrammed a whiteboard to display high-resolution reference images, and could potentially turn any whiteboard into an interactive canvas that blends digital visuals with physical sketching.
Welcome to the light show
At its core, ChromoLCD is a tower of power. Its display screen sits atop a white shell, which houses a computer chip, a backlight made up of bright ultraviolet (UV) and red, green, and blue (RGB) LEDs, and an LCD panel. In other words, while ChromoLCD works its magic to customize an object, a light show takes place behind the scenes.
The system first produces a black-and-white video that outlines the brightness of particular pixels in the image you select. For example, a picture of a parrot will have some areas that are darker than others, such as the shadows cast under its wing. Then, a UV light darkens (or saturates) the dye on your object, followed by the RGB lights that brighten it up and color in each pixel. It’s kind of like when you open the shades in the morning — what starts as a blast of bright light soon becomes a more colorful visual. These lights are produced at precise frequencies that the LCD maps onto your target object.
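The exposure logic described above can be sketched in a toy form. This is a minimal illustration under stated assumptions, not the authors' code: it assumes the photochromic coating behaves as color channels that the UV pass fully saturates (darkens), and that each colored backlight brightens its matching channel in proportion to exposure, so the per-channel LCD mask is simply the target brightness of that channel. The function name `exposure_masks` is invented for the example.

```python
import numpy as np

def exposure_masks(target_rgb):
    """Compute one grayscale LCD mask per colored backlight (toy model)."""
    target = np.asarray(target_rgb, dtype=float) / 255.0
    # After the UV pass, every dye channel is fully saturated (dark, value 0).
    # Each channel must then be raised from 0 to its target brightness, so
    # the mask for each backlight is simply that channel of the target image.
    return {channel: target[..., i] for i, channel in enumerate("RGB")}

# Example: a 1x2 image with one white pixel and one pure-red pixel.
masks = exposure_masks([[[255, 255, 255], [255, 0, 0]]])
# The red-backlight mask stays open for both pixels; the green and blue
# masks open only for the white pixel.
```

In this simplified model, darker regions of the target (like the shadow under a parrot's wing) get dimmer mask values, so those pixels receive less brightening light and stay more saturated.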
Zhu and her colleagues note that these components are fairly easy to purchase, in case you want to make your own ChromoLCD at home. Recreating ChromoLCD could help you turn often-overlooked items into interactive displays that you can modify as you please. “A wall in your office can show your family’s pictures when you miss them, or perhaps a doormat can show a customized greeting for each of your guests,” says Zhu. “It’s sort of like turning the world into your canvas.”
What next?
With ChromoLCD, PortaChrome, and PhotoChromeleon, CSAIL researchers have developed a family of systems that help us digitize our surroundings. The next step for them is to find a way to help with the creative process of deciding what to put there. Currently, you still need to upload a picture or even create a texture image for a 3D object. With the recent advancements we’ve seen from AI in texture generation, though, users could make requests without as much effort. By simply turning on your phone’s camera (or wearing an AR helmet) and pointing it at a particular object, you could ask your generative system to “turn a cup into a medieval-style tankard.” Voilà: you’d have programmed drinkware.
In the meantime, Zhu and her colleagues are bringing photochromic material to larger surfaces by developing a reprogrammer in the shape of a wall-roller. The machine works much like painting a wall, allowing you to place larger designs onto a surface. CSAIL researchers are also exploring swiping and ironing motions, and even implementing their current technology into robots to help them communicate with humans and other machines. The machines would be able to essentially write what they’re doing onto a surface — for example, a Roomba vacuum could tell its robotic counterparts that it cleaned specific areas of a large floor by stamping a clearly displayed, high-resolution message on the ground.
Narges Pourjafarian, a postdoc at Northeastern University who wasn’t involved in the paper, says that ChromoLCD is more than a resolution upgrade over prior MIT projects. “It reframes monochromatic LCD panels as wavelength-selective fabrication tools, rather than merely display endpoints. This approach expands how we think about reprogrammable surface appearance, enabling high-resolution, reconfigurable graphics to be embedded directly into physical environments without the need for stationary projection enclosures. It opens a path toward compact, portable augmentation of garments, countertops, and shared surfaces.”
Zhu wrote the paper with six CSAIL affiliates. They are: MIT undergraduates Qingyuan Li (who is a co-lead author), Katherine Yan, Alex Luchianov, and Eden Hen; Harvard University graduate student and former visiting researcher Emily Guan; and MIT Associate Professor Stefanie Mueller, who is a CSAIL principal investigator and senior author on the work. The researchers will present their paper at the ACM International Conference on Tangible, Embedded, and Embodied Interaction.
On algorithms, life, and learning
From enhancing international business logistics to freeing up more hospital beds to helping farmers, MIT Professor Dimitris Bertsimas SM ’87, PhD ’88 summarized how his work in operations research has helped drive real-world improvements, while delivering the 54th annual James R. Killian Faculty Achievement Award Lecture at MIT on Thursday, March 19.
Bertsimas also described how artificial intelligence is now being used in some of his scholarly projects and as a tool in MIT Open Learning efforts, which he currently directs — another facet of a highly productive and lauded career over four decades at the Institute. The Killian Award is the highest prize MIT gives its faculty.
“I have tried to improve the human condition,” Bertsimas said, summarizing the breadth of his work and the many applications to everyday living that he has found for it.
At MIT, Bertsimas is the vice provost for open learning, associate dean for online education and artificial intelligence, Boeing Leaders for Global Operations Professor of Management, and professor of operations research in the MIT Sloan School of Management. He also served as the inaugural faculty director of the master of business analytics program at MIT Sloan, and has held the position of associate dean of business analytics.
Bertsimas’ remarks encompassed both his past insights and his ongoing studies, as well as his current efforts to add AI to his research. Describing the concept of “robust optimization,” a highly influential approach that Bertsimas helped develop in the early 2000s, he explained how it has enabled, for instance, more reliable shipping through the Panama Canal. Other optimization approaches aimed to push more vessels — up to 48 — through the canal every day, but would encounter significant problems at times. Bertsimas’ approach identified that 45 vessels a day was better — a slightly lower number, but one that “was always feasible,” he noted.
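The Panama Canal example captures the core idea of robust optimization: pick the plan that stays feasible under every realization of uncertainty, not the one that looks best in the nominal case. A minimal sketch follows; the 45 and 48 figures come from the lecture, but the scenario list of daily capacities is invented for illustration.

```python
# Uncertain daily transit capacity of the canal under different conditions
# (fog, maintenance, drought); these scenario values are hypothetical.
scenarios = [48, 47, 45, 46, 48]

def always_feasible(planned_vessels):
    # A robust plan must fit within capacity in every scenario,
    # not just on average or in the best case.
    return all(planned_vessels <= capacity for capacity in scenarios)

# The nominal plan of 48 vessels fails whenever conditions cut capacity,
# while the robust choice is the largest count feasible in all scenarios.
best_robust = max(n for n in range(1, 60) if always_feasible(n))
# best_robust == 45
```

The robust plan gives up a little nominal throughput in exchange for a schedule that never breaks, which is exactly the trade-off Bertsimas described.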
Over time, Bertsimas’ work has helped structure all kinds of solutions in business logistics; it has even been used for the allocation of school buses in Boston.
More recently, as Bertsimas explained in the lecture, he and his collaborators have been working with Hartford HealthCare in Connecticut on a wide range of issues, and are increasingly incorporating AI into the development of tools for diagnostics, among other things. On the optimization front, their research has suggested ways to reduce the average stay of a hospital patient, from 5.38 days to 4.93 days. In the main Hartford hospital they have studied, given the number of existing beds, that reduction has enabled more than 5,000 additional patient stays per year.
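The capacity gain from a shorter average stay follows from simple bed-day arithmetic. Here is a back-of-the-envelope check; the bed count is an assumed round number for illustration, not a figure from the article.

```python
beds = 820            # assumed staffed-bed count, for illustration only
days_per_year = 365

def annual_stays(avg_length_of_stay_days):
    # Available bed-days divided by days per stay = stays the beds can host.
    return beds * days_per_year / avg_length_of_stay_days

extra_stays = annual_stays(4.93) - annual_stays(5.38)
# With ~820 beds, trimming the average stay from 5.38 to 4.93 days frees
# capacity for roughly 5,000 additional patient stays per year.
```

The effect scales linearly with bed count, which is why even a half-day reduction in average stay translates into thousands of stays at a large hospital.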
“It’s a very different ballgame,” Bertsimas said.
Bertsimas delivered his lecture, titled “Algorithms for Life: AI and Operations Research Transforming Healthcare, Education, and Agriculture,” to an audience of over 300 MIT community members in Huntington Hall (Room 10-250) on campus.
The award was established in 1971 to honor James Killian, whose distinguished career included serving as MIT’s 10th president, from 1948 to 1959, and subsequently as chair of the MIT Corporation, from 1959 to 1971.
“Professor Bertsimas’ scholarly contributions are both extensive and groundbreaking,” said Roger Levy, chair of the MIT faculty and a professor in the Department of Brain and Cognitive Sciences, while making introductory remarks. “He’s one of the rare individuals who has made significant contributions to both intellectual threads in the field of operations research: one, optimization — combinatorial, linear, and nonlinear — and number two, stochastic processes.”
Indeed, Bertsimas’ work has both produced better tools for studying and conducting operations and found a wide range of applications. As Bertsimas noted in his lecture, the deaths of both of his parents in 2009 helped propel him to start looking extensively at ways operations research could help health care.
Bertsimas received his BS in electrical engineering and computer science from the National Technical University of Athens in Greece. Moving to MIT for his graduate work, he then earned his MS in operations research and his PhD in applied mathematics and operations research. Bertsimas joined the MIT faculty after receiving his doctorate, and has remained at the Institute ever since.
Bertsimas is also known as an energetic teacher who has been the principal advisor to a remarkable number of PhD students — 106 and counting, at this point.
“It is far and away my favorite activity, to supervise my doctoral students,” Bertsimas said. “It is a privilege, in my opinion, to work with exceptional young people like the ones we have at MIT, in ability and character and aspiration. They actually make me a better scientist, and a better person.”
“MIT is part of my identity,” Bertsimas quipped while noting that he is the only faculty member on campus who has those three letters, in order, in his first name.
In the latter part of the lecture, Bertsimas highlighted work he has been doing as vice provost for open learning at MIT. He has personally developed a large online course based on his own material, “The Analytics Edge.” In his current role, Bertsimas said, he now aspires for MIT to reach a billion learners with online courses, part of his effort to “democratize access to education.”
Bertsimas also demonstrated for the audience some AI tools he and his colleagues are working to bring to online education, including ways of condensing material, and the translation of online material into other languages.
It is just one more chapter in a long and broad-ranging career dedicated to grasping phenomena and developing tools to help us navigate them.
Or as Bertsimas noted while summarizing his scholarship at one point in the lecture, “I try to increase the human understanding of how the world works.”
Bridging medical realities in the study of technology and health
A few weeks ago, Amy Moran-Thomas and 20 students in her class 21A.311 (The Social Lives of Medical Objects) were gathered around a glucose meter, a jar of test strips, and various spare medical parts in the MIT Museum seminar room, talking about how to make them work better.
The class had just heard a presentation from the president of the Belize Diabetes Association in Dangriga, Norma Flores, a nurse whose hospital had recently received a huge shipment of insulin that, although durable in theory, seemed to have spoiled in a heat wave. Flores and the students discussed whether scientists could develop temperature-stable insulin and design repairable glucose meters and other technologies for hospitals worldwide.
“Whenever people keep saying they are concerned about an issue, but the medical literature doesn’t describe it yet, there is a key question about what’s happening,” says Moran-Thomas. “Ethnography can help us learn about it.”
For Moran-Thomas, an MIT anthropologist, that class session was a way of connecting people and ideas that are too often overlooked. Flores was a central figure in Moran-Thomas’ 2019 book, “Traveling with Sugar: Chronicles of a Global Epidemic,” about diabetes in Belize and the failures of medical technology designed to treat it. (At the end of class, Flores surprised Moran-Thomas with a framed commendation from the Belize Diabetes Association for their nearly 20 years of work together.)
That approach informs all of Moran-Thomas’ work. Currently she is co-leading a group working on a project called the “Sugar Atlas,” mapping the social and economic dimensions of diabetes in the Caribbean, in tandem with scholars Nicole Charles of the University of Toronto and Tonya Haynes of the University of the West Indies. Moran-Thomas has also spent more than a decade following the case of notorious medical experiments that took place in Guatemala in the 1940s, the subject of a recent paper she published with Susan Reverby of Wellesley College.
Closer to home, Moran-Thomas is working on a book about how energy extraction affects chronic conditions and mental health in her native Pennsylvania, at a time of increasing hospital closures. As part of this research, she has been working with MIT seismologist William Frank to develop low-cost sensors that people can use to measure the impact of industrial activity on their home neighborhoods. The research team was recently awarded a grant by the MIT Human Insight Collaborative (MITHIC) for the work. And with another MITHIC grant, Moran-Thomas and several colleagues are working to create a new “Health and Society” educational program at MIT.
“A through line in my work is the question about how to put people at the center of health and medicine,” says Moran-Thomas, an associate professor in MIT’s anthropology program. “Because that’s not how it feels to most people in the world. Care technologies that work for everybody, and health technologies in relation to chronic disease, connect all these different projects.”
The work Moran-Thomas may be best known for occurred in 2020, during the Covid-19 pandemic, when her research recovered an array of neglected clinical studies showing that oximeters functioned differently depending on the skin color of patients. After she published a piece about it in the Boston Review, further hospital studies by physicians who had found the essay confirmed a pattern of disproportionately inaccurate readings, leading to subsequent efforts to improve the technology — all stemming from her careful, patient-centric approach.
“What anthropology has to offer the world, and other knowledge systems, is the insights of people that might be missing from many accounts, and highlighting these stories that are getting left out,” Moran-Thomas says. “Those are not footnotes, but the central thing to follow. And those histories are also alive in the material world around us.”
Thinking across medical and climate technologies
After growing up in Pennsylvania, Moran-Thomas majored in literature while earning her BA from American University. She decided to pursue ethnographic research as a graduate student, and entered Princeton University’s program in anthropology, earning an MA in 2008 and her PhD in 2012. After postdoc stints at Princeton and Brown University, Moran-Thomas joined the MIT faculty in 2015.
At Princeton, Moran-Thomas’ dissertation research examined the diabetes epidemic in Belize, forming the basis of her first book, “Traveling with Sugar,” whose title is an expression in Belize for living with diabetes. As she chronicles in the book, plantation-era changes that undermined indigenous agriculture, among other things, contributed to a local economy that made diets sugar-heavy, while medical technologies are often unreliable or ill-suited to local conditions. The book also traces breakdowns in care technologies, such as prosthetic limbs (often sought after diabetes-linked amputations), glucose meters, hyperbaric chambers, insulin supply chains, dialysis machines, and pain management technologies.
“Traveling with Sugar” also develops a critique that has become a theme of Moran-Thomas’ work: that society often shifts the blame for illness onto patients while minimizing the larger-scale factors affecting everyday health.
“There can be this focus on exclusively prevention without care, the implicit assumption that patients need to act differently,” Moran-Thomas says. “Blame falls on individuals and families instead of a focus on other questions. Why are these technologies always breaking down? How are they designed, and by whom, for whom? What role is history playing in the present? And how are people trying to remake those structures?”
Those issues are highlighted in Moran-Thomas’ ongoing project, “Sugar Atlas: Counter-Mapping Diabetes from the Caribbean,” which is backed by a two-year Digital Justice Seed Grant from the American Council of Learned Societies. Whereas international organizations tend to lump North America and the Caribbean together when tracking diabetes, this project zooms in on specific aspects of the disease and its historical and structural contributors in the Caribbean, such as the distance people must travel to buy vegetables, their proximity to insulin supplies, and the ways climate change is affecting sea life and fishing practices.
“We’re trying to create a community platform offering a different vision of these conditions,” Moran-Thomas says of the effort to map otherwise unrecorded aspects of the global diabetes epidemic, while tracing mutual aid networks and people’s “arts of care” in the present.
Better design for common devices
Following her research in Belize, where glucose meters were prone to breaking, Moran-Thomas began taking a more active focus on the design of medical technology. At MIT, she began co-teaching a course with tech innovator Jose Gomez-Marquez, 21A.311 (The Social Lives of Medical Objects). The idea was to get students to think about device design that could lead to more durable, fixable, and equitable products.
In turn, Moran-Thomas’ interest in devices led her to question the pulse oximeter readings she started seeing first-hand during the Covid-19 pandemic. Pulse oximeters measure oxygen saturation levels in patients and are a part of even routine appointment check-ins. They work visually, casting beams of light to measure the color of hemoglobin, which varies depending on how much oxygen it contains.
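The light-based measurement described above is commonly implemented as a “ratio of ratios” calculation. A simplified sketch follows; the linear calibration constants (110 and 25) are a textbook approximation rather than any specific device's curve, and the key point is that the final mapping depends entirely on empirical calibration data.

```python
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """Toy pulse-oximeter model: two wavelengths, one calibration curve."""
    # Pulsatile (AC) over baseline (DC) absorption, compared between
    # red and infrared light: the "ratio of ratios".
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    # A commonly cited linear approximation of the calibration curve;
    # real devices fit this mapping to empirical data, which is exactly
    # where a non-representative calibration sample can bias readings.
    return 110.0 - 25.0 * r

reading = spo2_estimate(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
# A ratio of 0.5 maps to 97.5 percent saturation under this toy curve.
```

Because the device reports a number derived from a fitted curve rather than a direct oxygen measurement, any population underrepresented in the calibration data can receive systematically skewed readings.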
After firsthand encounters with the sensors led to more research, Moran-Thomas learned that some medical professionals had lingering, unanswered questions about pulse oximeters and the way they were calibrated. After she published her essay in the Boston Review, arguing for more data collection, medical researchers examined the issue more closely, finding that patients with darker skin were about three times more likely to have erroneous blood-oxygen readings than patients with lighter skin. Ultimately, an FDA panel recommended changes to the devices.
“A lot of my work has been learning about health and medicine technologies from the perspectives of patients, families, and nurses, rather than beginning with engineers and doctors,” Moran-Thomas says. “Those two projects, about blood sugar and blood oxygen, were about the shortcomings of those devices and how they could be improved. Those are perspectives I can highlight in hopes others will pick up on them and make other kinds of designs and policies possible.”
Moran-Thomas’ interest in device design has continued with her current book project, about the chronic health effects of energy production in Pennsylvania. She has worked with MIT seismologist William Frank, of the Department of Earth, Atmospheric and Planetary Sciences, to construct an inexpensive meter people can use to measure shaking in their homes caused by industrial activities. (After colleagues in western Pennsylvania reached out with seismic concerns, Moran-Thomas got the idea to contact Frank from reading about his work in MIT News.)
The effort is also inspired by guidance from community leaders based at the Center for Coalfield Justice in western Pennsylvania. The research team has received a MITHIC Connectivity grant for their project, “Seismic Collaboratory: Rural Health, Missing Science, and Communicating the Chronic Impacts of Extraction.”
“I’ve met people who have been told by their doctors they must have vertigo, while they thought the walls of their house were really shaking,” Moran-Thomas says. “In a case like that, the device you need is not in the clinic, it’s a monitor at home.”
The book, overall, will examine the effects of energy production on chronic disease and mental health issues in Pennsylvania, something exacerbated by more hospitals being shuttered in the state.
Moran-Thomas is simultaneously working with several co-investigators to create the “Health and Society” educational program at MIT, including Katharina Ribbeck, Erica James, Aleshia Carlsen-Bryan, and Dina Asfaha. Their work was recently awarded an Education Innovation Seed Grant from MITHIC.
From small devices to large-scale changes in health care systems, from the U.S. to other regions, Moran-Thomas remains focused on a core set of issues about how to improve and broaden health care — and lessen the need for it in the first place.
“Thinking across scales is something that’s really useful about anthropology,” Moran-Thomas says. “Even one medical device is a tiny piece of a bigger infrastructure. In order to study that technology or device or sensor, you have to understand the bigger infrastructure it’s attached to, and that there are people involved in all parts of it.”
CryoPRISM: A new tool for observing cellular machinery in a more natural environment
The blobfish, once considered the ugliest animal in the world, has since had quite the redemption arc. Years after it was first discovered, scientists realized that the deep-sea creature appeared so unnervingly blobby only because it went through an extreme change in pressure when it was brought up to the surface. In its natural environment, 4,000 feet underwater, the fish looks perfectly handsome.
Structural biologists, whose goal is to deduce a molecule’s structure and function within a cell, face the risk of making a similar mistake. If biomolecular complexes are extracted from the cell, better-quality images can be obtained, but the molecules may not look natural. On the other hand, studying molecules without disrupting their environment at all is technically challenging, like filming deep underwater.
A new method, called purification-free ribosome imaging from subcellular mixtures (cryoPRISM), offers an appealing compromise. Developed by graduate students Mira May and Gabriela López-Pérez in the Davis lab in the MIT Department of Biology and recently published in PNAS, the technique allows biologists to visualize molecular complexes without taking them too far out of their natural context.
CryoPRISM captures molecular structures in cells that have just been broken open. This comes as close to preserving the natural interactions between molecules as possible, short of the extremely resource-intensive in-cell structural imaging, according to associate professor of biology Joey Davis, the faculty lead of the study.
“We think that the cryoPRISM method is a sweet spot where we preserve much of the native cellular contacts, but still have the resolution that lets us actually see molecular details,” Davis says. “Even in the extremely well-trodden system of translation in E. coli, which people have worked on for over 50 years, we are still finding new states that had just escaped people’s attention.”
A negative control that was not so negative
The development of cryoPRISM, like many discoveries in science, resulted from an unexpected observation that Mira May, the co-first author of the study, made while working on a different project.
Like all living organisms, bacteria rely on a process called translation to manufacture the proteins that carry out essential functions within the cell, from copying DNA to digesting nutrients. A key machine involved in translation is the ribosome — a biomolecular complex that assembles proteins based on instructions encoded by another molecule called mRNA. To regulate its activity, cells employ additional proteins that can change the shape of the ribosome, thus guiding its function.
May sought to identify new players in ribosomal regulation using cryoEM, by rapidly freezing lots of purified molecules and collecting thousands of 2D images to reconstruct their 3D structures. May was trying to pull ribosomes out of cells to visualize them together with their regulators. For her experiments, she designed a negative control containing unpurified bacterial lysate — a mixture of everything spilled from burst cells.
May expected to get noisy, low-quality images from this sample. To her surprise, instead, she saw intact ribosomes together with their natural interacting partners.
In just a few days, this technique experimentally validated data that would have taken months to acquire using other approaches.
“As I found more and more ribosomal states, this project became a method, not just a one-off finding,” May recalls.
Discovering new biology in a saturated field
Once May and her colleagues were confident that cryoPRISM could detect known ribosomal states, they began searching for ones that had previously escaped detection.
“It’s not just that we can recapitulate things that have been previously observed, but we can actually also discover novel ribosomal biology,” May says.
One of the novel states May identified has important implications for our understanding of the evolution of translation regulation.
During active translation, bacterial ribosomes are accompanied by a group of helper proteins called elongation factors. These factors bring in the materials for protein synthesis, like tRNAs and amino acids.
When cells encounter unfavorable conditions, such as colder temperatures, they reduce translation, which means that many ribosomes are out of work. These idle, hibernating ribosomes stop decoding mRNA, and the interface where they usually interact with helper molecules gets blocked by a hibernation factor called RaiA. This protein helps idle ribosomes avoid reactivation, like a sleeping mask that prevents a person from being woken up by light.
May observed the idle ribosomal state in her data, which on its own did not surprise her — this state had been described before. What surprised her was that some inactive ribosomes were interacting not only with RaiA, but also with an elongation factor called EF-G, which in bacteria was previously believed to interact only with active ribosomes.
A similar phenomenon has been seen before in more complex organisms, but observing it in a microbe suggests that its evolutionary origin may be older than previously thought.
“It fits an emerging model in the field, that elongation factors might bind to hibernating ribosomes to protect both the ribosome and themselves from degradation during periods of stress,” May explains. “Think of it like short-term storage.”
An unstressed cell might quickly eliminate unneeded inactive ribosomes, but because any stressor that puts ribosomes to sleep could be temporary, the cell may prefer to hold off on destroying them. That way, the ribosomes can be quickly reactivated if conditions improve.
The future of cryoPRISM
May has already teamed up with other MIT researchers to use cryoPRISM to visualize ribosomes in cells that are notoriously difficult to work with, including pathogenic organisms, which can be challenging to culture at the scale required for particle purification, and red blood cells isolated from patients, which cannot be cultured at all.
Besides its immediate application for translation research, cryoPRISM is a stepping stone toward the broader goal of structural biology: studying biomolecules in their natural environment.
To truly learn about deep-sea fish, scientists need to look at them in the deep sea; and to learn about cellular machines, scientists need to look at them in cells. According to Davis, cryoPRISM perfectly fits into the “theme of structural biology moving closer and closer to cellular context.”
Lasers, robots, action: MIT workshop explores Raman spectroscopy
Could a three-hour workshop on an advanced materials analysis technique turn someone into a detective — or perhaps an art restorer?
At MIT’s Center for Bits and Atoms (CBA) in late January, about a dozen students explored that possibility during an Independent Activities Period (IAP) workshop on Raman spectroscopy, a technique that uses laser light to “fingerprint” materials. The session even featured a robotic dog equipped with sensing equipment, demonstrating how chemical analysis can be done remotely.
The workshop, led by MIT postdoc Lamyaa Almehmadi in collaboration with the CBA, introduced participants to a powerful technique now used by law enforcement and first responders to identify narcotics and explosives, by gemologists to authenticate precious stones, and by pharmaceutical companies to verify raw materials and ensure product quality. CBA graduate researcher Jiaming Liu co-hosted, delivering lectures, demonstrating Raman equipment, and contributing to the curriculum and hands-on demonstrations.
“It can open up new possibilities for innovation across many fields,” says Almehmadi, an analytical chemist in the Department of Materials Science and Engineering (DMSE). After attendees learned the fundamentals, she encouraged them to think creatively about new applications: “My hope is to inspire all of you to think about doing something with Raman spectroscopy that no one has done before.”
Fingerprinting materials
Participants brought items to class to analyze using handheld devices, which fire laser light and measure how it bounces back. The resulting pattern behaves like a molecular fingerprint, identifying the materials in the item — whether it’s a paper clip, a piece of tree bark, or a mixing bowl.
Workshop attendee Sarah Ciriello, an administrative assistant at DMSE who brought a stone she found at the beach, was taken aback by the results. The Raman device suggested a 39 percent probability that the sample contained concrete-like material, with the remaining readings matching synthetic compounds — blurring the line between natural and manufactured materials.
“It’s man-made — I was surprised,” Ciriello says.
Developed in 1928 by Indian scientist C.V. Raman, who later won the Nobel Prize in Physics, Raman spectroscopy was groundbreaking because it used visible light to probe materials without destroying them, a major advantage over other techniques at the time, such as chromatography or mass spectrometry. But for decades, the Raman signal — the light scattered back from a sample — was weak, and the instruments were big and bulky, limiting its practical use.
Advances in lasers, computing power, and miniaturized optics have transformed Raman spectroscopy into a portable tool. Today’s handheld devices can instantly compare a sample’s molecular fingerprint against vast digital libraries, allowing users to identify thousands of materials in seconds. Because it doesn’t destroy the sample, Raman is especially useful in fields that require preserving materials — such as law enforcement, where evidence must remain intact, and art restoration.
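The library-matching step those handheld devices perform can be sketched in a few lines of code. This is only an illustrative toy, not the algorithm any real instrument uses: the "spectra" below are made-up intensity vectors, and real devices apply far more sophisticated preprocessing and search. The sketch simply compares a measured spectrum against reference fingerprints by cosine similarity.

```python
# Toy sketch of spectral library matching: find the reference "fingerprint"
# most similar to a measured spectrum. All data here is illustrative, not
# real Raman measurements.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(measured, library):
    """Return the (name, score) of the most similar library spectrum."""
    scores = {name: cosine_similarity(measured, ref)
              for name, ref in library.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

# Hypothetical "spectra": intensities at a handful of Raman-shift bins.
library = {
    "calcite":      [0.1, 0.9, 0.2, 0.1, 0.05],
    "quartz":       [0.8, 0.1, 0.1, 0.6, 0.1],
    "polyethylene": [0.1, 0.1, 0.9, 0.1, 0.7],
}
measured = [0.12, 0.85, 0.25, 0.08, 0.1]

name, score = best_match(measured, library)
print(name, round(score, 3))  # the measured vector most resembles calcite
```

Production systems additionally normalize spectra, subtract fluorescence baselines, and report a match probability rather than a raw similarity score, which is why the device in the workshop could give Ciriello's stone a "39 percent" reading.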
Almehmadi’s own research focuses on advancing Raman spectroscopy by developing highly sensitive, semiconductor-based sensors that make portable chemical analysis possible, with applications ranging from medical diagnostics to forensic and environmental monitoring.
“Raman can be used to analyze any material,” Almehmadi says. “That’s why I decided to introduce it to students from diverse backgrounds.”
IAP classes are open to students and staff across MIT, and the Raman workshop reflected that range — from administrative staff to graduate and undergraduate students and postdocs in departments and labs including DMSE, the Department of Mechanical Engineering, the Media Lab, and the Broad Institute.
Walking the robot dog
A crowd-pleasing element in the workshop was the integration of a robot dog that belongs to the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The demonstration highlighted how Raman technology can be used in dangerous environments, such as crime scenes or toxic industrial sites.
The handheld device was secured to the robot using tape, and Almehmadi showed how she could navigate the dog to a plastic bag filled with a white powder — baking soda.
But in a real-world scenario, “How can we know if it is baking soda or not?” she says. “So we just shined the light, and then the instrument told us what it was.”
Participants used a Wi-Fi app on their phones to view the results and a small remote controller to operate the robotic dog themselves.
“I loved the robot dog,” Ciriello says. “I was able to control it a bit, but it was challenging because the gauge was really sensitive.”
Michael Kitcher, a postdoc in DMSE, also praises the robot demonstration.
“Given that we just duct taped the device onto the dog — it was cool to see it actually worked,” he says.
Looking ahead
Kitcher, who researches magnetic materials for electronic applications, joined the workshop to learn more about Raman spectroscopy, which he had read about but never used. He was impressed by its versatility — in addition to the beach stone and baking soda, the device identified materials in a contact lens, cosmetics, and even a diamond.
Although it struggled to analyze a piece of chocolate he brought — other signals from the chocolate interfered — Kitcher sees strong potential for his own research. One area he’s interested in is unconventional magnetic materials, such as altermagnets, with unusual magnetic behavior that researchers hope to better understand and control for more energy-efficient electronics.
“Over the last couple of years, people have been trying to get a better sense of why these materials behave the way they do — how we can control this unconventional magnetic order,” he says. Raman spectroscopy can probe the vibrations of atoms in a material, helping researchers detect patterns in the crystal structure that underlie unusual magnetic behaviors. By understanding these vibrations, scientists could unlock material design rules that enable ultra-fast, low-energy computing.
Hands-on workshops like this, which inspire innovative future applications, are at the heart of an MIT education, Almehmadi says.
“I’ve always learned best by doing,” she says. “Lectures and reading are important, but real understanding comes from hands-on experience.”
Weekends@MIT offers connection through varied activities
Weekends at MIT are often a time for students to catch up on sleep or finish p-sets, lab work, and other school assignments. But for more than two decades, through a student-driven initiative supported by the Division of Student Life (DSL), students have been able to find welcoming activities designed to build community on Friday and Saturday nights through Weekends@MIT. All events are open to both graduate and undergraduate students.
At the heart of Weekends@MIT is a leadership team within the Wellbeing Ambassadors program. Ten leadership team members plan and host a variety of events from 9 to 11 p.m. in the MIT Wellbeing Lab, transforming the space into a hub for connection and creativity. While DSL staff provide advising, logistical support, and funding, event ideas come from students. Club members are committed to facilitating student social activities, all while increasing health awareness.
Student-led activities
Student ownership is intentional, says Robyn Priest, an assistant dean in the Division of Student Life. “All the ideas for activities come from the students. Leaders brainstorm themes, vote on their favorite concepts, and spearhead events in small teams. The only criterion is that it be substance-free. The students involved are dedicated, and the time commitment can be significant, so they are paid. But our students consistently step up, motivated by the opportunity to create experiences for their peers.”
Past events have included craft nights with boba tea, yoga, trivia competitions, bracelet-making workshops, waffle nights with customizable toppings, and even Spooky Skate, a Halloween costume ice-skating event hosted by the club in the Z Center.
Priest notes that just this past fall semester, more than 2,000 students attended the Friday night events, with many programs designed as drop-in experiences so students can participate around their busy schedules.
“I joined Weekends@MIT because I really liked the idea of helping organize activities on campus that promoted well-being for students and provided them with chill events that they can attend to build community and feel good on Friday nights,” says junior Emily Crespin Guerra.
Senior Ruting Hung adds, “I wanted to become more involved in promoting wellness on campus. Since then, I've found that it has also served as a way for me to recharge after a long week.”
Expanding Saturday events
Saturdays bring additional variety through collaborations with student clubs and groups. Organizations can apply for funding — typically several hundred dollars — to host events between 9 and 11 p.m. that are open to all students.
Undergraduate and graduate organizations, cultural groups, and hobby-based clubs have all contributed to programming. The partnerships also introduce new audiences to the Wellbeing Lab, helping the space become a familiar and welcoming destination across campus communities.
Connecting the campus through communication
Another key component of Weekends@MIT is a weekly newsletter distributed to thousands of students. The newsletter highlights upcoming programs in the Wellbeing Lab, along with other campus events that align with the initiative’s goals of connection and community without alcohol.
First-year student Vivian Dinh notes, “I love how the events provide a fun escape from the stress of classes and problem sets. The Wellbeing Lab is such a nice facility on campus for students to relax and enjoy themselves.”
A long tradition, evolving for the future
The current initiative builds on a long history of student-led weekend programming that began more than 20 years ago. Over time, the effort has evolved — from early safety campaigns to today’s comprehensive model focused on well-being, belonging, and social connection — but the core idea remains the same: students creating healthy spaces for other students.
Looking ahead, Weekends@MIT aims to continue expanding collaborations and exploring new ways to bring communities together on weekends. Additional events for this semester include pupusas; a blitz chess tournament with the Chess Club; craft night; movies and waffles; mocktails and latte art; a Bob Ross paint night; and much more.
What’s the right path for AI?
Who benefits from artificial intelligence? This basic question, which has been especially salient during the AI surge of the last few years, was front and center at a conference at MIT on Wednesday, as speakers and audience members grappled with the many dimensions of AI’s impact.
In one of the conference’s keynote talks, journalist Karen Hao ’15 called for an altered trajectory of AI development, including a move away from the massive scale-up of data use, data centers, and models being used to develop tools under the rubric of “artificial general intelligence.”
“This scale is unnecessary,” said Hao, who has become a prominent voice in AI discussions. “You do not need this scale of AI and compute to realize the benefits.” Indeed, she added, “If we really want AI to be broadly beneficial, we urgently need to shift away from this approach.”
Hao is a former staff member at The Wall Street Journal and MIT Technology Review, and author of the 2025 book, “Empire of AI.” She has reported extensively on the growth of the AI industry.
In her remarks, Hao outlined the astonishing size of the datasets now used by the biggest AI firms to develop large language models. She also emphasized some of the tradeoffs of this scale-up, such as the massive energy consumption and emissions of hyper-scale data centers, which also consume large amounts of water. Drawing on her own reporting, Hao also noted the human toll of the manual data work that global gig-economy workers perform for the hyper-scale models.
By contrast, Hao offered, an alternate path for AI might exist in the example of AlphaFold, the Nobel Prize-winning tool used to identify protein structures. This represents the concept of the “small, task-specific AI model tackling a well-scoped problem that lends itself to the computational strengths of AI,” Hao said.
She added: “It’s trained on highly curated data sets that only have to do with the problem at hand: protein folding and amino acid sequences. … There’s no need for fast supercomputing because the datasets are small, the model is small, and it’s still unlocking enormous benefit.”
In a second keynote address, scholar Paola Ricaurte underscored the desirability of purpose-driven AI approaches, outlining a number of conceptual keys to evaluating the usefulness of AI.
“There is no sense in having technologies that are not going to respond to the communities that are going to use them,” said Ricaurte.
She is a professor at Tecnologico de Monterrey in Mexico and a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society. Ricaurte has also served on expert committees such as the Global Partnership on AI, UNESCO’s AI Ethics Experts Without Borders, and the Women for Ethical AI project.
The event was hosted by the MIT Program in Women’s and Gender Studies. Manduhai Buyandelger, the program’s director and a professor of anthropology, provided introductory remarks.
Titled “Gender, Empire, and AI: Symposium and Design Workshop,” the event was held in the conference space at the MIT Schwarzman College of Computing, with over 300 people in attendance for the keynote talks. There was also a segment of the event devoted to discussion groups, and an afternoon session on design, in a half-dozen different subject areas.
In her talk, Hao decried the often-vague nature of AI discourse, suggesting it impedes a more thoughtful discussion about the industry’s direction.
“Part of the challenge in talking about AI is the complete lack of specificity in the term ‘artificial intelligence,’” Hao said. “It’s like the word ‘transportation.’ You could be referring to anything from a bicycle to a rocket.” As a result, she said, “when we talk about accessing its benefits, we actually have to be very specific. Which AI technologies are we talking about, and which ones do we want more of?”
In her view, the smaller-sized tools — more akin to the bicycle, by analogy — are more useful on an everyday basis. As another example, Hao mentioned the project Climate Change AI, focused on tools that can help improve the energy efficiency of buildings, track emissions, optimize supply chains, forecast extreme weather, and more.
“This is the vision of AI that we should be building towards,” Hao said.
In conclusion, Hao encouraged audience members to be active participants in AI-related discourse and projects, saying the trajectory of the technology was not yet fixed, and that public interventions matter.
Citing the writer Rebecca Solnit, Hao suggested to the audience that “Hope locates itself in the premise that we don’t know what will happen, and that in the spaciousness of uncertainty is room to act.” She also noted, “Each and every one of you has an active role to play in shaping technology development.”
Ricaurte, similarly, encouraged attendees to be proactive participants in AI matters, noting that technologies will work best when the pressing everyday needs of all citizens are addressed.
“We have the responsibility to make hope possible,” Ricaurte said.
After 16 years leading Picower Institute, Li-Huei Tsai will sharpen focus on research, teaching
MIT Picower Professor Li-Huei Tsai, who has led The Picower Institute for Learning and Memory since 2009, will step down from the role of director at the end of the academic year in May. Her decision frees her to focus exclusively on her academic work, including her continued leadership of MIT’s Aging Brain Initiative and the Alana Down Syndrome Center. Meanwhile, the search for the Picower Institute’s next director has begun.
“During her exceptional 16-year tenure in the role of director, Li-Huei has led substantial growth at the Picower Institute,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “She has markedly expanded the faculty — eight of the current 16 labs joined Picower under her directorship — through successful recruitment of highly talented neuroscientists. She has done this, and more, all while leading one of our most productive and influential labs, working on a quintessentially grand challenge in human health: combating Alzheimer’s disease.”
To conduct the search for a new Picower Institute director, Mavalvala has appointed a committee led by Sherman Fairchild Professor Matthew Wilson, associate director of the institute. Serving with Wilson are Picower Professor and former institute director Mark Bear, Menicon Professor Troy Littleton, Assistant Professor Sara Prescott, and Professor Fan Wang. They will identify and interview candidates, producing a report to Mavalvala later this spring.
Growing an institute
Tsai, a professor in MIT’s Department of Brain and Cognitive Sciences and a member of The Broad Institute of MIT and Harvard, says she is grateful to have had the opportunity to build the Picower Institute into a preeminent center for neuroscience research.
“I’m immensely proud of what our institute represents: world-renowned neuroscience research that is creative, rigorous, novel, and impactful,” Tsai says. “Our labs produce innovations, discoveries, and often translational strategies that have broken new ground and pushed science, medicine, and technology forward. We also provide excellent training that has enabled us to launch the careers of many of the field’s new and future leaders. It has been a tremendous honor to be able to build on the incredible foundation and inspiration provided by my predecessors Susumu Tonegawa and Mark Bear to enable the institute’s growth and success.”
Founded by Tonegawa as the Center for Learning and Memory in 1994, and then renamed The Picower Institute for Learning and Memory after a transformative gift by Barbara and Jeffry Picower in 2002, the institute now comprises about 400 scientists, students, and staff across 16 labs in MIT’s buildings 46 and 68.
But when Tsai became director in July 2009, just three years after coming to MIT from Harvard Medical School, the Picower Institute was a smaller enterprise of 11 labs and a community closer to 200 members. Over the ensuing years, Tsai worked closely with the Picowers’ foundation, formerly the JPB Foundation and now the Freedom Together Foundation, to develop several strategic initiatives to accelerate growth and enhance research productivity. These have included programs specifically designed to support junior faculty, to catalyze more applications for private grant funding, and to sustain fellowships for more than 18 postdocs and graduate students. Working with the foundation, she has also expanded the scope of research support provided by the Picower Institute Innovation Fund begun under Bear.
Eager to galvanize colleagues across MIT in fighting neurodegenerative diseases and neurological disorders affecting cognition, Tsai also built and launched two campus-wide initiatives: The Aging Brain Initiative, founded in 2015 and sustained by a broad coalition of donors, and the Alana Down Syndrome Center, established in 2019 with a gift from The Alana Foundation.
Research focus
As the Picower Institute has grown, Tsai’s research has, too. In work spanning molecular, cellular, circuit, and network scales in the brain, Tsai has led numerous highly cited discoveries about the neurobiology of Alzheimer’s disease and has translated several of these insights into specific therapeutic strategies, including one now undergoing a national phase III clinical trial. In all, she has published more than 230 peer-reviewed neuroscience studies, generated numerous patents, and helped launch several startups. She has been named a fellow of the National Academy of Medicine, the American Academy of Arts and Sciences, and the National Academy of Inventors, and received awards including the Society for Neuroscience Mika Salpeter Lifetime Achievement Award and the Hans Wigzell Prize.
Tsai’s earliest discoveries identified key roles in neurodegeneration for the enzyme CDK5. She has pioneered understanding of how epigenetic changes in brain cells affect Alzheimer’s pathology and memory. Her work has also highlighted a critical role for DNA double-strand breaks in disease.
In more recent work, Tsai’s lab has conducted several studies using innovative human stem-cell-based cultures to advance understanding of how the biggest genetic risk factor for Alzheimer’s (a gene variant called APOE4) contributes to pathology, and how some existing medications and supplements might help. In collaboration with MIT professor of computer science Manolis Kellis, she has also published several sweeping atlases documenting how gene expression and epigenetics differ in Alzheimer’s disease. These studies have provided the field with troves of new data and have yielded new insights into what makes the brain vulnerable to disease, and what helps some people remain resilient.
Tsai has also led a collaboration with professors Emery N. Brown and Edward S. Boyden that has discovered a potential noninvasive, device-based treatment for Alzheimer’s and possibly other neurological disorders. Called “Gamma Entrainment Using Sensory Stimuli” (GENUS), the technology stimulates the senses (vision, hearing, or touch) to increase the power and synchrony of 40 Hz “gamma” waves in the brain. Numerous studies by her group and others, involving either lab animals or human volunteers, have shown that the approach can preserve brain volume, learning, and memory and reduce signs of Alzheimer’s pathology. Via an MIT spinoff company, the technology has now advanced to a pivotal clinical trial enrolling hundreds of people around the country.
“After 16 years leading the Picower Institute, I’m now eager to sharpen my focus on advancing human health through the work in my lab, the Aging Brain Initiative, and the Alana Center,” Tsai says.
MIT and Hasso Plattner Institute establish collaborative hub for AI and creativity
The following is a joint announcement from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, Hasso Plattner Institute, and Hasso Plattner Foundation.
The MIT Morningside Academy for Design (MAD), MIT Schwarzman College of Computing, Hasso Plattner Institute (HPI), and Hasso Plattner Foundation celebrated the launch of the MIT and HPI AI and Creativity Hub (MHACH) at a signing ceremony this week. This 10-year initiative aims to deepen ties between computing and design as advances in artificial intelligence are reshaping how ideas are conceived and shared.
Funded by the Hasso Plattner Foundation, MIT and HPI will work together to foster collaborative interdisciplinary research and support a portfolio of educational programs, fellowships, and faculty engagement focused on AI and creativity, expanding scholarly inquiry into AI applications across disciplines, industries, and societal challenges. The collaboration begins with an inaugural two-day workshop March 19-20 at MIT, bringing together faculty, students, and researchers to set early priorities.
“As we hear from our faculty, as the Information Age gives way to an era of imagination, we expect a new emphasis on human creativity,” reflects MIT President Sally Kornbluth. “Through this collaboration, MIT and HPI are creating a shared space where students and faculty will come together across disciplines to explore new ideas, experiment with emerging tools, and invent new frontiers at the intersection of human creativity and AI.”
“The best minds need the right environment to do their most creative work,” says Rouven Westphal, from the Hasso Plattner Foundation. “When HPI and MIT come together across disciplines and borders, they create exactly that. The Hasso Plattner Foundation is committed to supporting this collaboration for the long term, building on Hasso Plattner’s vision of uniting technological excellence with human-centered design and creativity.”
Deepening collaboration at the intersection of technology, creativity, and societal impact
Building on the success of the Hasso Plattner Institute-MIT Research Program on Designing for Sustainability, established in 2022 between MIT MAD and HPI, the new MHACH hub represents a commitment to deepen collaboration at the intersection of technology, creativity, and societal impact.
“MIT and HPI share a common commitment to turning scientific excellence into real-world impact. Through this collaboration, we will create an environment where students and researchers from both sides of the Atlantic can work together, experiment across disciplines, and learn from one another — at a time when artificial intelligence is set to profoundly shape our lives. We are convinced that this collaboration will generate ideas with impact far beyond both institutions and inspire international cooperation and innovation,” says Professor Tobias Friedrich, dean and managing director of the Hasso Plattner Institute.
“HPI and MIT exist at the nexus of technology and creativity. Expanding this dynamic relationship will generate new paths for the infusion of AI, design, and creativity, enabling students, faculty, and researchers to dream and discover novel solutions, moving more quickly than ever from idea to implementation. MAD was established to connect thinkers across and beyond the Institute, and this new era of collaboration with HPI advances that mission on a global scale,” comments Hashim Sarkis, dean of the MIT School of Architecture and Planning and the Elizabeth and James Killian (1926) Professor.
Academic leadership from MIT and HPI will jointly shape the hub’s research and teaching agenda. Based in Potsdam, Germany, HPI is a center of excellence for digital engineering, advancing research, education, and societal transfer in IT systems engineering, data engineering, cybersecurity, entrepreneurship, and digital health. Through its globally recognized HPI d-school and pioneering work in design thinking methodology, HPI brings a distinctive perspective on human-centered innovation to the collaboration, alongside a strong record in AI and data science research and technology transfer.
Expanding research and education on AI and creativity
The efforts of this multifaceted initiative are intended to foster a dynamic academic community spanning MIT and HPI, anchored by Hasso Plattner–named professorships and graduate fellowships whose recipients will be actively engaged in the hub. The long-term framework is designed to provide continuity for faculty appointments, doctoral training, and cross-campus research.
The agreement also includes the development of classes and educational programs in areas of shared AI focus, along with expanded experiential opportunities through AI-focused workshops, hackathons, and summer exchanges. A steering committee composed of representatives from the MIT School of Architecture and Planning, MIT Schwarzman College of Computing, and Hasso Plattner Institute will facilitate the shared governance of MHACH.
“Creativity has always been about extending human capability. At its core, this collaboration asks what it truly means to create something new. The question isn’t whether AI diminishes creativity, but how new forms of intelligence can deepen and enrich that process. Our goal is to explore that intersection with rigor and build a cross-disciplinary scholarly and research community that shapes how AI supports the creation of new ideas and knowledge,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.
This collaboration is made possible by the Hasso Plattner Foundation’s long-term philanthropic commitment to institutions that connect technological innovation with design thinking and education. The Hasso Plattner Foundation has played a central role in establishing and supporting institutions such as the Hasso Plattner Institute and international design thinking programs that bridge disciplines and geographies.
Preserving Keres
Growing up in the village of Kewa — located between Santa Fe and Albuquerque in New Mexico — William Pacheco, a member of the Santo Domingo Pueblo, learned the value of his language, its history, and the traditions it carries.
“We speak Keres, a language isolate found in seven villages and communities in central New Mexico,” he says. “It’s an endangered language with fewer than 10,000 speakers.” The Pueblos’ conception of ‘language,’ according to Pacheco, evokes the idea that speaking “comes from deep within.”
Pacheco is a graduate student in the MIT Indigenous Languages Initiative, a special master’s program in linguistics for members of communities whose languages are threatened. The two-year program provides its graduates with the linguistic knowledge to help keep their communities’ languages alive. The initiative also offers students and faculty expanded opportunities to engage with Indigenous and endangered languages, working with both native-speaker linguists in the master’s program and outside groups, an approach that appealed to Pacheco.
“There’s some complexity to our language that defies traditional instruction,” says Pacheco, who will complete his studies this spring. “I want to develop the linguistic tools I need to improve my understanding of its construction and how best to teach and preserve it.” Pacheco is keenly aware of cultural differences in how language transmission occurs. Language, he believes, evolves over time and is best learned experientially; the Western model of language learning prioritizes immediacy and test-taking.
A variety of factors complicate efforts to preserve and potentially teach Keres. Each of the villages where it’s spoken has its own distinct dialect, and these dialects are mutually intelligible to varying degrees, depending on where they’re spoken. Additionally, the last three decades have seen a significant increase in English usage by young Pueblos, which further endangers Keres’ existence.
Furthermore, Keres isn’t a written language. For centuries, the Pueblo have relied on daily use within their homes and communities to maintain its vitality. “The community doesn’t want it written,” Pacheco says.
Contact with the wider world has previously imperiled Indigenous ideas, an outcome Pacheco wants to avoid. “We believe [Keres] is a form of intellectual property, a tradition and artifact that is best served by empowering our people to preserve it,” he says.
From the Southwest to MIT
While he’s now passionate about linguistics, languages weren’t Pacheco’s first choice when considering an educational path. “I always admired [MIT alumnus and Nobel laureate] Richard Feynman,” he recalls. “I wanted to study physics.”
After earning an undergraduate degree from the University of New Mexico, Pacheco, who’d been working as a K-12 educator, began efforts to preserve Keres, increasing the language’s vitality and preserving its usefulness for, and value to, future generations. He sought permission and certification from the tribe to teach the language at the Santa Fe Indian School, an off-reservation boarding school. He soon discovered that a traditional Western approach to language learning wouldn’t suffice.
“Students weren’t taking the course to be scholars of the language; they wanted to learn it to build community and create opportunities to connect with elders,” Pacheco says. It was students’ advocacy, he notes, that led to the Keres learning initiative. While designing the course, however, he found gaps in his knowledge that led him to consider graduate study.
“There are fascinating idiosyncrasies in Keres, including, for example, verb morphology — the ways in which verbs and verb sounds change,” he notes. “I wasn’t sure about how to teach them.” He sought to improve his understanding and ability by earning a master’s degree in learning design, innovation, and technology from Harvard University. While completing his studies there, he had another burst of inspiration.
“I thought a background in linguistics would prove useful,” he says. “An advisor told me about the Indigenous Languages Initiative at MIT and recommended I apply.” Pacheco knew of Professor Emeritus Noam Chomsky’s pioneering work in generative linguistics at the Institute and sought to learn more about the field’s potential to help him become a better, more effective educator and linguist.
Upon arriving at MIT in 2024, Pacheco found himself embraced by faculty and students alike. “[MIT linguists] Adam Albright and Norvin Richards have been wonderfully supportive mentors, offering enthusiasm and expertise,” he says. “I’ve benefited from MIT’s approach to linguistics and its use of scientific inquiry as a tool to explore language.” Engaging with other students working to preserve languages at risk of extinction continues to drive his work.
“MIT continually encourages us to use its resources, to collaborate, and to help one another find solutions to our unique challenges,” he says. “Networking, gathering good ideas, and having access to professors and students from a variety of disciplines is incredibly valuable.”
MIT’s scholars, Pacheco says, are experienced with Indigenous language learning, education, and pedagogy.
Developing an organized approach to Keres research and instruction
While gratified that his work created opportunities for him to preserve and teach Keres, Pacheco marvels at his path to the Institute and its impact on his life. “It was my language, not my interest in physics, which led me to Harvard and MIT,” he says. “How did I end up at these places?”
An advantage of language and linguistics education at MIT is the rigor with which it explores language acquisition modeling and allows for alternatives to established systems. Pacheco is after new ideas for Keres language learning and education, working to develop an effective course based on generative linguistics that both preserves the Pueblos’ approach to community and offers an educational model students are likely to embrace. He’s already had opportunities to test novel theories and practices as an educator back home.
“I was teaching students to use Keres as a programming tool,” he says. “We modeled a robot as a member of the community navigating a maze, and students would have to teach it to accept commands in Keres.”
Pacheco also wants to explore community-centered language issues. He hopes to standardize the development and education of community linguists, creating a cohort of scholars who are trained to use the tools he designs and deeply invested in Keres’ preservation and instruction.
“We want to drive inquiries into Keres and how it’s taught,” he says, “while also centering Indigenous knowledge systems and expanding access to linguistics study for Indigenous scholars.”
Pacheco believes there’s value in exposing scholars and communities to the cultural and ideological exchanges he’s enjoyed between the sciences, humanities, Indigenous ideas, and experiences. “Indigenous scholars exist at MIT,” he says. “We’re here, and the Institute’s support helps preserve languages like Keres as important communal and cultural artifacts.”
Pacheco is grateful for the opportunities his research at MIT has afforded him. While his education as a linguist and scholar continues, Pacheco’s community, culture, and support for Keres language learning remain top priorities.
“I want to amplify the impact in tribal language policy and Indigenous-centered education,” he says. “Language, its study, and its transmission is both science and art.”
Improving cartilage repair through cell therapy
Researchers have developed a new method for monitoring iron flux — the movement and rate at which cells take in, store, use, and release iron — in stem cells known as mesenchymal stromal cells (MSCs). The system can provide insights within a minute about a cell’s ability to grow cartilage tissue for cartilage repair.
The breakthrough offers a promising pathway toward more consistent and efficient manufacturing of high‑quality MSCs for regenerative therapies to treat joint diseases such as osteoarthritis, chronic joint degeneration conditions, and cartilage injuries.
The work was led by researchers from the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) group within the Singapore-MIT Alliance for Research and Technology (SMART), and was supported by the SMART Antimicrobial Resistance (AMR) research group, in collaboration with MIT and the National University of Singapore (NUS).
A paper describing the work, “Cellular iron flux measurement by micromagnetic resonance relaxometry as a critical quality attribute of mesenchymal stromal cells,” was published in February in the journal Stem Cells Translational Medicine.
Regenerative therapies hold significant promise for patients, offering the potential to repair damaged tissues rather than simply manage symptoms. However, one of the biggest challenges in bringing these therapies to patients lies in the unpredictable quality of the MSCs’ chondrogenic potential — a cell’s ability to develop and form cartilage tissue — during the in vitro manufacturing process.
Even when grown under controlled laboratory conditions, MSCs are prone to losing some of their potential and ability to form cartilage tissue, leading to inconsistent cartilage repair outcomes due to the varying quality of MSC batches. Existing tests that evaluate the quality of MSCs’ cartilage‑forming potential are destructive in nature, which causes irreversible damage to the cells being tested and renders them unusable for further therapeutic or manufacturing purposes.
In addition, the tests require a prolonged period of up to 21 days for cells to grow. This slows decision‑making, extends production timelines, and can hinder the timely translation of MSC-based therapies into clinical use and delay treatment for patients. As MSCs can lose chondrogenic potential during this process, early assessment is essential for manufacturers to determine whether a batch should be continued or discontinued. Hence, there is a need for a reliable and rapid method to predict MSCs’ chondrogenic potential during the cell manufacturing process.
The new development is a rapid, non-destructive method for monitoring iron flux in MSCs by measuring iron changes in spent media — the residual culture medium left after cell growth. Using an inexpensive benchtop micromagnetic resonance relaxometry (µMRR) device, the approach enables real‑time monitoring of cellular iron changes without damaging the cells. The device can be easily integrated into existing laboratories and manufacturing workflows, enabling routine, real‑time quality monitoring without significant infrastructure or cost barriers.
Iron homeostasis is the critical process that keeps cellular iron at normal levels, balancing the supply of iron needed for essential functions against the risk of toxic accumulation. The study found that iron homeostasis is highly correlated with the MSCs’ chondrogenic potential: significant iron uptake and accumulation reduce a cell’s ability to form cartilage. The researchers also found that supplementing the cell growth process with ascorbic acid (AA) helps regulate iron homeostasis by limiting iron flux, thereby improving the MSCs’ chondrogenic potential.
Using this novel method, spent media are collected as samples and treated with AA. The µMRR device is then used to track and provide real-time insights into small iron concentration changes within the spent media. These iron concentration changes reflect how MSCs take up and release iron and can provide an early indicator of whether a batch is likely to succeed in forming good cartilage.
These findings allow manufacturers not only to monitor MSC quality for cartilage repair in real time, but also to assess when, and to what extent, interventions such as AA supplementation are likely to be beneficial, supporting efficient manufacturing of more effective and consistent MSC‑based therapies.
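The workflow above can be sketched in a few lines. The following is an illustrative sketch only, not the authors' code: it estimates an iron flux as the slope of spent-media iron readings over time, then applies a go/no-go flag. All concentrations, time points, and the uptake threshold are hypothetical values invented for the example.

```python
# Illustrative sketch (not the SMART/CAMP implementation): estimating
# cellular iron flux from spent-media iron readings as a batch signal.
# Units, values, and the threshold below are hypothetical.

def iron_flux(times_h, media_iron_uM):
    """Average rate of change of iron in spent media (uM per hour),
    via a least-squares slope. A negative value means iron is
    disappearing from the media, i.e., the cells are taking it up."""
    if len(times_h) < 2 or len(times_h) != len(media_iron_uM):
        raise ValueError("need matched time/concentration series")
    n = len(times_h)
    t_mean = sum(times_h) / n
    c_mean = sum(media_iron_uM) / n
    num = sum((t - t_mean) * (c - c_mean)
              for t, c in zip(times_h, media_iron_uM))
    den = sum((t - t_mean) ** 2 for t in times_h)
    return num / den

def batch_flag(flux_uM_per_h, uptake_limit=-0.5):
    """Flag a batch whose iron uptake rate exceeds a (hypothetical)
    limit, since heavy iron accumulation correlated with reduced
    chondrogenic potential in the study."""
    return "review" if flux_uM_per_h < uptake_limit else "continue"

flux = iron_flux([0, 12, 24, 48], [10.0, 9.2, 8.5, 6.9])
print(round(flux, 3), batch_flag(flux))
```

In a real pipeline the concentration series would come from µMRR readings of AA-treated spent media rather than the made-up numbers here; the point is only that a slope over a short series, not a 21-day differentiation assay, drives the decision.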
“One of the key challenges in cartilage regeneration is the inability to reliably predict whether MSCs will retain their chondrogenic potential during manufacturing. Our study addresses this by introducing a rapid, non-destructive method to monitor iron flux dynamics as a novel critical quality attribute (CQA) of MSCs' chondrogenic capacity. This approach enables early identification of suboptimal cell batches during culture, enhancing quality control efficiency, reducing manufacturing costs, and accelerating clinical translation,” says Yanmeng Yang, CAMP postdoc and first author of the paper.
“Our research sheds light on a fundamental biological process that, until now, has been extremely difficult to measure. By monitoring iron flux in real-time without destroying the cells, we can gain actionable insights into a cell batch’s chondrogenic potential, which allows for early decision-making during the manufacturing process. The findings support µMRR‑based iron monitoring as an effective quality control strategy for MSC-based therapy manufacturing, paving the way for more consistent and clinically viable regenerative medicine for cartilage regeneration,” says MIT Professor Jongyoon Han, co-lead CAMP principal investigator, AMR principal investigator, and corresponding author of the paper.
This method represents a promising step toward improving manufacturing consistency and functional characterization of MSC-based cellular products. Beyond advancing cell therapy manufacturing, it contributes to the broader scientific community studying iron biology by providing real-time iron flux measurements that were previously unavailable. The research also advances the clinical translation of high-quality cell therapies for cartilage regeneration, bringing them closer to patients with joint degeneration conditions and cartilage injuries.
Building on these findings, the researchers plan to carry out future preclinical and clinical studies to expand this approach beyond quality control in manufacturing, with the aim of establishing µMRR as a validated method for the clinical translation of MSC-based therapies in patients for cartilage repair.
The research, conducted at SMART, was supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.
Generative AI improves a wireless vision system that sees through obstructions
MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.
Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.
This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.
The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.
This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.
These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.
“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”
Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.
Surmounting specularity
The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.
These waves, which are in the same family as the signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.
But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.
“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.
The researchers previously used principles from physics to interpret reflected signals, but this limits the accuracy of the reconstructed 3D shape.
In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.
“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.
Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.
Instead, the researchers adapted the images in large computer vision datasets to mimic the properties in mmWave reflections.
“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.
The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
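One way to picture this dataset adaptation — a toy sketch under stated assumptions, not the paper's actual pipeline — is to take a full 3D surface and keep only the points a specular sensor could see: those whose surface normals point roughly back at the sensor, with some measurement noise added. The function name, tolerance angle, and noise level here are all invented for illustration.

```python
# Minimal sketch (assumptions, not Wave-Former's code): turning a full
# 3D surface into the partial view a specular mmWave sensor would see.
# A point reflects back to the sensor only if its surface normal points
# roughly at the sensor; everything else is dropped, and Gaussian noise
# mimics measurement jitter. tol_deg and noise are illustrative.

import math
import random

def specular_visible(points, normals, sensor, tol_deg=15.0, noise=0.002):
    """Keep points whose unit normal is within tol_deg of the direction
    to the sensor, jittering each kept point with Gaussian noise."""
    visible = []
    cos_tol = math.cos(math.radians(tol_deg))
    for p, n in zip(points, normals):
        to_sensor = [s - q for s, q in zip(sensor, p)]
        norm = math.sqrt(sum(v * v for v in to_sensor)) or 1.0
        direction = [v / norm for v in to_sensor]
        if sum(a * b for a, b in zip(n, direction)) >= cos_tol:
            visible.append([q + random.gauss(0, noise) for q in p])
    return visible

# Toy example: two patches of a box, one facing the sensor, one away.
pts = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
nrm = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
print(len(specular_visible(pts, nrm, sensor=(0.0, 0.0, 5.0))))
```

Running a mask like this over ordinary 3D shape datasets yields partial, specular-looking views paired with complete ground-truth shapes — exactly the kind of (partial, complete) pairs needed to train a shape-completion model without years of new mmWave data collection.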
The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.
Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.
Seeing “ghosts”
The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.
Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.
These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.
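The geometry behind a ghost signal can be made concrete with a standard mirror-image construction (an illustrative sketch, not code from the paper): a signal that bounces person → wall → sensor travels as if it came straight from the person's reflection across the wall plane. The helper below and its example coordinates are invented for illustration.

```python
# Sketch of ghost-signal geometry: a multipath reflection off a wall
# makes the reflector appear at its mirror image across the wall plane.
# Tracking where that mirrored "ghost" sits as a person moves carries
# information about the wall's location.

def mirror_across_plane(point, plane_point, plane_normal):
    """Reflect `point` across the plane defined by a point on the plane
    and a unit normal. The multipath ghost appears at this location."""
    d = sum((p - q) * n
            for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - 2 * d * n for p, n in zip(point, plane_normal))

# Person standing at x = 1 m, wall at x = 3 m (normal facing the room):
ghost = mirror_across_plane((1.0, 0.0, 1.5),
                            (3.0, 0.0, 0.0),
                            (-1.0, 0.0, 0.0))
print(ghost)  # (5.0, 0.0, 1.5): the ghost sits 2 m behind the wall
```

Because the ghost's position is tied to both the person and the wall, watching how ghosts move as the person moves constrains where walls and furniture must be — which is the coarse scene information the system's generative model then refines.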
“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.
They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.
They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those from existing techniques.
In the future, the researchers want to improve the granularity and detail in their reconstructions. They also want to build large foundation models for wireless signals, like the foundation models GPT, Claude, and Gemini for language and vision, which could open new applications.
This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.
A better method for identifying overconfident large language models
Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.
But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.
To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.
Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.
They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.
“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.
She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.
Understanding overconfidence
Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.
However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.
The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.
“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.
Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.
To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.
An ensemble approach
The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.
To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.
“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.
Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.
“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
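The additive structure described above can be sketched in a few lines. This is a simplified illustration of the idea, not the authors' implementation: the function names are invented, exact string match stands in for the paper's semantic-similarity comparison, and the ensemble answers would come from independently trained LLMs.

```python
# Hedged sketch of total uncertainty = aleatoric + epistemic.
# Aleatoric: disagreement among repeated answers from the SAME model.
# Epistemic: disagreement between the target model and an ensemble of
# models trained by different organizations. Exact match stands in for
# semantic similarity; all names here are illustrative.

from collections import Counter

def aleatoric(target_samples):
    """1 minus the frequency of the most common repeated answer."""
    counts = Counter(target_samples)
    return 1.0 - counts.most_common(1)[0][1] / len(target_samples)

def epistemic(target_answer, ensemble_answers, agree):
    """Fraction of ensemble answers that disagree with the target.
    `agree` can be any semantic-equivalence check."""
    matches = sum(agree(target_answer, a) for a in ensemble_answers)
    return 1.0 - matches / len(ensemble_answers)

def total_uncertainty(target_samples, ensemble_answers,
                      agree=lambda a, b: a == b):
    target_answer = Counter(target_samples).most_common(1)[0][0]
    return (aleatoric(target_samples)
            + epistemic(target_answer, ensemble_answers, agree))

# A confidently wrong model: perfectly self-consistent, yet every
# other model disagrees, so total uncertainty is still high.
tu = total_uncertainty(["Paris"] * 5, ["Lyon", "Lyon", "Marseille"])
print(tu)
```

The toy example shows the failure mode the paper targets: self-consistency alone scores zero uncertainty for the repeated answer, while the cross-model term exposes it as unreliable.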
TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.
They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.
Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.
Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.
In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.
This work is funded, in part, by the MIT-IBM Watson AI Lab.
New model predicts how mosquitoes will fly
A mosquito finds its target with the help of certain cues in its environment, such as a person’s silhouette and the carbon dioxide they exhale.
Now researchers at MIT and Georgia Tech have found that these visual and chemical cues help determine the insects’ flight paths. The team has developed the first three-dimensional model of mosquito flight, based on experiments with mosquitoes flying in the presence of different sensory cues.
Their model, reported today in the journal Science Advances, identifies three flight patterns that mosquitoes exhibit in response to sensory stimuli.
When they can only see a potential target, mosquitoes take a “fly-by” approach, quickly diving in toward the target, then flying back out if they do not detect any other host-confirming cues.
When they can’t see a target but can smell a chemical cue such as carbon dioxide, mosquitoes will do “double-takes,” slowing down and flitting back and forth to keep close to the source.
Interestingly, when mosquitoes receive both visual and chemical cues, such as seeing a silhouette and smelling carbon dioxide, they switch to an “orbiting” pattern, flying around a target at a steady speed as they prepare to land, much like a shark circling its prey.
The researchers say the new model can be used to predict how mosquitoes will fly in response to other cues, such as heat, humidity, and certain odors. Such predictions could help to design more effective traps and mosquito control strategies.
“Our work suggests that mosquito traps need specifically calibrated, multisensory lures to keep mosquitoes engaged long enough to be captured,” says study author Jörn Dunkel, MathWorks Professor of Mathematics at MIT. “We hope this establishes a new paradigm for studying pest behavior by using 3D tracking and data-driven modeling to decode their movement and solve major public health challenges.”
The study’s MIT co-authors are Chenyi Fei, a postdoc in MIT’s Department of Mathematics, and Alexander Cohen PhD ’26, a recent MIT chemical engineering PhD student advised by Dunkel and Professor Martin Bazant, along with Christopher Zuo, Soohwan Kim, and David L. Hu ’01, PhD ’06 of Georgia Tech, and Ring Carde of the University of California at Riverside.
Flight by numbers
Mosquitoes are considered to be the most dangerous animals in the world, given their collective impact on human health. The blood-sucking insects transmit malaria, dengue fever, West Nile virus, and other deadly diseases that together cause over 770,000 deaths each year.
Of the 3,500 known species of mosquitoes, around 100 have evolved to specifically target humans, including Aedes aegypti, a species that uses a variety of cues to seek out human hosts. Scientists have studied how certain cues attract mosquitoes, mainly by setting up experiments in wind tunnels, where they can waft cues such as carbon dioxide and study how mosquitoes respond. Such experiments have mainly recorded data such as where and when the insects land. The researchers say no study has explored how mosquitoes fly as they hunt for a host.
“The big question was: How do mosquitoes find a human target?” says Fei. “There were previous experimental studies on what kind of cues might be important. But nothing has been especially quantitative.”
At MIT, Dunkel’s group develops mathematical models to describe and predict the behavior of complex living systems, such as how worms untangle, how starfish embryos develop and swim, and how microbes evolve their community structure over time.
Dunkel looked to apply similar quantitative techniques to predict flight patterns of mosquitoes after giving a talk at Georgia Tech. David Hu, a former MIT graduate student who is now a professor of mechanical engineering at Georgia Tech, proposed a collaboration; Hu’s lab was carrying out experiments with mosquitoes at a facility at the Centers for Disease Control and Prevention in Atlanta, where they were studying the insects’ behavior in response to sensory cues. Could Dunkel’s group use the collected data to identify significant flight behavior that could ultimately help scientists control mosquito populations?
“One of the original motivations was designing better traps for mosquitoes,” says Cohen. “Figuring out how they fly around a human gives insights on how we can avoid them.”
Taking cues
For their new study, Hu and his colleagues at Georgia Tech carried out experiments with 50 to 100 mosquitoes of the Aedes aegypti species. The insects flew inside a long, white, slightly angled rectangular room, while cameras positioned around the room captured detailed three-dimensional trajectories of each mosquito. In the center of the room, the team placed an object representing a particular visual or chemical cue.
In some trials, they placed a black Styrofoam sphere on a stand to represent a simple visual cue. (Mosquitoes would be able to see the black sphere against the room’s white background). In other trials, they set up a white sphere with a tube running through to pump out carbon dioxide at rates similar to what humans breathe out. These trials represented the presence of a chemical cue, but not a visual cue.
The researchers also studied the mosquitoes’ response to both visual and chemical cues, using a black sphere that emitted carbon dioxide. Finally, they observed how mosquitoes behaved around a human volunteer who wore protective clothing that was black on one side and white on the other.
Across 20 experiments, the team generated more than 53 million data points and over 477,220 mosquito flight paths. Hu shared the data with Dunkel, whose group used the measurements to develop a model for mosquito flight behavior.
“We are proposing a very broad range of dynamical equations, and when you start out, the equation to predict a mosquito’s flight path is very complicated, with a lot of terms, including the relative importance of a visual versus a chemical cue,” Dunkel explains. “Then through iteration against data, we reduce the complexity of that equation until we get the simplest model that still agrees with the data.”
In the end, the group whittled the equations down to a simple model that accurately predicts how a mosquito will fly, given the presence of a visual cue, a chemical cue, or both. The flight paths in response to one or the other cue are markedly different. And interestingly, when both cues are present, the researchers noted that the resulting path is not “additive.” In other words, a mosquito does not simply combine the paths that it would separately take when it can both see and smell a target. Instead, the insects take a distinct path, circling, rather than diving or darting around their target.
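To make the "dynamical equations fit to data" idea concrete, here is a deliberately toy version — emphatically not the published model: velocity is driven by a pull toward the target weighted by a visual-cue coefficient, a speed-damping term weighted by a chemical-cue coefficient, and linear drag. Every coefficient and the integration scheme are illustrative assumptions.

```python
# Toy cue-weighted flight dynamics (NOT the published equations).
# alpha_v scales attraction toward the seen target; alpha_c adds extra
# speed damping near a smelled source; drag is plain air resistance.
# Euler integration; all parameter values are made up.

def simulate(pos, vel, target, alpha_v, alpha_c,
             drag=0.5, dt=0.01, steps=2000):
    """Integrate the toy model and return the 3D trajectory.
    Setting alpha_c=0 mimics a visual-only condition."""
    traj = [tuple(pos)]
    for _ in range(steps):
        to_t = [t - p for t, p in zip(target, pos)]
        dist = sum(c * c for c in to_t) ** 0.5 or 1e-9
        unit = [c / dist for c in to_t]
        speed = sum(v * v for v in vel) ** 0.5
        # Attraction toward target + drag + chemical braking near source.
        acc = [alpha_v * u - drag * v
               - alpha_c * speed * v / (dist + 1.0)
               for u, v in zip(unit, vel)]
        vel = [v + a * dt for v, a in zip(vel, acc)]
        pos = [p + v * dt for p, v in zip(pos, vel)]
        traj.append(tuple(pos))
    return traj

traj = simulate([1.0, 0.0, 0.0], [0.0, 0.2, 0.0],
                target=[0.0, 0.0, 0.0], alpha_v=2.0, alpha_c=1.0)
closest = min(sum(c * c for c in p) ** 0.5 for p in traj)
print(round(closest, 3))
```

The researchers' actual procedure ran in the opposite direction: start from a broad family of candidate equations with many such terms, then iterate against the 3D tracking data, pruning terms until only the simplest model consistent with the measured trajectories remains.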
“Obviously there are additional cues that humans emit, like odor, heat, and humidity,” Cohen notes. “For the species we study, visual and carbon dioxide cues are the most important. But we can apply this model to study different species and how they respond to other sensory cues.”
The researchers have developed an interactive app that incorporates the new mosquito flight model. Users can experiment with different objects and set parameters such as the number of mosquitoes around the object and the type of sensory cue that is present. The model then visualizes how the mosquitoes would fly in response.
“The original hope was to have a quantitative model that can simulate mosquito behavior around various trap designs,” Cohen says. “Now that we have a model, we can start to design more intelligent traps.”
This work was supported, in part, by the National Science Foundation, Schmidt Sciences, LLC, the NDSEG Fellowship Program, and the MIT MathWorks Professorship Fund.
