MIT Latest News

Compassionate leadership
Professors Emery Brown and Hamsa Balakrishnan work in vastly different fields, but are united by their deep commitment to mentoring students. While each has contributed to major advancements in their respective areas — statistical neuroscience for Brown, and large-scale transportation systems for Balakrishnan — their students might argue that their greatest impact comes from the guidance, empathy, and personal support they provide.
Emery Brown: Holistic mentorship
Brown is the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT and a practicing anesthesiologist at Massachusetts General Hospital. Brown’s experimental research has made important contributions to understanding how anesthetics act in the brain to create the states of general anesthesia.
One of the biggest challenges in academic environments is knowing how to chart a course. Brown takes the time to connect with students individually, helping them identify meaningful pathways that they may not have considered for themselves. In addition to mentoring his graduate students and postdocs, Brown also hosts clinicians and faculty from around the world. Their presence in the lab exposes students to a number of career opportunities and connections outside of MIT’s academic environment.
Brown also continues to support former students beyond their time in his lab, offering guidance on personal and professional development even after they have moved on to other roles. “Knowing that I have Emery at my back as someone I can always turn to … is such a source of confidence and strength as I go forward into my own career,” one nominator wrote.
When Brown faced a major career decision recently, he turned to his students to ask how his choice might affect them. He met with students individually to understand the personal impact that each might experience. Brown was adamant in ensuring that his professional advancement would not jeopardize his students, and invested a great deal of thought and effort in ensuring a positive outcome for them.
Brown is deeply committed to the health and well-being of his students, with many nominators sharing examples of his constant support through challenging personal circumstances. When one student reached out to Brown, overwhelmed by research, recent personal loss, and career uncertainty, Brown created a safe space for vulnerable conversations.
“He listened, supported me, and encouraged me to reflect on my aspirations for the next five years, assuring me that I should pursue them regardless of any obstacles,” the nominator shared. “Following our conversation, I felt more grounded and regained momentum in my research project.”
Ultimately, his student felt that Brown’s advice was “simple, yet enlightening, and exactly what I needed to hear at that moment.”
Hamsa Balakrishnan: Unequivocal advocacy
Balakrishnan is the William E. Leonhard Professor of Aeronautics and Astronautics at MIT. She leads the Dynamics, Infrastructure Networks, and Mobility (DINaMo) Research Group. Her current research interests are in the design, analysis, and implementation of control and optimization algorithms for large-scale cyber-physical infrastructures, with an emphasis on air transportation systems.
Her nominators commended Balakrishnan for her efforts to support and advocate for all of her students. In particular, she connects her students to academic mentors within the community, which contributes to their sense of acceptance within the field.
Balakrishnan’s mindfulness in respecting personal expression and her proactive approach to making everyone feel welcome have made a lasting impact on her students. “Hamsa’s efforts have encouraged me to bring my full self to the workplace,” one student wrote. “I will be forever grateful for her mentorship and kindness as an advisor.”
One student shared their experience of moving from a difficult advising situation to working with Balakrishnan, describing how her mentorship was crucial in the nominator’s successful return to research: “Hamsa’s mentorship has been vital to building up my confidence as a researcher, as she [often] provides helpful guidance and positive affirmation.”
Balakrishnan frequently gives her students freedom to independently explore and develop their research interests. When students wanted to delve into new areas like space research — far removed from her expertise in air traffic management and uncrewed aerial vehicles — Balakrishnan embraced the challenge and learned about these topics in order to provide better guidance.
One student described how Balakrishnan consistently encouraged the lab to work on topics that interested them. This led the student to develop a novel research topic and publish a first-author paper within months of joining the lab.
Balakrishnan is deeply committed to promoting a healthy work-life balance for her students. She ensures that mentees do not feel compelled to overwork by encouraging them to take time off. Even if students do not have significant updates, Balakrishnan encourages weekly meetings to foster an open line of communication. She helps them set attainable goals, especially when it comes to tasks like paper reading and writing, and never pressures them to work late hours in order to meet paper or conference deadlines.
How nature organizes itself, from brain cells to ecosystems
Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity — a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?
A new study from MIT Professor Ila Fiete suggests a surprising answer.
In findings published Feb. 18 in Nature, Fiete, an associate investigator in the McGovern Institute for Brain Research and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.
Joining two big ideas
“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona PhD '25, a former graduate student and K. Lisa Yang ICoN Center graduate fellow, and postdoc Sarthak Chandra also led the study.
Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition — small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.
Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
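The following toy sketch in Python illustrates the flavor of the idea rather than the paper’s actual equations: units along an axis carry a smoothly (and noisily) varying property, a small set of candidate “peak” values stands in for the states that local competition allows, and each unit repeatedly picks the candidate that best balances its own graded input against agreement with its neighbors. The candidate set, coupling strength, and interaction radius are all assumptions chosen for illustration.

```python
# Toy sketch of "smooth gradient + local competition -> sharp modules."
# This is NOT the peak-selection equations from the paper -- just a cartoon
# under assumed parameters, showing how discrete plateaus with sharp borders
# can emerge from a smoothly varying property once nearby units are forced to
# compete for, and agree on, a shared state.
import numpy as np

rng = np.random.default_rng(0)
n_units = 200
gradient = np.linspace(1.0, 3.0, n_units) + rng.normal(0, 0.15, n_units)
candidates = np.array([1.2, 1.7, 2.3, 2.9])   # discrete states competition can select (assumed)
coupling = 4.0                                # how strongly units must agree with neighbors
radius = 5                                    # local interaction radius

# Start each unit at the candidate nearest its own graded input.
state = candidates[np.argmin(np.abs(gradient[:, None] - candidates[None, :]), axis=1)]

for _ in range(50):  # relax toward a fixed point
    new_state = state.copy()
    for i in range(n_units):
        lo, hi = max(0, i - radius), min(n_units, i + radius + 1)
        neighbors = np.delete(state[lo:hi], i - lo)
        # Each unit picks the candidate that balances fidelity to its own
        # gradient value against agreement with its neighbors.
        cost = (candidates - gradient[i]) ** 2 + coupling * np.mean(
            (candidates[:, None] - neighbors[None, :]) ** 2, axis=1
        )
        new_state[i] = candidates[np.argmin(cost)]
    state = new_state

boundaries = np.where(np.diff(state) != 0)[0]
print("number of modules:", len(boundaries) + 1)
print("module boundaries at units:", boundaries)
```

Running the sketch, the smooth gradient collapses into a handful of plateaus separated by sharp boundaries, a cartoon of modules forming without any instruction that says where the boundaries belong.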
Modular systems in the brain
The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale — they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.
No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.
Modular systems in nature
The same principle applies beyond neuroscience. Imagine a landscape where temperature and rainfall vary gradually from place to place. You might expect species to be distributed smoothly, varying gradually along with the conditions. But in reality, ecosystems often form species clusters with sharp boundaries — distinct ecological “neighborhoods” that don’t overlap.
Fiete’s study suggests why: local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection — and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.
A self-organizing world
One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same — they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.
The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.
Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”
Study: Climate change will reduce the number of satellites that can safely orbit in space
MIT aerospace engineers have found that greenhouse gas emissions are changing the environment of near-Earth space in ways that, over time, will reduce the number of satellites that can sustainably operate there.
In a study appearing today in Nature Sustainability, the researchers report that carbon dioxide and other greenhouse gases can cause the upper atmosphere to shrink. An atmospheric layer of special interest is the thermosphere, where the International Space Station and most satellites orbit today. When the thermosphere contracts, the decreasing density reduces atmospheric drag — a force that pulls old satellites and other debris down to altitudes where they will encounter air molecules and burn up.
Less drag therefore means extended lifetimes for space junk, which will litter sought-after regions for decades and increase the potential for collisions in orbit.
The team carried out simulations of how carbon emissions affect the upper atmosphere and orbital dynamics, in order to estimate the “satellite carrying capacity” of low Earth orbit. These simulations predict that by the year 2100, the carrying capacity of the most popular regions could be reduced by 50-66 percent due to the effects of greenhouse gases.
“Our behavior with greenhouse gases here on Earth over the past 100 years is having an effect on how we operate satellites over the next 100 years,” says study author Richard Linares, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro).
“The upper atmosphere is in a fragile state as climate change disrupts the status quo,” adds lead author William Parker, a graduate student in AeroAstro. “At the same time, there’s been a massive increase in the number of satellites launched, especially for delivering broadband internet from space. If we don’t manage this activity carefully and work to reduce our emissions, space could become too crowded, leading to more collisions and debris.”
The study includes co-author Matthew Brown of the University of Birmingham.
Sky fall
The thermosphere naturally contracts and expands every 11 years in response to the sun’s regular activity cycle. When the sun’s activity is low, the Earth receives less radiation, and its outermost atmosphere temporarily cools and contracts before expanding again during solar maximum.
In the 1990s, scientists wondered what response the thermosphere might have to greenhouse gases. Their preliminary modeling showed that, while the gases trap heat in the lower atmosphere, where we experience global warming and weather, the same gases radiate heat at much higher altitudes, effectively cooling the thermosphere. With this cooling, the researchers predicted that the thermosphere should shrink, reducing atmospheric density at high altitudes.
In the last decade, scientists have been able to measure changes in drag on satellites, which has provided some evidence that the thermosphere is contracting in response to something more than the sun’s natural, 11-year cycle.
“The sky is quite literally falling — just at a rate that’s on the scale of decades,” Parker says. “And we can see this by how the drag on our satellites is changing.”
The MIT team wondered how that response will affect the number of satellites that can safely operate in Earth’s orbit. Today, there are over 10,000 satellites drifting through low Earth orbit, the region of space extending up to 1,200 miles (2,000 kilometers) from Earth’s surface. These satellites deliver essential services, including internet, communications, navigation, weather forecasting, and banking. The satellite population has ballooned in recent years, requiring operators to perform regular collision-avoidance maneuvers to keep safe. Any collisions that do occur can generate debris that remains in orbit for decades or centuries, increasing the chance of follow-on collisions with satellites, both old and new.
“More satellites have been launched in the last five years than in the preceding 60 years combined,” Parker says. “One of the key things we’re trying to understand is whether the path we’re on today is sustainable.”
Crowded shells
In their new study, the researchers simulated different greenhouse gas emissions scenarios over the next century to investigate impacts on atmospheric density and drag. For each “shell,” or altitude range of interest, they then modeled the orbital dynamics and the risk of satellite collisions based on the number of objects within the shell. They used this approach to identify each shell’s “carrying capacity” — a term that is typically used in studies of ecology to describe the number of individuals that an ecosystem can support.
“We’re taking that carrying capacity idea and translating it to this space sustainability problem, to understand how many satellites low Earth orbit can sustain,” Parker explains.
The team compared several scenarios: one in which greenhouse gas concentrations remain at their level from the year 2000 and others where emissions change according to the Intergovernmental Panel on Climate Change (IPCC) Shared Socioeconomic Pathways (SSPs). They found that scenarios with continuing increases in emissions would lead to a significantly reduced carrying capacity throughout low Earth orbit.
In particular, the team estimates that by the end of this century, the number of satellites safely accommodated within the altitudes of 200 and 1,000 kilometers could be reduced by 50 to 66 percent compared with a scenario in which emissions remain at year-2000 levels. If satellite capacity is exceeded, even in a local region, the researchers predict that the region will experience a “runaway instability,” or a cascade of collisions that would create so much debris that satellites could no longer safely operate there.
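For intuition only, and not the study’s actual simulations, here is a back-of-the-envelope sketch of why thinner air translates into fewer sustainable satellites: if drag clears debris from a shell at a rate proportional to atmospheric density, and the sustainable satellite count scales with that clearing rate, then a given fractional drop in density maps directly onto a loss of carrying capacity. The density ratios below are illustrative stand-ins, not values from the paper.

```python
# Back-of-the-envelope sketch, NOT the study's simulation. Assumptions (all
# illustrative): drag removes debris from a shell at a rate proportional to
# atmospheric density, so debris lifetime scales as 1/density, and the number
# of satellites a shell can sustain scales with the debris-removal rate.
def debris_lifetime_factor(density_ratio: float) -> float:
    """How much longer debris lingers if density falls to density_ratio of today's value."""
    return 1.0 / density_ratio

def capacity_change(density_ratio: float) -> float:
    """Fractional change in the shell's carrying capacity under the same assumption."""
    return density_ratio - 1.0

# Hypothetical end-of-century density declines (made-up values for illustration).
for label, ratio in [("moderate-emissions case", 0.50), ("high-emissions case", 0.34)]:
    print(
        f"{label}: debris lingers ~{debris_lifetime_factor(ratio):.1f}x longer, "
        f"capacity change of about {capacity_change(ratio):+.0%}"
    )
```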
Their predictions forecast out to the year 2100, but the team says that certain shells in the atmosphere today are already crowding up with satellites, particularly from recent “megaconstellations” such as SpaceX’s Starlink, which comprises fleets of thousands of small internet satellites.
“The megaconstellation is a new trend, and we’re showing that because of climate change, we’re going to have a reduced capacity in orbit,” Linares says. “And in local regions, we’re close to approaching this capacity value today.”
“We rely on the atmosphere to clean up our debris. If the atmosphere is changing, then the debris environment will change too,” Parker adds. “We show the long-term outlook on orbital debris is critically dependent on curbing our greenhouse gas emissions.”
This research is supported, in part, by the U.S. National Science Foundation, the U.S. Air Force, and the U.K. Natural Environment Research Council.
Study: Tuberculosis relies on protective genes during airborne transmission
Tuberculosis lives and thrives in the lungs. When the bacteria that cause the disease are coughed into the air, they are thrust into a comparatively hostile environment, with drastic changes to their surrounding pH and chemistry. How these bacteria survive their airborne journey is key to their persistence, but very little is known about how they protect themselves as they waft from one host to the next.
Now MIT researchers and their collaborators have discovered a family of genes that becomes essential for survival specifically when the pathogen is exposed to the air, likely protecting the bacterium during its flight.
Many of these genes were previously considered to be nonessential, as they didn’t seem to have any effect on the bacteria’s role in causing disease when injected into a host. The new work suggests that these genes are indeed essential, though for transmission rather than proliferation.
“There is a blind spot that we have toward airborne transmission, in terms of how a pathogen can survive these sudden changes as it circulates in the air,” says Lydia Bourouiba, who is the head of the Fluid Dynamics of Disease Transmission Laboratory, an associate professor of civil and environmental engineering and mechanical engineering, and a core faculty member in the Institute for Medical Engineering and Science at MIT. “Now we have a sense, through these genes, of what tools tuberculosis uses to protect itself.”
The team’s results, appearing this week in the Proceedings of the National Academy of Sciences, could provide new targets for tuberculosis therapies that simultaneously treat infection and prevent transmission.
“If a drug were to target the product of these same genes, it could effectively treat an individual, and even before that person is cured, it could keep the infection from spreading to others,” says Carl Nathan, chair of the Department of Microbiology and Immunology and R.A. Rees Pritchett Professor of Microbiology at Weill Cornell Medicine.
Nathan and Bourouiba are co-senior authors of the study, which includes MIT co-authors and mentees of Bourouiba in the Fluids and Health Network: co-lead author postdoc Xiaoyi Hu, postdoc Eric Shen, and student mentees Robin Jahn and Luc Geurts. The study also includes collaborators from Weill Cornell Medicine, the University of California at San Diego, Rockefeller University, Hackensack Meridian Health, and the University of Washington.
Pathogen’s perspective
Tuberculosis is a respiratory disease caused by Mycobacterium tuberculosis, a bacterium that most commonly affects the lungs and is transmitted through droplets that an infected individual expels into the air, often through coughing or sneezing. Tuberculosis is the single leading cause of death from infection, except during the major global pandemics caused by viruses.
“In the last 100 years, we have had the 1918 influenza, the 1981 HIV/AIDS epidemic, and the 2019 SARS-CoV-2 pandemic,” Nathan notes. “Each of those viruses has killed an enormous number of people. And as they have settled down, we are left with a ‘permanent pandemic’ of tuberculosis.”
Much of the research on tuberculosis centers on its pathophysiology — the mechanisms by which the bacteria take over and infect a host — as well as ways to diagnose and treat the disease. For their new study, Nathan and Bourouiba focused on transmission of tuberculosis, from the perspective of the bacterium itself, to investigate what defenses it might rely on to help it survive its airborne transmission.
“This is one of the first attempts to look at tuberculosis from the airborne perspective, in terms of what is happening to the organism, at the level of being protected from these sudden changes and very harsh biophysical conditions,” Bourouiba says.
Critical defense
At MIT, Bourouiba studies the physics of fluids and the ways in which droplet dynamics can spread particles and pathogens. She teamed up with Nathan, who studies tuberculosis, and the genes that the bacteria rely on throughout their life cycle.
To get a handle on how tuberculosis can survive in the air, the team aimed to mimic the conditions that the bacterium experiences during transmission. The researchers first looked to develop a fluid that is similar in viscosity and droplet sizes to what a patient would cough or sneeze out into the air. Bourouiba notes that much of the experimental work that has been done on tuberculosis in the past has been based on a liquid solution that scientists use to grow the bacteria. But the team found that this liquid has a chemical composition that is very different from the fluid that tuberculosis patients actually cough and sneeze into the air.
Additionally, Bourouiba notes that fluid commonly sampled from tuberculosis patients is based on sputum that a patient spits out, for instance for a diagnostic test. “The fluid is thick and gooey and it’s what most of the tuberculosis world considers to represent what is happening in the body,” she says. “But it’s extraordinarily inefficient in spreading to others because it’s too sticky to break into inhalable droplets.”
Through Bourouiba’s work with fluid and droplet physics, the team determined the more realistic viscosity and likely size distribution of tuberculosis-carrying microdroplets that would be transmitted through the air. The team also characterized the droplet compositions, based on analyses of patient samples of infected lung tissues. They then created a more realistic fluid, with a composition, viscosity, surface tension and droplet size that is similar to what would be released into the air from exhalations.
Then, the researchers deposited different fluid mixtures onto plates in tiny individual droplets and measured in detail how they evaporate and what internal structure they leave behind. They observed that the new fluid tended to shield the bacteria at the center of the droplet as the droplet evaporated, compared to conventional fluids where bacteria tended to be more exposed to the air. The more realistic fluid was also capable of retaining more water.
Additionally, the team infused each droplet with bacteria containing genes with various knockdowns, to see whether the absence of certain genes would affect the bacteria’s survival as the droplets evaporated.
In this way, the team assessed the activity of over 4,000 tuberculosis genes and discovered a family of several hundred genes that seemed to become important specifically as the bacteria adapted to airborne conditions. Many of these genes are involved in repairing damage to oxidized proteins, such as proteins that have been exposed to air. Other activated genes have to do with destroying damaged proteins that are beyond repair.
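A hedged sketch of what such a screen implies computationally (illustrative only, not the authors’ pipeline): compare each knockdown strain’s abundance before and after the droplets dry, normalize for sequencing depth, and flag genes whose loss leaves the bacteria disproportionately depleted. The column names, pseudocount, and cutoff below are assumptions.

```python
# Illustrative sketch only -- not the authors' pipeline. Given counts of each
# gene-knockdown strain before and after droplet evaporation, compute a
# normalized log2 fitness score and flag genes whose knockdown hurts survival
# in air-exposed droplets.
import numpy as np
import pandas as pd

def fitness_scores(counts: pd.DataFrame) -> pd.Series:
    """counts has columns 'pre' and 'post' (reads per knockdown strain)."""
    pre = counts["pre"] / counts["pre"].sum()     # normalize for sequencing depth
    post = counts["post"] / counts["post"].sum()
    return np.log2((post + 1e-9) / (pre + 1e-9))  # pseudocount avoids log(0)

# Tiny made-up example: three knockdowns, indexed by hypothetical gene names.
counts = pd.DataFrame(
    {"pre": [1000, 980, 1010], "post": [950, 120, 990]},
    index=["geneA", "geneB", "geneC"],
)
scores = fitness_scores(counts)
candidates = scores[scores < -2]  # strongly depleted after drying (assumed cutoff)
print(candidates)                 # geneB would be flagged as airborne-protective
```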
“What we turned up was a candidate list that’s very long,” Nathan says. “There are hundreds of genes, some more prominently implicated than others, that may be critically involved in helping tuberculosis survive its transmission phase.”
The team acknowledges the experiments are not a complete analog of the bacteria’s biophysical transmission. In reality, tuberculosis is carried in droplets that fly through the air, evaporating as they go. In order to carry out their genetic analyses, the team had to work with droplets sitting on a plate. Under these constraints, they mimicked the droplet transmission as best they could, by setting the plates in an extremely dry chamber to accelerate the droplets’ evaporation, analogous to what they would experience in flight.
Going forward, the researchers have started experimenting with platforms that allow them to study the droplets in flight, in a range of conditions. They plan to focus on the new family of genes in even more realistic experiments, to confirm whether the genes do indeed shield Mycobacterium tuberculosis as it is transmitted through the air, potentially opening the way to weakening its airborne defenses.
“The idea of waiting to find someone with tuberculosis, then treating and curing them, is a totally inefficient way to stop the pandemic,” Nathan says. “Most people who exhale tuberculosis do not yet have a diagnosis. So we have to interrupt its transmission. And how do you do that, if you don’t know anything about the process itself? We have some ideas now.”
This work was supported, in part, by the National Institutes of Health, the Abby and Howard P. Milstein Program in Chemical Biology and Translational Medicine, the Potts Memorial Foundation, the National Science Foundation Center for Analysis and Prediction of Pandemic Expansion (APPEX), Inditex, the NASA Translational Research Institute for Space Health, and Analog Devices, Inc.
Robotic helper making mistakes? Just nudge it in the right direction
Imagine that a robot is helping you clean the dishes. You ask it to grab a soapy bowl out of the sink, but its gripper slightly misses the mark.
Using a new framework developed by MIT and NVIDIA researchers, you could correct that robot’s behavior with simple interactions. The method would allow you to point to the bowl or trace a trajectory to it on a screen, or simply give the robot’s arm a nudge in the right direction.
Unlike other methods for correcting robot behavior, this technique does not require users to collect new data and retrain the machine-learning model that powers the robot’s brain. It enables a robot to use intuitive, real-time human feedback to choose a feasible action sequence that gets as close as possible to satisfying the user’s intent.
When the researchers tested their framework, its success rate was 21 percent higher than an alternative method that did not leverage human interventions.
In the long run, this framework could enable a user to more easily guide a factory-trained robot to perform a wide variety of household tasks even though the robot has never seen their home or the objects in it.
“We can’t expect laypeople to perform data collection and fine-tune a neural network model. The consumer will expect the robot to work right out of the box, and if it doesn’t, they would want an intuitive mechanism to customize it. That is the challenge we tackled in this work,” says Felix Yanwei Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this method.
His co-authors include Lirui Wang PhD ’24 and Yilun Du PhD ’24; senior author Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D’Arpino PhD ’19, and Dieter Fox of NVIDIA. The research will be presented at the International Conference on Robotics and Automation (ICRA).
Mitigating misalignment
Recently, researchers have begun using pre-trained generative AI models to learn a “policy,” or a set of rules, that a robot follows to complete an action. Generative models can solve multiple complex tasks.
During training, the model only sees feasible robot motions, so it learns to generate valid trajectories for the robot to follow.
While these trajectories are valid, that doesn’t mean they always align with a user’s intent in the real world. The robot might have been trained to grab boxes off a shelf without knocking them over, but it could fail to reach the box on top of someone’s bookshelf if the shelf is oriented differently than those it saw in training.
To overcome these failures, engineers typically collect data demonstrating the new task and re-train the generative model, a costly and time-consuming process that requires machine-learning expertise.
Instead, the MIT researchers wanted to allow users to steer the robot’s behavior during deployment when it makes a mistake.
But if a human interacts with the robot to correct its behavior, that could inadvertently cause the generative model to choose an invalid action. It might reach the box the user wants, but knock books off the shelf in the process.
“We want to allow the user to interact with the robot without introducing those kinds of mistakes, so we get a behavior that is much more aligned with user intent during deployment, but that is also valid and feasible,” Wang says.
Their framework accomplishes this by providing the user with three intuitive ways to correct the robot’s behavior, each of which offers certain advantages.
First, the user can point to the object they want the robot to manipulate in an interface that shows its camera view. Second, they can trace a trajectory in that interface, allowing them to specify how they want the robot to reach the object. Third, they can physically move the robot’s arm in the direction they want it to follow.
“When you are mapping a 2D image of the environment to actions in a 3D space, some information is lost. Physically nudging the robot is the most direct way to specify user intent without losing any of the information,” says Wang.
Sampling for success
To ensure these interactions don’t cause the robot to choose an invalid action, such as colliding with other objects, the researchers use a specific sampling procedure. This technique lets the model choose an action from the set of valid actions that most closely aligns with the user’s goal.
“Rather than just imposing the user’s will, we give the robot an idea of what the user intends but let the sampling procedure oscillate around its own set of learned behaviors,” Wang explains.
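In code, the idea might look roughly like the sketch below, with a stand-in random policy and a toy obstacle check in place of the researchers’ trained model and simulator: sample many candidate trajectories, discard the infeasible ones, and execute the valid candidate whose endpoint lands closest to the point, traced path, or nudge the user provided.

```python
# Minimal sketch of "sample from the policy, keep valid actions, pick the one
# closest to the user's intent." The policy sampler and validity check below
# are stand-ins, NOT the researchers' trained model or simulator.
import numpy as np

rng = np.random.default_rng(0)

def sample_policy(n_samples: int, horizon: int = 10) -> np.ndarray:
    """Stand-in for a generative policy: random smooth 2D end-effector paths."""
    steps = rng.normal(scale=0.05, size=(n_samples, horizon, 2))
    return np.cumsum(steps, axis=1)  # each path starts near the origin

def is_valid(path: np.ndarray, obstacle=np.array([0.15, 0.15]), radius=0.05) -> bool:
    """Toy feasibility check: the path must stay clear of one circular obstacle."""
    return bool(np.all(np.linalg.norm(path - obstacle, axis=1) > radius))

def select_action(user_target: np.ndarray, n_samples: int = 256) -> np.ndarray:
    """Return the valid sampled path whose endpoint lands closest to the user's
    indicated target (a click, a traced point, or the direction of a nudge)."""
    paths = sample_policy(n_samples)
    valid = [p for p in paths if is_valid(p)]
    if not valid:
        raise RuntimeError("no feasible candidate found; sample more trajectories")
    endpoints = np.stack([p[-1] for p in valid])
    best = np.argmin(np.linalg.norm(endpoints - user_target, axis=1))
    return valid[best]

chosen = select_action(user_target=np.array([0.3, -0.2]))
print("executing path ending at", chosen[-1])
```

Because the executed trajectory is always drawn from the policy’s own valid samples, the correction steers the robot toward the user’s intent without forcing it into motions it was never trained to make.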
This sampling method enabled the researchers’ framework to outperform the other methods they compared it to during simulations and experiments with a real robot arm in a toy kitchen.
While their method might not always complete the task right away, it offers users the advantage of being able to immediately correct the robot if they see it doing something wrong, rather than waiting for it to finish and then giving it new instructions.
Moreover, after a user nudges the robot a few times until it picks up the correct bowl, it could log that corrective action and incorporate it into its behavior through future training. Then, the next day, the robot could pick up the correct bowl without needing a nudge.
“But the key to that continuous improvement is having a way for the user to interact with the robot, which is what we have shown here,” Wang says.
In the future, the researchers want to boost the speed of the sampling procedure while maintaining or improving its performance. They also want to experiment with robot policy generation in novel environments.
SMART researchers pioneer nanosensor for real-time iron detection in plants
Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, in collaboration with Temasek Life Sciences Laboratory (TLL) and MIT, have developed a groundbreaking near-infrared (NIR) fluorescent nanosensor capable of simultaneously detecting and differentiating between iron forms — Fe(II) and Fe(III) — in living plants.
Iron is crucial for plant health, supporting photosynthesis, respiration, and enzyme function. It primarily exists in two forms: Fe(II), which is readily available for plants to absorb and use, and Fe(III), which must first be converted into Fe(II) before plants can utilize it effectively. Traditional methods only measure total iron, missing the distinction between these forms — a key factor in plant nutrition. Distinguishing between Fe(II) and Fe(III) provides insights into iron uptake efficiency, helps diagnose deficiencies or toxicities, and enables precise fertilization strategies in agriculture, reducing waste and environmental impact while improving crop productivity.
The first-of-its-kind nanosensor developed by SMART researchers enables real-time, nondestructive monitoring of iron uptake, transport, and changes between its different forms — providing precise and detailed observations of iron dynamics. Its high spatial resolution allows precise localization of iron in plant tissues or subcellular compartments, enabling the measurement of even minute changes in iron levels within plants — changes that can inform how a plant handles stress and uses nutrients.
Traditional detection methods are destructive, or limited to a single form of iron. This new technology enables the diagnosis of deficiencies and optimization of fertilization strategies. By identifying insufficient or excessive iron intake, adjustments can be made to enhance plant health, reduce waste, and support more sustainable agriculture. While the nanosensor was tested on spinach and bok choy, it is species-agnostic, allowing it to be applied across a diverse range of plant species without genetic modification. This capability enhances our understanding of iron dynamics in various ecological settings, providing comprehensive insights into plant health and nutrient management. As a result, it serves as a valuable tool for both fundamental plant research and agricultural applications, supporting precision nutrient management, reducing fertilizer waste, and improving crop health.
“Iron is essential for plant growth and development, but monitoring its levels in plants has been a challenge. This breakthrough sensor is the first of its kind to detect both Fe(II) and Fe(III) in living plants with real-time, high-resolution imaging. With this technology, we can ensure plants receive the right amount of iron, improving crop health and agricultural sustainability,” says Duc Thinh Khong, DiSTAP research scientist and co-lead author of the paper.
“In enabling non-destructive real-time tracking of iron speciation in plants, this sensor opens new avenues for understanding plant iron metabolism and the implications of different iron variations for plants. Such knowledge will help guide the development of tailored management approaches to improve crop yield and more cost-effective soil fertilization strategies,” says Grace Tan, TLL research scientist and co-lead author of the paper.
The research, recently published in Nano Letters and titled “Nanosensor for Fe(II) and Fe(III) Allowing Spatiotemporal Sensing in Planta,” builds upon SMART DiSTAP’s established expertise in plant nanobionics, leveraging the Corona Phase Molecular Recognition (CoPhMoRe) platform pioneered by the Strano Lab at SMART DiSTAP and MIT. The new nanosensor features single-walled carbon nanotubes (SWNTs) wrapped in a negatively charged fluorescent polymer, forming a helical corona phase structure that interacts differently with Fe(II) and Fe(III). Upon introduction into plant tissues and interaction with iron, the sensor emits distinct NIR fluorescence signals based on the iron type, enabling real-time tracking of iron movement and chemical changes.
The CoPhMoRe technique was used to develop highly selective fluorescent responses, allowing precise detection of iron oxidation states. The NIR fluorescence of SWNTs offers superior sensitivity, selectivity, and tissue transparency while minimizing interference, making it more effective than conventional fluorescent sensors. This capability allows researchers to track iron movement and chemical changes in real time using NIR imaging.
“This sensor provides a powerful tool to study plant metabolism, nutrient transport, and stress responses. It supports optimized fertilizer use, reduces costs and environmental impact, and contributes to more nutritious crops, better food security, and sustainable farming practices,” says Professor Daisuke Urano, TLL senior principal investigator, DiSTAP principal investigator, National University of Singapore adjunct assistant professor, and co-corresponding author of the paper.
“This set of sensors gives us access to an important type of signalling in plants, and a critical nutrient necessary for plants to make chlorophyll. This new tool will not just help farmers to detect nutrient deficiency, but also give access to certain messages within the plant. It expands our ability to understand the plant response to its growth environment,” says Professor Michael Strano, DiSTAP co-lead principal investigator, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper.
Beyond agriculture, this nanosensor holds promise for environmental monitoring, food safety, and health sciences, particularly in studying iron metabolism, iron deficiency, and iron-related diseases in humans and animals. Future research will focus on leveraging this nanosensor to advance fundamental plant studies on iron homeostasis, nutrient signaling, and redox dynamics. Efforts are also underway to integrate the nanosensor into automated nutrient management systems for hydroponic and soil-based farming and expand its functionality to detect other essential micronutrients. These advancements aim to enhance sustainability, precision, and efficiency in agriculture.
The research is carried out by SMART, and supported by the National Research Foundation under its Campus for Research Excellence And Technological Enterprise program.
3 Questions: Visualizing research in the age of AI
For over 30 years, science photographer Felice Frankel has helped MIT professors, researchers, and students communicate their work visually. Throughout that time, she has seen the development of various tools to support the creation of compelling images: some helpful, and some antithetical to the effort of producing a trustworthy and complete representation of the research. In a recent opinion piece published in Nature magazine, Frankel discusses the burgeoning use of generative artificial intelligence (GenAI) in images and the challenges and implications it has for communicating research. On a more personal note, she questions whether there will still be a place for a science photographer in the research community.
Q: You’ve mentioned that as soon as a photo is taken, the image can be considered “manipulated.” There are ways you’ve manipulated your own images to create a visual that more successfully communicates the desired message. Where is the line between acceptable and unacceptable manipulation?
A: In the broadest sense, the decisions made on how to frame and structure the content of an image, along with the choice of tools used to create the image, are already a manipulation of reality. We need to remember the image is merely a representation of the thing, and not the thing itself. Decisions have to be made when creating the image. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure. For example, for an image I made some time ago, I digitally deleted the petri dish in which a yeast colony was growing, to bring attention to the stunning morphology of the colony. The data in the image is the morphology of the colony. I did not manipulate that data. However, I always indicate in the text if I have done something to an image. I discuss the idea of image enhancement in my handbook, “The Visual Elements, Photography.”
Q: What can researchers do to make sure their research is communicated correctly and ethically?
A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication. For years, I have been trying to develop a visual literacy program for the present and upcoming classes of science and engineering researchers. MIT has a communication requirement which mostly addresses writing, but what about the visual, which is no longer tangential to a journal submission? I will bet that most readers of scientific articles go right to the figures, after they read the abstract.
We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it. We need to discuss the ethics of “nudging” an image to look a certain predetermined way. I describe in the article an incident when a student altered one of my images (without asking me) to match what the student wanted to visually communicate. I didn’t permit it, of course, and was disappointed that the ethics of such an alteration were not considered. We need to develop, at the very least, conversations on campus and, even better, create a visual literacy requirement along with the writing requirement.
Q: Generative AI is not going away. What do you see as the future for communicating science visually?
A: For the Nature article, I decided that a powerful way to question the use of AI in generating images was by example. I used one of the diffusion models to create an image using the following prompt:
“Create a photo of Moungi Bawendi’s nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light.”
The results of my AI experimentation were often cartoon-like images that could hardly pass as reality — let alone documentation — but there will be a time when they will be. In conversations with colleagues in research and computer-science communities, all agree that we should have clear standards on what is and is not allowed. And most importantly, a GenAI visual should never be allowed as documentation.
But AI-generated visuals will, in fact, be useful for illustration purposes. If an AI-generated visual is to be submitted to a journal (or, for that matter, be shown in a presentation), I believe the researcher MUST
- clearly label if an image was created by an AI model;
- indicate what model was used;
- include what prompt was used; and
- include the image, if there is one, that was used to help the prompt.
A leg up for STEM majors
Senior Kevin Guo, a computer science major, and junior Erin Hovendon, studying mechanical engineering, are on widely divergent paths at MIT. But their lives do intersect in one dimension: They share an understanding that their political science and public policy minors provide crucial perspectives on their research and future careers.
For Guo, the connection between computer science and policy emerged through his work at MIT's Election Data and Science Lab. “When I started, I was just looking for a place to learn how to code and do data science,” he reflects. “But what I found was this fascinating intersection where technical skills could directly shape democratic processes.”
Hovendon is focused on sustainable methods for addressing climate change. She is currently participating in a multisemester research project at MIT's Environmental Dynamics Lab (ENDLab) developing monitoring technology for marine carbon dioxide removal (mCDR).
She believes the success of her research today and in the future depends on understanding its impact on society. Her academic track in policy provides that grounding. “When you’re developing a new technology, you need to focus as well on how it will be applied,” she says. “This means learning about the policies required to scale it up, and about the best ways to convey the value of what you’re working on to the public.”
Bridging STEM and policy
For both Hovendon and Guo, interdisciplinary study is proving to be a valuable platform for tangibly addressing real-world challenges.
Guo came to MIT from Andover, Massachusetts, the son of parents who specialize in semiconductors and computer science. While math and computer science were a natural track for him, Guo was also keenly interested in geopolitics. He enrolled in class 17.40 (American Foreign Policy). “It was my first engagement with MIT political science and I liked it a lot, because it dealt with historical episodes I wanted to learn more about, like World War II, the Korean War, and Vietnam,” says Guo.
He followed up with classes on American military history and the rise of Asia, where he found himself enrolled alongside graduate students and active-duty U.S. military officers. “I liked attending a course with people who had unusual insights,” Guo remarks. “I also liked that these humanities classes were small seminars, and focused a lot on individual students.”
From coding to elections
It was in class 17.835 (Machine Learning and Data Science in Politics) that Guo first realized he could directly connect his computer science and math expertise to the humanities. “They gave us big political science datasets to analyze, which was a pretty cool application of the skills I learned in my major,” he says.
Guo springboarded from this class to a three-year, undergraduate research project in the Election Data and Science Lab. “The hardest part is data collection, which I worked on for an election audit project that looked at whether there were significant differences between original vote counts and audit counts in all the states, at the precinct level,” says Guo. “We had to scrape data, raw PDFs, and create a unified dataset, standardized to our format, that we could publish.”
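The standardization step he describes might look roughly like this; the column names, state handling, and example values below are hypothetical, purely for illustration.

```python
# Purely illustrative sketch of standardizing precinct-level audit data into a
# unified schema; the column names and state-specific details are hypothetical,
# not the lab's actual pipeline.
import pandas as pd

# Each state publishes counts differently; map them onto one shared schema.
COLUMN_MAP = {
    "Precinct Name": "precinct",
    "Original Tally": "original_votes",
    "Audited Tally": "audit_votes",
}

def standardize(raw: pd.DataFrame, state: str) -> pd.DataFrame:
    df = raw.rename(columns=COLUMN_MAP)
    df["state"] = state
    df["discrepancy"] = df["audit_votes"] - df["original_votes"]
    return df[["state", "precinct", "original_votes", "audit_votes", "discrepancy"]]

# Toy example standing in for data scraped from one state's PDF reports.
raw = pd.DataFrame(
    {
        "Precinct Name": ["Ward 1", "Ward 2"],
        "Original Tally": [1042, 987],
        "Audited Tally": [1042, 991],
    }
)
print(standardize(raw, state="MA"))
```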
The data analysis skills he acquired in the lab have come in handy in the professional sphere in which he has begun training: investment finance.
“The workflow is very similar: clean the data to see what you want, analyze it to see if I can find an edge, and then write some code to implement it,” he says. “The biggest difference between finance and the lab research is that the development cycle is a lot faster, where you want to act on a dataset in a few days, rather than weeks or months.”
Engineering environmental solutions
Hovendon, a native of North Carolina with a deep love for the outdoors, arrived at MIT committed “to doing something related to sustainability and having a direct application in the world around me,” she says.
Initially, she headed toward environmental engineering, “but then I realized that pretty much every major can take a different approach to that topic,” she says. “So I ended up switching to mechanical engineering because I really enjoy the hands-on aspects of the field.”
In parallel to her design and manufacturing, and mechanics and materials courses, Hovendon also immersed herself in energy and environmental policy classes. One memorable anthropology class, 21A.404 (Living through Climate Change), asked students to consider whether technological or policy solutions could be fully effective on their own for combating climate change. “It was useful to apply holistic ways of exploring human relations to the environment,” says Hovendon.
Hovendon brings this well-rounded perspective to her research at ENDLab in marine carbon capture and fluid dynamics. She is helping to develop verification methods for mCDR at a pilot treatment plant in California. The facility aims to remove 100 tons of carbon dioxide directly from the ocean by enhancing natural processes. Hovendon hopes to design cost-efficient monitoring systems to demonstrate the efficacy of this new technology. If scaled up, mCDR could enable oceans to store significantly more atmospheric carbon, helping cool the planet.
But Hovendon is well aware that innovation with a major impact cannot emerge on the basis of technical efficacy alone.
“You're going to have people who think that you shouldn't be trying to replicate or interfere with a natural system, and if you're putting one of these facilities somewhere in water, then you're using public spaces and resources,” she says. “It's impossible to come up with any kind of technology, but especially any kind of climate-related technology, without first getting the public to buy into it.”
She recalls class 17.30J (Making Public Policy), which emphasized the importance of both economic and social analysis to the successful passage of highly impactful legislation, such as the Affordable Care Act.
“I think that breakthroughs in science and engineering should be evaluated not just through their technological prowess, but through the success of their implementation for general societal benefit,” she says. “Understanding the policy aspects is vital for improving accessibility for scientific advancements.”
Beyond the dome
Guo will soon set out for a career as a quantitative financial trader, and he views his political science background as essential to his success. While his expertise in data cleaning and analysis will come into play, he believes other skills will as well: “Understanding foreign policy, considering how U.S. policy impacts other places, that's actually very important in finance,” he explains. “Macroeconomic changes and politics affect trading volatility and markets in general, so it's very important to understand what's going on.”
With one year to go, Hovendon is contemplating graduate school in mechanical engineering, perhaps designing renewable energy technologies. “I just really hope that I'm working on something I'm genuinely passionate about, something that has a broader purpose,” she says. “In terms of politics and technology, I also hope that at least some government research and development will still go to climate work, because I'm sure there will be an urgent need for it.”
Knitted microtissue can accelerate healing
Treating severe or chronic injury to soft tissues such as skin and muscle is a challenge in health care. Current treatment methods can be costly and ineffective, and the frequency of chronic wounds in general from conditions such as diabetes and vascular disease, as well as an increasingly aging population, is only expected to rise.
One promising treatment method involves implanting biocompatible materials seeded with living cells (i.e., microtissue) into the wound. The materials provide a scaffolding for stem cells, or other precursor cells, to grow into the wounded tissue and aid in repair. However, current techniques to construct these scaffolding materials suffer a recurring setback. Human tissue moves and flexes in a unique way that traditional soft materials struggle to replicate, and if the scaffolds stretch, they can also stretch the embedded cells, often causing those cells to die. The dead cells hinder the healing process and can also trigger an inadvertent immune response in the body.
"The human body has this hierarchical structure that actually un-crimps or unfolds, rather than stretches," says Steve Gillmer, a researcher in MIT Lincoln Laboratory's Mechanical Engineering Group. "That's why if you stretch your own skin or muscles, your cells aren't dying. What's actually happening is your tissues are uncrimping a little bit before they stretch."
Gillmer is part of a multidisciplinary research team that is searching for a solution to this stretching setback. He is working with Professor Ming Guo from MIT's Department of Mechanical Engineering and the laboratory's Defense Fabric Discovery Center (DFDC) to knit new kinds of fabrics that can uncrimp and move just as human tissue does.
The idea for the collaboration came while Gillmer and Guo were teaching a course at MIT. Guo had been researching how to grow stem cells on new forms of materials that could mimic the uncrimping of natural tissue. He chose electrospun nanofibers, which worked well, but were difficult to fabricate at long lengths, preventing him from integrating the fibers into larger knit structures for larger-scale tissue repair.
"Steve mentioned that Lincoln Laboratory had access to industrial knitting machines," Guo says. These machines allowed him to switch focus to designing larger knits, rather than individual yarns. "We immediately started to test new ideas through internal support from the laboratory."
Gillmer and Guo worked with the DFDC to discover which knit patterns could move similarly to different types of soft tissue. They started with three basic knit constructions called interlock, rib, and jersey.
"For jersey, think of your T-shirt. When you stretch your shirt, the yarn loops are doing the stretching," says Emily Holtzman, a textile specialist at the DFDC. "The longer the loop length, the more stretch your fabric can accommodate. For ribbed, think of the cuff on your sweater. This fabric construction has a global stretch that allows the fabric to unfold like an accordion."
Interlock is similar to ribbed but is knitted in a denser pattern and contains twice as much yarn per inch of fabric. By having more yarn, there is more surface area on which to embed the cells. "Knit fabrics can also be designed to have specific porosities, or hydraulic permeability, created by the loops of the fabric and yarn sizes," says Erin Doran, another textile specialist on the team. "These pores can help with the healing process as well."
So far, the team has conducted a number of tests embedding mouse embryonic fibroblast cells and mesenchymal stem cells within the different knit patterns and seeing how they behave when the patterns are stretched. Each pattern had variations that affected how much the fabric could uncrimp, in addition to how stiff it became after it started stretching. All showed a high rate of cell survival, and in 2024 the team received an R&D 100 award for their knit designs.
Gillmer explains that although the project began with treating skin and muscle injuries in mind, their fabrics have the potential to mimic many different types of human soft tissue, such as cartilage or fat. The team recently filed a provisional patent that outlines how to create these patterns and identifies the appropriate materials that should be used to make the yarn. This information can be used as a toolbox to tune different knitted structures to match the mechanical properties of the injured tissue to which they are applied.
"This project has definitely been a learning experience for me," Gillmer says. "Each branch of this team has a unique expertise, and I think the project would be impossible without them all working together. Our collaboration as a whole enables us to expand the scope of the work to solve these larger, more complex problems."
Study: The ozone hole is healing, thanks to global reduction of CFCs
A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.
Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.
“There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”
The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.
Roots of ozone recovery
Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, raising the risk of skin cancer and other adverse health effects.
In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.
The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.
In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, these observations were qualitative: the study left large uncertainties about how much of the recovery was due to concerted efforts to reduce ozone-depleting substances and how much was the result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.
“While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.
Anthropogenic healing
In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.
Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.
“The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”
The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.
They compared these simulations to observe how ozone in the Antarctic stratosphere changed, both with season, and across different altitudes, in response to different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month, over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to conditions of declining ozone-depleting substances.
The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations. In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.
“After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”
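For readers who want a concrete sense of what pattern-based “fingerprinting” involves, here is a minimal sketch, not the team’s actual analysis code: a hypothetical month-by-altitude pattern stands in for the fingerprint extracted from single-forcing simulations, an observed anomaly field is projected onto it, and the resulting signal is compared against a null distribution built from control runs containing only natural variability. All array sizes and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 spring months x 10 stratospheric altitude levels.
MONTHS, LEVELS = 4, 10

# Stand-in "fingerprint": the month-by-altitude pattern of ozone change expected
# when only ozone-depleting substances decline (in the real study this comes
# from single-forcing simulations, not random numbers).
fingerprint = rng.normal(size=(MONTHS, LEVELS))
fingerprint /= np.linalg.norm(fingerprint)

def project(field, pattern):
    """Project an anomaly field onto the fingerprint pattern."""
    return float(np.sum(field * pattern))

# Null distribution: projections of control-run fields (natural variability only).
control_runs = rng.normal(size=(500, MONTHS, LEVELS))
null_proj = np.array([project(f, fingerprint) for f in control_runs])

# A synthetic "observation" that contains the fingerprint plus noise.
observation = 3.0 * fingerprint + rng.normal(scale=1.0, size=(MONTHS, LEVELS))
signal = project(observation, fingerprint)

# Signal-to-noise ratio, and the fraction of control projections the signal exceeds
# (a crude stand-in for the study's statistical confidence statement).
snr = signal / null_proj.std()
confidence = (null_proj < signal).mean()
print(f"signal-to-noise = {snr:.2f}, exceeds {confidence:.1%} of control runs")
```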
If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.
“By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”
This research was supported, in part, by the National Science Foundation and NASA.
Why rationality can push people in different directions
It’s not a stretch to suggest that when we disagree with other people, we often regard them as being irrational. Kevin Dorst PhD ’19 has developed a body of research with surprising things to say about that.
Dorst, an associate professor of philosophy at MIT, studies rationality: how we apply it, or think we do, and how that bears out in society. The goal is to help us think clearly and perhaps with fresh eyes about something we may take for granted.
Throughout his work, Dorst specializes in exploring the nuances of rationality. To take just one instance, consider how ambiguity can interact with rationality. Suppose there are two studies about the effect of a new housing subdivision on local traffic patterns: One shows there will be a substantial increase in traffic, and one shows a minor effect. Even if both studies are sound in their methods and data, neither may have a totally airtight case. People who regard themselves as rationally assessing the numbers will likely disagree about which study is more valid, and — though this may not be entirely rational — may use their prior beliefs to poke holes in the study that conflicts with those beliefs.
Among other things, this process calls into question the widespread “Bayesian” conception that people’s views shift and come into alignment as they’re presented with new evidence. It may be that people instead apply rationality even as their views diverge rather than converge.
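To make the divergence point concrete, here is a toy numerical illustration in Python, not Dorst’s formal model: two agents see the same pair of conflicting traffic studies, but each scrutinizes the study that conflicts with its prior and, having found apparent holes, assigns it a weaker likelihood ratio. Both update by Bayes’ rule, yet their views move further apart. All numbers are invented.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Standard Bayesian update for a binary hypothesis H."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothesis H: "the new subdivision will substantially increase traffic."
agents = {"skeptic": 0.3, "believer": 0.7}

# Study A supports H; Study B undercuts it. Each agent treats the study that fits
# its prior as solid (strong likelihood ratio) and the conflicting one as shaky
# (likelihood ratio pulled toward 1 after hole-poking).
for name, prior in agents.items():
    trusts_a = prior > 0.5
    lr_a = (3.0, 1.0) if trusts_a else (1.5, 1.0)   # P(study A | H) vs. P(study A | not H)
    lr_b = (1.0, 1.5) if trusts_a else (1.0, 3.0)   # P(study B | H) vs. P(study B | not H)
    posterior = update(update(prior, *lr_a), *lr_b)
    print(f"{name}: prior {prior:.2f} -> posterior {posterior:.2f}")

# The skeptic drops to about 0.18 while the believer rises to about 0.82,
# so the same evidence pushes the two views further apart.
```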
This is also the kind of phenomenon Dorst explores in the paper “Rational Polarization,” published in The Philosophical Review in 2023; currently Dorst is working on a book about how people can take rational approaches but still wind up with different conclusions about the world. Dorst combines careful argumentation, mathematically structured descriptions of thinking, and even experimental evidence about cognition and people’s views, an increasing trend in philosophy.
“There’s something freeing about how methodologically open philosophy is,” says Dorst, a good-humored and genial conversationalist. “A question can be philosophical if it’s important and we don’t yet have settled methods for answering it, because in philosophy it’s always okay to ask what methods we should be using. It’s one of the exciting things about philosophy.”
For his research and teaching, Dorst was awarded tenure at MIT last year.
Show me your work
Dorst grew up in Missouri, not exactly expecting to become a philosopher, but he followed the academic trail of his older brother, who had become interested in philosophy.
“We didn’t know what philosophy was growing up, but once my brother started getting interested, there was a little bootstrapping, egging each other on, and having someone to talk to,” Dorst says.
As an undergraduate at Washington University in St. Louis, Dorst majored in philosophy and political science. By graduation, he had become sold on studying philosophy full-time, and was accepted into MIT’s program as a doctoral student.
At the Institute, he started specializing in the problems he now studies full-time, about how we know things and how much we are thinking rationally, while working with Roger White as his primary adviser, along with faculty members Robert Stalnaker and Kieran Setiya of MIT and Branden Fitelson of Northeastern University.
After earning his PhD, Dorst spent a year as a fellow at Oxford University’s Magdalen College, then joined the faculty of the University of Pittsburgh. He returned to MIT, this time on the faculty, in 2022. Now settled in the MIT philosophy department, Dorst tries to continue its tradition of engaged teaching with his students.
“They wrestle like everyone does with the conceptual and philosophical questions, but the speed with which you can get through technical things in a course is astounding,” Dorst says of MIT undergraduates.
New methods, time-honored issues
At present Dorst, who has published widely in philosophy journals, is grinding through the process of writing a book manuscript about the complexity of rationality. Chapter subjects include hindsight bias, confirmation bias, overconfidence, and polarization.
In the process, Dorst is also developing and conducting more experiments than ever before, to look at the way people process information and regard themselves as being rational.
“There’s this whole movement of experimental philosophy, using experimental data, being sensitive to cognitive science and being interested in connecting questions we have to it,” Dorst says.
In his case, he adds, “The big picture is trying to connect the theoretical work on rationality with the more empirical work about what leads to polarization.” The salience of the work, meanwhile, applies to a wide range of subjects: “People have been polarized forever over everything.”
As he explains all of this, Dorst looks up at the whiteboard in his office, where an extensive set of equations represents the output of some experiments and his ongoing effort to comprehend the results, as part of the book project. When he finishes, he hopes to have work broadly useful in philosophy, cognitive science, and other fields.
“We might use some different models in philosophy,” he says, “but let’s all try to figure out how people process information and regard arguments.”
Study suggests new molecular strategy for treating fragile X syndrome
Building on more than two decades of research, a study by MIT neuroscientists at The Picower Institute for Learning and Memory reports a new way to treat pathology and symptoms of fragile X syndrome, the most common genetically caused autism spectrum disorder. The team showed that augmenting a novel type of neurotransmitter signaling reduced hallmarks of fragile X in mouse models of the disorder.
The new approach, described in Cell Reports, works by targeting a specific molecular subunit of “NMDA” receptors that they discovered plays a key role in how neurons synthesize proteins to regulate their connections, or “synapses,” with other neurons in brain circuits. The scientists showed that in fragile X model mice, increasing the receptor’s activity caused neurons in the hippocampus region of the brain to increase molecular signaling that suppressed excessive bulk protein synthesis, leading to other key improvements.
Setting the table
“One of the things I find most satisfying about this study is that the pieces of the puzzle fit so nicely into what had come before,” says study senior author Mark Bear, Picower Professor in MIT’s Department of Brain and Cognitive Sciences. Former postdoc Stephanie Barnes, now a lecturer at the University of Glasgow, is the study’s lead author.
Bear’s lab studies how neurons continually edit their circuit connections, a process called “synaptic plasticity” that scientists believe to underlie the brain’s ability to adapt to experience and to form and process memories. These studies led to two discoveries that set the table for the newly published advance. In 2011, Bear’s lab showed that fragile X and another autism disorder, tuberous sclerosis (Tsc), represented two ends of a continuum of a kind of protein synthesis in the same neurons. In fragile X there was too much. In Tsc there was too little. When lab members crossbred fragile X and Tsc mice, in fact, their offspring emerged healthy, as the mutations of each disorder essentially canceled each other out.
More recently, Bear’s lab showed a different dichotomy. It has long been understood from their influential work in the 1990s that the flow of calcium ions through NMDA receptors can trigger a form of synaptic plasticity called “long-term depression” (LTD). But in 2020, they found that another mode of signaling by the receptor — one that did not require ion flow — altered protein synthesis in the neuron and caused a physical shrinking of the dendritic “spine” structures housing synapses.
For Bear and Barnes, these studies raised the prospect that if they could pinpoint how NMDA receptors affect protein synthesis they might identify a new mechanism that could be manipulated therapeutically to address fragile X (and perhaps tuberous sclerosis) pathology and symptoms. That would be an important advance to complement ongoing work Bear’s lab has done to correct fragile X protein synthesis levels via another receptor called mGluR5.
Receptor dissection
In the new study, Bear and Barnes’ team decided to use the non-ionic effect on spine shrinkage as a readout to dissect how NMDARs signal protein synthesis for synaptic plasticity in hippocampus neurons. They hypothesized that the dichotomy of ionic effects on synaptic function and non-ionic effects on spine structure might derive from the presence of two distinct components of NMDA receptors: “subunits” called GluN2A and GluN2B. To test that, they used genetic manipulations to knock out each of the subunits. When they did so, they found that knocking out “2A” or “2B” could eliminate LTD, but that only knocking out 2B affected spine size. Further experiments clarified that 2A and 2B are required for LTD, but that spine shrinkage solely depends on the 2B subunit.
The next task was to resolve how the 2B subunit signals spine shrinkage. A promising possibility was a part of the subunit called the “carboxyterminal domain,” or CTD. So, in a new experiment Bear and Barnes took advantage of a mouse that had been genetically engineered by researchers at the University of Edinburgh so that the 2A and 2B CTDs could be swapped with one another. A telling result was that when the 2B subunit lacked its proper CTD, the effect on spine structure disappeared. The result affirmed that the 2B subunit signals spine shrinkage via its CTD.
Another consequence of replacing the CTD of the 2B subunit was an increase in bulk protein synthesis that resembled findings in fragile X. Conversely, augmenting the non-ionic signaling through the 2B subunit suppressed bulk protein synthesis, reminiscent of Tsc.
Treating fragile X
Putting the pieces together, the findings indicated that augmenting signaling through the 2B subunit might, like introducing the mutation causing Tsc, rescue aspects of fragile X.
Indeed, when the scientists swapped in the CTD of the NMDA receptor’s 2B subunit in fragile X model mice, they found correction not only of the excessive bulk protein synthesis, but also of the altered synaptic plasticity and increased electrical excitability that are hallmarks of the disease. To see if a treatment that targets NMDA receptors might be effective in fragile X, they tried an experimental drug called Glyx-13. This drug binds to the 2B subunit of NMDA receptors to augment signaling. The researchers found that this treatment also normalized protein synthesis and reduced sound-induced seizures in the fragile X mice.
The team now hypothesizes, based on another prior study in the lab, that the beneficial effect to fragile X mice of the 2B subunit’s CTD signaling is that it shifts the balance of protein synthesis away from an all-too-efficient translation of short messenger RNAs (which leads to excessive bulk protein synthesis) toward a lower-efficiency translation of longer messenger RNAs.
Bear says he does not know what the prospects are for Glyx-13 as a clinical drug, but he noted that there are some drugs in clinical development that specifically target the 2B subunit of NMDA receptors.
In addition to Bear and Barnes, the study’s other authors are Aurore Thomazeau, Peter Finnie, Max Heinreich, Arnold Heynen, Noboru Komiyama, Seth Grant, Frank Menniti, and Emily Osterweil.
The FRAXA Foundation, The Picower Institute for Learning and Memory, The Freedom Together Foundation, and the National Institutes of Health funded the study.
Developing materials for stellar performance in fusion power plants
When Zoe Fisher was in fourth grade, her art teacher asked her to draw her vision of a dream job on paper. At the time, those goals changed like the flavor of the week in an ice cream shop — “zookeeper” featured prominently for a while — but Zoe immediately knew what she wanted to put down: a mad scientist.
When Fisher stumbled upon the drawing in her parents’ Chicago home recently, it felt serendipitous because, by all measures, she has realized that childhood dream. The second-year doctoral student at MIT's Department of Nuclear Science and Engineering (NSE) is studying materials for fusion power plants at the Plasma Science and Fusion Center (PSFC) under the advisement of Michael Short, associate professor at NSE. Dennis Whyte, Hitachi America Professor of Engineering at NSE, serves as co-advisor.
On track to an MIT education
Growing up in Chicago, Fisher had heard her parents remarking on her reasoning abilities. When she was barely a preschooler she argued that she couldn’t have been found in a purple speckled egg, as her parents claimed they had done.
Fisher didn’t put together just how much she had gravitated toward science until a high school physics teacher encouraged her to apply to MIT. Passionate about both the arts and sciences, she initially worried that pursuing science would be very rigid, without room for creativity. But she knows now that exploring solutions to problems requires plenty of creative thinking.
It was a visit to MIT through the Weekend Immersion in Science and Engineering (WISE) that truly opened her eyes to the potential of an MIT education. “It just seemed like the undergraduate experience here is where you can be very unapologetically yourself. There’s no fronting something you don’t want to be like. There’s so much authenticity compared to most other colleges I looked at,” Fisher says. Once admitted, Campus Preview Weekend confirmed that she belonged. “We got to be silly and weird — a version of the Mafia game was a hit — and I was like, ‘These are my people,’” Fisher laughs.
Pursuing fusion at NSE
Before she officially started as a first-year in 2018, Fisher enrolled in the Freshman Pre-Orientation Program (FPOP), which takes place the week before orientation. Each FPOP zooms into one field. “I’d applied to the nuclear one simply because it sounded cool and I didn’t know anything about it,” Fisher says. She was intrigued right away. “They really got me with that ‘star in a bottle’ line,” she laughs. (The quest for commercial fusion is to create the energy equivalent of a star in a bottle.) Excited by a talk by Zachary Hartwig, Robert N. Noyce Career Development Professor at NSE, Fisher asked if she could work on fusion as an undergraduate as part of an Undergraduate Research Opportunities Program (UROP) project. She started with modeling solders for power plants and was hooked. When Fisher requested more experimental work, Hartwig put her in touch with Research Scientist David Fischer at the PSFC. Fisher eventually moved on to explore superconductors, work that morphed into research for her master’s thesis.
For her doctoral research, Fisher is extending her master’s work to explore defects in ceramics, specifically in alumina (aluminum oxide). Sapphire coatings are the single-crystal equivalent of alumina, an insulator being explored for use in fusion power plants. “I eventually want to figure out what types of charge defects form in ceramics during radiation damage so we can ultimately engineer radiation-resistant sapphire,” Fisher says.
When you introduce a material in a fusion power plant, stray high-energy neutrons born from the plasma can collide and fundamentally reorder the lattice, which is likely to change a range of thermal, electrical, and structural properties. “Think of a scaffolding outside a building, with each one of those joints as a different atom that holds your material in place. If you go in and you pull a joint out, there’s a chance that you pulled out a joint that wasn’t structurally sound, in which case everything would be fine. But there’s also a chance that you pull a joint out and everything alters. And [such unpredictability] is a problem,” Fisher says. “We need to be able to account for exactly how these neutrons are going to alter the lattice property,” Fisher says, and it’s one of the topics her research explores.
The studies, in turn, can function as a jumping-off point for irradiating superconductors. The goals are two-fold: “I want to figure out how I can make an industry-usable ceramic you can use to insulate the inside of a fusion power plant, and then also figure out if I can take this information that I’m getting with ceramics and make it superconductor-relevant,” Fisher says. “Superconductors are the electromagnets we will use to contain the plasma inside fusion power plants. However, they prove pretty difficult to study. Since they are also ceramic, you can draw a lot of parallels between alumina and yttrium barium copper oxide (YBCO), the specific superconductor we use,” she adds. Fisher is also excited about the many experiments she performs using a particle accelerator, one of which involves measuring exactly how surface thermal properties change during radiation.
Sailing new paths
It’s not just her research that Fisher loves. As an undergrad, and during her master’s, she was on the varsity sailing team. “I worked my way into sailing with literal Olympians, I did not see that coming,” she says. Fisher participates in Chicago’s Race to Mackinac and the Melges 15 Series every chance she gets. Of all the types of boats she has sailed, she prefers dinghy sailing the most. “It’s more physical, you have to throw yourself around a lot and there’s this immediate cause and effect, which I like,” Fisher says. She also teaches sailing lessons in the summer at MIT’s Sailing Pavilion — you can find her on a small motorboat, issuing orders through a speaker.
Teaching has figured prominently throughout Fisher’s time at MIT. Through MISTI, Fisher taught high school classes in Germany and, in her senior year, a radiation and materials class in Armenia. She was delighted by the food and culture in Armenia and by how excited people were to learn new ideas. Her love of teaching continues, as she has reached out to high schools in the Boston area. “I like talking to groups and getting them excited about fusion, or even maybe just the concept of attending graduate school,” Fisher says, adding that teaching the ropes of an experiment one-on-one is “one of the most rewarding things.”
She also learned the value of resilience and quick thinking on various other MISTI trips. Despite her love of travel, Fisher has had a few harrowing experiences with tough situations and plans falling through at the last minute. In those moments, she tells herself, “Well, the only thing that you’re gonna do is you’re gonna keep doing what you wanted to do.”
That eyes-on-the-prize focus has stood Fisher in good stead, and continues to serve her well in her research today.
Letterlocking: A new look at a centuries-old practice
For as long as people have been communicating through writing, they have found ways to keep their messages private. Before the invention of the gummed envelope in 1830, securing correspondence involved letterlocking, an ingenious process of folding a flat sheet of paper to become its own envelope, often using a combination of folds, tucks, slits, or adhesives such as sealing wax. Letter writers from Erasmus to Catherine de’ Medici to Emily Dickinson employed these techniques, which Jana Dambrogio, the MIT Libraries’ Thomas F. Peterson (1957) Conservator, has named “letterlocking.”
“The study of letterlocking very consciously bridges humanities and sciences,” says Dambrogio, who first became interested in the practice as a fellow in the conservation studio of the Vatican Apostolic Archives, where she discovered examples from the 15th and 16th centuries. “It draws on the perspectives of not only conservators and historians, but also engineers, imaging experts, and scientists.”
Now the rich history of this centuries-old document security technology is the subject of a new book, “Letterlocking: The Hidden History of the Letter,” published by the MIT Press and co-authored with Daniel Starza Smith, a lecturer in early modern English literature at King’s College London. Dambrogio and Smith have pioneered the field of letterlocking research over the last 10 years, working with an international and interdisciplinary collection of experts, the Unlocking History Research Group.
With more than 300 images and diagrams, “Letterlocking” explores the practice’s history through real examples from all over the world. It includes a dictionary of 60 technical terms and concepts, systems the authors developed while studying more than 250,000 historic letters. The book aims to be a springboard for new discoveries, whether providing a new lens on history or spurring technological advancements.
In working with the Brienne Collection — a 17th-century postal trunk full of undelivered letters — the Unlocking History Research Group sought to study intact examples of locked letters without destroying them in the process. This stimulated advances in conservation, radiology, and computational algorithms. In 2020, the team collaborated with researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Amanda Ghassaei SM ’17, and Holly Jackson ’22, to develop new algorithms that could virtually read an unopened letter, publishing the results in Nature Communications in 2021.
“Letterlocking” also offers a comprehensive guide to making one’s own locked letters. “The best introduction to letterlocking is to make some models,” says Dambrogio. “Feel the shape and the weight; see how easy it would be to conceal or hard to open without being noticed. We’re inviting people to explore and expand this new field of study through ‘mind and hand.’”
Designing better ways to deliver drugs
When Louis DeRidder was 12 years old, he had a medical emergency that nearly cost him his life. The terrifying experience gave him a close-up look at medical care and made him eager to learn more.
“You can’t always pinpoint exactly what gets you interested in something, but that was a transformative moment,” says DeRidder.
In high school, he grabbed the chance to participate in a medicine-focused program, spending about half of his days during his senior year in high school learning about medical science and shadowing doctors.
DeRidder was hooked. He became fascinated by the technologies that make treatments possible and was particularly interested in how drugs are delivered to the brain, a curiosity that sparked a lifelong passion.
“Here I was, a 17-year-old in high school, and a decade later, that problem still fascinates me,” he says. “That’s what eventually got me into the drug delivery field.”
DeRidder’s interests led him to transfer halfway through his undergraduate studies to Johns Hopkins University, where he carried out research he had outlined in his Goldwater Scholarship application. The research focused on the development of a nanoparticle-drug conjugate to deliver a drug to brain cells in order to transform them from a pro-inflammatory to an anti-inflammatory phenotype. Such a technology could be valuable in the treatment of neurodegenerative diseases, including Alzheimer’s and Parkinson’s.
In 2019, DeRidder entered the joint Harvard-MIT Health Sciences and Technology program, where he has embarked on a somewhat different type of drug delivery project — developing a device that measures the concentration of a chemotherapy drug in the blood while it is being administered and adjusts the infusion rate so the concentration is optimal for the patient. The system is known as CLAUDIA, or Closed-Loop AUtomated Drug Infusion RegulAtor, and can allow for the personalization of drug dosing for a variety of different drugs.
The project stemmed from discussions with his faculty advisors — Robert Langer, the David H. Koch Institute Professor, and Giovanni Traverso, the Karl Van Tassel Career Development Professor and a gastroenterologist at Brigham and Women’s Hospital. They explained to him that chemotherapy dosing is based on a formula developed in 1916 that estimates a patient’s body surface area. The formula doesn’t consider important influences such as differences in body composition and metabolism, or circadian fluctuations that can affect how a drug interacts with a patient.
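The 1916 estimate referenced here is generally identified as the Du Bois body-surface-area formula. As a point of reference only, and with hypothetical patient numbers, the conventional calculation looks something like this:

```python
def du_bois_bsa(weight_kg: float, height_cm: float) -> float:
    """Du Bois & Du Bois (1916) body-surface-area estimate, in square meters."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def bsa_based_dose(dose_mg_per_m2: float, weight_kg: float, height_cm: float) -> float:
    """Conventional chemotherapy dosing: a fixed mg/m^2 value scaled by estimated BSA."""
    return dose_mg_per_m2 * du_bois_bsa(weight_kg, height_cm)

# A hypothetical 70 kg, 175 cm patient prescribed 75 mg/m^2 receives roughly 139 mg.
# The formula says nothing about body composition, metabolism, or circadian state,
# which is the gap DeRidder's project aims to close.
print(f"{bsa_based_dose(75.0, weight_kg=70.0, height_cm=175.0):.0f} mg")
```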
“Once my advisors presented the reality of how chemotherapies are dosed,” DeRidder says, “I thought, ‘This is insane. How is this the clinical reality?’”
He and his advisors agreed this was a great project for his PhD.
“After they gave me the problem statement, we began to brainstorm ways that we could develop a medical device to improve the lives of patients,” DeRidder says, adding, “I love starting with a blank piece of paper and then brainstorming to work out the best solution.”
Almost from the start, DeRidder’s research process involved MATLAB and Simulink, developed by the mathematical computing software company MathWorks.
“MathWorks and Simulink are key to what we do,” DeRidder says. “They enable us to model the drug pharmacokinetics — how the body distributes and metabolizes the drug. We also model the components of our system with their software. That was especially critical for us in the very early days, because it let us know whether it was even possible to control the concentration of the drug. And since then, we’ve continuously improved the control algorithm, using these simulations. You simulate hundreds of different experiments before performing any experiments in the lab.”
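For a rough sense of what such a simulation involves, here is a minimal closed-loop sketch in Python rather than the team’s MATLAB and Simulink models, and it is not CLAUDIA’s actual control algorithm: a one-compartment pharmacokinetic model with a proportional controller that nudges the infusion rate toward a target plasma concentration. All parameter values are hypothetical.

```python
# One-compartment pharmacokinetics with proportional feedback on the infusion rate.
V = 40.0          # volume of distribution, liters (hypothetical)
k_el = 0.3        # elimination rate constant, 1/hour (hypothetical)
C_target = 2.0    # target plasma concentration, mg/L
Kp = 15.0         # controller gain, (mg/h) of infusion per (mg/L) of error
u_base = V * k_el * C_target  # feedforward rate that holds the target at steady state

dt = 0.01         # time step, hours
C = 0.0           # plasma concentration, mg/L
for step in range(int(12 / dt)):              # simulate 12 hours
    error = C_target - C
    u = max(0.0, u_base + Kp * error)         # infusion rate, mg/h (never negative)
    C += (u / V - k_el * C) * dt              # dC/dt = input/volume - elimination
    if step % int(1 / dt) == 0:
        print(f"t = {step * dt:4.1f} h   C = {C:.2f} mg/L   infusion = {u:.1f} mg/h")
```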
With his innovative use of the MATLAB and Simulink tools, DeRidder was awarded MathWorks fellowships both last year and this year. He has also received a National Science Foundation Graduate Research Fellowship.
“The fellowships have been critical to our development of the CLAUDIA drug-delivery system,” DeRidder says, adding that he has “had the pleasure of working with a great team of students and researchers in the lab.”
He says he would like to move CLAUDIA toward clinical use, where he thinks it could have significant impact. “Whatever I can do to help push it toward the clinic, including potentially helping to start a company to help commercialize the system, I’m definitely interested in doing it.”
In addition to developing CLAUDIA, DeRidder is working on developing new nanoparticles to deliver therapeutic nucleic acids. The project involves synthesizing new nucleic acid molecules, as well as developing the new polymeric and lipid nanoparticles to deliver the nucleic acids to targeted tissue and cells.
DeRidder says he likes working on technologies at different scales, from medical devices to molecules — all with the potential to improve the practice of medicine.
Meanwhile, he finds time in his busy schedule to do community service. For the past three years, he has spent time helping the homeless on Boston streets.
“It’s easy to lose track of the concrete, simple ways that we can serve our communities when we’re doing research,” DeRidder says, “which is why I have often sought out ways to serve people I come across every day, whether it is a student I mentor in lab, serving the homeless, or helping out the stranger you meet in the store who is having a bad day.”
Ultimately, DeRidder says, he’ll head back to work that also recalls his early exposure to the medical field in high school, when he interacted with many people with different types of dementia and other neurological diseases at a local nursing home.
“My long-term plan includes working on developing devices and molecular therapies to treat neurological diseases, in addition to continuing to work on cancer,” he says. “Really, I’d say that early experience had a big impact on me.”
Breakfast of champions: MIT hosts top young scientists
On Feb. 14, some of the nation’s most talented high school researchers convened in Boston for the annual American Junior Academy of Science (AJAS) conference, held alongside the American Association for the Advancement of Science (AAAS) annual meeting. As a highlight of the event, MIT once again hosted its renowned “Breakfast with Scientists,” offering students a unique opportunity to connect with leading scientific minds from around the world.
The AJAS conference began with an opening reception at the MIT Schwarzman College of Computing, where professor of biology and chemistry Catherine Drennan delivered the keynote address, welcoming 162 high school students from 21 states. Delegates were selected through state Academy of Science competitions, earning the chance to share their work and connect with peers and professionals in science, technology, engineering, and mathematics (STEM).
Over breakfast, students engaged with distinguished scientists, including MIT faculty, Nobel laureates, and industry leaders, discussing research, career paths, and the broader impact of scientific discovery.
Amy Keating, MIT biology department head, sat at a table with students ranging from high school juniors to college sophomores. The group engaged in an open discussion about life as a scientist at a leading institution like MIT. One student expressed concern about the competitive nature of innovative research environments, prompting Keating to reassure them, saying, “MIT has a collaborative philosophy rather than a competitive one.”
At another table, Nobel laureate and former MIT postdoc Gary Ruvkun shared a lighthearted moment with students, laughing at a TikTok video they had created to explain their science fair project. The interaction reflected the innate curiosity and excitement that drives discovery at all stages of a scientific career.
Donna Gerardi, executive director of the National Association of Academies of Science, highlighted the significance of the AJAS program. “These students are not just competing in science fairs; they are becoming part of a larger scientific community. The connections they make here can shape their careers and future contributions to science.”
Alongside the breakfast, AJAS delegates participated in a variety of enriching experiences, including laboratory tours, conference sessions, and hands-on research activities.
“I am so excited to be able to discuss my research with experts and get some guidance on the next steps in my academic trajectory,” said Andrew Wesel, a delegate from California.
A defining feature of the AJAS experience was its emphasis on mentorship and collaboration rather than competition. Delegates were officially inducted as lifetime Fellows of the American Junior Academy of Science at the conclusion of the conference, joining a distinguished network of scientists and researchers.
Sponsored by the MIT School of Science and School of Engineering, the breakfast underscored MIT’s longstanding commitment to fostering young scientific talent. Faculty and researchers took the opportunity to encourage students to pursue careers in STEM fields, providing insights into the pathways available to them.
“It was a joy to spend time with such passionate students,” says Kristala Prather, head of the Department of Chemical Engineering at MIT. “One of the brightest moments for me was sitting next to a young woman who will be joining MIT in the fall — I just have to convince her to study ChemE!”
Markus Buehler receives 2025 Washington Award
MIT Professor Markus J. Buehler has been named the recipient of the 2025 Washington Award, one of the nation’s oldest and most esteemed engineering honors.
The Washington Award is conferred on “an engineer(s) whose professional attainments have preeminently advanced the welfare of humankind,” recognizing those who have made a profound impact on society through engineering innovation. Past recipients of this award include influential figures such as Herbert Hoover, the award’s inaugural recipient in 1919, as well as Orville Wright, Henry Ford, Neil Armstrong, John Bardeen, and renowned MIT affiliates Vannevar Bush, Robert Langer, and software engineer Margaret Hamilton.
Buehler was selected for his “groundbreaking accomplishments in computational modeling and mechanics of biological materials, and his contributions to engineering education and leadership in academia.” Buehler has authored over 500 peer-reviewed publications, pioneering the study of the atomic-level properties and structures of biomaterials such as silk, elastin, and collagen, and using computational modeling to characterize, design, and create sustainable materials with features spanning from the nanoscale to the macroscale. Buehler was the first to explain how hydrogen bonds, molecular confinement, and hierarchical architectures govern the mechanics of biological materials, via the development of a theory that bridges molecular interactions with macroscale properties.
His innovative research includes the development of physics-aware artificial intelligence methods that integrate computational mechanics, bioinformatics, and generative AI to explore universal design principles of biological and bioinspired materials. His work has advanced the understanding of hierarchical structures in nature, revealing the mechanics by which complex biomaterials achieve remarkable strength, flexibility, and resilience through molecular interactions across scales.
Buehler's research has included the use of deep learning models to predict and generate new protein structures, self-assembling peptides, and sustainable biomimetic materials. His work on materiomusic — converting molecular structures into musical compositions — has provided new insights into the hidden patterns within biological systems.
Buehler is the Jerry McAfee (1940) Professor in Engineering in the departments of Civil and Environmental Engineering (CEE) and Mechanical Engineering. He served as the department head of CEE from 2013 to 2020, as well as in other leadership roles, including as president of the Society of Engineering Science.
A dedicated educator, Buehler has played a vital role in mentoring future engineers, leading K-12 STEM summer camps to inspire the next generation and serving as an instructor for MIT Professional Education summer courses.
His achievements have been recognized with numerous prestigious honors, including the Feynman Prize, the Drucker Medal, the Leonardo da Vinci Award, and the J.R. Rice Medal, and election to the National Academy of Engineering. His work continues to push the boundaries of computational science, materials engineering, and biomimetic design.
The Washington Award was presented during National Engineers Week in February, in a ceremony attended by members of prominent engineering societies, including the Western Society of Engineers; the American Institute of Mining, Metallurgical and Petroleum Engineers; the American Society of Civil Engineers; the American Society of Mechanical Engineers; the Institute of Electrical and Electronics Engineers; the National Society of Professional Engineers; and the American Nuclear Society. The event also celebrated nearly 100 pre-college students recognized for their achievements in regional STEM competitions, highlighting the next generation of engineering talent.
Seeing more in expansion microscopy
In biology, seeing can lead to understanding, and researchers in Professor Edward Boyden’s lab at the McGovern Institute for Brain Research are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy — a high-resolution imaging technique the group introduced in 2015 — so researchers everywhere can see more when they look at cells and tissues under a light microscope.
“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT. “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.
With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in open-access form in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.
Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After the hydrogel has permeated a tissue sample, the gel is hydrated. It swells as it absorbs water, preserving the relative locations of molecules in the tissue as it gently pulls them away from one another. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.
Since first developing expansion microscopy, Boyden and his team have continued to enhance the method — increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.
Visualizing cell membranes
One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the Feb. 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.
Tay Shin SM ’20, PhD ’23, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”
The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.
After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.
Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.
Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.
Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.
One sample, many proteins
To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.
A key to that new method, reported Nov. 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoc Jinyoung Kang fine-tuned each step of this process, assuring tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.
After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.
Expansion microscopy lets biologists visualize some of cells’ tiniest features — but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.
To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized image-processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.
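A hypothetical sketch of the alignment step, not the team’s actual software: given landmark coordinates (for example, from labeled blood vessels and structural proteins) matched between a later imaging round and a reference round, a least-squares affine transform maps one round onto the other so the separately imaged protein channels can be overlaid.

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2D affine transform mapping src (N x 2) onto dst (N x 2)."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # N x 3 design matrix
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3 x 2 transform
    return params

def apply_affine(params: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply the fitted affine transform to an array of points."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ params

# Matched landmark coordinates (hypothetical): later round -> reference round.
round_pts = np.array([[10.0, 12.0], [85.0, 20.0], [40.0, 90.0], [70.0, 75.0]])
reference_pts = np.array([[12.5, 11.0], [88.0, 17.5], [41.0, 88.5], [72.0, 72.0]])

T = fit_affine(round_pts, reference_pts)
aligned = apply_affine(T, round_pts)
print("per-landmark residual (pixels):", np.linalg.norm(aligned - reference_pts, axis=1))
```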
The team used multiExR to look at amyloid plaques — the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high-throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease — but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.
Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the U.S. Army, Cancer Research U.K., the New York Stem Cell Foundation, the U.S. National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the U.S. National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.
Times Higher Education ranks MIT No. 1 in arts and humanities, business and economics, and social sciences
The 2025 Times Higher Education World University Ranking has ranked MIT first in three subject categories: Arts and Humanities, Business and Economics, and Social Sciences.
The Times Higher Education World University Ranking is an annual publication of university rankings by Times Higher Education, a leading British education magazine. The subject rankings are based on 18 rigorous performance indicators. Criteria include teaching, research environment, research volume and influence, industry, and international outlook.
Disciplines included in the 2025 top-ranked subjects are housed in the School of Humanities, Arts, and Social Sciences (SHASS), the School of Architecture and Planning (SA+P), and the MIT Sloan School of Management.
“The rankings are a testament to the extraordinary quality of the research and teaching that takes place in SHASS and across MIT,” says Agustín Rayo, Kenan Sahin Dean of SHASS and professor of philosophy. “There has never been a more important time to ensure that we train students who understand the social, economic, political, and human aspects of the great challenges of our time.”
The Arts and Humanities ranking evaluated 750 universities from 72 countries in the disciplines of languages, literature, and linguistics; history, philosophy, and theology; architecture; archaeology; and art, performing arts, and design. This marks the first time MIT has earned the top spot in this subject since Times Higher Education began publishing rankings in 2011.
The ranking for Business and Economics evaluated 990 institutions from 85 countries and territories across three core disciplines: business and management; accounting and finance; and economics and econometrics. This is the fourth consecutive year MIT has been ranked first in this subject.
The Social Sciences ranking evaluated 1,093 institutions from 100 countries and territories in the disciplines of political science and international relations; sociology; geography; communication and media studies; and anthropology. MIT claimed the top spot alone in this subject, after tying for first in 2024 with Stanford University.
In other subjects, MIT was also named among the top universities, ranking third in Computer Science, Engineering, and Life Sciences, and fourth in Physical Sciences. Overall, MIT ranked second in the Times Higher Education 2025 World University Ranking.
A personalized heart implant wins MIT Sloan health care prize
An MIT startup’s personalized heart implants, designed to help prevent strokes, won this year’s MIT Sloan Healthcare Innovation Prize (SHIP) on Thursday.
Spheric Bio’s implants grow inside the body once injected, to fit within the patient’s unique anatomy. This could improve stroke prevention because existing implants are one-size-fits-all devices that can fail to fully block the most at-risk regions, leading to leakages and other complications.
“Our mission is to transform stroke prevention by building personalized medical devices directly inside patients’ hearts,” said Connor Verheyen PhD ’23, a postdoc in the Harvard-MIT Program in Health Sciences and Technology (HST), who made the winning pitch.
Verheyen’s co-founders are MIT Associate Professor Ellen Roche and HST postdoc Markus Horvath PhD ’22.
Spheric Bio was one of seven teams that pitched their solution at the event, which was held in the MIT Media Lab and kicked off the MIT Sloan Healthcare and BioInnovations Conference.
Spheric took home the event’s $25,000 first-place prize. The second-place prize went to nurtur, another MIT alumnus-founded startup, which has developed an artificial intelligence-powered platform designed to detect and prevent postpartum depression. Last summer, nurtur participated in the delta v startup accelerator program organized by the Martin Trust Center for MIT Entrepreneurship.
The audience choice award was given to Merunova, which is using AI and MRI diagnostics to improve the diagnosis and treatment of spinal cord disorders. Merunova was co-founded by Dheera Ananthakrishnan, a former spine surgeon who completed an executive MBA from the MIT Sloan School of Management in 2023.
Personalized stroke prevention
Spheric Bio’s first implants aim to solve the problem of atrial fibrillation, a condition that causes areas of the heart to beat irregularly and rapidly, leading to a dramatic increase in stroke risk. The problem begins when blood pools and clots in the heart. Those clots then move to the brain and cause a stroke.
“This is a problem I’ve witnessed firsthand in my family,” says Verheyen. “It’s so common that millions of families around the world have had to experience a loved one go through a stroke as well.”
Patients with atrial fibrillation today can either go on blood thinners, in many cases for years or even for life, or undergo a procedure in which surgeons insert a device into the heart to close off an area known as the left atrial appendage, where about 90 percent of such clots originate.
The implants on the market today for that procedure are typically prefabricated metal devices that don’t account for the wide variations seen in patient heart anatomy. Verheyen says up to half of the devices fail to seal the appendage. They can also lead to complications and complex care pathways designed to manage those shortcomings.
“There’s a fundamental mismatch between the devices available and what human patients actually look like,” says Verheyen. “Humans are infinitely variable in shape and size, and these tissues in particular are really soft, complex, delicate tissues. It leaves you with a pretty profound incompatibility.”
Spheric Bio’s implants are designed to conform to a patient’s anatomy like water filling a glass. Made of biomaterials developed over years of research at MIT, they are delivered through a catheter and then expand and self-heal to custom fit the patient.
“This gives us complete closure of the appendage for every patient, every time,” said Verheyen, who has successfully tested the device in animals. “It also allows us to reduce device-related complications and simplifies deployment for operators.”
Verheyen conducted his PhD work on medical imaging and medical physics in Roche’s lab. Roche is also the associate head of the Department of Mechanical Engineering at MIT.
Innovations for impact
The 23rd annual pitch competition offered anyone interested in health care innovation a look at the promising new solutions being developed at universities. The event is open to all early-stage health care startups with at least one student or recent graduate co-founder.
The event was the result of a months-long process in which more than 100 applicants were whittled down over the course of three rounds by a group of 20 judges.
The final competition also kicked off the MIT Sloan Healthcare and BioInnovations Conference, which took place Feb. 27 and 28. This year’s conference was titled “From Innovation to Impact: The Changing Face of Healthcare,” and featured keynotes from health care industry veterans including Chris Boerner, the CEO of Bristol Myers Squibb, and James Davis, the CEO of Quest Diagnostics.
The competition’s keynote was delivered by Iterative Health CEO Jonathan Ng, who was a finalist in the competition in 2017. Ng expressed admiration for this year’s contestants.
“It’s inspiring to look around and see people who want to change the world,” said Ng, whose company is using cameras and AI to improve colorectal cancer screening. “There’s a lot of easier industries to work in, but MIT is such a good place to find your tribe: to find people who want to make the same sort of impact on the world as you.”