MIT Latest News

In kids, EEG monitoring of consciousness safely reduces anesthetic use
Newly published results of a randomized, controlled clinical trial in Japan among more than 170 children aged 1 to 6 who underwent surgery show that by using electroencephalogram (EEG) readings of brain waves to monitor unconsciousness, an anesthesiologist can significantly reduce the amount of anesthesia needed to safely induce and sustain each patient’s anesthetized state. On average, the young patients experienced significant improvements in several post-operative outcomes, including quicker recovery and reduced incidence of delirium.
“I think the main takeaway is that in kids, using the EEG, we can reduce the amount of anesthesia we give them and maintain the same level of unconsciousness,” says study co-author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT, an anesthesiologist at Massachusetts General Hospital, and a professor at Harvard Medical School. The study appeared April 21 in JAMA Pediatrics.
Yasuko Nagasaka, chair of anesthesiology at Tokyo Women’s Medical University and a former colleague of Brown’s in the United States, designed the study. She asked Brown to train and advise lead author Kiyoyuki Miyasaka of St. Luke’s International Hospital in Tokyo on how to use EEG to monitor unconsciousness and adjust anesthesia dosing in children. Miyasaka then served as the anesthesiologist for all patients in the trial. Attending anesthesiologists not involved in the study were always on hand to supervise.
Brown’s research in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT has shown that a person’s level of consciousness under any particular anesthetic drug is discernible from patterns of their brain waves. Each child’s brain waves were measured with EEG, but in the control group Miyasaka adhered to standard anesthesia dosing protocols while in the experimental group he used the EEG measures as a guide for dosing. The results show that when he used EEG, he was able to induce the desired level of unconsciousness with a concentration of 2 percent sevoflurane gas, rather than the standard 5 percent. Maintenance of unconsciousness, meanwhile, only turned out to require 0.9 percent concentration, rather than the standard 2.5 percent.
Meanwhile, a separate researcher, blinded to whether EEG or standard protocols were used, assessed the kids for “pediatric anesthesia emergence delirium” (PAED), in which children sometimes wake up from anesthesia with a set of side effects including lack of eye contact, inconsolability, unawareness of surroundings, restlessness, and non-purposeful movements. Children who received standard anesthesia dosing met the threshold for PAED in 35 percent of cases (30 out of 86), while children who received EEG-guided dosing met the threshold in 21 percent of cases (19 out of 91). The difference of 14 percentage points was statistically significant.
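The reported delirium rates follow directly from the case counts. As an illustrative check (not part of the study’s own statistical analysis), the figures can be verified in a few lines:

```python
# Verify the PAED incidence figures reported in the trial.
standard_cases, standard_n = 30, 86   # standard-dosing group
eeg_cases, eeg_n = 19, 91             # EEG-guided group

standard_rate = standard_cases / standard_n
eeg_rate = eeg_cases / eeg_n

print(f"Standard dosing: {standard_rate:.0%}")   # 35%
print(f"EEG-guided:      {eeg_rate:.0%}")        # 21%
print(f"Difference: {(standard_rate - eeg_rate) * 100:.0f} percentage points")  # 14
```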
Meanwhile, the authors reported that, on average, EEG-guided patients had breathing tubes removed 3.3 minutes earlier, emerged from anesthesia 21.4 minutes earlier, and were discharged from post-acute care 16.5 minutes earlier than patients who received anesthesia according to the standard protocol. All of these differences were statistically significant. Also, no child in the study ever became aware during surgery.
The authors noted that the quicker recovery among patients who received EEG-guided anesthesia was not only better medically, but also reduced health-care costs. Time in post-acute care in the United States costs about $46 a minute, so the average reduced time of 16.5 minutes would save about $750 per case. Sevoflurane is also a potent greenhouse gas, Brown notes, so reducing its use is better for the environment.
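The cost estimate is simple arithmetic on the figures cited above (the $46-per-minute rate is the article’s approximate U.S. figure; actual rates vary by facility):

```python
# Estimated post-acute care savings from the shorter recovery time.
cost_per_minute = 46.0   # approximate U.S. post-acute care cost per minute
minutes_saved = 16.5     # average earlier discharge with EEG-guided dosing

savings = cost_per_minute * minutes_saved
print(f"~${savings:.0f} saved per case")  # ~$759, i.e., roughly $750
```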
In the study, the authors also present comparisons of the EEG recordings from children in the control and experimental groups. There are notable differences in the “spectrograms” that charted the power of individual brain wave frequencies both as children were undergoing surgery and while they were approaching emergence from anesthesia, Brown says.
For instance, among children who received EEG-guided dosing, there are well-defined bands of high power at about 1-3 hertz (Hz) and 10-12 Hz. In children who received standard protocol dosing, the entire range of frequencies up to about 15 Hz is at high power. In another example, children who experienced PAED showed higher power at several frequencies up to 30 Hz than children who did not experience PAED.
The findings further validate the idea that monitoring brain waves during surgery can provide anesthesiologists with actionable guidance to improve patient care, Brown says. Training in reading EEGs and guiding dosing can readily be integrated in the continuing medical education practices of hospitals, he adds.
In addition to Miyasaka, Brown, and Nagasaka, Yasuyuki Suzuki is a study co-author.
Funding sources for the study include the MIT-Massachusetts General Brigham Brain Arousal State Control Innovation Center, the Freedom Together Foundation, and the Picower Institute.
Lighting up biology’s basement lab
For more than 30 years, Course 7 (Biology) students have descended to the expansive, windowless basement of Building 68 to learn practical skills that are the centerpiece of undergraduate biology education at the Institute. The lines of benches and cabinets of supplies that make up the underground MIT Biology Teaching Lab could easily feel dark and isolated.
In the corner of this room, however, sits Senior Technical Instructor Vanessa Cheung ’02, who manages to make the space seem sunny and communal.
“We joke that we could rig up a system of mirrors to get just enough daylight to bounce down from the stairwell,” Cheung says with a laugh. “It is a basement, but I am very lucky to have this teaching lab space. It is huge and has everything we need.”
This optimism and gratitude fostered by Cheung are critical, as MIT undergraduate students enrolled in classes 7.002 (Fundamentals of Experimental Molecular Biology) and 7.003 (Applied Molecular Biology Laboratory) spend four-hour blocks in the lab each week, learning the foundations of laboratory technique and theory for biological research from Cheung and her colleagues.
Running toward science education
Cheung’s love for biology can be traced back to her high school cross country and track coach, who also served as her second-year biology teacher. The sport and the fundamental biological processes she was learning about in the classroom were, in fact, closely intertwined.
“He told us about how things like ATP [adenosine triphosphate] and the energy cycle would affect our running,” she says. “Being able to see that connection really helped my interest in the subject.”
That inspiration carried her through a move from her hometown of Pittsburgh, Pennsylvania, to Cambridge, Massachusetts, to pursue an undergraduate degree at MIT, and through her thesis work to earn a PhD in genetics at Harvard Medical School. She didn’t leave running behind either: To this day, she can often be found on the Charles River Esplanade, training for her next marathon.
She discovered her love of teaching during her PhD program. She enjoyed guiding students so much that she spent an extra semester as a teaching assistant, outside of the one required for her program.
“I love research, but I also really love telling people about research,” Cheung says.
Cheung herself describes lab instruction as the “best of both worlds,” enabling her to pursue her love of teaching while spending every day at the bench, doing experiments. She emphasizes for students the importance of being able not just to do the hands-on technical lab work, but also to understand the theory behind it.
“The students can tend to get hung up on the physical doing of things — they are really concerned when their experiments don’t work,” she says. “We focus on teaching students how to think about being in a lab — how to design an experiment and how to analyze the data.”
Although her talent for teaching and passion for science led her to the role, Cheung doesn’t hesitate to identify the students as her favorite part of the job.
“It sounds cheesy, but they really do keep the job very exciting,” she says.
Using mind and hand in the lab
Cheung is the type of person who lights up when describing how much she “loves working with yeast.”
“I always tell the students that maybe no one cares about yeast except me and like three other people in the world, but it is a model organism that we can use to apply what we learn to humans,” Cheung explains.
Though mastering basic lab skills can make hands-on laboratory courses feel “a bit cookbook,” Cheung is able to get the students excited with her enthusiasm and clever curriculum design.
“The students like things where they can get their own unique results, and things where they have a little bit of freedom to design their own experiments,” she says. So, the lab curriculum incorporates opportunities for students to do things like identify their own unique yeast mutants and design their own questions to test in a chemical engineering module.
Part of what makes theory as critical as technique is that new tools and discoveries emerge frequently in biology, especially at MIT. In recent years, for example, CRISPR has displaced RNAi as a popular lab technique, and Cheung muses that CRISPR itself may be overshadowed within only a few more years. Keeping students learning at the cutting edge of biology is always on her mind.
“Vanessa is the heart, soul, and mind of the biology lab courses here at MIT, embodying ‘mens et manus’ [‘mind and hand’],” says technical lab instructor and Biology Teaching Lab Manager Anthony Fuccione.
Support for all students
Cheung’s ability to mentor and guide students earned her a School of Science Dean’s Education and Advising Award in 2012, but her focus isn’t solely on MIT undergraduate students.
In fact, according to Cheung, the earlier students can be exposed to science, the better. In addition to her regular duties, Cheung also designs curriculum and teaches in the LEAH Knox Scholars Program. The two-year program provides lab experience and mentorship for low-income Boston- and Cambridge-area high school students.
Paloma Sanchez-Jauregui, outreach programs coordinator who works with Cheung on the program, says Cheung has a standout “growth mindset” that students really appreciate.
“Vanessa teaches students that challenges — like unexpected PCR results — are part of the learning process,” Sanchez-Jauregui says. “Students feel comfortable approaching her for help troubleshooting experiments or exploring new topics.”
Cheung’s colleagues report that they admire not only her talents, but also her focus on supporting those around her. Technical Instructor and colleague Eric Chu says Cheung “offers a lot of help to me and others, including those outside of the department, but does not expect reciprocity.”
Professor of biology and co-director of the Department of Biology undergraduate program Adam Martin says he “rarely has to worry about what is going on in the teaching lab.” According to Martin, Cheung is “flexible, hard-working, dedicated, and resilient, all while being kind and supportive to our students. She is a joy to work with.”
Exploring new frontiers in mineral extraction
The ocean’s deep-sea bed is scattered with ancient rocks, each about the size of a closed fist, called “polymetallic nodules.” Elsewhere, along active and inactive hydrothermal vents and the deep ocean’s ridges, volcanic arcs, and tectonic plate boundaries, and on the flanks of seamounts, lie other types of mineral-rich deposits containing high-demand minerals.
The minerals found in the deep ocean are used to manufacture products such as solar cells and the lithium-ion batteries that power electric vehicles and cell phones. In some cases, the estimated resources of critical mineral deposits in parts of the abyssal ocean exceed global land-based reserves severalfold.
“Society wants electric-powered vehicles, solar cells for clean energy, but all of this requires resources,” says Thomas Peacock, professor of mechanical engineering at MIT, in a video discussing his research. “Land-based resources are getting depleted, or are more challenging to access. In parts of the ocean, there are much more of these resources than in land-based reserves. The question is: Can it be less impactful to mine some of these resources from the ocean, rather than from land?”
Deep-sea mining is a new frontier in mineral extraction, with potentially significant implications for industry and the global economy, and important environmental and societal considerations. Through research, scientists like Peacock study the impacts of deep-sea mining activity objectively and rigorously, and can bring evidence to bear on decision-making.
Mining activities, whether on land or at sea, can have significant impacts on the environment at local, regional, and global scales. As interest in deep-seabed mining is increasing, driven by the surging demand for critical minerals, scientific inquiries help illuminate the trade-offs.
Peacock has long studied the potential impacts of deep-sea mining in a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. A decade ago, his research group began studying deep-sea mining, seeing a critical need to develop monitoring and modeling capabilities for assessing the scale of impact.
Today, his MIT Environmental Dynamics Laboratory (ENDLab) is at the forefront of advancing understanding for emerging ocean utilization technologies. With research anchored in fundamental fluid dynamics, the team is developing cutting-edge monitoring programs, novel sensors, and modeling tools.
“We are studying the form of suspended sediment from deep sea mining operations, testing a new sensor for sediment and another new sensor for turbulence, studying the initial phases of the sediment plume development, and analyzing data from the 2021 and 2022 technology trials in the Pacific Ocean,” he explains.
In deep-sea nodule mining, vehicles collect nodules from the ocean floor and convey them back to a vessel above. After the critical materials are collected on the vessel, some leftover sediment may be returned to the deep-water column. The resulting sediment plumes, and their potential impacts, are a key focus of the team’s work.
A 2022 study conducted in the CCZ investigated the dynamics of sediment plumes near a deep-seabed polymetallic nodule mining vehicle. The experiments revealed that most of the released sediment-laden water, between 92 and 98 percent, stayed close to the seabed, spreading laterally. The results suggest that turbidity current dynamics set the fraction of sediment that remains suspended in the water, along with the scale of the subsequent ambient sediment plume. The implications of this process, which had previously been overlooked, are substantial for plume modeling and informative for environmental impact statements.
“New model breakthroughs can help us make increasingly trustworthy predictions,” he says. The team also contributed to a recent study, published in the journal Nature, which showed that sediment deposited away from a test mining site gets cleared away, most likely by ocean currents, and reported on any observed biological recovery.
Researchers observed a site four decades after a nodule test mining experiment. Although biological impacts in many groups of organisms were present, populations of several organisms, including sediment macrofauna, mobile deposit feeders, and even large-sized sessile fauna, had begun to reestablish despite persistent physical changes at the seafloor. The study was led by the National Oceanography Centre in the U.K.
“A great deal has been learned about the fluid mechanics of deep-sea mining, in particular when it comes to deep-sea mining sediment plumes,” says Peacock, adding that the scientific progress continues with more results on the way. The work is setting new standards for in-situ monitoring of suspended sediment properties, and for how to interpret field data from recent technical trials.
Response to infection highlights the nervous system’s surprising degrees of flexibility
Whether you are a person about town or a worm in a dish, life can throw all kinds of circumstances your way. What you need is a nervous system flexible enough to cope. In a new study, MIT neuroscientists show how even a simple animal can repurpose brain circuits and the chemical signals, or “neuromodulators,” in its brain to muster an adaptive response to an infection. The study therefore may provide a model for understanding how brains in more complex organisms, including ourselves, manage to use what they have to cope with shifting internal states.
“Neuromodulators play pivotal roles in coupling changes in animals’ internal states to their behavior,” the scientists write in their paper, recently published in Nature Communications. “How combinations of neuromodulators released from different neuronal sources control the diverse internal states that animals exhibit remains an open question.”
When C. elegans worms fed on infectious Pseudomonas bacteria, they ate less and became more lethargic. When the researchers looked across the nervous system to see how that behavior happened, they discovered that the worm had completely revamped the roles of several of its 302 neurons and some of the peptides they secrete across the brain to modulate behavior. Systems that responded to stress in one case or satiety in another became reconfigured to cope with the infection.
“This is a question of, how do you adapt to your environment with the highest level of flexibility given the set of neurons and neuromodulators you have,” says postdoc Sreeparna Pradhan, co-lead author of the new study in Nature Communications. “How do you make the maximum set of options available to you?”
The research to find out took place in the lab of senior author Steve Flavell, an associate professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences and an investigator of the Howard Hughes Medical Institute. Pradhan, who was supported by a fellowship from MIT’s K. Lisa Yang Brain-Body Center during the work, teamed up with former Flavell Lab graduate student Gurrein Madan to lead the research.
Pradhan says the team discovered several surprises in the course of the study, including that a neuropeptide called FLP-13 completely flipped its function in infected animals versus animals experiencing other forms of stress. Previous research had shown that when worms are stressed by heat, a neuron called ALA releases FLP-13 to cause the worms to go into quiescence, a sleep-like state. But when the worms in the new study ate Pseudomonas bacteria, a band of other neurons released FLP-13 to fight off quiescence, enabling the worms to survive longer. Meanwhile, ALA took on a completely different role during sickness: leading the charge to suppress feeding by emitting a different group of peptides.
A comprehensive approach
To understand how the worms responded to infection, the team tracked many features of the worms’ behavior for days and made genetic manipulations to probe the underlying mechanisms at play. They also recorded activity across the worms’ whole brains. This kind of comprehensive observation and experimentation is difficult to achieve in more complex animals, but C. elegans’ relative simplicity makes it a tractable testbed, Pradhan says. The team’s approach also is what allowed it to make so many unexpected findings.
For instance, Pradhan didn’t suspect that the ALA neuron would turn out to be the neuron that suppressed feeding, but when she observed their behavior for long enough, she started to realize the reduced feeding arose from the worms taking little breaks that they wouldn’t normally take. As she and Madan were manipulating more than a dozen genes they thought might be affecting behavior and feeding in the worm, she included another called ceh-17 that she had read about years ago that seemed to promote bouts of “microsleep” in the worms. When they knocked out ceh-17, they found that those worms didn’t reduce feeding when they got infected, unlike normal animals. It just so happens that ceh-17 is specifically needed for ALA to function properly, so that’s when the team realized ALA might be involved in the feeding-reduction behavior.
To know for sure, they then knocked out the various peptides that ALA releases and saw that when they knocked out three in particular, flp-24, nlp-8 and flp-7, infected worms didn’t exhibit reduced feeding upon infection. That clinched that ALA drives the reduced feeding behavior by emitting those three peptides.
Meanwhile, Pradhan and Madan’s screens also revealed that when infected worms were missing flp-13, they would go into a quiescence state much sooner than infected worms with the peptide available. Notably, the worms that fought off the quiescence state lived longer. They found that fighting off quiescence depended on the FLP-13 coming from four neurons (I5, I1, ASH and OLL), but not from ALA. Further experiments showed that FLP-13 acted on a widespread neuropeptide receptor called DMSR-1 to prevent quiescence.
Having a little nap
The last major surprise of the study was that the quiescence that Pseudomonas infection induces in worms is not the same as other forms of sleepiness that show up in other contexts, such as after satiety or heat stress. In those cases, worms don’t wake easily (with a little poke), but amid infection their quiescence was readily reversible. It seemed more like lethargy than sleep. Using the lab’s ability to image all neural activity during behavior, Pradhan and Madan discerned that a neuron called ASI was particularly active during the bouts of lethargy. That observation solidified further when they showed that ASI’s secretion of the peptide DAF-7 was required for the quiescence to emerge in infected animals.
In all, the study showed that the worms repurpose and reconfigure — sometimes to the point of completely reversing — the functions of neurons and peptides to mount an adaptive response to infection, versus a different problem like stress. The results therefore shed light on what has been a tricky question to resolve. How do brains use their repertoire of cells, circuits, and neuromodulators to deal with what life hands them? At least part of the answer seems to be by reshuffling existing components, rather than creating unique ones for each situation.
“The states of stress, satiety, and infection are not induced by unique sets of neuromodulators,” the authors wrote in their paper. “Instead, one larger set of neuromodulators may be deployed from different sources and in different combinations to specify these different internal states.”
In addition to Pradhan, Madan, and Flavell, the paper’s other authors are Di Kang, Eric Bueno, Adam Atanas, Talya Kramer, Ugur Dag, Jessica Lage, Matthew Gomes, Alicia Kun-Yang Lu, and Jungyeon Park.
Support for the research came from the Picower Institute, the Freedom Together Foundation, the K. Lisa Yang Brain-Body Center, and the Yang Tan Collective at MIT; the National Institutes of Health; the McKnight Foundation; the Alfred P. Sloan Foundation; and the Howard Hughes Medical Institute.
Will the vegetables of the future be fortified using tiny needles?
When farmers apply pesticides to their crops, 30 to 50 percent of the chemicals end up in the air or soil instead of on the plants. Now, a team of researchers from MIT and Singapore has developed a much more precise way to deliver substances to plants: tiny needles made of silk.
In a study published today in Nature Nanotechnology, the researchers developed a way to produce large amounts of these hollow silk microneedles. They used them to inject agrochemicals and nutrients into plants, and to monitor their health.
“There’s a big need to make agriculture more efficient,” says Benedetto Marelli, the study’s senior author and an associate professor of civil and environmental engineering at MIT. “Agrochemicals are important for supporting our food system, but they’re also expensive and bring environmental side effects, so there’s a big need to deliver them precisely.”
Yunteng Cao PhD ’22, currently a postdoc at Yale University, and Doyoon Kim, a former postdoc in the Marelli lab, led the study, which included a collaboration with the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART).
In demonstrations, the team used the technique to give plants iron to treat a disease known as chlorosis, and to add vitamin B12 to tomato plants to make them more nutritious. The researchers also showed the microneedles could be used to monitor the quality of fluids flowing into plants and to detect when the surrounding soil contained heavy metals.
Overall, the researchers believe the microneedles could serve as a new kind of plant interface for real-time health monitoring and biofortification.
“These microneedles could be a tool for plant scientists so they can understand more about plant health and how they grow,” Marelli says. “But they can also be used to add value to crops, making them more resilient and possibly even increasing yields.”
The inner workings of plants
Accessing the inner tissues of living plants requires scientists to get through the plants’ waxy skin without causing too much stress. In previous work, the researchers used silk-based microneedles to deliver agrochemicals to plants in lab environments and to detect pH changes in living plants. But these initial efforts involved small payloads, limiting their applications in commercial agriculture.
“Microneedles were originally developed for the delivery of vaccines or other drugs in humans,” Marelli explains. “Now we’ve adapted it so that the technology can work with plants, but initially we could not deliver sufficient doses of agrochemicals and nutrients to mitigate stressors or enhance crop nutritional values.”
Hollow structures could increase the amount of chemicals microneedles can deliver, but Marelli says creating those structures at scale has historically required clean rooms and expensive facilities like the ones found inside the MIT.nano building.
For this study, Cao and Kim created a new way to manufacture hollow silk microneedles by combining silk fibroin protein with a salty solution inside tiny, cone-shaped molds. As water evaporated from the solution, the silk solidified in the molds while the salt formed crystalline structures inside them. When the salt was removed, it left behind a hollow structure or tiny pores in each needle, depending on the salt concentration and the separation of the organic and inorganic phases.
“It’s a pretty simple fabrication process. It can be done outside of a clean room — you could do it in your kitchen if you wanted,” Kim says. “It doesn’t require any expensive machinery.”
The researchers then tested their microneedles’ ability to deliver iron to iron-deficient tomato plants; iron deficiency can cause a disease known as chlorosis. Chlorosis can decrease yields, but treating it by spraying crops is inefficient and can have environmental side effects. The researchers showed that their hollow microneedles could be used for the sustained delivery of iron without harming the plants.
The researchers also showed their microneedles could be used to fortify crops while they grow. Historically, crop fortification efforts have focused on minerals like zinc or iron, with vitamins only added after the food is harvested.
In each case, the researchers applied the microneedles to the stalks of plants by hand, but Marelli envisions equipping autonomous vehicles and other equipment already used in farms to automate and scale the process.
As part of the study, the researchers used microneedles to deliver vitamin B12, which is primarily found naturally in animal products, into the stalks of growing tomatoes, showing that vitamin B12 moved into the tomato fruits before harvest. The researchers propose their method could be used to fortify more plants with the vitamin.
Co-author Daisuke Urano, a plant scientist with DiSTAP, explains that “through a comprehensive assessment, we showed minimal adverse effects from microneedle injections in plants, with no observed short- or long-term negative impacts.”
“This new delivery mechanism opens up a lot of potential applications, so we wanted to do something nobody had done before,” Marelli explains.
Finally, the researchers explored the use of their microneedles to monitor the health of plants by studying tomatoes growing in hydroponic solutions contaminated with cadmium, a toxic metal commonly found in farms close to industrial and mining sites. They showed their microneedles absorbed the toxin within 15 minutes of being injected into the tomato stalks, offering a path to rapid detection.
Current advanced techniques for monitoring plant health, such as colorimetric and hyperspectral leaf analyses, can only detect problems after plant growth is already being stunted. Other methods, such as sap sampling, can be too time-consuming.
Microneedles, in contrast, could be used to more easily collect sap for ongoing chemical analysis. For instance, the researchers showed they could monitor cadmium levels in tomatoes over the course of 18 hours.
A new platform for farming
The researchers believe the microneedles could be used to complement existing agricultural practices like spraying. The researchers also note the technology has applications beyond agriculture, such as in biomedical engineering.
“This new polymeric microneedle fabrication technique may also benefit research in microneedle-mediated transdermal and intradermal drug delivery and health monitoring,” Cao says.
For now, though, Marelli believes the microneedles offer a path to more precise, sustainable agriculture practices.
“We want to maximize the growth of plants without negatively affecting the health of the farm or the biodiversity of surrounding ecosystems,” Marelli says. “There shouldn’t be a trade-off between the agriculture industry and the environment. They should work together.”
This work was supported, in part, by the U.S. Office of Naval Research, the U.S. National Science Foundation, SMART, the National Research Foundation of Singapore, and the Singapore Prime Minister’s Office.
At the Venice Biennale, design through flexible thinking
When the Venice Biennale’s 19th International Architecture Exhibition launches on May 10, its guiding theme will be applying nimble, flexible intelligence to a demanding world — an ongoing focus of its curator, MIT faculty member Carlo Ratti.
The Biennale is the world’s most renowned exhibition of its kind, an international event whose subject matter shifts over time, with a new curator providing new focus every two years. This year, the Biennale’s formal theme is “Intelligens,” the Latin word behind “intelligence,” in English, and “intelligenza,” in Italian — a word that evokes both the exhibition’s international scope and the many ways humans learn, adapt, and create.
“Our title is ‘Intelligens. Natural, artificial, collective,’” notes Ratti, who is a professor of the practice of urban technologies and planning in the MIT School of Architecture and Planning. “One key point is how we can go beyond what people normally think about intelligence, whether in people or AI. In the built environment we deal with many types of feedback and need to leverage all types of intelligence to collect and use it all.”
That applies to the subject of climate change, as adaptation is an ongoing focal point for the design community, whether facing the need to rework structures or to develop new, resilient designs for cities and regions.
“I would emphasize how eager architects are today to play a big role in addressing the big crises we face on the planet we live in,” Ratti says. “Architecture is the only discipline to bring everybody together, because it means rethinking the built environment, the places we all live.”
He adds: “If you think about the fires in Los Angeles, or the floods in Valencia or Bangladesh, or the drought in Sicily, these are cases where architecture and design need to apply feedback and use intelligence.”
Not just sharing design, but creating it
The Venice Biennale is the leading event of its kind globally and one of the earliest: It started with art exhibitions in 1895 and later added biennial shows focused on other facets of culture. Since 1980, the Biennale of Architecture had been held every two years, until the 2020 exhibition — curated by MIT’s Hashim Sarkis — was rescheduled to 2021 due to the Covid-19 pandemic. It now continues in odd-numbered years.
After its May 10 opening, this year’s exhibition runs until Nov. 23.
Ratti is a wide-ranging scholar, designer, and writer, and the long-running director of MIT’s Senseable City Lab, which has been on the leading edge of using data to understand cities as living systems.
Additionally, Ratti is a founding partner of the international design firm Carlo Ratti Associati. He graduated from the Politecnico di Torino and the École Nationale des Ponts et Chaussées in Paris, then earned his MPhil and PhD at Cambridge University. He has authored and co-authored hundreds of publications, including the books “Atlas of the Senseable City” (2023) and “The City of Tomorrow” (2016). Ratti’s work has been exhibited at the Venice Biennale, the Design Museum in Barcelona, the Science Museum in London, and the Museum of Modern Art in New York, among other venues.
In his role as curator of this year’s Biennale, Ratti adapted the traditional format to engage with some of the leading questions design faces. Ratti and the organizers created multiple forums to gather feedback about the exhibition’s possibilities, sifting through responses during the planning process.
Ratti has also publicly called this year’s Biennale a “living lab,” not just an exhibition, in accordance with the idea of learning from feedback and developing designs in response.
Back in 1895, Ratti notes, the Biennale was principally “a place to share existing knowledge, with artists and architects coming together every two years. Today, and for a few decades, you can find almost anything in architecture and art immediately online. I think Biennales can not only be places where you share existing knowledge, but places where you create new knowledge.”
At this moment, he emphasizes, that will often mean listening to nature as we grapple with climate solutions. It also implies recognizing that nature itself inevitably responds to inputs, too.
In this vein, Ratti says, “Remember what the great architect Carlo Scarpa once said: ‘Between a tree and a house, choose the tree.’ I see that as a powerful call to learn from nature — a vast lab of trial and error, guided by feedback loops. Too often in the 20th century, architects believed they had the solution and simply needed to scale it up. The results? Frequently disastrous. Especially now, when adaptability is everything, I believe in a different approach: experimentation, feedback, iteration. That’s the spirit I hope defines this year’s Biennale.”
An MIT touch
This year, MIT will again have a robust presence at the Biennale, even beyond Ratti’s presence as curator. In the first place, he emphasizes, there is a strong team organizing the Biennale. That includes MIT graduate student Claire Gorman, who has taken a year out of her studies to serve as principal assistant to the Biennale curator.
Many of the Biennale’s projects, Gorman observes, “align ecology, technology, and culture in stunning illustrations of the fact that intelligence emerges from the complex behaviors of many parts working together. Visitors to the exhibition will discover robots and artisans collaborating alongside algae, 3D printers, ancient building practices, and new materials. … One of the strengths of the exhibition is that it includes participants who approach similar topics from different points of view.”
Overall, Gorman adds, “Our hope is that visitors will come away from the exhibition with a sense of optimism about the capacity of design fields to unite many forms of expertise.”
Numerous other Institute faculty and researchers are represented as well. For instance, Daniela Rus, head of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), has helped design an installation about using robotics in the restoration of ancient structures. And famed MIT computer scientist Tim Berners-Lee, creator of the World Wide Web, is participating in a Biennale event on intelligence.
“In choosing ‘Intelligens’ as the Venice Biennale theme, Carlo Ratti recognizes that our moment requires a holistic understanding of how different forms of intelligence — from social and ecological to computational and spatial — converge to shape our built environment,” Rus says. “The Biennale offers a timely platform to explore how architecture can mediate between these intelligences, creating buildings and cities that think with and for us.”
Even as the Biennale runs, there is also a separate exhibit in Venice showcasing MIT work in architecture and design. Running from May 10 through Nov. 23, at the Palazzo Diedo, the show, “The Next Earth: Computation, Crisis, Cosmology,” features the work of 40 faculty members in MIT’s Department of Architecture, along with entries from the think tank Antikythera.
Meanwhile, for the Biennale itself, the main exhibition hall, the Arsenale, is open, but other event spaces are being renovated. That means the organizers are using additional spaces in the city of Venice this year to showcase cutting-edge design work and installations.
“We’re turning Venice into a living lab — taking the Biennale beyond its usual borders,” Ratti says. “But there’s a bigger picture: Venice may be the world’s most fragile city, caught between rising seas and the crush of mass tourism. That’s why it could become a true laboratory for the future. Venice today could be a glimpse of the world tomorrow.”
Merging design and computer science in creative ways
The speed with which new technologies hit the market is nothing compared to the speed with which talented researchers find creative ways to use them, train them, even turn them into things we can’t live without. One such researcher is MIT MAD Fellow Alexander Htet Kyaw, a graduate student pursuing dual master’s degrees in architectural studies in computation and in electrical engineering and computer science.
Kyaw takes technologies like artificial intelligence, augmented reality, and robotics, and combines them with gesture, speech, and object recognition to create human-AI workflows that have the potential to interact with our built environment, change how we shop, design complex structures, and make physical things.
One of his latest innovations is Curator AI, for which he and his MIT graduate student partners took first prize — $26,000 in OpenAI products and cash — at the MIT AI Conference’s AI Build: Generative Voice AI Solutions, a weeklong hackathon at MIT with final presentations held last fall in New York City. Working with Kyaw were Richa Gupta (architecture) and Bradley Bunch, Nidhish Sagar, and Michael Won — all from the MIT Department of Electrical Engineering and Computer Science (EECS).
Curator AI is designed to streamline online furniture shopping by providing context-aware product recommendations using AI and AR. The platform uses AR to take the dimensions of a room with locations of windows, doors, and existing furniture. Users can then speak to the software to describe what new furnishings they want, and the system will use a vision-language AI model to search for and display various options that match both the user’s prompts and the room’s visual characteristics.
“Shoppers can choose from the suggested options, visualize products in AR, and use natural language to ask for modifications to the search, making the furniture selection process more intuitive, efficient, and personalized,” Kyaw says. “The problem we’re trying to solve is that most people don’t know where to start when furnishing a room, so we developed Curator AI to provide smart, contextual recommendations based on what your room looks like.” Although Curator AI was developed for furniture shopping, it could be expanded for use in other markets.
Another example of Kyaw’s work is Estimate, a product that he and three other graduate students created during the MIT Sloan Product Tech Conference’s hackathon in March 2024. The focus of that competition was to help small businesses; Kyaw and team decided to base their work on a painting company in Cambridge that employs 10 people. Estimate uses AR and an object-recognition AI technology to take the exact measurements of a room and generate a detailed cost estimate for a renovation and/or paint job. It also leverages generative AI to display images of the room or rooms as they might look after painting or renovating, and generates an invoice once the project is complete.
The team won that hackathon and $5,000 in cash. Kyaw’s teammates were Guillaume Allegre, May Khine, and Anna Mathy, all of whom graduated from MIT in 2024 with master’s degrees in business analytics.
In April, Kyaw will give a TEDx talk at his alma mater, Cornell University, in which he’ll describe Curator AI, Estimate, and other projects that use AI, AR, and robotics to design and build things.
One of these projects is Unlog, for which Kyaw connected AR with gesture recognition to build software that takes input from the touch of a fingertip on the surface of a material, or even in the air, to map the dimensions of building components. That’s how Unlog — a towering art sculpture made from ash logs that stands on the Cornell campus — came about.
Unlog represents the possibility that structures can be built directly from a whole log, rather than having the log travel to a lumber mill to be turned into planks or two-by-fours, then shipped to a wholesaler or retailer. It’s a good representation of Kyaw’s desire to use building materials in a more sustainable way. A paper on this work, “Gestural Recognition for Feedback-Based Mixed Reality Fabrication: A Case Study of the UnLog Tower,” was published by Kyaw, Leslie Lok, Lawson Spencer, and Sasa Zivkovic in the Proceedings of the 5th International Conference on Computational Design and Robotic Fabrication, January 2024.
Another system Kyaw developed integrates physics simulation, gesture recognition, and AR to design active bending structures built with bamboo poles. Gesture recognition allows users to manipulate digital bamboo modules in AR, and the physics simulation is integrated to visualize how the bamboo bends and where to attach the bamboo poles in ways that create a stable structure. This work appeared in the Proceedings of the 41st Education and Research in Computer Aided Architectural Design in Europe, August 2023, as “Active Bending in Physics-Based Mixed Reality: The Design and Fabrication of a Reconfigurable Modular Bamboo System.”
Last year, Kyaw pitched a similar idea, using bamboo modules to create deployable structures, to MITdesignX, an MIT MAD program that selects promising startups and provides coaching and funding to launch them. Kyaw has since founded BendShelters to build the prefabricated, modular bamboo shelters and community spaces for refugees and displaced persons in Myanmar, his home country.
“Where I grew up, in Myanmar, I’ve seen a lot of day-to-day effects of climate change and extreme poverty,” Kyaw says. “There’s a huge refugee crisis in the country, and I want to think about how I can contribute back to my community.”
His work with BendShelters has been recognized by MIT Sandbox, the PKG Social Innovation Challenge, and the Amazon Robotics Prize for Social Good.
At MIT, Kyaw is collaborating with Professor Neil Gershenfeld, director of the Center for Bits and Atoms, and PhD student Miana Smith to use speech recognition, 3D generative AI, and robotic arms to create a workflow that can build objects in an accessible, on-demand, and sustainable way. Kyaw holds bachelor’s degrees in architecture and computer science from Cornell. Last year, he was awarded an SJA Fellowship from the Steve Jobs Archive, which provides funding for projects at the intersection of technology and the arts.
“I enjoy exploring different kinds of technologies to design and make things,” Kyaw says. “Being part of MAD has made me think about how all my work connects, and helped clarify my intentions. My research vision is to design and develop systems and products that enable natural interactions between humans, machines, and the world around us.”
New chip tests cooling solutions for stacked microelectronics
As demand grows for more powerful and efficient microelectronics systems, industry is turning to 3D integration — stacking chips on top of each other. This vertically layered architecture could allow high-performance processors, like those used for artificial intelligence, to be packaged closely with other highly specialized chips for communication or imaging. But technologists everywhere face a major challenge: how to prevent these stacks from overheating.
Now, MIT Lincoln Laboratory has developed a specialized chip to test and validate cooling solutions for packaged chip stacks. The chip dissipates extremely high power, mimicking high-performance logic chips, to generate heat through the silicon layer and in localized hot spots. Then, as cooling technologies are applied to the packaged stack, the chip measures temperature changes. When sandwiched in a stack, the chip will allow researchers to study how heat moves through stack layers and benchmark progress in keeping them cool.
"If you have just a single chip, you can cool it from above or below. But if you start stacking several chips on top of each other, the heat has nowhere to escape. No cooling methods exist today that allow industry to stack multiples of these really high-performance chips," says Chenson Chen, who led the development of the chip with Ryan Keech, both of the laboratory’s Advanced Materials and Microsystems Group.
The benchmarking chip is now being used at HRL Laboratories, a research and development company co-owned by Boeing and General Motors, as they develop cooling systems for 3D heterogeneous integrated (3DHI) systems. Heterogeneous integration refers to the stacking of silicon chips with non-silicon chips, such as III-V semiconductors used in radio-frequency (RF) systems.
"RF components can get very hot and run at very high powers — it adds an extra layer of complexity to 3D integration, which is why having this testing capability is so needed," Keech says.
The Defense Advanced Research Projects Agency (DARPA) funded the laboratory's development of the benchmarking chip to support the HRL program. All of this research stems from DARPA's Miniature Integrated Thermal Management Systems for 3D Heterogeneous Integration (Minitherms3D) program.
For the Department of Defense, 3DHI opens new opportunities for critical systems. For example, 3DHI could increase the range of radar and communication systems, enable the integration of advanced sensors on small platforms such as uncrewed aerial vehicles, or allow artificial intelligence data to be processed directly in fielded systems instead of remote data centers.
The test chip was developed through collaboration between circuit designers, electrical testing experts, and technicians in the laboratory's Microelectronics Laboratory.
The chip serves two functions: generating heat and sensing temperature. To generate heat, the team designed circuits that could operate at very high power densities, in the kilowatts-per-square-centimeter range, comparable to the projected power demands of high-performance chips today and into the future. They also replicated the layout of circuits in those chips, allowing the test chip to serve as a realistic stand-in.
"We adapted our existing silicon technology to essentially design chip-scale heaters," says Chen, who brings years of complex integration and chip design experience to the program. In the 2000s, he helped the laboratory pioneer the fabrication of two- and three-tier integrated circuits, leading early development of 3D integration.
The chip's heaters emulate both the background levels of heat within a stack and localized hot spots. Hot spots often occur in the most buried and inaccessible areas of a chip stack, making it difficult for 3D-chip developers to assess whether cooling schemes, such as microchannels delivering cold liquid, are reaching those spots and are effective enough.
That's where temperature-sensing elements come in. Distributed across the chip are what Chen likens to "tiny thermometers," which read out the temperature in multiple locations as coolants are applied.
These thermometers are actually diodes, devices that allow current to flow through a circuit once a voltage is applied. As a diode heats up, its current-voltage relationship shifts. "We're able to check a diode's performance and know that it's 200 degrees C, or 100 degrees C, or 50 degrees C, for example," Keech says. "We thought creatively about how devices could fail from overheating, and then used those same properties to design useful measurement tools."
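To illustrate the idea, here is a minimal sketch of diode thermometry. It assumes the well-known rule of thumb that a silicon diode's forward voltage at constant current drops by roughly 2 mV per degree Celsius; the reference values and the conversion function are illustrative, not the laboratory's calibration.

```python
# Hypothetical sketch: reading temperature from a silicon diode's
# forward voltage. At constant current, forward voltage falls by
# roughly 2 mV per deg C, so a calibrated diode acts as a thermometer.
V_REF = 0.65      # assumed forward voltage (V) at the reference temperature
T_REF = 25.0      # reference temperature (deg C)
TEMPCO = -0.002   # typical silicon diode tempco (V per deg C)

def diode_temperature(v_forward: float) -> float:
    """Convert a measured forward voltage to an estimated temperature."""
    return T_REF + (v_forward - V_REF) / TEMPCO

# A diode reading 0.30 V under these assumptions indicates about 200 deg C.
print(round(diode_temperature(0.30)))  # -> 200
```

In practice such sensors are calibrated per device, but the linear model captures why a simple voltage readout can map temperature across many points on the chip at once.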
Chen and Keech — along with other design, fabrication, and electrical test experts across the laboratory — are now collaborating with HRL Laboratories researchers as they couple the chip with novel cooling technologies, and integrate those technologies into a 3DHI stack that could boost RF signal power. "We need to cool the heat equivalent of more than 190 laptop CPUs [central processing units], but in the size of a single CPU package," Christopher Roper, co-principal investigator at HRL, said in a recent press release announcing their program.
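A back-of-the-envelope calculation shows why Roper's comparison implies the kilowatts-per-square-centimeter regime the test chip targets. The per-CPU wattage and package footprint below are assumed, illustrative figures, not values from the program.

```python
# Rough sketch (assumed numbers) of the heat flux implied by packing
# the heat of >190 laptop CPUs into a single CPU-sized package.
CPU_POWER_W = 15.0        # assumed typical laptop CPU power draw (W)
N_CPUS = 190
PACKAGE_AREA_CM2 = 4.0    # assumed footprint of one CPU package (cm^2)

total_w = CPU_POWER_W * N_CPUS
density_kw_per_cm2 = total_w / PACKAGE_AREA_CM2 / 1000
print(f"{total_w:.0f} W over {PACKAGE_AREA_CM2} cm^2 "
      f"= {density_kw_per_cm2:.2f} kW/cm^2")
```

Even with these conservative assumptions, the flux lands near a kilowatt per square centimeter, consistent with the power densities the benchmarking chip is built to emulate.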
According to Keech, the rapid timeline for delivering the chip was a challenge overcome by teamwork through all phases of the chip's design, fabrication, test, and 3D heterogeneous integration.
"Stacked architectures are considered the next frontier for microelectronics," he says. "We want to help the U.S. government get ahead in finding ways to integrate them effectively and enable the highest performance possible for these chips."
The laboratory team presented this work at the annual Government Microcircuit Applications and Critical Technology Conference (GOMACTech), held March 17-20.
A new computational framework illuminates the hidden ecology of diseased tissues
To understand what drives disease progression in tissues, scientists need more than just a snapshot of cells in isolation — they need to see where the cells are, how they interact, and how that spatial organization shifts across disease states. A new computational method called MESA (Multiomics and Ecological Spatial Analysis), detailed in a study published in Nature Genetics, is helping researchers study diseased tissues in more meaningful ways.
The work details the results of a collaboration between researchers from MIT, Stanford University, Weill Cornell Medicine, the Ragon Institute of MGH, MIT, and Harvard, and the Broad Institute of MIT and Harvard, and was led by the Stanford team.
MESA brings an ecology-inspired lens to tissue analysis. It offers a pipeline to interpret spatial omics data — the product of cutting-edge technology that captures molecular information along with the location of cells in tissue samples. These data provide a high-resolution map of tissue “neighborhoods,” and MESA helps make sense of the structure of that map.
“By integrating approaches from traditionally distinct disciplines, MESA enables researchers to better appreciate how tissues are locally organized and how that organization changes in different disease contexts, powering new diagnostics and the identification of new targets for preventions and cures,” says Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES), the J. W. Kieckhefer Professor in IMES and the Department of Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research at MIT, as well as an institute member of the Broad Institute and a member of the Ragon Institute.
“In ecology, people study biodiversity across regions — how animal species are distributed and interact,” explains Bokai Zhu, MIT postdoc and author on the study. “We realized we could apply those same ideas to cells in tissues. Instead of rabbits and snakes, we analyze T cells and B cells.”
By treating cell types like ecological species, MESA quantifies “biodiversity” within tissues and tracks how that diversity changes in disease. For example, in liver cancer samples, the method revealed zones where tumor cells consistently co-occurred with macrophages, suggesting these regions may drive unique disease outcomes.
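One standard way ecologists quantify such diversity is the Shannon index, and the same formula applies directly once cell types stand in for species. The sketch below is an illustration of that idea, not the MESA implementation; the cell-type labels are hypothetical.

```python
# Illustrative sketch (not the MESA codebase): scoring the "biodiversity"
# of a tissue neighborhood with the Shannon diversity index,
# H = -sum(p_i * ln(p_i)), over cell-type frequencies.
import math
from collections import Counter

def shannon_diversity(cell_types: list[str]) -> float:
    """Shannon index of a list of cell-type labels."""
    counts = Counter(cell_types)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# A neighborhood dominated by one cell type scores low...
print(shannon_diversity(["tumor"] * 9 + ["macrophage"]))
# ...while an even mix of cell types scores higher.
print(shannon_diversity(["tumor", "macrophage", "T cell", "B cell"]))
```

Computing such an index within sliding spatial windows, and comparing the resulting maps across healthy and diseased samples, is the kind of ecology-inspired analysis the method generalizes.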
“Our method reads tissues like ecosystems, uncovering cellular ‘hotspots’ that mark early signs of disease or treatment response,” Zhu adds. “This opens new possibilities for precision diagnostics and therapy design.”
MESA also offers another major advantage: It can computationally enrich tissue data without the need for more experiments. Using publicly available single-cell datasets, the tool transfers additional information — such as gene expression profiles — onto existing tissue samples. This approach deepens understanding of how spatial domains function, especially when comparing healthy and diseased tissue.
In tests across multiple datasets and tissue types, MESA uncovered spatial structures and key cell populations that were previously overlooked. It integrates different types of omics data, such as transcriptomics and proteomics, and builds a multilayered view of tissue architecture.
Currently available as a Python package, MESA is designed for academic and translational research. Although spatial omics is still too resource-intensive for routine in-hospital clinical use, the technology is gaining traction among pharmaceutical companies, particularly for drug trials where understanding tissue responses is critical.
“This is just the beginning,” says Zhu. “MESA opens the door to using ecological theory to unravel the spatial complexity of disease — and ultimately, to better predict and treat it.”
Gene circuits enable more precise control of gene therapy
Many diseases are caused by a missing or defective copy of a single gene. For decades, scientists have been working on gene therapy treatments that could cure such diseases by delivering a new copy of the missing genes to the affected cells.
Despite those efforts, very few gene therapy treatments have been approved by the FDA. One of the challenges to developing these treatments has been achieving control over how much the new gene is expressed in cells — too little and it won’t succeed, too much and it could cause serious side effects.
To help achieve more precise control of gene therapy, MIT engineers have tuned and applied a control circuit that can keep expression levels within a target range. In human cells, they showed that they could use this method to deliver genes that could help treat diseases including fragile X syndrome, a disorder that leads to intellectual disability and other developmental problems.
“In theory, gene supplementation can solve monogenic disorders that are very diverse but have a relatively straightforward gene therapy fix if you could control the therapy well enough,” says Katie Galloway, the W. M. Keck Career Development Professor in Biomedical Engineering and Chemical Engineering and the senior author of the new study.
MIT graduate student Kasey Love is the lead author of the paper, which appears today in Cell Systems. Other authors of the paper include MIT graduate students Christopher Johnstone, Emma Peterman, and Stephanie Gaglione, and Michael Birnbaum, an associate professor of biological engineering at MIT.
Delivering genes
While gene therapy holds promise for treating a variety of diseases, including hemophilia and sickle cell anemia, only a handful of treatments have been approved so far, for an inherited retinal disease and certain blood cancers.
Most gene therapy approaches use a virus to deliver a new copy of a gene, which is then integrated into the DNA of host cells. Some cells may take up many copies of the gene, while others don’t receive any.
“Simple overexpression of that payload can result in a really wide range of expression levels in the target genes as they take up different numbers of copies of those genes or just have different expression levels,” Love says. “If it's not expressing enough, that defeats the purpose of the therapy. But on the other hand, expressing at too high levels is also a problem, as that payload can be toxic.”
To try to overcome this, scientists have experimented with different types of control circuits that constrain expression of the therapeutic gene. In this study, the MIT team decided to use a type of circuit called an incoherent feedforward loop (IFFL).
In an IFFL circuit, activation of the target gene simultaneously activates production of a molecule that suppresses gene expression. One type of molecule that can be used to achieve that suppression is microRNA — a short RNA sequence that binds to messenger RNA, preventing it from being translated into protein.
In this study, the MIT team designed an IFFL circuit, called “ComMAND” (Compact microRNA-mediated attenuator of noise and dosage), so that a microRNA strand that represses mRNA translation is encoded within the therapeutic gene. The microRNA is located within a short segment called an intron, which is spliced out of the transcript after the gene is transcribed into mRNA. This means that whenever the gene is turned on, both the mRNA and the microRNA that represses it are produced in roughly equal amounts.
This approach allows the researchers to control the entire ComMAND circuit with just one promoter — the DNA site where gene transcription is turned on. By swapping in promoters of different strengths, the researchers can tailor how much of the therapeutic gene will be produced.
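A toy steady-state model shows why coupling the payload and its repressor to the same promoter buffers gene dosage. The rate constants below are illustrative stand-ins, not parameters from the study: each gene copy produces mRNA and microRNA in proportion, so extra copies bring extra repression and the output saturates rather than growing linearly.

```python
# Minimal sketch (illustrative parameters, not the paper's model) of
# dosage buffering by an incoherent feedforward loop: every gene copy
# makes both the mRNA and the microRNA that degrades it.
ALPHA = 10.0  # mRNA production rate per gene copy
BETA = 10.0   # microRNA production rate per gene copy
GAMMA = 1.0   # basal mRNA degradation rate
DELTA = 1.0   # microRNA degradation rate
K = 1.0       # strength of microRNA-mediated mRNA degradation

def steady_state_mrna(copies: float, with_iffl: bool = True) -> float:
    """Steady-state mRNA level for a given gene copy number."""
    mirna = BETA * copies / DELTA if with_iffl else 0.0
    return ALPHA * copies / (GAMMA + K * mirna)

for d in (1, 5, 20):
    print(d, steady_state_mrna(d, with_iffl=False), steady_state_mrna(d))
```

Without the loop, output scales linearly with copy number (10, 50, 200 in these units); with it, output stays nearly flat (roughly 0.91 to 0.995), which is the behavior that makes variable viral delivery tolerable.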
In addition to offering tighter control, the circuit’s compact design allows it to be carried on a single delivery vehicle, such as a lentivirus or adeno-associated virus, which could improve the manufacturability of these therapies. Both of those viruses are frequently used to deliver therapeutic cargoes.
“Other people have developed microRNA-based incoherent feedforward loops, but what Kasey has done is put it all on a single transcript, and she showed that this gives the best possible control when you have variable delivery to cells,” Galloway says.
Precise control
To demonstrate this system, the researchers designed ComMAND circuits that could deliver the gene FXN, which is mutated in Friedreich’s ataxia — a disorder that affects the heart and nervous system. They also delivered the gene Fmr1, whose dysfunction causes fragile X syndrome. In tests in human cells, they showed that they could tune gene expression levels to about eight times the levels normally seen in healthy cells.
Without ComMAND, gene expression was more than 50 times the normal level, which could pose safety risks. Further tests in animal models would be needed to determine the optimal levels, the researchers say.
The researchers also performed tests in rat neurons, mouse fibroblasts, and human T-cells. For those cells, they delivered a gene that encodes a fluorescent protein, so they could easily measure the gene expression levels. In those cells, too, the researchers found that they could control gene expression levels more precisely than without the circuit.
The researchers now plan to study whether they could use this approach to deliver genes at a level that would restore normal function and reverse signs of disease, either in cultured cells or animal models.
“There's probably some tuning that would need to be done to the expression levels, but we understand some of those design principles, so if we needed to tune the levels up or down, I think we'd know potentially how to go about that,” Love says.
Other diseases that this approach could be applied to include Rett syndrome, muscular dystrophy, and spinal muscular atrophy, the researchers say.
“The challenge with a lot of those is they're also rare diseases, so you don't have large patient populations,” Galloway says. “We're trying to build out these tools that are robust so people can figure out how to do the tuning, because the patient populations are so small and there isn't a lot of funding for solving some of these disorders.”
The research was funded by the National Institute of General Medical Sciences, the National Science Foundation, the Institute for Collaborative Biotechnologies, and the Air Force Research Laboratory.
Novel method detects microbial contamination in cell cultures
The chemistry of creativity
Senior Madison Wang, a double major in creative writing and chemistry, developed her passion for writing in middle school. Her interest in chemistry fit nicely alongside her commitment to producing engaging narratives.
Wang believes that world-building in stories supported by science and research can make for a more immersive reader experience.
“In science and in writing, you have to tell an effective story,” she says. “People respond well to stories.”
A native of Buffalo, New York, Wang applied early action for admission to MIT and learned quickly that the Institute was where she wanted to be. “It was a really good fit,” she says. “There was positive energy and vibes, and I had a great feeling overall.”
The power of science and good storytelling
“Chemistry is practical, complex, and interesting,” says Wang. “It’s about quantifying natural laws and understanding how reality works.”
Chemistry and writing both help us “see the world’s irregularity,” she continues. Together, they can erase the artificial and arbitrary line separating one from the other and work in concert to tell a more complete story about the world, the ways in which we participate in building it, and how people and objects exist in and move through it.
“Understanding magnetism, material properties, and believing in the power of magic in a good story … these are why we’re drawn to explore,” she says. “Chemistry describes why things are the way they are, and I use it for world-building in my creative writing.”
Wang lauds MIT’s creative writing program and cites a course she took with Comparative Media Studies/Writing Professor and Pulitzer Prize winner Junot Díaz as an affirmation of her choice. Seeing and understanding the world through the eyes of a scientist — its building blocks, the ways the pieces fit and function together — helps explain her passion for chemistry, especially inorganic and physical chemistry.
Wang cites the work of authors like Sam Kean and Knight Science Journalism Program Director Deborah Blum as part of her inspiration to study science. The books “The Disappearing Spoon” by Kean and “The Poisoner’s Handbook” by Blum “both present historical perspectives, opting for a story style to discuss the events and people involved,” she says. “They each put a lot of work into bridging the gap between what can sometimes be sterile science and an effective narrative that gets people to care about why the science matters.”
Genres like fantasy and science fiction are complementary, according to Wang. “Constructing an effective world means ensuring readers understand characters’ motivations — the ‘why’ — and ensuring it makes sense,” she says. “It’s also important to show how actions and their consequences influence and motivate characters.”
As she explores the world’s building blocks inside and outside the classroom, Wang works to navigate multiple genres in her writing, as with her studies in chemistry. “I like romance and horror, too,” she says. “I have gripes with committing to a single genre, so I just take whatever I like from each and put them in my stories.”
In chemistry, Wang favors an environment in which scientists can regularly test their ideas. “It’s important to ground chemistry in the real world to create connections for students,” she argues. Advancements in the field have occurred, she notes, because scientists could exit the realm of theory and apply ideas practically.
“Fritz Haber’s work on ammonia synthesis revolutionized approaches to food supply chains,” she says, referring to the German chemist and Nobel laureate. “Converting nitrogen and hydrogen gas to ammonia for fertilizer marked a dramatic shift in how farming could work.” This kind of work could only result from the consistent, controlled, practical application of the theories scientists consider in laboratory environments.
A future built on collaboration and cooperation
Watching the world change dramatically and seeing humanity struggle to grapple with the implications of phenomena like climate change, political unrest, and shifting alliances, Wang emphasizes the importance of deconstructing silos in academia and the workplace. Technology can be a tool for harm, she notes, so inviting more people inside previously segregated spaces helps everyone.
Criticism in both chemistry and writing, Wang believes, is a valuable tool for continuous improvement. Effective communication, explaining complex concepts, and partnering to develop long-term solutions are invaluable when working at the intersection of history, art, and science. In writing, Wang says, criticism can help writers identify areas for improvement and shape interesting ideas.
“We’ve seen the positive results that can occur with effective science writing, which requires rigor and fact-checking,” she says. “MIT’s cross-disciplinary approach to our studies, alongside feedback from teachers and peers, is a great set of tools to carry with us regardless of where we are.”
Wang explores connections between science and stories in her leisure time, too. “I’m a member of MIT’s Anime Club and I enjoy participating in MIT’s Sport Taekwondo Club,” she says. Competing in tae kwon do feeds her competitive drive and gets her out of her head. Her participation in DAAMIT (Digital Art and Animation at MIT) connects her with different groups of people and gives her ideas for telling better stories. “It’s fascinating exploring others’ minds,” she says.
Wang argues that there’s a false divide between science and the humanities and wants the work she does after graduation to bridge that divide. “Writing and learning about science can help,” she asserts. “Fields like conservation and history allow for continued exploration of that intersection.”
Ultimately, Wang believes it’s important to examine narratives carefully and to question notions of science’s inherent superiority over humanities fields. “The humanities and science have equal value,” she says.
Artificial intelligence enhances air mobility planning
Every day, hundreds of chat messages flow between pilots, crew, and controllers of the Air Mobility Command's 618th Air Operations Center (AOC). These controllers direct a fleet of roughly a thousand aircraft, juggling variables to determine which routes to fly, how much time fueling or loading supplies will take, and who can fly those missions. Their mission planning allows the U.S. Air Force to respond quickly to national security needs around the globe.
"It takes a lot of work to get a missile defense system across the world, for example, and this coordination used to be done through phone and email. Now, we are using chat, which creates opportunities for artificial intelligence to enhance our workflows," says Colonel Joseph Monaco, the director of strategy at the 618th AOC, which is the Department of Defense's largest air operations center.
The 618th AOC is sponsoring Lincoln Laboratory to develop these artificial intelligence tools, through a project called Conversational AI Technology for Transition (CAITT).
During a visit to Lincoln Laboratory from the 618th AOC's headquarters at Scott Air Force Base in Illinois, Colonel Monaco, Lieutenant Colonel Tim Heaton, and Captain Laura Quitiquit met with laboratory researchers to discuss CAITT. CAITT is a part of a broader effort to transition AI technology into a major Air Force modernization initiative, called the Next Generation Information Technology for Mobility Readiness Enhancement (NITMRE).
The type of AI being used in this project is natural language processing (NLP), which allows models to read and process human language. "We are utilizing NLP to map major trends in chat conversations, retrieve and cite specific information, and identify and contextualize critical decision points," says Courtland VanDam, a researcher in Lincoln Laboratory's AI Technology and Systems Group, which is leading the project. CAITT encompasses a suite of tools leveraging NLP.
One of the most mature tools, topic summarization, extracts trending topics from chat messages and formats those topics in a user-friendly display highlighting critical conversations and emerging issues. For example, a trending topic might read, "Crew members missing Congo visas, potential for delay." The entry shows the number of chats related to the topic and summarizes in bullet points the main points of conversations, linking back to specific chat exchanges.
"Our missions are very time-dependent, so we have to synthesize a lot of information quickly. This feature can really cue us as to where our efforts should be focused," says Monaco.
Another tool in production is semantic search. This tool improves upon the chat service's search engine, which currently returns empty results if chat messages do not contain every word in the query. Using the new tool, users can ask questions in a natural language format, such as why a specific aircraft is delayed, and receive intelligent results. "It incorporates a search model based on neural networks that can understand the user intent of the query and go beyond term matching," says VanDam.
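The article doesn't describe the underlying model, but the idea of ranking by meaning rather than exact term overlap can be sketched with a toy encoder. In this hedged sketch, character-trigram counts stand in for the learned embeddings a real semantic search model would produce, and the chat messages and query are invented for illustration:

```python
from collections import Counter
import math

def embed(text: str, n: int = 3) -> Counter:
    """Toy embedding: character trigram counts (a stand-in for a neural encoder)."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, messages: list[str], top_k: int = 2) -> list[str]:
    """Rank messages by embedding similarity instead of exact term matching."""
    q = embed(query)
    return sorted(messages, key=lambda m: cosine(q, embed(m)), reverse=True)[:top_k]

chats = [
    "Aircraft 7 delayed: crew rest requirements not met",
    "Fuel load complete for mission 12",
    "Why is the C-17 delayed at Ramstein?",
]
print(semantic_search("aircraft delay reasons", chats))
```

Unlike a keyword search, the query here need not contain every word of a matching message; a production system would use a trained neural encoder where this sketch uses trigrams.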
Other tools under development aim to automatically add users to chat conversations deemed relevant to their expertise, predict the amount of ground time needed to unload specific types of cargo from aircraft, and summarize key processes from regulatory documents as a guide to operators as they develop mission plans.
The CAITT project grew out of the DAF–MIT AI Accelerator, a three-pronged effort between MIT, Lincoln Laboratory, and the Department of the Air Force (DAF) to develop and transition AI algorithms and systems to advance both the DAF and society. "Through our involvement in the AI Accelerator via the NITMRE project, we realized we could do something innovative with all of the unstructured chat information in the 618th AOC," says Heaton.
As laboratory researchers advance their prototypes of CAITT tools, they have begun to transition them to the 402nd Software Engineering Group, a software provider for the Department of Defense. That group will implement the tools into the operational software environment in use by the 618th AOC.
Designing a new way to optimize complex coordinated systems
Coordinating complicated interactive systems, whether it’s the different modes of transportation in a city or the various components that must work together to make an effective and efficient robot, is an increasingly important subject for software designers to tackle. Now, researchers at MIT have developed an entirely new way of approaching these complex problems, using simple diagrams as a tool to reveal better approaches to software optimization in deep-learning models.
They say the new method makes addressing these complex tasks so simple that it can be reduced to a drawing that would fit on the back of a napkin.
The new approach is described in the journal Transactions of Machine Learning Research, in a paper by incoming doctoral student Vincent Abbott and Professor Gioele Zardini of MIT’s Laboratory for Information and Decision Systems (LIDS).
“We designed a new language to talk about these new systems,” Zardini says. This new diagram-based “language” is heavily based on something called category theory, he explains.
It all has to do with designing the underlying architecture of computer algorithms — the programs that will actually end up sensing and controlling the various different parts of the system that’s being optimized. “The components are different pieces of an algorithm, and they have to talk to each other, exchange information, but also account for energy usage, memory consumption, and so on.” Such optimizations are notoriously difficult because each change in one part of the system can in turn cause changes in other parts, which can further affect other parts, and so on.
The researchers decided to focus on the particular class of deep-learning algorithms, currently a hot topic of research. Deep learning is the basis of large artificial intelligence models, including large language models such as ChatGPT and image-generation models such as Midjourney. These models manipulate data through a “deep” series of matrix multiplications interspersed with other operations. The numbers within the matrices are parameters that are updated during long training runs, allowing complex patterns to be found. Models consist of billions of parameters, making computation expensive and improved resource usage and optimization invaluable.
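As a minimal illustration of that structure (not the researchers' code), a tiny two-layer network is just matrix multiplications with a nonlinearity in between, and the matrix entries are the trainable parameters:

```python
import random

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def relu(M):
    """Elementwise nonlinearity interspersed between the matrix multiplications."""
    return [[max(0.0, x) for x in row] for row in M]

random.seed(0)
def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# A "deep" model in miniature: chained matmuls whose entries are the parameters
# that training would update. Real models have billions of such entries.
W1, W2 = rand_matrix(4, 8), rand_matrix(8, 2)
x = [[0.5, -0.2, 0.1, 0.9]]          # one input example (1x4)
hidden = relu(matmul(x, W1))          # 1x8
output = matmul(hidden, W2)           # 1x2
print(output)
```

Each added layer multiplies the computation cost, which is why the resource optimizations the paper targets matter at scale.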
Diagrams can represent details of the parallelized operations that deep-learning models consist of, revealing the relationships between algorithms and the parallelized graphics processing unit (GPU) hardware they run on, supplied by companies such as NVIDIA. “I’m very excited about this,” says Zardini, because “we seem to have found a language that very nicely describes deep learning algorithms, explicitly representing all the important things, which is the operators you use,” for example the energy consumption, the memory allocation, and any other parameter that you’re trying to optimize for.
Much of the progress within deep learning has stemmed from resource efficiency optimizations. The latest DeepSeek model showed that a small team can compete with top models from OpenAI and other major labs by focusing on resource efficiency and the relationship between software and hardware. Typically, in deriving these optimizations, he says, “people need a lot of trial and error to discover new architectures.” For example, a widely used optimization program called FlashAttention took more than four years to develop, he says. But with the new framework they developed, “we can really approach this problem in a more formal way.” And all of this is represented visually in a precisely defined graphical language.
But the methods that have been used to find these improvements “are very limited,” he says. “I think this shows that there’s a major gap, in that we don’t have a formal systematic method of relating an algorithm to either its optimal execution, or even really understanding how many resources it will take to run.” But now, with the new diagram-based method they devised, such a system exists.
Category theory, which underlies this approach, is a way of mathematically describing the different components of a system and how they interact, in a generalized, abstract manner. It allows different perspectives to be connected: mathematical formulas, for example, can be related to the algorithms that implement them and the resources they use, and descriptions of systems can be expressed as robust “monoidal string diagrams.” These visualizations let you directly play around and experiment with how the different parts connect and interact. What the researchers developed, he says, amounts to “string diagrams on steroids,” incorporating many more graphical conventions and many more properties.
“Category theory can be thought of as the mathematics of abstraction and composition,” Abbott says. “Any compositional system can be described using category theory, and the relationship between compositional systems can then also be studied.” Algebraic rules that are typically associated with functions can also be represented as diagrams, he says. “Then, a lot of the visual tricks we can do with diagrams, we can relate to algebraic tricks and functions. So, it creates this correspondence between these different systems.”
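As a minimal illustration of that correspondence (an invented example, not from the paper): chaining two boxes in a string diagram corresponds to composing functions, and the diagrammatic fact that three chained boxes can be grouped either way corresponds to the algebraic rule that composition is associative:

```python
def compose(f, g):
    """Sequential composition f ∘ g: wire g's output into f's input."""
    return lambda x: f(g(x))

def inc(x):
    return x + 1

def double(x):
    return 2 * x

def square(x):
    return x * x

# Diagrammatically, "three boxes in a row" has one unambiguous picture;
# algebraically, that is the associativity law (f∘g)∘h == f∘(g∘h).
left = compose(compose(square, double), inc)
right = compose(square, compose(double, inc))
assert all(left(x) == right(x) for x in range(10))
print(left(3))  # square(double(inc(3))) = (2 * 4) ** 2 = 64
```

This is the sense in which visual manipulations of diagrams track algebraic manipulations of the functions they depict.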
As a result, he says, “this solves a very important problem, which is that we have these deep-learning algorithms, but they’re not clearly understood as mathematical models.” But by representing them as diagrams, it becomes possible to approach them formally and systematically, he says.
One thing this enables is a clear visual understanding of the way parallel real-world processes can be represented by parallel processing in multicore computer GPUs. “In this way,” Abbott says, “diagrams can both represent a function, and then reveal how to optimally execute it on a GPU.”
The “attention” algorithm is used by deep-learning algorithms that require general, contextual information, and is a key phase of the serialized blocks that constitute large language models such as ChatGPT. FlashAttention is an optimization that took years to develop, but resulted in a sixfold improvement in the speed of attention algorithms.
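The paper works with diagrams rather than code, but the underlying computation that FlashAttention accelerates can be sketched in a few lines of plain Python. This is a naive reference version of scaled dot-product attention, softmax(QKᵀ/√d)V, with invented toy inputs; the optimized kernel computes the same function while minimizing GPU memory traffic:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Naive scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Each output row is a weighted average of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Because the softmax weights sum to one, every output row is a convex combination of the value rows; the optimization problem FlashAttention solves is evaluating this without materializing the full score matrix in slow memory.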
Applying their method to the well-established FlashAttention algorithm, Zardini says that “here we are able to derive it, literally, on a napkin.” He then adds, “OK, maybe it’s a large napkin.” But to drive home the point about how much their new approach can simplify dealing with these complex algorithms, they titled their formal research paper on the work “FlashAttention on a Napkin.”
This method, Abbott says, “allows for optimization to be really quickly derived, in contrast to prevailing methods.” While they initially applied this approach to the already existing FlashAttention algorithm, thus verifying its effectiveness, “we hope to now use this language to automate the detection of improvements,” says Zardini, who in addition to being a principal investigator in LIDS, is the Rudge and Nancy Allen Assistant Professor of Civil and Environmental Engineering, and an affiliate faculty with the Institute for Data, Systems, and Society.
The plan is that ultimately, he says, they will develop the software to the point that “the researcher uploads their code, and with the new algorithm you automatically detect what can be improved, what can be optimized, and you return an optimized version of the algorithm to the user.”
In addition to automating algorithm optimization, Zardini notes that a robust analysis of how deep-learning algorithms relate to hardware resource usage allows for systematic co-design of hardware and software. This line of work integrates with Zardini’s focus on categorical co-design, which uses the tools of category theory to simultaneously optimize various components of engineered systems.
Abbott says that “this whole field of optimized deep learning models, I believe, is quite critically unaddressed, and that’s why these diagrams are so exciting. They open the doors to a systematic approach to this problem.”
“I’m very impressed by the quality of this research. ... The new approach to diagramming deep-learning algorithms used by this paper could be a very significant step,” says Jeremy Howard, founder and CEO of Answers.ai, who was not associated with this work. “This paper is the first time I’ve seen such a notation used to deeply analyze the performance of a deep-learning algorithm on real-world hardware. ... The next step will be to see whether real-world performance gains can be achieved.”
“This is a beautifully executed piece of theoretical research, which also aims for high accessibility to uninitiated readers — a trait rarely seen in papers of this kind,” says Petar Velickovic, a senior research scientist at Google DeepMind and a lecturer at Cambridge University, who was not associated with this work. These researchers, he says, “are clearly excellent communicators, and I cannot wait to see what they come up with next!”
The new diagram-based language, having been posted online, has already attracted great attention and interest from software developers. A reviewer from Abbott’s prior paper introducing the diagrams noted that “The proposed neural circuit diagrams look great from an artistic standpoint (as far as I am able to judge this).” “It’s technical research, but it’s also flashy!” Zardini says.
Martina Solano Soto wants to solve the mysteries of the universe, and MIT Open Learning is part of her plan
Martina Solano Soto is on a mission to pursue her passion for physics and, ultimately, to solve big problems. Since she was a kid, she has had a lot of questions: Why do animals exist? What are we doing here? Why don’t we know more about the Big Bang? And she has been determined to find answers.
“That’s why I found MIT OpenCourseWare,” says Solano, of Girona, Spain. “When I was 14, I started to browse and wanted to find information that was reliable, dynamic, and updated. I found MIT resources by chance, and it’s one of the biggest things that has happened to me.”
In addition to OpenCourseWare, which offers free, online, open educational resources from more than 2,500 courses that span the MIT undergraduate and graduate curriculum, Solano also took advantage of the MIT Open Learning Library. Part of MIT Open Learning, the library offers free courses and invites people to learn at their own pace while receiving immediate feedback through interactive content and exercises.
Solano, who is now 17, has studied quantum physics via OpenCourseWare — also part of MIT Open Learning — and she has taken Open Learning Library courses on electricity and magnetism, calculus, quantum computation, and kinematics. She even created her own syllabus, complete with homework, to ensure she stayed on track and kept her goals in mind. Those goals include studying math and physics as an undergraduate. She also hopes to study general relativity and quantum mechanics at the doctoral level. “I really want to unify them to find a theory of quantum gravity,” she says. “I want to spend all my life studying and learning.”
Solano was particularly motivated by Barton Zwiebach, professor of physics, whose courses Quantum Physics I and Quantum Physics II are available on MIT OpenCourseWare. She took advantage of all of the resources that were provided: video lectures, assignments, lecture notes, and exams.
“I was fascinated by the way he explained. I just understood everything, and it was amazing,” she says. “Then, I learned about his book, 'A First Course in String Theory,' and it was because of him that I learned about black holes and gravity. I’m extremely grateful.”
While Solano gives much credit to the variety and quality of Open Learning resources, she also stresses the importance of being organized. As a high school student, she has things other than string theory on her mind: her school, extracurriculars, friends, and family.
For anyone in a similar position, she recommends “figuring out what you’re most interested in and how you can take advantage of the flexibility of Open Learning resources. Is there a half-hour before bed to watch a video, or some time on the weekend to read lecture notes? If you figure out how to make it work for you, it is definitely worth the effort.”
“If you do that, you are going to grow academically and personally,” Solano says. “When you go to school, you will feel more confident.”
And Solano is not slowing down. She plans to continue using Open Learning resources, this time turning her attention to graduate-level courses, all in service of her curiosity and drive for knowledge.
“When I was younger, I read the book 'The God Equation,' by Michio Kaku, which explains quantum gravity theory. Something inside me awoke,” she recalls. “I really want to know what happens at the center of a black hole, and how we unify quantum mechanics, black holes, and general relativity. I decided that I want to invest my life in this.”
She is well on her way. Last summer, Solano applied for and received a scholarship to study particle physics at the Autonomous University of Barcelona. This summer, she’s applying for opportunities to study the cosmos. All of this, she says, is only possible thanks to what she has learned with MIT Open Learning resources.
“The applications ask you to explain what you like about physics, and thanks to MIT, I’m able to express that,” Solano says. “I’m able to go for these scholarships and really fight for what I dream.”
Luna: A moon on Earth
On March 6, MIT launched its first lunar landing mission since the Apollo era, sending three payloads — the AstroAnt, the RESOURCE 3D camera, and the HUMANS nanowafer — to the moon’s south polar region. The mission was based out of Luna, a mission control space designed by MIT Department of Architecture students and faculty in collaboration with the MIT Space Exploration Initiative, Inploration, and Simpson Gumpertz and Heger. It is installed in the MIT Media Lab ground-floor gallery and is open to the public as part of Artfinity, MIT’s Festival for the Arts. The installation allows visitors to observe payload operators at work and interact with the software used for the mission, thanks to virtual reality.
A central hub for mission operations, the control room is a structural and conceptual achievement, balancing technical challenges with a vision for an immersive experience, and the result of a multidisciplinary approach. “This will be our moon on Earth,” says Mateo Fernandez, a third-year MArch student and 2024 MAD Design Fellow, who designed and fabricated Luna in collaboration with Nebyu Haile, a PhD student in the Building Technology program in the Department of Architecture, and Simon Lesina Debiasi, a research assistant in the SMArchS Computation program and part of the Self-Assembly Lab. “The design was meant for people — for the researchers to be able to see what’s happening at all times, and for the spectators to have a 360 panoramic view of everything that’s going on,” explains Fernandez. “A key vision of the team was to create a control room that broke away from the traditional, closed-off model — one that instead invited the public to observe, ask questions, and engage with the mission,” adds Haile.
For this project, students were advised by Skylar Tibbits, founder and co-director of the Self-Assembly Lab, associate professor of design research, and the Morningside Academy for Design (MAD)’s assistant director for education; J. Roc Jih, associate professor of the practice in architectural design; John Ochsendorf, MIT Class of 1942 Professor with appointments in the departments of Architecture and Civil and Environmental Engineering, and founding director of MAD; and Brandon Clifford, associate professor of architecture. The team worked closely with Cody Paige, director of the Space Exploration Initiative at the Media Lab, and her collaborators, emphasizing that they “tried to keep things very minimal, very simple, because at the end of the day,” explains Fernandez, “we wanted to create a design that allows the researchers to shine and the mission to shine.”
“This project grew out of the Space Architecture class we co-taught with Cody Paige and astronaut and MIT AeroAstro [Department of Aeronautics and Astronautics] faculty member Jeff Hoffman” in the fall semester, explains Tibbits. “Mateo was part of that studio, and from there, Cody invited us to design the mission control project. We then brought Mateo onboard, Simon, Nebyu, and the rest of the project team.” According to Tibbits, “this project represents MIT’s mind-and-hand ethos. We had designers, architects, artists, computational experts, and engineers working together, reflecting the polymath vision — left brain, right brain, the creative and the technical coming together to make this possible.”
Luna was funded and informed by Tibbits and Jih’s Professor Amar G. Bose Research Grant Program. “J. Jih and I had been doing research for the Bose grant around basalt and mono-material construction,” says Tibbits, adding that they “had explored foamed glass materials similar to pumice or foamed basalt, which are also similar to lunar regolith.” “FOAMGLAS is typically used for insulation, but it has diverse applications, including direct ground contact and exterior walls, with strong acoustic and thermal properties,” says Jih. “We helped Mateo understand how the material is used in architecture today, and how it could be applied in this project, aligning with our work on new material palettes and mono-material construction techniques.”
Additional funding came from Inploration, a project run by creative director, author, and curator Lawrence Azerrad, as well as expeditionary artist, curator, and analog astronaut artist Richelle Ellis, and Comcast, a Media Lab member company. It was also supported by the MIT Morningside Academy for Design through Fernandez’s Design Fellowship. Additional support came from industry members such as Owens Corning (construction materials), Bose (communications), as well as MIT Media Lab member companies Dell Technologies (operations hardware) and Steelcase (operations seating).
A moon on Earth
While the lunar mission ended prematurely, the team says it achieved success in the design and construction of a control room embodying MIT’s design approach and capacity to explore new technologies while maintaining simplicity. Luna itself evokes the moon: depending on the viewer’s position, it appears round or crescent-shaped.
“What’s remarkable is how close the final output is to Mateo’s original sketches and renderings,” Tibbits notes. “That often doesn’t happen — where the final built project aligns so precisely with the initial design intent.”
Luna’s entire structure is built from FOAMGLAS, a durable material composed of glass cells usually used for insulation. “FOAMGLAS is an interesting material,” says Lesina Debiasi, who supported fabrication efforts, ensuring a fast and safe process. “It’s relatively durable and light, but can easily be crumbled with a sharp edge or blade, requiring every step of the fabrication process — cutting, texturing, sealing — to be carefully controlled.”
Fernandez, whose design experience was influenced by the idea that “simple moves” are most powerful, explains: “We’re giving a second life to materials that are not thought of for building construction … and I think that’s an effective idea. Here, you don’t need wood, concrete, rebar — you can build with one material only.” While the interior of the dome-shaped construction is smooth, the exterior was hand textured to evoke the basalt-like surface of the moon.
The lightweight cellular glass produced by Owens Corning, which sponsored part of the material, comes as an unexpected choice for a compression structure — a type of architectural design where stability is achieved through the natural force of compression, usually implying heavy materials. The control room doesn’t use connections or additional supports, and depends upon the precise placement, size, and weight of individual blocks to create a stable form from a succession of arches.
“Traditional compression structures rely on their own weight for stability, but using a material that is more than 10 times lighter than masonry meant we had to rethink everything. It was about finding the perfect balance between design vision and structural integrity,” reflects Haile, who was responsible for the structural calculations for the dome and its support.
Compression relies on gravity, and wouldn’t be a viable construction method on the moon itself. “We’re building using physics, loads, structures, and equilibrium to create this thing that looks like the moon, but depends on Earth’s forces to be built. I think people don’t see that at first, but there’s something cheeky and ironic about it,” confides Fernandez, acknowledging that the project merges historical building methods with contemporary design.
The location and purpose of Luna — both a work space and an installation engaging the public — implied balancing privacy and transparency to achieve functionality. “One of the most important design elements that reflected this vision was the openness of the dome,” says Haile. “We worked closely from the start to find the right balance — adjusting the angle and size of the opening to make the space feel welcoming, while still offering some privacy to those working inside.”
The power of collaboration
With the FOAMGLAS material, the team had to invent a fabrication process that would achieve the initial vision while maintaining structural integrity. Because the cellular glass has radically different properties from conventional construction materials, the team collaborated closely on the engineering front, and the material’s lightweight nature demanded creative problem-solving. “What appears perfect in digital models doesn’t always translate seamlessly into the real world,” says Haile. “The slope, curves, and overall geometry directly determine whether the dome will stand, requiring Mateo and me to work in sync from the very beginning through the end of construction.” While the engineering was primarily led by Haile and Ochsendorf, the structural design was officially reviewed and approved by Paul Kassabian at Simpson Gumpertz and Heger (SGH), ensuring compliance with engineering standards and building codes.
“None of us had worked with FOAMGLAS before, and we needed to figure out how best to cut, texture, and seal it,” says Lesina Debiasi. “Since each row consists of a distinct block shape and specific angles, ensuring accuracy and repeatability across all the blocks became a major challenge. Since we had to cut each individual block four times before we were able to groove and texture the surface, creating a safe production process and mitigating the distribution of dust was critical,” he explains. “Working inside a tent, wearing personal protective equipment like masks, visors, suits, and gloves made it possible to work for an extended period with this material.”
In addition, manufacturing introduced small margins of error threatening the structural integrity of the dome, prompting hands-on experimentation. “The control room is built from 12 arches,” explains Fernandez. “When one of the arches closes, it becomes stable, and you can move on to the next one … Going from side to side, you meet at the middle and close the arch using a special block — a keystone, which was cut to measure,” he says. “In conversations with our advisors, we decided to account for irregularities in the final keystone of each row. Once this custom keystone sat in place, the forces would stabilize the arch and make it secure,” adds Lesina Debiasi.
“This project exemplified the best practices of engineers and architects working closely together from design inception to completion — something that was historically common but is less typical today,” says Haile. “This collaboration was not just necessary — it ultimately improved the final result.”
Fernandez, who is supported this year by the MAD Design Fellowship, expressed how “the fellowship gave [him] the freedom to explore [his] passions and also keep [his] agency.”
“In a way, this project embodies what design education at MIT should be,” Tibbits reflects. “We’re building at full scale, with real-world constraints, experimenting at the limits of what we know — design, computation, engineering, and science. It’s hands-on, highly experimental, and deeply collaborative, which is exactly what we dream of for MAD, and MIT’s design education more broadly.”
“Luna, our physical lunar mission control, highlights the incredible collaboration across the Media Lab, Architecture, and the School of Engineering to bring our lunar mission to the world. We are democratizing access to space for all,” says Dava Newman, Media Lab director and Apollo Professor of Astronautics.
A full list of contributors and supporters can be found at the Morningside Academy of Design's website.
Six from MIT elected to American Academy of Arts and Sciences for 2025
Six MIT faculty members are among the nearly 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 23.
One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.
Those elected from MIT in 2025 are:
- Lotte Bailyn, T. Wilson Professor of Management Emerita;
- Gareth McKinley, School of Engineering Professor of Teaching Innovation;
- Nasser Rabbat, Aga Khan Professor;
- Susan Silbey, Leon and Anne Goldberg Professor of Humanities and professor of sociology and anthropology;
- Anne Whiston Spirn, Cecil and Ida Green Distinguished Professor of Landscape Architecture and Planning; and
- Catherine Wolfram, William Barton Rogers Professor in Energy and professor of applied economics.
“These new members’ accomplishments speak volumes about the human capacity for discovery, creativity, leadership, and persistence. They are a stellar testament to the power of knowledge to broaden our horizons and deepen our understanding,” says Academy President Laurie L. Patton. “We invite every new member to celebrate their achievement and join the Academy in our work to promote the common good.”
Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.
Robotic system zeroes in on objects most relevant for helping humans
For a robot, the real world is a lot to take in. Making sense of every data point in a scene can take a huge amount of computational effort and time. Using that information to then decide how to best help a human is an even thornier exercise.
Now, MIT roboticists have a way to cut through the data noise, to help robots focus on the features in a scene that are most relevant for assisting humans.
Their approach, which they aptly dub “Relevance,” enables a robot to use cues in a scene, such as audio and visual information, to determine a human’s objective and then quickly identify the objects that are most likely to be relevant in fulfilling that objective. The robot then carries out a set of maneuvers to safely offer the relevant objects or actions to the human.
The researchers demonstrated the approach with an experiment that simulated a conference breakfast buffet. They set up a table with various fruits, drinks, snacks, and tableware, along with a robotic arm outfitted with a microphone and camera. Applying the new Relevance approach, they showed that the robot was able to correctly identify a human’s objective and appropriately assist them in different scenarios.
In one case, the robot took in visual cues of a human reaching for a can of prepared coffee, and quickly handed the person milk and a stir stick. In another scenario, the robot picked up on a conversation between two people talking about coffee, and offered them a can of coffee and creamer.
Overall, the robot was able to predict a human’s objective with 90 percent accuracy and to identify relevant objects with 96 percent accuracy. The method also improved a robot’s safety, reducing the number of collisions by more than 60 percent, compared to carrying out the same tasks without applying the new method.
“This approach of enabling relevance could make it much easier for a robot to interact with humans,” says Kamal Youcef-Toumi, professor of mechanical engineering at MIT. “A robot wouldn’t have to ask a human so many questions about what they need. It would just actively take information from the scene to figure out how to help.”
Youcef-Toumi’s group is exploring how robots programmed with Relevance can help in smart manufacturing and warehouse settings, where they envision robots working alongside and intuitively assisting humans.
Youcef-Toumi, along with graduate students Xiaotong Zhang and Dingcheng Huang, will present their new method at the IEEE International Conference on Robotics and Automation (ICRA) in May. The work builds on another paper presented at ICRA the previous year.
Finding focus
The team’s approach is inspired by our own ability to gauge what’s relevant in daily life. Humans can filter out distractions and focus on what’s important, thanks to a region of the brain known as the Reticular Activating System (RAS). The RAS is a bundle of neurons in the brainstem that acts subconsciously to prune away unnecessary stimuli, so that a person can consciously perceive the relevant stimuli. The RAS helps to prevent sensory overload, keeping us, for example, from fixating on every single item on a kitchen counter, and instead helping us to focus on pouring a cup of coffee.
“The amazing thing is, these groups of neurons filter everything that is not important, and then it has the brain focus on what is relevant at the time,” Youcef-Toumi explains. “That’s basically what our proposition is.”
He and his team developed a robotic system that broadly mimics the RAS’s ability to selectively process and filter information. The approach consists of four main phases. The first is a watch-and-learn “perception” stage, during which a robot takes in audio and visual cues, for instance from a microphone and camera, that are continuously fed into an AI “toolkit.” This toolkit can include a large language model (LLM) that processes audio conversations to identify keywords and phrases, and various algorithms that detect and classify objects, humans, physical actions, and task objectives. The AI toolkit is designed to run continuously in the background, similarly to the subconscious filtering that the brain’s RAS performs.
The second stage is a “trigger check” phase, which is a periodic check that the system performs to assess if anything important is happening, such as whether a human is present or not. If a human has stepped into the environment, the system’s third phase will kick in. This phase is the heart of the team’s system, which acts to determine the features in the environment that are most likely relevant to assist the human.
To establish relevance, the researchers developed an algorithm that takes in real-time predictions made by the AI toolkit. For instance, the toolkit’s LLM may pick up the keyword “coffee,” and an action-classifying algorithm may label a person reaching for a cup as having the objective of “making coffee.” The team’s Relevance method would factor in this information to first determine the “class” of objects that have the highest probability of being relevant to the objective of “making coffee.” This might automatically filter out classes such as “fruits” and “snacks,” in favor of “cups” and “creamers.” The algorithm would then further filter within the relevant classes to determine the most relevant “elements.” For instance, based on visual cues of the environment, the system may label a cup closest to a person as more relevant — and helpful — than a cup that is farther away.
In the fourth and final phase, the robot would then take the identified relevant objects and plan a path to physically access and offer the objects to the human.
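The two-stage filtering at the heart of the Relevance phase can be illustrated with a minimal sketch. The class probabilities, object names, and positions below are hypothetical placeholders, not values or code from the paper; the sketch only shows the general idea of filtering by class first, then ranking elements within the surviving classes.

```python
import math

# Hypothetical toolkit output: a predicted objective and per-class relevance
# probabilities (illustrative values, not from the paper).
objective = "making coffee"
class_probs = {"cups": 0.9, "creamers": 0.8, "fruits": 0.1, "snacks": 0.05}

# Detected objects: (name, class, (x, y) position in meters), also illustrative.
objects = [
    ("near_cup", "cups", (0.3, 0.2)),
    ("far_cup", "cups", (1.5, 0.9)),
    ("creamer", "creamers", (0.5, 0.4)),
    ("apple", "fruits", (0.4, 0.1)),
]
human_pos = (0.0, 0.0)

# Stage 1: keep only object classes whose relevance probability clears a threshold.
relevant_classes = {c for c, p in class_probs.items() if p >= 0.5}

# Stage 2: within the relevant classes, rank individual elements by how close
# they are to the human, so the nearest cup outranks a distant one.
def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

candidates = [o for o in objects if o[1] in relevant_classes]
candidates.sort(key=lambda o: dist(o[2], human_pos))

print([name for name, _, _ in candidates])  # nearest relevant objects first
```

In this toy setup the fruit is filtered out at the class stage, and the nearby cup is ranked above the distant one at the element stage, mirroring the class-then-element narrowing described above.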
Helper mode
The researchers tested the new system in experiments that simulate a conference breakfast buffet. They chose this scenario based on the publicly available Breakfast Actions Dataset, which comprises videos and images of typical activities that people perform during breakfast time, such as preparing coffee, cooking pancakes, making cereal, and frying eggs. Actions in each video and image are labeled, along with the overall objective (frying eggs versus making coffee, for example).
Using this dataset, the team tested various algorithms in their AI toolkit, so that, given a person’s actions in a new scene, the algorithms could accurately label and classify the human’s tasks and objectives, along with the associated relevant objects.
In their experiments, they set up a robotic arm and gripper and instructed the system to assist humans as they approached a table filled with various drinks, snacks, and tableware. They found that when no humans were present, the robot’s AI toolkit operated continuously in the background, labeling and classifying objects on the table.
When, during a trigger check, the robot detected a human, it snapped to attention, turning on its Relevance phase and quickly identifying objects in the scene that were most likely to be relevant, based on the human’s objective, which was determined by the AI toolkit.
“Relevance can guide the robot to generate seamless, intelligent, safe, and efficient assistance in a highly dynamic environment,” says co-author Zhang.
Going forward, the team hopes to apply the system to scenarios that resemble workplace and warehouse environments, as well as to other tasks and objectives typically performed in household settings.
“I would want to test this system in my home to see, for instance, if I’m reading the paper, maybe it can bring me coffee. If I’m doing laundry, it can bring me a laundry pod. If I’m doing repair, it can bring me a screwdriver,” Zhang says. “Our vision is to enable human-robot interactions that can be much more natural and fluent.”
This research was made possible by the support and partnership of King Abdulaziz City for Science and Technology (KACST) through the Center for Complex Engineering Systems at MIT and KACST.
Wearable device tracks individual cells in the bloodstream in real time
Researchers at MIT have developed a noninvasive medical monitoring device powerful enough to detect single cells within blood vessels, yet small enough to wear like a wristwatch. Importantly, the wearable device enables continuous monitoring of circulating cells in the human body.
The technology was presented online on March 3 by the journal npj Biosensing and is forthcoming in the journal’s print version.
The device — named CircTrek — was developed by researchers in the Nano-Cybernetic Biotrek research group, led by Deblina Sarkar, assistant professor at MIT and AT&T Career Development Chair at the MIT Media Lab. This technology could greatly facilitate early diagnosis of disease, detection of disease relapse, assessment of infection risk, and determination of whether a disease treatment is working, among other medical processes.
Whereas traditional blood tests are like a snapshot of a patient’s condition, CircTrek was designed to provide real-time assessment, described in the npj Biosensing paper as “an unmet goal to date.” A different technology that offers monitoring of cells in the bloodstream with some continuity, in vivo flow cytometry, “requires a room-sized microscope, and patients need to be there for a long time,” says Kyuho Jang, a PhD student in Sarkar’s lab.
CircTrek, on the other hand, which is equipped with an onboard Wi-Fi module, could even monitor a patient’s circulating cells at home and send that information to the patient’s doctor or care team.
“CircTrek offers a path to harnessing previously inaccessible information, enabling timely treatments, and supporting accurate clinical decisions with real-time data,” says Sarkar. “Existing technologies provide monitoring that is not continuous, which can lead to missing critical treatment windows. We overcome this challenge with CircTrek.”
The device works by directing a focused laser beam to stimulate cells beneath the skin that have been fluorescently labeled. Such labeling can be accomplished with a number of methods, including applying antibody-based fluorescent dyes to the cells of interest or genetically modifying such cells so that they express fluorescent proteins.
For example, a patient receiving CAR T cell therapy, in which immune cells are collected and modified in a lab to fight cancer (or, experimentally, to combat HIV or Covid-19), could have those cells labeled at the same time with fluorescent dyes or genetic modification so the cells express fluorescent proteins. Importantly, cells of interest can also be labeled with in vivo labeling methods approved in humans. Once the cells are labeled and circulating in the bloodstream, CircTrek is designed to apply laser pulses to enhance and detect the cells’ fluorescent signal while an arrangement of filters minimizes low-frequency noise such as heartbeats.
“We optimized the optomechanical parts to reduce noise significantly and only capture the signal from the fluorescent cells,” says Jang.
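The filtering principle can be illustrated with a simple signal-processing sketch: estimate a slow-varying baseline (standing in for a roughly 1 Hz heartbeat component) and subtract it, so that brief fluorescence spikes stand out. This is an illustrative analogy using a basic moving-average filter, not CircTrek’s actual optomechanical or circuit design; all values are synthetic.

```python
import math

fs = 1000  # sample rate in Hz (illustrative)
t = [i / fs for i in range(2000)]  # 2 seconds of synthetic signal

# Synthetic signal: a slow ~1 Hz "heartbeat" oscillation plus two brief
# spikes standing in for fluorescently labeled cells passing the sensor.
signal = [0.5 * math.sin(2 * math.pi * 1.0 * ti) for ti in t]
for spike_start in (500, 1400):  # spike positions in samples (arbitrary)
    for i in range(spike_start, spike_start + 10):
        signal[i] += 2.0

# Estimate the slow baseline with an exponential moving average, then
# subtract it: a small alpha tracks only low-frequency components.
alpha = 0.01
filtered = []
avg = signal[0]
for s in signal:
    avg = (1 - alpha) * avg + alpha * s
    filtered.append(s - avg)

# Brief spikes survive the subtraction; the low-frequency drift is suppressed.
peaks = [i for i, v in enumerate(filtered) if v > 1.0]
print(f"spike samples detected between {peaks[0]} and {peaks[-1]}")
```

After baseline subtraction, only samples inside the two injected spikes exceed the detection threshold, while the slow oscillation never does, which is the qualitative effect the low-frequency noise filters aim for.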
Detecting the labeled CAR T cells, CircTrek could assess whether the cell therapy treatment is working. As an example, persistence of the CAR T cells in the blood after treatment is associated with better outcomes in patients with B-cell lymphoma.
To keep CircTrek small and wearable, the researchers were able to miniaturize the components of the device, such as the circuit that drives the high-intensity laser source and keeps the power level of the laser stable to avoid false readings.
The sensor that detects the fluorescent signals of the labeled cells is also minute, and yet it is capable of detecting a quantity of light equivalent to a single photon, Jang says.
The device’s subcircuits, including the laser driver and the noise filters, were custom-designed to fit on a circuit board measuring just 42 mm by 35 mm, allowing CircTrek to be approximately the same size as a smartwatch.
CircTrek was tested on an in vitro configuration that simulated blood flow beneath human skin, and its single-cell detection capabilities were verified through manual counting with a high-resolution confocal microscope. For the in vitro testing, a fluorescent dye called Cyanine5.5 was employed. That particular dye was selected because it reaches peak activation at wavelengths within skin tissue’s optical window, or the range of wavelengths that can penetrate the skin with minimal scattering.
The safety of the device, particularly the temperature increase that the laser causes in experimental skin tissue, was also investigated. An increase of 1.51 degrees Celsius at the skin surface was determined to be well below the level that would damage tissue, with enough margin that the device’s detection area and power could be safely increased to ensure the observation of at least one blood vessel.
While clinical translation of CircTrek will require further steps, Jang says its parameters can be modified to broaden its potential, so that doctors could be provided with critical information on nearly any patient.
A brief history of expansion microscopy
Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another, and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.
This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of MIT McGovern Institute for Brain Research investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.
“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute (HHMI) investigator, a professor of brain and cognitive sciences and biological engineering, and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.
Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.
“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”
Origins of ExM
To develop expansion microscopy, Boyden and his team turned to hydrogel, a material with remarkable water-absorbing properties that had already been put to practical use; it’s layered inside disposable diapers to keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure while they absorbed hundreds of times their original weight in water, expanding the space between their chemical components as they swell.
After some experimentation, Boyden’s team settled on four key steps to enlarging tissue samples for better imaging. First, the tissue must be infused with a hydrogel. Components of the tissue, biomolecules, are anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Then the tissue is chemically softened and water is added. As the hydrogel absorbs the water, it swells and the tissue expands, growing evenly so the relative positions of its components are preserved.
Boyden and graduate students Fei Chen and Paul Tillberg’s first report on expansion microscopy was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects that are separated by about 300 nanometers — a limit imposed by the laws of physics. With expansion microscopy, Boyden’s group reported an effective resolution of about 70 nanometers, for a fourfold expansion.
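The resolution gain follows from simple arithmetic: since the sample itself is physically magnified, the effective resolution improves roughly in proportion to the expansion factor. A back-of-envelope check (illustrative arithmetic only, using the approximate figures quoted above):

```python
# Effective resolution of expansion microscopy scales roughly inversely with
# the physical expansion factor (back-of-envelope arithmetic, not lab data).
diffraction_limit_nm = 300  # approximate limit of a standard light microscope

def effective_resolution(expansion_factor):
    return diffraction_limit_nm / expansion_factor

print(effective_resolution(4))   # fourfold expansion: 75 nm, consistent with
                                 # the ~70 nm reported in the 2015 paper
print(effective_resolution(20))  # twentyfold expansion: 15 nm
```

The same scaling explains why later protocols with larger expansion factors can separate features less than 20 nanometers apart, as described below.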
Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with their own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible for most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now, anybody can go look at the building blocks of life and how they relate to each other.”
Empowering scientists
Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers reporting on discoveries they have made using expansion microscopy. For neuroscientists, the technique has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.
It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small, most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology, linked to function, and all those things — which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite, she can watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.
Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.
Always improving
Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher-resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.
They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. In the latter regard, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less-costly diagnoses.
Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now re-stain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.
But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet — but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.
Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now, you can get images that look a lot like electron microscopy images, but on regular old light microscopes — the kind that everybody has access to,” Boyden says.
Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are used with an ultra-fast imaging method developed by Eric Betzig, an HHMI investigator at the University of California at Berkeley, called lattice light-sheet microscopy, the entire brain of a fruit fly can be imaged at high resolution in just a few days.
And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then-MIT-professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”
Expanding possibilities
Ten years past the first demonstration of expansion microscopy’s power, Boyden and his team are committed to continuing to make expansion microscopy more powerful. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify — so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.
Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoc in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebrafish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections in a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy-based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology Austria that takes advantage of both the high resolution and the molecular information that expansion microscopy can reveal.
“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.
His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network — how life really operates,” he says.