MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; and others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
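The article does not spell out the limit, but the one usually cited for conventional silicon transistors is the thermionic, or “Boltzmann,” limit on subthreshold swing; assuming that is the limit meant here, it reads

\[ SS_{\min} = \frac{k_B T}{q}\,\ln 10 \approx 60\ \text{mV per decade of current at } T \approx 300\ \text{K}, \]

meaning a conventional transistor needs roughly 60 millivolts of gate voltage for every tenfold change in current, which sets a floor on how low its operating voltage, and hence its switching energy, can go.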
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
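For scale, this can be expressed with the on/off ratio commonly used for spintronic devices (a standard convention, not a figure quoted from the paper):

\[ \frac{I_{\text{on}} - I_{\text{off}}}{I_{\text{off}}} \times 100\%, \]

which is a few percent for typical magnetoresistive devices but reaches 900 percent when the current changes by a factor of 10.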
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Polar weather on Jupiter and Saturn hints at the planets’ interior details
Over the years, passing spacecraft have observed mystifying weather patterns at the poles of Jupiter and Saturn. The two planets host very different types of polar vortices, which are huge atmospheric whirlpools that rotate over a planet’s polar region. On Saturn, a single massive polar vortex appears to cap the north pole in a curiously hexagonal shape, while on Jupiter, a central polar vortex is surrounded by eight smaller vortices, like a pan of swirling cinnamon rolls.
Given that both planets are similar in many ways — they are roughly the same size and made from the same gaseous elements — the stark difference in their polar weather patterns has been a longstanding mystery.
Now, MIT scientists have identified a possible explanation for how the two different systems may have evolved. Their findings could help scientists understand not only the planets’ surface weather patterns, but also what might lie beneath the clouds, deep within their interiors.
In a study appearing this week in the Proceedings of the National Academy of Sciences, the team simulates various ways in which well-organized vortex patterns may form out of random motions on a gas giant, meaning a large planet, such as Jupiter or Saturn, that is made mostly of gaseous elements. Among a wide range of plausible planetary configurations, the team found that, in some cases, the currents coalesced into a single large vortex, similar to Saturn’s pattern, whereas other simulations produced multiple large circulations, akin to Jupiter’s vortices.
After comparing simulations, the team found that whether a planet develops one or multiple polar vortices comes down to one main property: the “softness” of a vortex’s base, which is related to the interior composition. The scientists liken an individual vortex to a whirling cylinder spinning through a planet’s many atmospheric layers. When the base of this swirling cylinder is made of softer, lighter materials, any vortex that evolves can only grow so large. The final pattern can then allow for multiple smaller vortices, similar to those on Jupiter. In contrast, if a vortex’s base is made of harder, denser stuff, it can grow much larger and subsequently engulf other vortices to form one single, massive vortex, akin to the monster cyclone on Saturn.
“Our study shows that, depending on the interior properties and the softness of the bottom of the vortex, this will influence the kind of fluid pattern you observe at the surface,” says study author Wanying Kang, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “I don’t think anyone’s made this connection between the surface fluid pattern and the interior properties of these planets. One possible scenario could be that Saturn has a harder bottom than Jupiter.”
The study’s first author is MIT graduate student Jiaru Shi.
Spinning up
Kang and Shi’s new work was inspired by images of Jupiter and Saturn taken by the Juno and Cassini missions. NASA’s Juno spacecraft has been orbiting Jupiter since 2016, and has captured stunning images of the planet’s north pole and its multiple swirling vortices. From these images, scientists have estimated that each of Jupiter’s vortices is immense, spanning about 3,000 miles — almost half as wide as the Earth itself.
The Cassini spacecraft, prior to intentionally burning up in Saturn’s atmosphere in 2017, orbited the ringed planet for 13 years. Its observations of Saturn’s north pole recorded a single, hexagonal-shaped polar vortex, about 18,000 miles wide.
“People have spent a lot of time deciphering the differences between Jupiter and Saturn,” Shi says. “The planets are about the same size and are both made mostly of hydrogen and helium. It’s unclear why their polar vortices are so different.”
Shi and Kang set out to identify a physical mechanism that would explain why one planet might evolve a single vortex, while the other hosts multiple vortices. To do so, they worked with a two-dimensional model of surface fluid dynamics. While a polar vortex is three-dimensional in nature, the team reasoned that they could accurately represent vortex evolution in two dimensions, because the fast rotation of Jupiter and Saturn enforces uniform motion along the rotation axis.
“In a fast-rotating system, fluid motion tends to be uniform along the rotating axis,” Kang explains. “So, we were motivated by this idea that we can reduce a 3D dynamical problem to a 2D problem because the fluid pattern does not change in 3D. This makes the problem hundreds of times faster and cheaper to simulate and study.”
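The standard fluid-dynamics result behind this reduction (not named in the article, but the usual justification) is the Taylor-Proudman theorem: in a rapidly rotating, low-Rossby-number flow,

\[ (\boldsymbol{\Omega}\cdot\nabla)\,\mathbf{u} \approx 0, \]

so the velocity field \(\mathbf{u}\) barely varies along the rotation axis \(\boldsymbol{\Omega}\), and the fluid moves as coherent columns, which makes the problem effectively two-dimensional.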
Getting to the bottom
Following this reasoning, the team developed a two-dimensional model of vortex evolution on a gas giant, based on an existing equation that describes how swirling fluid evolves over time.
“This equation has been used in many contexts, including to model midlatitude cyclones on Earth,” Kang says. “We adapted the equation to the polar regions of Jupiter and Saturn.”
The team applied their two-dimensional model to simulate how fluid would evolve over time on a gas giant under different scenarios. In each scenario, the team varied the planet’s size, its rate of rotation, its internal heating, and the softness or hardness of the rotating fluid, among other parameters. They then set a random “noise” condition, in which fluid initially flowed in random patterns across the planet’s surface. Finally, they observed how the fluid evolved over time given the scenario’s specific conditions.
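To make this procedure concrete, here is a minimal, self-contained sketch of that kind of numerical experiment in Python: randomly initialized vorticity on a periodic two-dimensional grid, stepped forward with a bare vorticity equation until structures emerge. It is only an illustration under arbitrary assumptions (grid size, viscosity, time step, and no planetary curvature, heating, or “soft bottom” physics); it is not the team’s model.

```python
import numpy as np

# Minimal 2D vorticity sketch on a doubly periodic domain: an illustration of the
# kind of numerical experiment described above, NOT the team's actual model.
# Start from random vorticity ("noise") and let the flow self-organize; all
# parameter values below are arbitrary assumptions.

N, L = 128, 2 * np.pi              # grid points per side and domain size
nu, dt, steps = 1e-4, 1e-3, 2000   # viscosity, time step, number of steps

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular wavenumbers
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2_safe = np.where(k2 == 0, 1.0, k2)              # avoid dividing by zero at the mean mode

rng = np.random.default_rng(0)
zeta_hat = np.fft.fft2(rng.standard_normal((N, N)))   # random initial vorticity field

def tendency(zh):
    """d(zeta)/dt = -(u, v) . grad(zeta) + nu * laplacian(zeta), in spectral space."""
    psi_hat = -zh / k2_safe                           # zeta = laplacian(psi)
    u = np.real(np.fft.ifft2(-1j * ky * psi_hat))     # u = -d(psi)/dy
    v = np.real(np.fft.ifft2(1j * kx * psi_hat))      # v =  d(psi)/dx
    zx = np.real(np.fft.ifft2(1j * kx * zh))
    zy = np.real(np.fft.ifft2(1j * ky * zh))
    return -np.fft.fft2(u * zx + v * zy) - nu * k2 * zh

for _ in range(steps):                                # midpoint (RK2) time stepping
    half = zeta_hat + 0.5 * dt * tendency(zeta_hat)
    zeta_hat = zeta_hat + dt * tendency(half)

zeta = np.real(np.fft.ifft2(zeta_hat))
print("Final vorticity range:", float(zeta.min()), float(zeta.max()))
```

In the actual study, the varied parameters described above (rotation rate, heating, and the softness of the vortex base) would enter as additional terms and coefficients in a model of this general type.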
Over the many simulations, they observed that some scenarios evolved to form a single large polar vortex, like Saturn’s, whereas others formed multiple smaller vortices, like Jupiter’s. After analyzing how the parameters in each scenario related to the final outcome, they landed on a single mechanism to explain whether one or multiple vortices evolve: As random fluid motions start to coalesce into individual vortices, the size to which a vortex can grow is limited by how soft its base is. The softer, or lighter, the gas rotating at the bottom of a vortex, the smaller that vortex ends up, allowing multiple smaller-scale vortices to coexist at a planet’s pole, similar to those on Jupiter.
Conversely, the harder or denser a vortex bottom is, the larger the system can grow, to a size where eventually it can follow the planet’s curvature as a single, planetary-scale vortex, like the one on Saturn.
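One standard way to quantify such a size limit (offered here as general background; the paper may use a different formulation) is the Rossby deformation radius,

\[ L_d = \frac{\sqrt{g' H}}{f}, \qquad g' = g\,\frac{\Delta\rho}{\rho}, \]

where \(H\) is the depth of the layer beneath the vortex, \(f\) is the Coriolis parameter set by the planet’s rotation, and \(g'\) is the reduced gravity, which grows with the density contrast \(\Delta\rho\) at the vortex’s base. A denser, “harder” base means a larger \(g'\) and a larger \(L_d\), leaving room for a single planet-scale vortex; a lighter, “softer” base keeps \(L_d\), and the vortices, small.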
If this mechanism is indeed what is at play on both gas giants, it would suggest that Jupiter could be made of softer, lighter material, while Saturn may harbor heavier stuff in its interior.
“What we see from the surface, the fluid pattern on Jupiter and Saturn, may tell us something about the interior, like how soft the bottom is,” Shi says. “And that is important because maybe beneath Saturn’s surface, the interior is more metal-enriched and has more condensable material, which allows it to provide stronger stratification than Jupiter.”
"Because Jupiter and Saturn are otherwise so similar, their different polar weather has been a puzzle,” says Yohai Kaspi, a professor of geophysical fluid dynamics at the Weizmann Institute of Science, and a member of the Juno mission’s science team, who was not involved in the new study. “The work by Shi and Kang reveals a surprising link between these differences and the planets’ deep interior ‘softness’, offering a new way to map the key internal properties that shape their atmospheres."
This research was supported, in part, by a Mathworks Fellowship and endowed funding from MIT’s Department of Earth, Atmospheric and Planetary Sciences.
Demystifying college for enlisted veterans and service members
“I went into the military right after high school, mostly because I didn’t really see the value of academics,” says Air Force veteran and MIT sophomore Justin Cole.
His perspective on education shifted, however, after he experienced several natural disasters during his nine years of service. As a satellite systems operator in Colorado, Cole volunteered in the aftermath of the 2013 Black Forest fire, the state’s most destructive fire at the time. And in 2018, while he was leading a team in Okinawa conducting signal-monitoring work on communications satellites, two Category 5 typhoons barreled through the area within 26 days.
“I realized, this climate stuff is really a prerequisite to national security objectives in almost every sense, so I knew that school was going to be the thing that would help prepare me to make a difference,” he says. In 2023, after leaving the Air Force to work for climate-focused nonprofits and take engineering courses, Cole participated in an intense, weeklong STEM boot camp at MIT. “It definitely reaffirmed that I wanted to continue down the path of at least getting a bachelor’s, and it also inspired me to apply to MIT,” he says. He transferred in 2024 and is majoring in climate system science and engineering.
“It’s a lot like the MIT experience”
MIT runs the boot camp every summer as part of the nonprofit Warrior-Scholar Project (WSP), which started at Yale University in 2012. WSP offers a range of programming designed to help enlisted veterans and service members transition from the military to higher education. The academic boot camp program, which aims to simulate a week of undergraduate life, is offered at 19 schools nationwide in three areas: business, college readiness, and STEM.
MIT joined WSP in 2017 as one of the first three campuses to offer the STEM boot camp. “It was definitely rigorous,” Cole recalls, “not getting tons of sleep, grinding psets at night with friends … it’s a lot like the MIT experience.” In addition to problem sets, every day at MIT-WSP is packed with faculty lectures on math and physics, recitations, working on research projects, and tours of MIT campus labs. Scholars also attend daily college success workshops on topics such as note taking, time management, and applying to college. The schedule is meticulously mapped out — including travel times — from 0845 to 2200, Sunday through Friday.
Michael McDonald, an associate professor of physics at the Kavli Institute for Astrophysics and Space Research, and Navy veteran Nelson Olivier MBA ’17 have run the MIT-WSP program since its inception. At the time, WSP wanted to expand its STEM boot camps to other universities, so a Yale astrophysicist colleague recruited McDonald. Meanwhile, Olivier’s former Navy SEAL Team THREE teammate — who happened to be the WSP CEO — convinced Olivier to help launch the program while he was at the MIT Sloan School of Management, along with classmate Bill Kindred MBA ’17.
Now in its 10th year, MIT-WSP has hosted over 120 scholars, 93 percent of whom have gone on to attend schools like Stanford University, Georgetown University, the University of Notre Dame, Harvard University, and the University of California at Berkeley. MIT-WSP alumni who have graduated now work at employers such as Meta, PricewaterhouseCoopers, Boeing, and BAE Systems.
Translating helicopter repairs to Newton’s laws
McDonald has a lot of fun teaching WSP scholars every summer. “When I pose a question to my first-year physics class in September, no one wants to meet my eyes or raise their hand for fear of embarrassing themselves,” he says. “But I ask a question to this group of, say, 12 vets, and 12 hands shoot up, they are all answering over each other, and then asking questions to follow up on the question. They are just curious and hungry, and they couldn’t care less about how they come off. … As a professor, it’s like your dream class.”
Every year, McDonald witnesses a predictable transformation among the scholars. They start off eager enough; however, “by Tuesday, they are miserable, they’re pretty beaten down. But by the end of the week, they’re like, ‘I could do another week,’” he says.
Their confidence grows as they recognize that, while they may not have taken college courses, their military experience is invaluable. “It’s just a matter of convincing these guys that what they are already doing is what we are looking for. We have guys that say, ‘I don’t know if I can succeed in an engineering program,’ but then in the field, they are repairing helicopters. And I’m like, ‘Oh no, you can do this stuff!’ They just need to understand the background of why that helicopter that they are building works.”
Olivier agrees. “The enlisted veteran has a leg up because they’ve already done this before. They are just translating it from either fixing a radio or messing around with the components of a bomb to understanding Newton’s laws. That’s a thing of beauty, when you see that.”
Fostering a virtuous cycle
While just seeing themselves succeed at MIT-WSP helps instill confidence among scholars, meeting veterans who have made the leap into academia has a multiplier effect. To that end, the WSP organization provides each academic boot camp with alumni, called fellows, to teach college success workshops, provide support, and share their experiences in higher education.
“When I was at boot camp, we had two WSP fellows who were at Columbia, one at Princeton, and one who just got accepted to Harvard,” Cole recalls. “Just seeing people existing at these institutions made me realize, this is a thing that is doable.” The following summer, he became a fellow as well.
Former Marine Corps communications operator Aaron Kahler, who attended MIT-WSP in 2024, particularly recalls meeting a veteran PhD student while the group toured the neuroscience facility. “It was really cool seeing instances of successful vets doing their thing at MIT,” he says. “There were a lot more than we thought.”
Over the years, McDonald has made an effort to recruit more MIT veterans to staff the program. One of them is Andrea Henshall, a retired major in the Air Force and a PhD student in the Department of Aeronautics and Astronautics. After joining the Ask Me Anything panel a few years ago, she has become increasingly involved, presenting lectures, offering tours of the motion capture lab where she conducts experiments, and informally mentoring scholars.
“It’s so inspiring to hear so many students at the end of the week say, ‘I never considered a place like MIT until the boot camp, or until somebody told me, hey, you can be here, too.’ Or they see examples of enlisted veterans, like Justin, who’ve transitioned to a place like MIT and shown that it’s possible,” says Henshall.
At the conclusion of MIT-WSP, scholars receive a tangible reminder of what’s possible: a challenge coin designed by Olivier and McDonald. “In the military, the challenge coin usually has the emblem of the unit and symbolizes the ethos of the unit,” Olivier explains. On one side of the MIT-WSP coin are Newton’s laws of motion, superimposed over the WSP logo. MIT’s “mens et manus” (“mind and hand”) motto appears on the other side, beneath an image of the Great Dome inscribed with the scholar’s name.
“As you go into Killian Court you see all the names of Pasteur, Newton, et cetera, but Building 10 doesn’t have a name on it,” he says. “So we say, ‘earn your space there on these buildings. Do something significant that will impact the human experience.’ And that’s what we think each one of these guys and gals can do.”
Kahler keeps the coin displayed on his desk at MIT, where he’s now a first-year student, for inspiration. “I don’t think I would be here if it weren’t for the Warrior-Scholar Project,” he says.
How collective memory of the Rwandan genocide was preserved
The 1994 genocide in Rwanda took place over a little more than three months, during which militias representing the Hutu ethnic group conducted a mass murder of members of the Tutsi ethnic group along with some politically moderate members of the Hutu and Twa groups. Soon after, local citizens and aid workers began to document the atrocities that had occurred in the country.
They were establishing evidence of a genocide that many outsiders were slow to acknowledge; other countries and the U.N. did not recognize it until 1998. By preserving scenes of massacre and victims’ remains, this effort allowed foreigners, journalists, and neighbors to witness what had happened. Though the citizens’ work was emotionally and physically challenging, they used these sites of memory to seek justice for victims who had been killed and harmed.
In so doing, these efforts turned memory into officially recognized history. Now, in a new book, MIT scholar Delia Wendel carefully explores this work, shedding new light on the people who created the state’s genocide memorials, and the decisions they made in the process — such as making the remains of the dead available for public viewing. She also examines how the state gained control of the effort and has chosen to represent the past through these memorials.
“I’m seeking to recuperate this forgotten history of the ethics of the work, while also contending with the motivations of state sovereignty that has sustained it,” says Wendel, who is the Class of 1922 Career Development Associate Professor of Urban Studies and International Development in MIT’s Department of Urban Studies and Planning (DUSP).
That book, “Rwanda’s Genocide Heritage: Between Justice and Sovereignty,” is published by Duke University Press and is freely available through the MIT Libraries. In it, Wendel uncovers new details about the first efforts to preserve the memory of the genocide, analyzes the social and political dynamics, and examines their impact on people and public spaces.
“The shift from memory to history is important because it also requires recognition that is official or more public in nature,” Wendel says. “Survivors, their kin, their relatives, they know their histories. What they’re wishing to happen is a form of repair, or justice, or empowerment, that comes with disclosing those histories. That truth-telling aspect is really important.”
Conversations and memory
Wendel’s book was well over a decade in the making — and emerged from a related set of scholarly inquiries about peace-building activities in the wake of genocide. For this project, about memorializing genocide, Wendel visited over 30 villages in Rwanda over a span of many years, gradually making connections and building dialogues with citizens, in addition to conducting more conventional social science research.
“Speaking with rural residents started to unlock a lot of different types of conversations,” Wendel says of those visits. “A good deal of those conversations had to do with memory, and with relationships to place, neighbors, and authority.” She adds: “These are topics that people are very hesitant to speak about, and rightly so. This has been a book that took a long time to research and build some semblance of trust.”
During her research, Wendel also talked at length with some key figures involved in the process, including Louis Kanamugire, a Rwandan who became the first head of the country’s post-war Genocide Memorial Commission. Kanamugire, who lost his parents in the genocide, felt it was necessary to preserve and display the remains of genocide victims, including at four key sites that later became official state memorials.
This process involved, as Wendel puts it, the “gruesome” work of cleaning and preserving bodies, bones, and other material remains to provide both evidence of genocide and the grounds for beginning the work of societal repair and individual healing.
Wendel also uncovers, in detail for the first time, the work done by Mario Ibarra, a Chilean aid worker for the U.N. who also investigated atrocities, photographed evidence extensively, conducted preservation work, and contributed to the country’s Genocide Memorial Commission. The relationship between global human rights practice and genocide survivors seeking justice, in terms of preserving and documenting evidence, is at the core of the book and, Wendel believes, a previously underappreciated aspect of this topic.
“The story of Rwanda memorialization that has typically been told is one of state control,” Wendel says. “But in the beginning, the government followed independent initiatives by this human rights worker and local residents who really spurred this on.”
In the book, Wendel also examines how Rwanda’s memorialization practices relate to those of other countries, often in the so-called Global South. This phenomenon, which she terms “trauma heritage,” has followed similar trajectories across countries in Africa and South America, for instance.
“Trauma heritage is the act of making visible the violence that had been actively hidden, and intervening in the dynamics of power,” she says. “Making such public spaces for silenced pain is a way of seeking recognition of those harms, and [seeking] forms of justice and repair.”
The tensions of memorialization
To be clear, Rwanda has been able to construct genocide memorials in the first place because, in the mid-1990s, Tutsi troops regained power in the country by defeating their Hutu adversaries. Subsequently, in a state without unlimited free expression, the government has considerable control over the content and forms of memorialization that take place.
Meanwhile, there have always been differing views about, say, displaying victims’ remains, and to what degree such a practice underlines their humanity or emphasizes the dehumanizing treatment they suffered. Then too, atrocities can produce a wide range of psychological responses among the living, including survivors’ guilt and the sheer difficulty many experience in expressing what they have witnessed. The process of memorialization, in such circumstances, will likely be fraught.
“The book is about the tensions and paradoxes between the ethics of this work and its politics, which have a lot to do with state sovereignty and control,” Wendel says. “It’s rooted in the tension between what’s invisible and what’s visible, between this bid to be seen and to recognize the humanity of the victims and yet represent this dehumanizing violence. These are irresolvable dilemmas that were felt by the people doing this work.”
Or, as Wendel writes in the book, Rwandans and others immersed in similar struggles for justice around the world have had to grapple with the “messy politics of repair, searching for seemingly impossible redress for injustice.”
Other experts have praised Wendel’s book, such as Pumla Gobodo-Madikizela, a professor at Stellenbosch University in South Africa, who studies the psychological effects of mass violence. Gobodo-Madikizela has cited Wendel’s “extraordinary narratives” about the book’s principal figures, observing that they “not only preserve the remains but also reclaim the victims’ humanity. … Wendel shows how their labor becomes a defiant insistence on visibility that transforms the act of cleaning into a form of truth-telling, making injustice materially and spatially undeniable.”
For her part, Wendel hopes the book will engage readers interested in multiple related issues, including Rwandan and African history, the practices and politics of public memory, human rights and peace-building, and the design of public memorials and related spaces, including those built in the aftermath of traumatic historical episodes.
“Rwanda’s genocide heritage remains an important endeavor in memory justice, even if its politics need to be contended with at the same time,” Wendel says.
Helping companies with physical operations around the world run more intelligently
Running large companies in construction, logistics, energy, and manufacturing requires careful coordination between millions of people, devices, and systems. For more than a decade, Samsara has helped those companies connect their assets to get work done more intelligently.
Founded by John Bicket SM ’05 and Sanjit Biswas SM ’05, Samsara’s platform gives companies with physical operations a central hub to track and learn from workers, equipment, and other infrastructure. Layered on top of that platform are real-time analytics and notifications designed to prevent accidents, reduce risks, save fuel, and more.
Tens of thousands of customers have used Samsara’s platform to improve their operations since its founding in 2015. Home Depot, for instance, used Samsara’s artificial intelligence-equipped dashcams to reduce its total auto liability claims by 65 percent in one year. Maxim Crane Works saved more than $13 million in maintenance costs using Samsara’s equipment and vehicle diagnostic data in 2024. Mohawk Industries, the world’s largest flooring manufacturer, improved its route efficiency and saved $7.75 million annually.
“It’s all about real-world impact,” says Biswas, Samsara’s CEO. “These organizations have complex operations and are functioning at a massive scale. Workers are driving millions of miles and consuming tons of fuel. If you can understand what’s happening and run analysis in the cloud, you can find big efficiency improvements. In terms of safety, these workers are putting their lives at risk every day to keep this infrastructure running. You can literally save lives if you can reduce risk.”
Finding big problems
Biswas and Bicket started PhD programs at MIT in 2002, both conducting research around networking in the Computer Science and Artificial Intelligence Laboratory (CSAIL). They eventually applied their studies to build a wireless network called MIT RoofNet.
Upon graduating with master’s degrees, Biswas and Bicket decided to commercialize the technologies they worked on, founding the company Meraki in 2006.
“How do you get big Wi-Fi networks out in the world?” Biswas asks. “With MIT RoofNet, we covered Cambridge in Wi-Fi. We wanted to enable other people to build big Wi-Fi networks and make Wi-Fi go mainstream for larger campuses and offices.”
Over the next six years, Meraki’s technology was used to create millions of Wi-Fi networks around the world. In 2012, Meraki was acquired by Cisco. Biswas and Bicket left Cisco in 2015, unsure of what they’d work on next.
“The way we found ourselves to Samsara was through the same curiosity we had as graduate students,” Biswas says. “This time it dealt more with the planet’s infrastructure. We were thinking about how utilities work, and how construction happens at the scale of cities and states. It drew us into operations, which is the infrastructure backbone of the planet.”
As the founders learned about industries like logistics, utilities, and construction, they realized they could use their technical background to improve safety and efficiency.
“All these industries have a lot in common,” Biswas says. “They have a lot of field workers — often thousands of them — they have a lot of assets like trucks and equipment, and they’re trying to orchestrate it all. The throughline was the importance of data.”
When they founded Samsara 10 years ago, many people were still collecting field data with pen and paper.
“Because of our technical background, we knew that if you could collect the data and run sophisticated algorithms like AI over it, you could get a ton of insights and improve the way those operations run,” Biswas says.
Biswas says extracting insights from data was the easy part; making field-ready products and getting them into the hands of frontline workers took longer.
Samsara started by tapping into existing sensors in buildings, cars, and other assets. The company also built its own, including AI-equipped cameras and GPS trackers that can monitor driving behavior. That formed the foundation of Samsara’s Connected Operations Platform. On top of that, Samsara Intelligence processes data in the cloud and provides insights, such as how to calculate the best routes for commercial vehicles, be more proactive with maintenance, and reduce fuel consumption.
Samsara’s platform can be used to detect if a commercial vehicle or snowplow driver is on their phone and send an audio message nudging them to stay safe and focused. The platform can also deliver training and coaching.
“That’s the kind of thing that reduces risk, because workers are way less likely to be distracted,” Biswas says. “If you do that for millions of workers, you reduce risk at scale.”
The platform also allows managers to query their data in a ChatGPT-style interface, asking questions such as: Who are my safest drivers? Which vehicles need maintenance? And what are my least fuel-efficient trucks?
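As a toy illustration of the kind of question described above, the sketch below answers “what are my least fuel-efficient trucks?” directly from telemetry records in plain Python. It is not Samsara’s API; the record fields, vehicle IDs, and numbers are hypothetical.

```python
# Illustrative only: a toy fleet-analytics query in plain Python, NOT Samsara's API.
# It answers the kind of question described above ("least fuel-efficient trucks")
# from telemetry records. All field names and values are made up.
from dataclasses import dataclass

@dataclass
class TripRecord:
    vehicle_id: str
    miles: float
    gallons: float
    harsh_braking_events: int

trips = [
    TripRecord("truck-101", 1200.0, 190.0, 2),
    TripRecord("truck-102", 900.0, 110.0, 7),
    TripRecord("truck-103", 1500.0, 260.0, 1),
]

def mpg(t: TripRecord) -> float:
    """Miles per gallon for one record; treat zero fuel use as 'infinitely efficient'."""
    return t.miles / t.gallons if t.gallons else float("inf")

# Rank vehicles from least to most fuel-efficient.
for t in sorted(trips, key=mpg):
    print(f"{t.vehicle_id}: {mpg(t):.1f} mpg, {t.harsh_braking_events} harsh-braking events")
```

A production system would run the same kind of aggregation over streaming telemetry in the cloud rather than a hard-coded list.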
“Our platform helps recognize frontline workers who are safe and efficient in their job,” Biswas says. “These people are largely unsung heroes. They keep our planet running, but they don’t hear ‘thank you’ very often. Samsara helps companies recognize the safest workers on the field and give them recognition and rewards. So, it’s about modernizing equipment but also improving the experience of millions of people that help run this vital infrastructure.”
Continuing to grow
Today Samsara processes 20 trillion data points a year and monitors 90 million miles of driving. The company employs about 4,000 people across North America and Europe.
“It still feels early for us,” Biswas says. “We’ve been around for 10 years and gotten some scale, but we needed to build this platform to be able to build more products and have more impact. If you step back, operations is 40 percent of the world’s GDP, so we see a lot of opportunities to do more with this data. For instance, weather is part of Samsara Intelligence, and weather is 20 to 25 percent of the risk, and so we’re training AI models to reduce risk from the weather. And on the sustainability side, the more data we have, the more we can help optimize for things like fuel consumption or transitioning to electric vehicles. Maintenance is another fascinating data problem.”
The founders have also maintained a connection with MIT — and not just because the City of Boston’s Department of Public Works and the MBTA are customers. Last year, the Biswas Family Foundation announced funding for a four-year postdoctoral fellowship program at MIT for early-stage researchers working to improve health care.
Biswas says Samsara’s journey has been incredibly rewarding and notes the company is well-positioned to leverage advances in AI to further its impact going forward.
“It’s been a lot of fun and also a lot of hard work,” Biswas says. “What’s exciting is that each decade of the company feels different. It’s almost like a new chapter — or a whole new book. Right now, there’s so many incredible things happening with data and AI. It feels as exciting as it did in the early days of the company. It feels very much like a startup.”
How an online MIT course in supply chain management sparked a new career
As a college student, Kevin Power never considered working in supply chain management; in fact, he didn’t know it was an option. He earned an undergraduate degree in manufacturing engineering while working full time at an oil refinery, which demanded a rigorous routine of shift work, long days, and evening classes.
After graduation, he found himself searching for new learning opportunities and stumbled upon the MITx MicroMasters Program in Supply Chain Management, an online program of the MIT Center for Transportation and Logistics. Starting with Supply Chain Analytics (SC0x), Power was drawn in immediately by how directly applicable the lessons were to real work.
“So many courses that you do are more theoretical,” he reflects. “Everything I learned, I could apply it directly to my work and see the value in doing it. So as soon as I finished Supply Chain Analytics, I decided, OK, I’ll finish the whole program.” What he didn’t yet know was that he belonged to the very audience the MicroMasters was designed for — lifelong learners, often working professionals who want deep, flexible training while continuing their careers.
After completing the five-course MicroMasters track and earning his credential, Power uncovered another opportunity: the MIT SCM Blended Master’s Program, which pairs the online credential with a one-semester, on-campus program, resulting in a master of applied science degree in supply chain management.
For Power, the blend of online and in-person learning proved pivotal. He describes his MicroMasters experience as fertile ground for deep, self-paced study. “I’m a very introverted kind of learner, so I prefer to just learn out of a textbook and online,” he says. But, once in the MIT SCM program, he tapped into the soft skills he needs to stand out in the industry. “When I came to campus, it was more about networking and being able to communicate with executives, on top of our academic work,” he says. The immersive environment of combining scholarly rigor with real-world experience among peers across the supply chain industry is at the heart of what the blended program aims to facilitate.
During his time on campus, Power’s research included simulation modeling in port shipping and generative-AI–driven projects focused on supply chain resilience. “I had never done simulation modeling before, and right now it’s huge in the industry,” he says. “If I were trying to apply for a simulation modeling job, I’m sure it would help me greatly having done this.”
His project, completed with fellow MIT SCM student Yassine Lahlou-Kamal, was one of the winners at the 2025 Annual MIT Global SCALE Network Supply Chain Student Research Expo, in which students showcased their industry-sponsored thesis and capstone projects. This experience pays off in his current work with Elenna Dugundji in her Deep Knowledge Lab for Supply Chain and Logistics.
Beyond academics and research, Power threw himself into the fast-paced world of hackathons, despite having never participated in one before. “I’m very competitive,” Power confesses, “and I feel like I learn something new every time.” His first effort, an internal MIT competition called Hack-Nation’s Global AI Hackathon, earned him a win with an AI sports-betting agent project that fuses model-driven analysis with web scraping. Soon after, he tackled the OpenAI Red Teaming Challenge on Kaggle. Despite joining the competition halfway through the 15-day window, he raced through the final week and was selected as one of the winners. “It gave me a lot of confidence … that the things I’m working on right now are cutting-edge, even in the eyes of OpenAI.”
In terms of his return on investment in the degree, Power says, “I’m getting so much value out of being here. Even from just doing the Kaggle competition, I won more than the cost of my full MIT degree.” Long-term, Power has been impressed that “as far as I know, everybody that was looking for a job in the supply chain program has one.” The data back him up, as every student from the MIT SCM residential program Class of 2025 secured a job within six months of graduation.
Now a master’s student in the MIT Technology and Policy Program, Power is looking ahead. “I want to do a startup,” he says. “A lot of the ideas came from research I’ve done here.”
Reflecting on the transformation he’s experienced in just 10 months of the program, he calls it “crazy.” “The SCM program really is amazing … I’d recommend it to anyone.”
Fostering MIT’s Japan connection
Born and raised in Japan as part of a military family, Christine Pilcavage knows first-hand about the value of an immersive approach to exploration.
“Any experience in a different context improves an individual,” says Pilcavage, who has also lived in Cambodia, the Philippines, and Kenya.
It’s that ethos that Pilcavage brings to her role as managing director of MISTI Japan, which connects MIT students and faculty to Institute collaborators in Japan. In her role, Pilcavage sends students to Japan for internship and research opportunities. She also shares Japanese culture on campus with activities like Ikebana classes during Independent Activities Period and a Japanese Film Festival.
MIT’s connection to Japan dates back to before 1874, when its first Japanese student graduated. In 1911, the MIT Association of Japan was founded as the first trans-Pacific MIT alumni club in Japan; that organization later evolved into the MIT Club of Japan.
MISTI Japan predates the creation of the MIT International Science and Technology Initiatives (MISTI) itself. The MIT-Japan Program was established in 1981 to prepare MIT students to be better scientists and engineers who understand and work effectively with Japan. The program sought to foster deeper U.S.-Japan collaboration in science and technology amid Japan’s growing economic and technological power. MIT-Japan began sending students to Japan in 1983.
Students in the MIT-Japan Program complete a three-to-12-month internship at their host institution, and the immersive experiences are invaluable. “Japan is so different from the Western world,” Pilcavage notes. “For example, in Japanese, verbs end sentences, so it’s important to develop patience and listen carefully when communicating.”
Pilcavage believes there is tremendous value in creating and supporting a program like MISTI at MIT. Traveling to areas outside the Institute and the United States can expose students to diverse cultures, aid the exploration of challenges, help them discover solutions, improve language learning, and foster communication.
“We want our students to think and create,” she says. “They need to see beyond the MIT bubble and think carefully about how to solve difficult problems and help others.”
Japan, Pilcavage continues, is monocultural in ways the United States isn’t. While English is spoken in larger cities, it is far less common in rural areas. “MIT students teach STEM topics to rural Japanese kids in Japanese,” Pilcavage says, citing a program that’s been teaching STEAM workshops in the tsunami-affected area of northern Japan since 2017. “Learning to code switch means they improve their language skills while also learning important cultural nuances, like body language.”
Pilcavage emphasizes the importance of “learning differently” for MIT students and the Japanese people with whom they interact. “I wanted our students to engage with the local population,” she says, encouraging them to develop what she calls “cultural resilience.”
Journey to MIT
Pilcavage — whose educational background includes master’s degrees in international affairs and public health, and undergraduate study in economics and psychology — has also worked with the United States Agency for International Development (USAID), the Japanese government, the Japan International Cooperation Agency (JICA), and the World Health Organization on global health and educational issues in Africa and Asia.
Pilcavage first came to Cambridge, Massachusetts, looking for hands-on experience in public health and community outcomes in a role with Management Sciences for Health, co-founded by MIT Sloan School of Management alumnus Ron O’Connor SM ’71. There, she investigated reproductive and women’s health and supported a Japanese nonprofit affiliated with the organization.
She has since developed strong ties to Cambridge and MIT. “I was married in the MIT Chapel to an MIT alum, and our reception was held in Walker Memorial,” she says. “I was a migratory bird who landed on a tree, and my husband is the tree that has deep local roots here.”
In keeping with her ethos of overcoming roadblocks to success, Pilcavage encourages students to challenge themselves. “I’ve tried to model that behavior throughout my career,” she says.
Following her arrival at MIT in 2013, Pilcavage worked with the Comprehensive Initiative on Technology Evaluation (CITE), an MIT Department of Urban Studies and Planning project established in 2012 to develop new methods for product evaluation in global development. She administered the $10 million research program, formerly funded by USAID, which sought to learn which low-cost interventions worked best by evaluating products designed for people living in lower-income communities.
“It’s important to learn how to manage real-world challenges and deal with them effectively,” she argues. “Creating a collaborative environment in which people can discover solutions is how things get done.”
A career of service
Pilcavage has been recognized for her outstanding contributions to encouraging positive relations between America and Japan. She received the Foreign Minister's Commendation from the Japanese Ministry of Foreign Affairs and the John E. Thayer III Award from the Japan Society of Boston.
“I’m honored to join a community of people who have dedicated their lives to strengthening ties between the U.S. and Japan,” Pilcavage says when asked about the awards. “It’s exciting and humbling to be recognized for doing something I love.”
“Chris is a determined, empathetic leader who inspires our students and is committed to advancing both MIT’s mission and U.S.-Japan relations,” says Richard Samuels, the Ford International Professor of Political Science at MIT, and founder and faculty director of MISTI Japan. “I can think of no one more deserving of these awards.”
Pilcavage is excited about new MISTI Japan initiatives that are in development or already underway. “We’re launching our first global classroom with [MIT historian] Hiromu Nagahara and [lecturer in Japanese] Takako Aikawa,” she notes. “Students will visit cities like Kyoto and Hiroshima, and explore Japanese history and culture up close.”
Additionally, Pilcavage is developing social impact workshops and consistently questioning how to improve MIT Japan’s work and its impact. She’s always looking for new projects and new ways to engage and encourage students. “How can I make the program better?” she asks when considering MISTI Japan and its value to MIT and its students.
“I tell people I have the best job in the world,” she says. “I get to share my culture with the MIT community and work with the best colleagues who are nurturing and supportive. I believe I’ve found my home here.”
Efficient cooling method could enable chip-based trapped-ion quantum computers
Quantum computers could rapidly solve complex problems that would take the most powerful classical supercomputers decades to unravel. But they’ll need to be large and stable enough to efficiently perform operations. To meet this challenge, researchers at MIT and elsewhere are developing trapped-ion quantum computers based on ultra-compact photonic chips. These chip-based systems offer a scalable alternative to existing trapped-ion quantum computers, which rely on bulky optical equipment.
The ions in these quantum computers must be cooled to extremely cold temperatures to minimize vibrations and prevent errors. So far, such trapped-ion systems based on photonic chips have been limited to inefficient and slow cooling methods.
Now, a team of researchers at MIT and MIT Lincoln Laboratory has implemented a much faster and more energy-efficient method for cooling trapped ions using photonic chips. Their approach cooled the ions to roughly one-tenth of the temperature limit of standard laser cooling.
Key to this technique is a photonic chip that incorporates precisely designed antennas to manipulate beams of tightly focused, intersecting light.
The researchers’ initial demonstration takes a key step toward scalable chip-based architectures that could someday enable quantum computing systems with greater efficiency and stability.
“We were able to design polarization-diverse integrated-photonics devices, utilize them to develop a variety of novel integrated-photonics-based systems, and apply them to show very efficient ion cooling. However, this is just the beginning of what we can do using these devices. By introducing polarization diversity to integrated-photonics-based trapped-ion systems, this work opens the door to a variety of advanced operations for trapped ions that weren’t previously attainable, even beyond efficient ion cooling — all research directions we are excited to explore in the future,” says Jelena Notaros, the Robert J. Shillman Career Development Associate Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the Research Laboratory of Electronics, and senior author of a paper on this architecture.
She is joined on the paper by lead authors Sabrina Corsetti, an EECS graduate student; Ethan Clements, a former postdoc who is now a staff scientist at MIT Lincoln Laboratory; Felix Knollmann, a graduate student in the Department of Physics; John Chiaverini, senior member of the technical staff at Lincoln Laboratory and a principal investigator in MIT’s Center for Quantum Engineering; as well as others at Lincoln Laboratory and MIT. The research appears today in two joint publications in Light: Science and Applications and Physical Review Letters.
Seeking scalability
While there are many types of quantum systems, this research is focused on trapped-ion quantum computing. In this application, a charged particle called an ion is formed by peeling an electron from an atom, and then trapped using radio-frequency signals and manipulated using optical signals.
Researchers use lasers to encode information in the trapped ion by changing its state. In this way, the ion can be used as a quantum bit, or qubit. Qubits are the building blocks of a quantum computer.
To prevent collisions between ions and gas molecules in the air, the ions are held in vacuum, often created with a device known as a cryostat. Traditionally, bulky lasers sit outside the cryostat and shoot different light beams through the cryostat’s windows toward the chip. These systems require a room full of optical components to address just a few dozen ions, making it difficult to scale to the large numbers of ions needed for advanced quantum computing. Slight vibrations outside the cryostat can also disrupt the light beams, ultimately reducing the accuracy of the quantum computer.
To get around these challenges, MIT researchers have been developing integrated-photonics-based systems. In this case, the light is emitted from the same chip that traps the ion. This improves scalability by eliminating the need for external optical components.
“Now, we can envision having thousands of sites on a single chip that all interface up to many ions, all working together in a scalable way,” Knollmann says.
But integrated-photonics-based demonstrations to date have achieved limited cooling efficiencies.
Keeping their cool
To enable fast and accurate quantum operations, researchers use optical fields to reduce the kinetic energy of the trapped ion. This causes the ion to cool to nearly absolute zero, an effective temperature even colder than cryostats can achieve.
But common methods have a higher cooling floor, so the ion still has a lot of vibrational energy after the cooling process completes. This would make it hard to use the qubits for high-quality computations.
The MIT researchers utilized a more complex approach, known as polarization-gradient cooling, which involves the precise interaction of two beams of light.
Each light beam has a different polarization, which means the electric field in each beam oscillates in a different direction (up and down, side to side, and so on). Where these beams intersect, they form a rotating vortex of light that can damp the ion’s vibrations even more efficiently.
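As an illustration of how two polarizations can combine into a rotating pattern, consider the textbook configuration of two counter-propagating beams with opposite circular polarizations (the team’s exact beam geometry may differ). The total field

\[ \hat{\boldsymbol{\epsilon}}_{+}\,e^{ikz} + \hat{\boldsymbol{\epsilon}}_{-}\,e^{-ikz} \;\propto\; \hat{\mathbf{x}}\,\sin(kz) + \hat{\mathbf{y}}\,\cos(kz) \]

is linearly polarized everywhere, but its polarization direction corkscrews along the axis on the scale of the optical wavelength, so an ion moving through the pattern experiences a continuously changing polarization, which is what polarization-gradient cooling exploits.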
Although this approach had been shown previously using bulk optics, it hadn’t been shown before using integrated photonics.
To enable this more complex interaction, the researchers designed a chip with two nanoscale antennas, which emit beams of light out of the chip to manipulate the ion above it.
These antennas are connected by waveguides that route light to the antennas. The waveguides are designed to stabilize the optical routing, which improves the stability of the vortex pattern generated by the beams.
“When we emit light from integrated antennas, it behaves differently than with bulk optics. The beams, and generated light patterns, become extremely stable. Having these stable patterns allows us to explore ion behaviors with significantly more control,” Clements says.
The researchers also designed the antennas to maximize the amount of light that reaches the ion. Each antenna has tiny curved notches that scatter light upward, spaced just right to direct light toward the ion.
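The “spaced just right” condition is typically captured by the first-order grating equation (stated here as general background; the chip’s actual dimensions are not given in the article):

\[ n_{\text{eff}} - n_{c}\,\sin\theta = \frac{\lambda}{\Lambda}, \]

where \(\Lambda\) is the notch spacing, \(\lambda\) the free-space wavelength, \(n_{\text{eff}}\) the effective index of the guided mode, \(n_{c}\) the refractive index of the medium above the chip, and \(\theta\) the emission angle. Choosing \(\Lambda\) therefore sets the angle at which light leaves the chip and aims it at the ion.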
“We built upon many years of development at Lincoln Laboratory to design these gratings to emit diverse polarizations of light,” Corsetti says.
They experimented with several architectures, characterizing each to better understand how it emitted light.
With their final design in place, the researchers demonstrated ion cooling to nearly 10 times below the limit of standard laser cooling, referred to as the Doppler limit. Their chip was able to reach this temperature in about 100 microseconds, several times faster than other techniques.
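For reference, the Doppler limit for a cooling transition with natural linewidth \(\Gamma\) is the standard result

\[ T_D = \frac{\hbar\,\Gamma}{2 k_B}, \]

typically a fraction of a millikelvin for the transitions used to cool trapped ions; polarization-gradient cooling can reach well below this floor because it exploits the ion’s internal sublevels and light shifts rather than the simple absorption-emission balance that sets the Doppler limit.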
“The demonstration of enhanced performance using optics integrated in the ion-trap chip lays the foundation for further integration that can allow new approaches for quantum-state manipulation, and that could improve the prospects for practical quantum-information processing,” adds Chiaverini. “Key to achieving this advance was the cross-Institute collaboration between the MIT campus and Lincoln groups, a model that we can build on as we take these next steps.”
In the future, the team plans to conduct characterization experiments on different chip architectures and demonstrate polarization-gradient cooling with multiple ions. In addition, they hope to explore other applications that could benefit from the stable light beams they can generate with this architecture.
Other authors who contributed to this research are Ashton Hattori (MIT), Zhaoyi Li (MIT), Milica Notaros (MIT), Reuel Swint (Lincoln Laboratory), Tal Sneh (MIT), Patrick Callahan (Lincoln Laboratory), May Kim (Lincoln Laboratory), Aaron Leu (MIT), Gavin West (MIT), Dave Kharas (Lincoln Laboratory), Thomas Mahony (Lincoln Laboratory), Colin Bruzewicz (Lincoln Laboratory), Cheryl Sorace-Agaskar (Lincoln Laboratory), Robert McConnell (Lincoln Laboratory), and Isaac Chuang (MIT).
This work is funded, in part, by the U.S. Department of Energy, the U.S. National Science Foundation, the MIT Center for Quantum Engineering, the U.S. Department of Defense, an MIT Rolf G. Locher Endowed Fellowship, and an MIT Frederick and Barbara Cronin Fellowship.
At MIT, a continued commitment to understanding intelligence
The MIT Siegel Family Quest for Intelligence (SQI), a research unit in the MIT Schwarzman College of Computing, brings together researchers from across MIT who combine their diverse expertise to understand intelligence through tightly coupled scientific inquiry and rigorous engineering. These researchers engage in collaborative efforts spanning science, engineering, the humanities, and more.
SQI seeks to comprehend how brains produce intelligence and how it can be replicated in artificial systems to address real-world problems that exceed the capabilities of current artificial intelligence technologies.
“In SQI, we are studying intelligence scientifically and generically, in the hope that by studying neuroscience and behavior in humans and animals, and also studying what we can build as intelligent engineering artifacts, we'll be able to understand the fundamental underlying principles of intelligence,” says Leslie Pack Kaelbling, SQI director of research and the Panasonic Professor in the MIT Department of Electrical Engineering and Computer Science.
“We in SQI believe that understanding human intelligence is one of the greatest open questions in science — right up there with the origin of the universe and our place in it, and the origin of life. The question of human intelligence has two parts: how it works, and where it comes from. If we understand those, we will see payoffs well beyond our current imaginings," says Jim DiCarlo, SQI director and the Peter de Florez Professor of Neuroscience in the MIT Department of Brain and Cognitive Sciences.
Exploring the great mysteries of the mind
The MIT Siegel Family Quest for Intelligence was recently renamed in recognition of a major gift from the Siegel Family Endowment that is enabling further growth in SQI’s research and activities.
SQI’s efforts are organized around missions: long-term, collaborative projects rooted in foundational questions about intelligence and supported by platforms, the systems and software that enable new research and create benchmarking and testing interfaces.
“Ours is the only unit at MIT dedicated to building a scientific understanding of intelligence while working with researchers across the entire Institute,” DiCarlo says. “There has been remarkable progress in AI over the past decade, but I believe the next decade will bring even greater advances in our understanding of human intelligence — advances that will reshape what we call AI. By supporting us, David Siegel, the Siegel Family Endowment, and our other donors are demonstrating their confidence in our approach."
A legacy of interdisciplinary support
In 2011, David Siegel SM ’86, PhD ’91 founded the Siegel Family Endowment (SFE) to support organizations working at the intersections of learning, workforce, and infrastructure. SFE funds organizations addressing society’s most critical challenges while supporting innovative civic and community leaders, social entrepreneurs, researchers, and others driving this work forward. Siegel is a computer scientist, entrepreneur, and philanthropist. While in graduate school at MIT’s Artificial Intelligence Lab, he worked on robotics in the group of Tomás Lozano-Pérez — currently the School of Engineering Professor of Teaching Excellence — focusing on sensing and grasping. Later, he co-founded Two Sigma with the belief that innovative technology, AI, and data science could help uncover value in the world’s data. Today, Two Sigma drives transformation across the financial services industry in investment management, venture capital, private equity, and real estate.
Siegel explains, “The human brain may very well be the most complex physical system in the universe, yet most people haven't shown much interest in how it works. People take the mind for granted, yet wonder so much about other scientific mysteries, such as the origin of the universe. My fascination with the brain and its intersection with artificial intelligence stems from this. I don’t care whether there are commercial applications for this quest; instead, we should pursue research like that done at the MIT Siegel Family Quest for Intelligence to advance our understanding of ourselves. As we uncover more about human intelligence, I am hopeful that we will lay the groundwork not only for advancing artificial intelligence but also for extending our own thinking.”
As a long-time champion of the Center for Brains, Minds, and Machines (CBMM), a National Science Foundation-funded collaborative interdisciplinary research thrust, and one of the first donors to the MIT Quest for Intelligence, David Siegel helped lay the foundation for the research underway today. In early 2024, he founded Open Athena, a nonprofit that bridges the gap between academic research and the cutting edge of AI. Open Athena equips universities with elite AI and data engineering talent to accelerate breakthrough discoveries at scale. Siegel serves on the MIT Corporation Executive Committee, is vice-chair of the Scratch Foundation, and is a member of the Cornell Tech Council. He also sits on the boards of Re:Build Manufacturing, Khan Academy, NYC FIRST, and Carnegie Hall.
A catalyst for global collaboration
MIT President Sally Kornbluth says, “Of all the donors and supporters whose generosity fueled the Quest for Intelligence, no one has been more important from the beginning than David Siegel. Without his longstanding commitment to CBMM and his support for the Quest, this community might never have formed. There’s every reason to think that David’s recent gift, which renames the Quest for Intelligence and also supports the Schwarzman College of Computing, will be even more powerful in shaping the future of this initiative and of the field itself.” She continues, “Fueled by generous donors — particularly David Siegel’s transformative gift — SQI is poised to take on an even more important role.”
SQI scientists and engineers are presenting their work broadly, publishing papers, and developing new tools and technologies that are used in research institutions worldwide, as they engage with colleagues in disciplines across the Institute and in universities and institutions around the globe. DiCarlo explains, “We're part of the Schwarzman College of Computing, at the nexus between the people interested in biology and various forms of intelligence and the people interested in AI. We're working with partners at other universities, in nonprofits, and in industry — we can't do it alone.”
“Fundamentally, we're not an AI effort. We're a human intelligence effort using the tools of engineering,” DiCarlo says. “That gives us, among other things, very useful insights for human learning and health, but also very useful tools for AI — including AI that will just work a lot better in a human world.”
The entire SQI community of faculty, students, and staff is excited to face new challenges in the efforts to understand the fundamentals of intelligence.
New missions and next horizons
SQI research is broadening: Mission principal investigators are integrating their efforts across areas of interest, increasing their impact on the field. In the coming months, the organization plans to launch a new Social Intelligence Mission.
"We need to focus on problems that mirror natural and artificial intelligence — making sure that we are evaluating new models on tasks that mirror what humans and other natural intelligence can do,” says Nick Roy, SQI director of systems engineering and professor of aeronautics and astronautics at MIT. He predicts that SQI’s future research will rely on asking the right questions: “[While] we are good at picking tasks that test our computational models, and we're extremely good at picking tasks that kind of align with what our models can already do, we need to get better at choosing tasks and benchmarks that also elicit something about natural intelligence,” he says.
On November 24, 2025, faculty, staff, students, and supporters gathered at an event titled “The Next Horizon: Quest’s Future” to celebrate SQI’s next chapter. The event consisted of an afternoon of research updates, a panel discussion, and a poster session on new and evolving research, and was attended by David Siegel, representatives from the Siegel Family Endowment, and various members of the MIT Corporation. Recordings of the presentations from the event are available on SQI’s YouTube channel.
Generative AI tool helps 3D print personal items that sustain daily use
Generative artificial intelligence models have had such an indelible impact on digital content creation that it’s getting harder to recall what the internet was like before them. You can call on these AI tools for clever projects such as videos and photos, but their flair for the creative hasn’t quite crossed over into the physical world just yet.
So why haven’t we seen generative AI-enabled personalized objects, such as phone cases and pots, in places like homes, offices, and stores yet? According to MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers, a key issue is the mechanical integrity of the 3D model.
While AI can help generate personalized 3D models that you can fabricate, those systems don’t often consider the physical properties of the 3D model. MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL engineer Faraz Faruqi has explored this trade-off, creating one generative AI-based system that makes aesthetic changes to designs while preserving functionality, and another that modifies structures to produce the tactile properties users want to feel.
Making it real
Together with researchers at Google, Stability AI, and Northeastern University, Faruqi has now found a way to make real-world objects with AI, creating items that are both durable and true to the user’s intended appearance and texture. With the AI-powered “MechStyle” system, users simply upload a 3D model or select a preset asset of things like vases and hooks, and prompt the tool using images or text to create a personalized version. A generative AI model then modifies the 3D geometry, while MechStyle simulates how those changes will impact particular parts, ensuring vulnerable areas remain structurally sound. When you’re happy with this AI-enhanced blueprint, you can 3D print it and use it in the real world.
You could select a model of, say, a wall hook, and the material you’ll be printing it with (for example, plastics like polylactic acid). Then, you can prompt the system to create a personalized version, with directions like, “generate a cactus-like hook.” The AI model will work in tandem with the simulation module and generate a 3D model resembling a cactus while also having the structural properties of a hook. This green, ridged accessory can then be used to hang up mugs, coats, and backpacks. Such creations are possible thanks, in part, to a stylization process in which the system changes a model’s geometry based on its understanding of the text prompt while incorporating feedback from the simulation module.
According to CSAIL researchers, 3D stylization used to come with unintended consequences. Their formative study revealed that only about 26 percent of 3D models remained structurally viable after they were modified, meaning that the AI system didn’t understand the physics of the models it was modifying.
“We want to use AI to create models that you can actually fabricate and use in the real world,” says Faruqi, who is a lead author on a paper presenting the project. “So MechStyle actually simulates how GenAI-based changes will impact a structure. Our system allows you to personalize the tactile experience for your item, incorporating your personal style into it while ensuring the object can sustain everyday use.”
This computational thoroughness could eventually help users personalize their belongings, creating a unique pair of glasses with speckled blue and beige dots resembling fish scales, for example. The system has also produced a pillbox with a rocky texture that’s checkered with pink and aqua spots. Its potential extends to crafting unique home and office decor, like a lampshade resembling red magma. It can even design assistive technology fit to users’ specifications, such as finger splints to aid with injuries that limit dexterity and utensil grips to aid with motor impairments.
In the future, MechStyle could also be useful in creating prototypes for accessories and other handheld products you might sell in a toy shop, hardware store, or craft boutique. The goal, CSAIL researchers say, is for both expert and novice designers to spend more time brainstorming and testing out different 3D designs, instead of assembling and customizing items by hand.
Staying strong
To ensure MechStyle’s creations could withstand daily use, the researchers augmented their generative AI technology with a type of physics simulation called a finite element analysis (FEA). You can imagine a 3D model of an item, such as a pair of glasses, with a sort of heat map indicating which regions are structurally viable under a realistic amount of weight, and which ones aren’t. As AI refines this model, the physics simulations highlight which parts of the model are getting weaker and prevent further changes.
Faruqi adds that running these simulations every time a change is made drastically slows down the AI process, so MechStyle is designed to know when and where to do additional structural analyses. “MechStyle’s adaptive scheduling strategy keeps track of what changes are happening in specific points in the model. When the genAI system makes tweaks that endanger certain regions of the model, our approach simulates the physics of the design again. MechStyle will make subsequent modifications to make sure the model doesn’t break after fabrication.”
Combining the FEA process with adaptive scheduling allowed MechStyle to generate objects that were structurally viable at rates as high as 100 percent. Testing 30 different 3D models with styles resembling things like bricks, stones, and cacti, the team found that the most efficient way to create structurally viable objects was to dynamically identify weak regions and adjust the generative AI process to mitigate their effect. In these scenarios, the researchers found that they could either stop stylization completely when a particular stress threshold was reached, or gradually make smaller refinements to keep at-risk areas from approaching that mark.
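The adaptive scheduling described above can be summarized as a loop that interleaves stylization steps with occasional stress checks. The following is a simplified sketch of that idea, not the released MechStyle code: the stylization, simulation, and change-detection functions are hypothetical stand-ins, and the stress threshold is a placeholder.

# Simplified sketch of FEA-guided stylization with adaptive scheduling.
# stylize_step(), run_fea(), and regions_changed_significantly() are
# hypothetical stand-ins for the generative model update, the finite element
# analysis, and the change tracking described in the article.

STRESS_THRESHOLD = 1.0  # placeholder: stress ratio at which a region may fail

def stylize_step(mesh, prompt, strength):
    """Hypothetical: apply one generative-AI geometry update of given strength."""
    raise NotImplementedError

def run_fea(mesh, load_case):
    """Hypothetical: return per-region stress ratios under a realistic load."""
    raise NotImplementedError

def regions_changed_significantly(mesh, reference_mesh):
    """Hypothetical: report whether recent edits touched structurally risky areas."""
    raise NotImplementedError

def stylize_with_fea(mesh, prompt, load_case, steps=100):
    last_checked = mesh
    strength = 1.0
    for _ in range(steps):
        candidate = stylize_step(mesh, prompt, strength)
        # Rerun the expensive simulation only when edits touch risky regions.
        if regions_changed_significantly(candidate, last_checked):
            stress = run_fea(candidate, load_case)
            if max(stress.values()) >= STRESS_THRESHOLD:
                # Reject this step; either stop stylizing or retry more gently.
                strength *= 0.5
                if strength < 0.05:
                    break
                continue
            last_checked = candidate
        mesh = candidate
    return mesh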
The system also offers two different modes: a freestyle feature that allows AI to quickly visualize different styles on your 3D model, and a MechStyle one that carefully analyzes the structural impacts of your tweaks. You can explore different ideas, then try the MechStyle mode to see how those artistic flourishes will affect the durability of particular regions of the model.
CSAIL researchers add that while their system can ensure your model remains structurally sound before being 3D printed, it’s not yet able to improve 3D models that weren’t viable to begin with. If you upload such a file to MechStyle, you’ll receive an error message, but Faruqi and his colleagues intend to improve the durability of those faulty models in the future.
What’s more, the team hopes to use generative AI to create 3D models for users, instead of stylizing presets and user-uploaded designs. This would make the system even more user-friendly, so that those who are less familiar with 3D models, or can’t find their design online, can simply generate it from scratch. Let’s say you wanted to fabricate a unique type of bowl, and that 3D model wasn’t available in a repository; AI could create it for you instead.
“While style-transfer for 2D images works incredibly well, not many works have explored how this transfers to 3D,” says Google Research Scientist Fabian Manhardt, who wasn’t involved in the paper. “Essentially, 3D is a much more difficult task, as training data is scarce and changing the object’s geometry can harm its structure, rendering it unusable in the real world. MechStyle helps solve this problem, allowing for 3D stylization without breaking the object’s structural integrity via simulation. This gives people the power to be creative and better express themselves through products that are tailored towards them.”
Faruqi wrote the paper with senior author Stefanie Mueller, who is an MIT associate professor and CSAIL principal investigator, and two other CSAIL colleagues: researcher Leandra Tejedor SM ’24 and postdoc Jiaji Li. Their co-authors are Amira Abdel-Rahman PhD ’25, now an assistant professor at Cornell University; Martin Nisser SM ’19, PhD ’24; Google researcher Vrushank Phadnis; Stability AI Vice President of Research Varun Jampani; MIT Professor and Center for Bits and Atoms Director Neil Gershenfeld; and Northeastern University Assistant Professor Megan Hofmann.
Their work was supported by the MIT-Google Program for Computing Innovation. It was presented at the Association for Computing Machinery’s Symposium on Computational Fabrication in November.
Feeding innovation to solve complex urban problems
The Mexico City Initiative at MIT, led by the Institute’s Norman B. Leventhal Center for Advanced Urbanism (LCAU), has conceived and modeled an impressive array of solutions for challenges facing urban areas in Mexico and beyond. Faculty and students have designed the repurposing of a vintage roller coaster as a public meeting space, modeled strategies to decarbonize a municipal neighborhood, and proposed plans to convert nearly 990 acres of what was once Latin America’s largest landfill into a model of ecological restoration and clean energy production. The initiative has also spawned a sustainable construction startup that’s contributing to local economies in both Mexico and the United States.
When asked what’s most impactful about their work, however, those leading and collaborating with the LCAU’s Mexico City Initiative point to something else: the cross-border human connections they say are essential to continuing the ideation, development, and implementation of projects designed for Mexico City, but likely to be scalable and beneficial in urban centers around the world.
“To really create change in cities, we need to build relationships, friendships, and new networks. And through building them together, we can go so much further,” says Sarah Williams, director of the LCAU, which leads the initiative in collaboration with the National Autonomous University of Mexico (UNAM), the Mexico City government, and the engineering firm Mota-Engil Mexico.
“I think one of the big things we’re proud of is there have been a lot of personal connections created between MIT and UNAM, and I think research collaboration will result from these connections,” says Onésimo Flores PhD ’13, director general of Mota-Engil Mexico’s transportation mobility division. “I think what we have contributed to building is deepening collaboration.”
UNAM associate professor of architecture Elena Tudela agrees, noting that “beyond the projects themselves, we have developed a genuine friendship that I hope will continue long after this specific collaboration ends.”
“What I personally value most from these years of collaboration on Mexico City’s energy transition is the set of relationships we have built — with researchers, professors and especially the team at the LCAU,” says Tudela, an initiative collaborator. “For local students, the impact has been even more profound. It built bonds that transcend the workshop’s objectives, contributing to a deeper understanding of design as a collaborative, multidisciplinary practice.”
Williams credits Flores with helping to obtain Mota-Engil’s crucial financial support for the LCAU’s Mexico City Initiative. An MIT alumnus who earned his PhD in urban studies and planning in 2013 with Mota-Engil scholarship aid, Flores says the company’s support is meant to accomplish three goals: connect Mexican researchers with MIT, get Mexican students involved in MIT programs, and stimulate interest in projects relevant to cities like Mexico City among MIT faculty.
“If you can find urban solutions for a city as complex as Mexico City, you can probably figure it out for any city in the world, particularly in the Global South,” he says.
Over the past three years, faculty and students from MIT and UNAM have worked on projects centered on energy transition. Project teams, collaborators, interested local officials, business leaders, and others gathered for a recent symposium showcasing the progress made on the Mexico City Initiative’s projects so far.
Held in Mexico City last fall and featuring presentations by several MIT faculty, the “Energy Transitions” symposium was hosted by the LCAU, UNAM, and Mota-Engil Mexico. Its purpose “was to make sure the research effort that was done together was presented to the public and private sectors — groups that might be able to take the research to the next level,” says Williams, an MIT associate professor of technology and urban planning.
“The lecture series was exciting because we saw an interest in extending all the projects. I also think the conversations and ideas that were had in the room spark the kind of civic debate needed to transform our cities,” Williams says.
Established in 2013, the LCAU does work that cuts across diverse research fields to create innovation in cities.
“There’s not one field that can transform our future cities — innovation happens when we cross disciplines,” says Williams, who became LCAU director four years ago and has since focused the center’s mission on building and maintaining long-term relationships with cities through “City Initiatives.”
Other City Initiatives have included collaborations in Boston, as well as Sydney, Australia; Beirut, Lebanon; Bogota, Colombia; and Pristina, Kosovo. Mexico City was among the first initiatives and is the LCAU’s longest-standing program. Activities have included several classes held between MIT and Mexico City, a public exhibition, a hackathon with MITdesignX, and numerous joint research projects.
Williams describes it as “a fantastic relationship,” which began with development of a strategic plan for a Mexico City Innovation Lab, leading to a decision to focus the initiative on themes playing out over the course of about two years. The current theme is Energy Intersections, which looks at the role design plays in transitioning to cleaner energy infrastructure.
“This came from the group seeing that Mexico wanted to be a player in the global manufacturing marketplace and one of the barriers was how heavily polluted their energy infrastructure was,” Williams says.
“The LCAU was founded for this idea that the work and research that we do about cities should be experimental, but also framed within contemporary policies and politics,” she says, adding that the team had considered other possible themes — from water and emergency planning to housing — but “as we started to think about energy, it just became so clearly important.”
Attracting about 70 attendees from Mexico City’s academic, government, and private sectors, the symposium was convened to enable MIT and UNAM researchers to share findings and discuss paths forward for several projects. Featured projects included:
- Redesigning Vallejo-I — aimed at transforming Mexico City’s Vallejo Industrial Zone into a revitalized hub for industry, transportation, and housing;
- Decarbonize and Revitalize: Urban Regeneration for Mexico City’s Neighborhoods — which envisions ways for energy, equity, and design to regenerate Mexico City neighborhoods, using the Daniel Garza neighborhood as a model; and
- Bordo Poniente: Territories of Industrial and Ecological Metabolism — which presents strategies for reinventing what was once the world’s third-largest solid waste landfill (Bordo Poniente).
Leading the Bordo Poniente panel was project leader Eran Ben-Joseph, professor of landscape architecture and urban planning at MIT. Developed with UNAM and Mota-Engil partners, the project involved 12 MIT School of Architecture and Planning graduate students working across disciplines to address four integrated objectives: converting waste into public value, advancing energy transition (through methane/leachate capture), promoting equity and environmental justice for neighboring communities, and generating actionable policy recommendations, Ben-Joseph says.
“This collaborative effort exemplifies how international courses can combine rigorous fieldwork, interdisciplinary expertise, and community engagement to reimagine a toxic site as a model of urban regeneration and ecological repair,” he says, adding that the project “reflects MIT’s commitments to climate action, urban innovation, and applied systems thinking.” With over 100,000 landfills worldwide, he says, “a replicable ‘Bordo Model’ positions MIT as a global leader in transformation of waste landscapes into energy, ecological, and civic assets.”
In a similar vein, the Vallejo project reimagines urban industrial blocks as engines of clean energy generation, water resilience, and sustainable mobility. Led by MIT Department of Architecture Lecturer Roi Salgueiro Barrio and moderated by UNAM associate professor of architecture and project collaborator Daniel Daou, the symposium’s Redesigning Vallejo panel discussed how the project establishes an actionable framework for energy and industrial transition that can inspire and guide the revival of other industrial areas.
Finally, MIT professor of architecture and urbanism and project leader Rafi Segal presented the team’s Daniel Garza neighborhood case study, which highlighted two replicable urban planning and community clean energy project designs resulting from work by MIT and UNAM researchers.
“The most impactful aspect of ‘Decarbonize and Revitalize’ is its ability to merge energy transition with urban regeneration at the neighborhood scale. The project does not fit neatly into a single disciplinary category; it operates at the intersection of energy, design, and social infrastructure,” says Daniela Martinez Chapa, a former MIT student and an architect and urban designer who served as research assistant on the MIT team. “The project exemplifies MIT’s commitment to collaborative, context-specific innovation,” she adds.
Like others involved with the Mexico City Initiative, UNAM’s Tudela pointed out how working across disciplines, institutions, and borders has benefited both UNAM and MIT.
“MIT brings cutting-edge tools and methodologies in fields such as energy and urban data science, while UNAM contributes deep local expertise, strong social perspectives, and long-standing engagement with communities,” Tudela says. “This combination has produced highly creative, context-sensitive outcomes.”
As for next steps, Williams is hopeful that conversations started at this fall’s symposium might push the team’s research into the local limelight, helping them go from research and strategies to on-the-ground reality. She pointed to the success of an earlier LCAU Mexico City project as an example of what can happen when the right ideas and stakeholders coalesce.
For the 2022 Mextropoli Architecture and City Festival in Mexico City, an MIT team presented “Sueños con Fiber/Timber, Earth/Concrete.”
“As part of that project, we took a decommissioned roller coaster and reused it as a public forum space. And so that was talking about reuse of wood and making sure that building materials are reused in unique ways,” Williams says.
Adjacent to the repurposed roller coaster, Caitlin Mueller, an associate professor in MIT’s departments of Architecture and Civil and Environmental Engineering, built a structure made of 3D printed bricks that capture the traditional style of Mexican construction, but with a fraction of the carbon footprint. Mueller has since taken the Sueños project further, co-founding a design and technology company (Forma Systems) focused on expanding access to high-quality, low-carbon affordable housing and building systems by reimagining widely available materials such as concrete and earth.
“Caitlin’s project with the bricks is just such a good example of what the Cities Initiative can do. We seeded collaborative research, and now there’s a startup based off the idea, and they are continuing to do the work,” Williams says. “I think that’s the idea — we help to fund research that combines deep local knowledge and MIT’s innovation environment to help inspire new ideas and technologies for cities.
“I would hope these new projects just presented in Mexico would have a similar trajectory,” she says. “The future is open.”
Michael Moody: Impacting MIT through leadership in auditing
Michael J. Moody, who has served as Institute auditor since 2014, will retire from MIT in October, following a career in internal and external audit spanning 40 years.
Executive Vice President and Treasurer Glen Shor announced the news today in a letter to MIT’s Academic Council.
“I have greatly appreciated Mike’s rigorous and collaborative approach to auditing and advising on the Institute’s policies and processes,” Shor wrote. “He has helped MIT accomplish far-reaching ambitions while adhering to best practices in administering programs and services.”
As Institute auditor, Moody oversees a division that conducts financial, operational, compliance, and technology reviews across MIT. He leads a team of internal auditors who serve as trusted advisors to administrative leadership and members of the MIT Corporation, assessing operations and making recommendations to control risks, improve processes, and enhance decision-making.
The MIT Audit Division maintains a dual reporting structure to ensure its independence. Moody and his team work for the MIT Corporation Risk and Audit Committee but receive administrative support from the MIT Office of the Executive Vice President and Treasurer.
“Mike is highly principled and rigorous with detail, earning our committee’s trust,” says Pat Callahan, chair of the Risk and Audit Committee. “The committee runs like clockwork because of Mike’s dedication and skill.”
Moody has guided the Audit Division through a transformative period, spearheading several impactful initiatives throughout his tenure. He advanced the approval of the first-ever Audit Division Charter to codify the unit’s independence and objectivity and to articulate its mandates for accountability and oversight, and he implemented a new process to distribute audit reports to all senior administrative officers as a best practice. He also initiated the Institute’s inaugural external quality assurance review, for which MIT received the highest rating. Moody has continued the practice of externally auditing the division.
Having a particular interest in leveraging analytics and data to improve workflows and inform assessments, Moody added a data analyst to his team in 2016. The team also sponsors the cross-Institute Data Analysts and Data Scientists (DADS) group, which seeks to foster collaboration while advancing analytics and data practices at an Institute level.
More recently, Moody helped establish the MIT AI Cohort to advance artificial intelligence solutions across the Institute while minimizing associated risks. The group, launched in November 2025, includes representatives from MIT Sloan School of Management, the Koch Institute for Integrative Cancer Research, the School of Engineering, MIT Libraries, the Office of the Vice President for Research, the Division of Graduate and Undergraduate Education, and MIT Health, among others.
A key aspect of Moody's work — and one that has been especially meaningful to him — is helping the MIT community understand the Audit Division's mission and role in furthering the Institute’s positive impact. To facilitate this, he instilled in his team a set of core values that emphasizes professionalism, objectivity, pragmatism, openness, and willingness to listen, and has presented it as a model for peer institutions. In this vein, he has focused on building relationships with the community to identify the right opportunities for improvement in MIT’s operations and ensure that the Audit Division’s feedback is constructively delivered and received.
“Mike has been an invaluable partner,” says Suzy Nelson, MIT vice chancellor for student life. “Over the years, his collaborative and knowledgeable approach has helped us improve so many areas — from student organization event management to our business practices to enhancing our student support services. Mike has listened carefully to students’ needs and offered guidance aligned with the goals of the program and student safety.”
Before joining MIT, Moody served in audit and compliance roles at Northwestern University, the University of Illinois at Chicago, and the state of Illinois. At the public accounting firm Coopers & Lybrand (now PricewaterhouseCoopers LLP), he managed and performed information technology audits and served as a financial and technology consultant for clients in a variety of industries. Moody has also held numerous volunteer and elected leadership positions in international, national, and local professional audit associations. He holds certified internal auditor and certified information systems auditor designations, along with a certification in risk management assurance.
“In reflecting on my time here, I’m most proud of assembling a team that has made positive changes to how MIT operates,” says Moody. “It’s been very rewarding having leaders, staff, and researchers reach out for advice and assistance. It's a testament to the strong relationships we've built across the Institute.”
Shor and Callahan will soon formally launch a search for Institute auditor, and expect to identify Moody’s successor during the fall 2026 semester.
Chemists determine the structure of the fuzzy coat that surrounds Tau proteins
One of the hallmarks of Alzheimer’s disease is the clumping of proteins called Tau, which form tangled fibrils in the brain. The more severe the clumping, the more advanced the disease is.
The Tau protein, which has also been linked to many other neurodegenerative diseases, is unstructured in its normal state, but in the pathological state it consists of a well-ordered rigid core surrounded by floppy segments. These disordered segments form a “fuzzy coat” that helps determine how Tau interacts with other molecules.
MIT chemists have now shown, for the first time, that they can use nuclear magnetic resonance (NMR) spectroscopy to decipher the structure of this fuzzy coat. They hope their findings will aid efforts to develop drugs that interfere with Tau buildup in the brain.
“If you want to disaggregate these Tau fibrils with small-molecule drugs, then these drugs have to penetrate this fuzzy coat,” says Mei Hong, an MIT professor of chemistry and the senior author of the new study. “That would be an important future endeavor.”
MIT graduate student Jia Yi Zhang is the lead author of the paper, which appears today in the Journal of the American Chemical Society. Former MIT postdoc Aurelio Dregni is also an author of the paper.
Analyzing the fuzzy coat
In a healthy brain, Tau proteins help to stabilize microtubules, which give cells their structure. However, when Tau proteins become misfolded or otherwise altered, they form clumps that contribute to neurodegenerative diseases such as Alzheimer’s and frontotemporal dementia.
Determining the structure of the Tau tangles has been difficult because so much of the protein — about 80 percent — is found in the fuzzy coat, which tends to be highly disordered.
This fuzzy coat surrounds a rigid inner core that is made from folded protein strands known as beta sheets. Hong and her colleagues have previously analyzed the structure of the core in a particular Tau fibril using NMR, which can reveal the structures of molecules by measuring the magnetic properties of atomic nuclei within the molecules.
Until now, most researchers had overlooked Tau’s fuzzy coat and focused on the rigid core of the fibrils because those disordered segments change their structures so often that standard structure characterization techniques such as cryoelectron microscopy and X-ray crystallography can’t capture them.
However, in the new study, the researchers developed NMR techniques that allowed them to study the entire Tau protein. In one experiment, they were able to magnetize protons within the most rigid amino acids, then measure how long it took for the magnetization to be transferred to the mobile amino acids. This allowed them to track the magnetization as it traveled from rigid regions to floppy segments, and vice versa.
Using this approach, the researchers could estimate the proximity between the rigid and mobile segments. They complemented this experiment by measuring the different degrees of movement of the amino acids in the fuzzy coat.
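A common way to quantify magnetization-transfer experiments like these is to fit the buildup of transferred signal to a simple exponential, M(t) = M_eq(1 − exp(−t/τ)), where a shorter time constant τ indicates closer contact between the rigid and mobile segments. The sketch below fits that model to made-up data for illustration; it is not the analysis used in the paper.

# Illustrative fit of a magnetization-transfer buildup curve to
# M(t) = M_eq * (1 - exp(-t / tau)); a shorter tau suggests the mobile
# segment sits closer to the rigid core. The data below are made up.
import numpy as np
from scipy.optimize import curve_fit

def buildup(t_ms, m_eq, tau_ms):
    return m_eq * (1.0 - np.exp(-t_ms / tau_ms))

# Hypothetical mixing times (ms) and normalized transferred intensities.
t_ms = np.array([1, 5, 10, 25, 50, 100, 200], dtype=float)
intensity = np.array([0.05, 0.21, 0.37, 0.63, 0.82, 0.94, 0.99])

(m_eq, tau_ms), _ = curve_fit(buildup, t_ms, intensity, p0=(1.0, 30.0))
print(f"Fitted buildup time constant: {tau_ms:.1f} ms (plateau {m_eq:.2f})")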
“We have now developed an NMR-based technology to examine the fuzzy coat of a full-length Tau fibril, allowing us to capture both the dynamic regions and the rigid core,” Hong says.
Protein dynamics
For this particular fibril, the researchers showed that the overall structure of the Tau protein, which contains about 10 different domains, somewhat resembles a burrito, with several layers of the fuzzy coat wrapped around the rigid core.
Based on their measurements of protein dynamics, the researchers found that these segments fell into three categories. The rigid core of the fibril was surrounded by protein regions with intermediate mobility, whereas the most dynamic segments were found in the outermost layer.
The most dynamic segments of the fuzzy coat are rich in the amino acid proline. In the protein sequence, these prolines are near the amino acids that form the rigid core, and were previously thought to be partially immobilized. Instead, they are highly mobile, indicating that these positively charged proline-rich regions are repelled by the positive charges of the amino acids that form the rigid core.
This structural model gives insight into how Tau proteins form tangles in the brain, Hong says. Similar to how prions trigger healthy proteins to misfold in the brain, it is believed that misfolded Tau proteins latch onto normal Tau proteins and act as a template that induces them to adopt the abnormal structure.
In principle, these normal Tau proteins could add to the ends of existing short filaments or pile onto the sides. The fact that the fuzzy coat wraps around the rigid core indicates that normal Tau proteins more likely add onto the ends of the filaments to generate longer fibrils.
The researchers now plan to explore whether they can stimulate normal Tau proteins to assemble into the type of fibrils seen in Alzheimer’s disease, using misfolded Tau proteins from Alzheimer’s patients as a template.
The research was funded by the National Institutes of Health.
The “delicious joy” of creating and recreating music
As a graduate student, Leslie Tilley spent years studying and practicing the music of Bali, Indonesia, including a traditional technique in which two Balinese drummers play intricately interlocking rhythms while simultaneously improvising. It was beautiful and compelling music, about which Tilley heard an unexpected insight one day.
“The higher drum is the bus driver, and the lower drum is the person who puts the bags on the top of the bus,” a Balinese musician told Tilley.
Today, Tilley is an MIT faculty member who works as both an ethnomusicologist, studying music in its cultural settings, and a music theorist, analyzing its formal principles. The tools of music theory have long been applied to, say, Bach, and rather less often to Balinese drumming. But one of Tilley’s interests is building music theory across boundaries. As she recognized, the drummer’s bus driver analogy is a piece of theory.
“That doesn’t feel like the music theory I had learned, but that is 100 percent music theory,” Tilley said. “What is the relationship between the drummers? The higher drum has to stick to a smaller subset of rhythms so that the lower drum has more freedom to improvise around. Putting it that way is just a different music-theoretical language.”
Tilley’s anecdote touches on many aspects of her career: Her work ranges widely, while linking theory, practice, and learning. Her studies in Bali became the basis for an award-winning book, which uses Balinese music as a case study for a more generalized framework about collective improvisation, one that can apply to any type of music.
Currently, Tilley is engaged in another major project, supported by a multiyear, $500,000 Mellon Foundation grant, to develop a reimagined music theory curriculum. That project aims to produce an alternative four-semester open access music theory curriculum with a broader scope than many existing course materials, to be accompanied by a new audio-visual textbook. The effort includes a major conference later this year that Tilley is organizing, and is designed as a collaborative project; she will work with other scholars on the curriculum and textbook, with 2028 as a completion date.
If that weren’t enough, Tilley is also working on a new book about the phenomenon of cover songs in modern pop music, from the 1950s onward. Here too, Tilley is combining careful cultural analysis of select popular artists and their work, along with a formal examination of the musical choices they have made while developing cover versions of songs.
All told, understanding how music works within a culture, while understanding the inner workings of music, can deliver us new insights — about music, performers, and audiences.
“What I am focused on fundamentally is how musicians take a musical thing and make something new out of it,” Tilley says. “And then how listeners react to that thing. What is happening here musically? And can that explain the human reaction to it, which is messy and subjective?”
Across all these projects, Tilley has been a consistently innovative scholar who reshapes existing genres of work. For her research and teaching, Tilley has received tenure and is now an associate professor in MIT’s Music and Theater Arts Program.
The joy of collective improv
Both of Tilley’s parents were musicians, but “they never had any intention for their kids to go into music,” says Tilley, a native of Halifax, Nova Scotia. Growing up, she studied piano, violin, and French horn for years; played in a symphony orchestra, brass band, and concert bands; sang in choirs; and performed in musicals. Ultimately she realized she could make a career out of music as well.
“In 12th grade I suddenly realized, music is what I do. Music is who I am. Music is what I love,” Tilley says. Back then, she pictured herself being an opera singer. Subsequently, as she recalls, “Somewhere along the way, I steered myself into music scholarship.”
Tilley received her bachelor of music degree from Acadia University in Nova Scotia, and then conducted her graduate studies in music at the University of British Columbia, where she earned an MA and PhD. It was in graduate school that Tilley began studying the music of Bali — on campus and during extended periods of field research.
Studying Balinese music was “mildly accidental,” Tilley says, calling it “a little bit of happy happenstance. Encountering these musical traditions exploded the way I thought about music and ways of understanding the interactions of musicians.”
In her research, Tilley looked intensively at two distinct improvised Balinese musical practices: the four-person melodic gong technique “reyong norot” and the two-person drumming practice “kendang arja.” Both are featured in her 2019 book, “Making It Up Together: The Art of Collective Improvisation in Balinese Music and Beyond.” Published by the University of Chicago Press, it won the 2022 Emerging Scholar Award from the Society for Music Theory.
Grounded in empirical evidence, the book proposes a novel, universal framework for understanding the components of collective improvisation. That includes both the more strictly musical aspects of improvisation — how much flexibility musicians give themselves to improvise, for instance — as well as the forms of interaction musicians have with their co-performers.
“My book is about collective improvisation and what it means,” Tilley says. “What is the give and take of that process, and how can we analyze that? There are lots of scholars who have discussed collective improvisation as it exists in jazz. The delicious joy of collective improvisation is something anybody who improvises in a musical group will talk about. My book looks at examples, especially the case studies I have from Bali, and then creates bigger analytical frameworks, so there can finally be an umbrella way of looking at this phenomenon across music cultures and practices.”
Despite her years of immersing herself in the music, and playing it, Tilley says, “I am a beginner in comparison to the drummers I studied with, who have been playing forever and played with other masters their whole lives, and were generous enough to allow me to learn from them.” Still, she thinks the experience of playing music while studying it is indispensable.
“Ethnomusicology is a field that takes a bit from other fields,” Tilley notes. “The idea of participant observation, we borrow that from anthropology, and the idea of close musical analysis is from musicology or music theory. It’s an in-between way of thinking about music where I get to both participate and observe. But also I’m a music analysis nerd: What’s happening in the notes? Looking at music note-by-note, but from a place of physical embodiment, provides a better understanding than if I had just looked at the notes.”
Expanding instruction
At present, Tilley is devoting significant effort to her music-theory curriculum work, which is funded by the Mellon Foundation as a three-year effort. The upcoming summer conference she is organizing, also supported by the Mellon Foundation, will be a key part of the project, allowing a wide range of scholars to air perspectives about reimagining music theory studies in the 21st century.
Substantively, the idea is to broaden the scope of music theory instruction. Often, Tilley says, “music theory is learning how to understand the musical structures that are essentially between Bach and early Beethoven, that kind of narrow range of a couple hundred years, really amazing musical systems with a very deep, written-down music theory. But that accepted canon leaves out so many other kinds of music and ways of knowing.” Instead, she adds, “If we were not beholden to any assumptions about what we should have in a music program, what skills would we want our students to walk away from four semesters of music theory with?”
About the conference, Tilley quips: “Sitting in a room and nerding out with a bunch of people who care deeply about a thing you care about, which in my case is music, music theory, and pedagogy, is possibly the coolest thing you can do with your time. Hopefully something wonderful comes out of it.”
As Tilley views it, her current book project on pop music cover songs stems from some of the same issues that have long animated her thinking: How do artists fashion their work out of existing knowledge?
“The project on cover songs is similar to the project on collective improvisation in Bali,” Tilley says, in the sense that when it comes to improvisation, “I have a bank of things I know, in my head and in my body about this musical practice, and within that context I can create something that is new and mine, based on something that exists already.”
She adds: “Cover songs to me are the same, but different. The same in that it’s a musical transformation, but different because a pop song doesn’t just have lyrics, melody, and chords, but the vocal quality, the arrangement, the brand of the performer, and so much more. What we think about in popular music isn’t just the song, it’s the person singing it, the social and political contexts, and the listener’s personal relationships to all those things, and they’re so wrapped up together we almost can’t disentangle them.”
As with her earlier work, Tilley is not just examining individual pieces of music, but building a larger analytical model in the process — one that factors in the formal musical changes artists make as well as the cultural components of the phenomenon, to understand why cover songs can produce strong and varying reactions among listeners.
In the process, Tilley has been presenting conference papers and invited talks on the topic for a number of years now. One case that interests Tilley is the singer-songwriter Tori Amos, whose many cover versions transform the viewpoint, music, and meaning of songs by artists from Eminem to Nirvana, and more. There may also be some Taylor Swift content in the next book, although with thousands and thousands of songs to choose from in the pop-rock era, there could be something for everyone — fitting Tilley’s ethos of studying music broadly, across time and space as it is created, recreated, and recreated again.
“This is why music is infinitely cool,” Tilley says. “It’s so malleable, and so open to interpretation.”
A protein found in the GI tract can neutralize many bacteria
The mucosal surfaces that line the body are embedded with defensive molecules that help keep microbes from causing inflammation and infections. Among these molecules are lectins — proteins that recognize microbes and other cells by binding to sugars found on cell surfaces.
One of these lectins, MIT researchers have found, has broad-spectrum antimicrobial activity against bacteria found in the GI tract. This lectin, known as intelectin-2, binds to sugar molecules found on bacterial membranes, trapping the bacteria and hindering their growth. Additionally, it can crosslink molecules that make up mucus, helping to strengthen the mucus barrier.
“What’s remarkable is that intelectin-2 operates in two complementary ways. It helps stabilize the mucus layer, and if that barrier is compromised, it can directly neutralize or restrain bacteria that begin to escape,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and the senior author of the study.
This kind of broad-spectrum antimicrobial activity could make intelectin-2 useful as a potential therapeutic, the researchers say. It could also be harnessed to help strengthen the mucus barrier in patients with disorders such as inflammatory bowel disease.
Amanda Dugan, a former MIT research scientist, and Deepsing Syangtan PhD ’24 are the lead authors of the paper, which appears today in Nature Communications.
A multifunctional protein
Current evidence suggests that the human genome encodes more than 200 lectins — carbohydrate-binding proteins that play a variety of roles in the immune system and in communication between cells. Kiessling’s lab, which has been exploring lectin-carbohydrate interactions, recently became interested in a family of lectins called intelectins. In humans, this family includes two lectins, intelectin-1 and intelectin-2.
Those two proteins have very similar structures, but intelectin-1 is distinctive in that it only binds to carbohydrates found in bacteria and other microbes. About 10 years ago, Kiessling and her colleagues were able to discover intelectin-1’s structure, but its functions are still not fully understood.
At that time, scientists hypothesized that intelectin-2 might play a role in immune defense, but there hadn’t been many studies to support that idea. Dugan, then a postdoc in Kiessling’s lab, set out to learn more about intelectin-2.
In humans, intelectin-2 is produced at steady levels by Paneth cells in the small intestine, but in mice, its expression from mucus-producing goblet cells appears to be triggered by inflammation and certain types of parasitic infection.
In the new study, the researchers found that both human and mouse intelectin-2 bind to a sugar molecule called galactose. This sugar is commonly found in molecules called mucins that make up mucus. When intelectin-2 binds to these mucins, it helps to strengthen the mucus barrier, the researchers found.
Galactose is also found in carbohydrates displayed on the surfaces of some bacterial cells. The researchers showed that intelectin-2 can bind to microbes that display these sugars, including many pathogens that cause GI infections.
The researchers also found that over time, these trapped microbes eventually disintegrate, suggesting that the protein is able to kill them by disrupting their cell membranes. This antimicrobial activity appears to affect a wide range of bacteria, including some that are resistant to traditional antibiotics.
These dual functions help to protect the lining of the GI tract from infection, the researchers believe.
“Intelectin-2 first reinforces the mucus barrier itself, and then if that barrier is breached, it can control the bacteria and restrict their growth,” Kiessling says.
Fighting off infection
In patients with inflammatory bowel disease, intelectin-2 levels can become abnormally high or low. Low levels could contribute to degradation of the mucus barrier, while high levels could kill off too many beneficial bacteria that normally live in the gut. Finding ways to restore the correct levels of intelectin-2 could be beneficial for those patients, the researchers say.
“Our findings show just how critical it is to stabilize the mucus barrier. Looking ahead, we can imagine exploiting lectin properties to design proteins that actively reinforce that protective layer,” Kiessling says.
Because intelectin-2 can neutralize or eliminate pathogens such as Staphylococcus aureus and Klebsiella pneumoniae, which are often difficult to treat with antibiotics, it could potentially be adapted as an antimicrobial agent.
“Harnessing human lectins as tools to combat antimicrobial resistance opens up a fundamentally new strategy that draws on our own innate immune defenses,” Kiessling says. “Taking advantage of proteins that the body already uses to protect itself against pathogens is compelling and a direction that we are pursuing.”
The research was funded by the National Institutes of Health Glycoscience Common Fund, the National Institute of Allergy and Infectious Disease, the National Institute of General Medical Sciences, and the National Science Foundation.
Other authors who contributed to the study include Charles Bevins, a professor of medical microbiology and immunology at the University of California at Davis School of Medicine; Ramnik Xavier, a professor of medicine at Harvard Medical School and the Broad Institute of MIT and Harvard; and Katharina Ribbeck, the Andrew and Erna Viterbi Professor of Biological Engineering at MIT.
Understanding ammonia energy’s tradeoffs around the world
Many people are optimistic about ammonia’s potential as an energy source and carrier of hydrogen, and though large-scale adoption would require major changes to the way it is currently manufactured, ammonia does have a number of advantages. For one thing, ammonia is energy-dense and carbon-free. It is also already produced at scale and shipped around the world, primarily for use in fertilizer.
Though current manufacturing processes give ammonia an enormous carbon footprint, cleaner ways to make ammonia do exist. A better understanding of how to guide the ammonia fuel industry’s continued development could help cut carbon emissions, lower energy costs, and improve regional energy balances.
In a new paper, MIT Energy Initiative (MITEI) researchers created the largest combined dataset showing the economic and environmental impact of global ammonia supply chains under different scenarios. They examined potential ammonia flows across 63 countries and considered a variety of country-specific economic parameters as well as low- and no-carbon ammonia production technologies. The results should help researchers, policymakers, and industry stakeholders calculate the cost and lifecycle emissions of different ammonia production technologies and trade routes.
“This is the most comprehensive work on the global ammonia landscape,” says senior author Guiyan Zang, a research scientist at MITEI. “We developed many of these frameworks at MIT to be able to make better cost-benefit analyses. Hydrogen and ammonia are the only two types of fuel with no carbon at scale. If we want to use fuel to generate power and heat, but not release carbon, hydrogen and ammonia are the only options, and ammonia is easier to transport and lower-cost.”
The study provides the clearest view yet of the tradeoffs associated with various ammonia production technologies. The researchers found, for instance, that a full transition to ammonia produced using conventional processes paired with carbon capture could cut greenhouse gas emissions from the global ammonia supply chain by nearly 71 percent for a 23.2 percent cost increase. A transition to electrolyzed ammonia produced using renewable energy could reduce those emissions by 99.7 percent for a 46 percent cost increase.
“Before this, there were no harmonized datasets quantifying the impacts of this transition,” says lead author Woojae Shin, a postdoc at MITEI. “Everyone is talking about ammonia as a super important hydrogen carrier in the future, and also ammonia can be directly used in power generation or fertilizer and other industrial uses. But we needed this dataset. It’s filling a major knowledge gap.”
The paper appears in Energy and Environmental Science. Former MITEI postdocs Haoxiang Lai and Gasim Ibrahim are also co-authors.
Filling a data gap
Today ammonia is mainly produced through the Haber-Bosch process, which in 2020 was responsible for about 1.8 percent of global greenhouse gas emissions. Although current ammonia production is energy-intensive and polluting (referred to as gray ammonia), ammonia can also be produced sustainably using renewable sources (green ammonia) or with natural gas and carbon sequestration (blue ammonia).
As ammonia has increasingly attracted interest as a carbon-free energy source and a medium for hydrogen transport, it’s become more important to quantify the costs and life-cycle emissions associated with various ammonia production technologies, as well as ammonia storage and shipping routes. But existing studies were too narrowly focused.
“The previous studies and datasets were fragmented,” Shin says. “They focused on specific regions or single technologies, like gray ammonia only, or blue ammonia only. They would also only cover the cost or the greenhouse emissions of ammonia in isolation. Finally, they use different scopes and methodologies. It meant you couldn’t make global comparisons or draw definitive conclusions.”
To build their database, the MIT researchers combined data from dozens of studies analyzing specific technologies, regions, economic parameters, and trade flows. They also used frameworks they previously developed to calculate the total cost of ammonia production in each country and estimated lifecycle greenhouse gas emissions across the supply chain, factoring in storage and shipping between different regions.
Emissions calculations included activities related to feedstock extraction, production, transport, and import processing. Major cost factors included each country’s renewable and grid electricity prices, natural gas prices, and location. Other factors like interest rates and equity premiums were also included.
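To make that accounting concrete, the following is a minimal sketch, not the MITEI framework itself, of how a delivered cost and lifecycle emissions figure might be tallied per kilogram of ammonia across supply-chain stages. Every stage name and number is an illustrative placeholder rather than a value from the study; the capital recovery factor simply shows one way financing terms such as interest rates feed into production cost.

# Minimal sketch of supply-chain accounting for delivered ammonia.
# All stage names and numbers are illustrative placeholders, not values
# from the MITEI dataset.

def capital_recovery_factor(rate, years):
    """Annualize capital spending; higher interest rates raise the levelized cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def levelized_production_cost(capex_usd, annual_output_kg, rate, years, opex_usd_per_kg):
    """Levelized cost per kg: annualized capital plus per-kg operating cost."""
    return capital_recovery_factor(rate, years) * capex_usd / annual_output_kg + opex_usd_per_kg

# Hypothetical supply chain: production, storage, shipping, import processing.
# Costs in USD per kg NH3, emissions in kg CO2e per kg NH3.
stages = {
    "production":        {"cost": levelized_production_cost(5e8, 1e9, 0.08, 20, 0.25), "co2e": 0.40},
    "storage":           {"cost": 0.02, "co2e": 0.01},
    "shipping":          {"cost": 0.06, "co2e": 0.05},
    "import_processing": {"cost": 0.03, "co2e": 0.01},
}

delivered_cost = sum(s["cost"] for s in stages.values())
delivered_co2e = sum(s["co2e"] for s in stages.values())
print(f"Delivered: ${delivered_cost:.2f}/kg NH3, {delivered_co2e:.2f} kg CO2e/kg NH3")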
The researchers used their calculations to find ammonia costs and lifecycle emissions across six ammonia production technologies. Under average U.S. conditions, they found the lowest production cost came from a popular form of the Haber-Bosch process known as natural gas steam methane reforming (SMR) without carbon capture and storage (gray ammonia), at 48 cents per kilogram of ammonia. Unfortunately, that economic advantage came with the highest greenhouse gas emissions, at 2.46 kilograms of CO2 equivalent per kilogram of ammonia. In contrast, SMR with carbon capture and storage achieves an approximately 61 percent reduction in emissions while incurring a 29 percent increase in production costs.
Another natural gas-based method, auto-thermal reforming (ATR) with air combustion, cost about 10 percent more than conventional SMR when paired with carbon capture and storage, while generating emissions of 0.75 kilograms of CO2 equivalent per kilogram of ammonia. That makes it a more cost-effective decarbonization option than SMR with carbon capture and storage.
Among production pathways including carbon capture (blue ammonia), a variation of ATR that uses oxygen combustion and carbon capture had the lowest emissions, with a production cost of about 57 cents per kilogram of ammonia. Producing ammonia with electricity generally had higher production costs than the blue ammonia pathways. When nuclear energy powers ammonia production instead of grid electricity, greenhouse gas emissions are near zero, at 0.03 kilograms of CO2 equivalent per kilogram of ammonia.
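As a quick back-of-the-envelope exercise, the pathway figures above can be restated as relative changes against the gray-ammonia baseline. The sketch below uses only numbers reported in this article; entries computed from the stated percentages rather than reported directly are marked as derived, and values not given here are left out.

# Back-of-the-envelope comparison of the ammonia production pathways described
# above. Costs in USD per kg NH3, emissions in kg CO2e per kg NH3. Entries
# marked "derived" are computed from the stated percentage changes; values not
# given in the article are set to None.

BASELINE_COST, BASELINE_CO2E = 0.48, 2.46              # gray ammonia: SMR without CCS

pathways = {
    "SMR + CCS":            {"cost": 0.48 * 1.29,              # derived: 29% cost increase
                             "co2e": 2.46 * (1 - 0.61)},        # derived: ~61% emissions cut
    "ATR (air) + CCS":      {"cost": 0.48 * 1.10,               # derived: 10% above SMR
                             "co2e": 0.75},
    "ATR (oxygen) + CCS":   {"cost": 0.57, "co2e": None},       # lowest blue-ammonia emissions; value not stated
    "Nuclear electrolysis": {"cost": None, "co2e": 0.03},
}

def fmt(delta):
    return "n/a" if delta is None else f"{delta:+.0f}%"

for name, p in pathways.items():
    cost_delta = None if p["cost"] is None else 100 * (p["cost"] / BASELINE_COST - 1)
    co2e_delta = None if p["co2e"] is None else 100 * (p["co2e"] / BASELINE_CO2E - 1)
    print(f"{name:22s} cost {fmt(cost_delta):>5s}   emissions {fmt(co2e_delta):>5s}")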
Across the 63 countries studied, major cost and emissions differences were driven by energy costs, sources of energy for the grid, and financing environments. China emerged as an optimal future supplier of green ammonia to many countries, while the Middle East also offered competitive low-carbon ammonia production pathways. Generally, blue ammonia pathways are most attractive for countries with low-cost natural gas resources, and ammonia made using grid electricity proved more expensive and more carbon-intensive than conventionally produced ammonia.
From data to policy
Low-carbon ammonia use is projected to grow dramatically by 2050, with that ammonia procured via global trade. Japan and Korea, for example, have included ammonia in their national energy strategies and conducted trials using ammonia to generate power. They even offer economic credits for verified CO2 reductions from clean ammonia projects.
“Ammonia researchers, producers, as well as government officials require this data to understand the impact of different technologies and global supply corridors,” Shin says.
The authors also believe industry stakeholders and other researchers will get a lot of value from their database, which allows users to explore the impact of changing specific parameters.
“We collaborate with companies, and they need to know the full costs and lifecycle emissions associated with different options,” Zang says. “Governments can also use this to compare options and set future policies. Any country producing ammonia needs to know which countries they can deliver to economically.”
The research was supported by the MIT Energy Initiative’s Future Energy Systems Center.
This new tool could tell us how consciousness works
Consciousness is famously a “hard problem” of science: We don’t precisely know how the physical matter in our brains translates into thoughts, sensations, and feelings. But an emerging research tool called transcranial focused ultrasound may enable researchers to learn more about the phenomenon.
The technology has entered use in recent years, but it isn’t yet fully integrated into research. Now, two MIT researchers are planning experiments with it, and have published a new paper they term a “roadmap” for using the tool to study consciousness.
“Transcranial focused ultrasound will let you stimulate different parts of the brain in healthy subjects, in ways you just couldn’t before,” says Daniel Freeman, an MIT researcher and co-author of a new paper on the subject. “This is a tool that’s not just useful for medicine or even basic science, but could also help address the hard problem of consciousness. It can probe where in the brain are the neural circuits that generate a sense of pain, a sense of vision, or even something as complex as human thought.”
Transcranial focused ultrasound is noninvasive and reaches deeper into the brain, with greater resolution, than other forms of brain stimulation, such as transcranial magnetic or electrical stimulation.
“There are very few reliable ways of manipulating brain activity that are safe but also work,” says Matthias Michel, an MIT philosopher who studies consciousness and co-authored the new work.
The paper, “Transcranial focused ultrasound for identifying the neural substrate of conscious perception,” is published in Neuroscience and Biobehavioral Reviews. The authors are Freeman, a technical staff member at MIT Lincoln Laboratory; Brian Odegaard, an assistant professor of psychology at the University of Florida; Seung-Schik Yoo, an associate professor of radiology at Brigham and Women’s Hospital and Harvard Medical School; and Michel, an associate professor in MIT’s Department of Linguistics and Philosophy.
Pinpointing causality
Brain research is especially difficult because of the challenge of studying healthy individuals. Apart from neurosurgery, there are very limited ways to gain knowledge of the deepest structures in the human brain. From outside the head, noninvasive approaches like MRI and other kinds of ultrasound yield some imaging information, while the electroencephalogram (EEG) shows electrical activity in the brain. With transcranial focused ultrasound, by contrast, acoustic waves are transmitted through the skull and focused down to a target area of a few millimeters, allowing specific brain structures to be stimulated and the resulting effects to be studied. It could therefore be a productive tool for robust experiments.
“It truly is the first time in history that one can modulate activity deep in the brain, centimeters from the scalp, examining subcortical structures with high spatial resolution,” Freeman says. “There’s a lot of interesting emotional circuits that are deep in the brain, but until now you couldn’t manipulate them outside of the operating room.”
Crucially, the technology may help researchers determine cause-and-effect patterns, precisely because its ultrasound waves modulate brain activity. Many studies of consciousness today measure brain activity in relation to, say, visual stimuli, since visual processing is among the core components of consciousness. But it is not necessarily clear whether the brain activity being measured represents the generation of consciousness or is a mere consequence of consciousness. By manipulating the brain’s activity, researchers can better grasp which processes help constitute consciousness and which are byproducts of it.
“Transcranial focused ultrasound gives us a solution to that problem,” says Michel.
The “roadmap” laid out in the new paper aims to help distinguish between two main conceptions of consciousness. Broadly, the “cognitivist” conception holds that the neural activity that generates conscious experience must involve higher-level mental processes, such as reasoning or self-reflection. These processes link information from many different parts of the brain into a coherent whole, likely using the frontal cortex of the brain.
By contrast, the “non-cognitivist” idea of consciousness takes the position that conscious experience does not require such cognitive machinery; instead, specific patterns of neural activity give rise directly to particular subjective experiences, without the need for sophisticated interpretive processes. In this view, brain activity responsible for consciousness may be more localized, at the back of the cortex or in subcortical structures at the back of the brain.
To use transcranial focused ultrasound productively, the researchers lay out a series of more specific questions that experiments might address: What is the role of the prefrontal cortex in conscious perception? Is perception generated locally, or are brain-wide networks required? If consciousness arises across distant regions of the brain, how are perceptions from those areas linked into one unified experience? And what is the role of subcortical structures in conscious activity?
By modulating brain activity in experiments involving, say, visual stimuli, researchers could draw closer to answers about the brain areas that are necessary in the production of conscious thought. The same goes for studies of, for instance, pain, another core sensation linked with consciousness. We pull our hand back from a hot stove before the pain hits us. But how is the conscious sensation of pain generated, and where in the brain does that happen?
“It’s a basic science question, how is pain generated in the brain,” Freeman says. “And it’s surprising there is such uncertainty … Pain could stem from cortical areas, or it could be deeper brain structures. I’m interested in therapies, but I’m also curious if subcortical structures may play a bigger role than appreciated. It could be the physical manifestation of pain is subcortical. That’s a hypothesis. But now we have a tool to examine it.”
Experiments ahead
Freeman and Michel are not just abstractly charting a course for others to follow; they are planning forthcoming experiments centered on stimulation of the visual cortex, before moving on to higher-level areas in the frontal cortex. While methods of recording brain activity, such as EEG, reveal areas that are visually responsive, the new experiments aim to build a more complete, causal picture of the entire process of visual perception and its associated brain activity.
“It’s one thing to say if these neurons responded electrically. It’s another thing to say if a person saw light,” Freeman says.
Michel, for his part, is also playing an active role in generating further interest in studies of consciousness at MIT. Along with Earl Miller, the Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences, Michel is a co-founder of the MIT Consciousness Club, a cross-disciplinary effort to spur further academic study of consciousness, on campus and at other Boston-area institutions.
The MIT Consciousness Club is supported in part by MITHIC, the MIT Human Insight Collaborative, an initiative backed by the School of Humanities, Arts, and Social Sciences. The program aims to hold monthly events, while grappling with the cutting edge of consciousness research.
At the moment, Michel believes, the cutting edge very much involves transcranial focused ultrasound.
“It’s a new tool, so we don’t really know to what extent it’s going to work,” Michel says. “But I feel there’s low risk and high reward. Why wouldn’t you take this path?”
The research for the paper was supported by the U.S. Department of the Air Force.
Fueling research in nuclear thermal propulsion
Going to the moon was one thing; going to Mars will be quite another. The distance alone is intimidating. While the moon is 238,855 miles away, the distance to Mars is between 33 million and 249 million miles. The propulsion systems that got us to the moon just won’t work.
Taylor Hampson, a master’s student in the Department of Nuclear Science and Engineering (NSE), is well aware of the problem. It’s one of the many reasons he’s excited about his NASA-sponsored research into nuclear thermal propulsion (NTP).
The technique uses nuclear energy to heat a propellant, like hydrogen, to an extremely high temperature and expel it through a nozzle. The resultant thrust can significantly reduce travel times to Mars, compared to chemical rockets. “You can get double the efficiency, or more, from a nuclear propulsion engine with the same thrust. Besides, being in microgravity is not ideal for astronauts, so you want to get them there faster, which is a strong motivation for using nuclear propulsion over the chemical equivalents,” Hampson says.
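To put rough numbers on that efficiency argument, the sketch below applies the standard Tsiolkovsky rocket equation with ballpark specific impulse values that are commonly cited for the two engine types but are not taken from this article: roughly 450 seconds for a hydrogen-oxygen chemical engine and roughly 900 seconds for a nuclear thermal engine.

import math

# Why doubling specific impulse matters: the Tsiolkovsky rocket equation.
# Isp values are commonly cited ballpark figures, not numbers from this article.
G0 = 9.81                # m/s^2, standard gravity
ISP_CHEMICAL = 450.0     # s, hydrogen-oxygen chemical engine (assumed)
ISP_NTP = 900.0          # s, nuclear thermal engine (assumed)

def propellant_fraction(delta_v, isp):
    """Fraction of initial vehicle mass that must be propellant to achieve delta_v (m/s)."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

DELTA_V = 6000.0         # m/s, illustrative Mars-transfer burn (assumed)
for label, isp in [("chemical", ISP_CHEMICAL), ("nuclear thermal", ISP_NTP)]:
    frac = propellant_fraction(DELTA_V, isp)
    print(f"{label:16s} Isp = {isp:.0f} s -> propellant is {frac:.0%} of initial mass")

With these assumed numbers, the chemical stage must be roughly three-quarters propellant for the same maneuver, while the nuclear thermal stage needs only about half, which is one way higher efficiency translates into faster, lighter Mars missions.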
Understanding nuclear thermal propulsion
It’s worth taking a quick survey of rocket propulsion techniques to understand where Hampson’s work fits.
There are three broad types of rocket propulsion: chemical, where thrust is achieved by the combustion of rocket propellants; electrical, where electric fields accelerate charged particles to high velocities to achieve thrust; and nuclear, where nuclear energy delivers needed propulsion.
Nuclear propulsion, which is used only in space, not to get to space, falls into two categories. Nuclear electric propulsion uses nuclear energy to generate electricity that accelerates a propellant, while nuclear thermal propulsion, which is what Hampson is researching, uses heat from a nuclear reactor to heat the propellant directly. A significant advantage of NTP is that it can deliver double the efficiency (or more) of the chemical equivalent for the same thrust. A disadvantage: cost and regulatory hurdles. “Sure, you can get double the efficiency or more from a nuclear propulsion engine, but there hasn’t been a mission case that has needed it enough to justify the higher cost,” Hampson says.
Until now.
With a human mission to Mars becoming a very real possibility — NASA plans on sending astronauts to Mars as early as the 2030s — NTP might soon come under the spotlight.
"It's almost futuristic"
Growing up on Florida’s Space Coast and watching space shuttle launches stoked Hampson’s early interest in science. He loved many other subjects, including history and math, and it wasn’t until his senior year that he cast his lot with engineering. While space exploration got him hooked on aerospace engineering, Hampson was also intrigued by the possibility of nuclear engineering as a path to a greener future.
Wracked by indecision, he applied to schools in both fields and ultimately completed his undergraduate degree in aerospace engineering at Georgia Tech. It was there that a series of internships at space technology companies like Blue Origin and Stoke Space, along with participation in Georgia Tech’s rocket team, cemented Hampson’s love of rocket propulsion.
Looking to pursue graduate studies, he saw MIT as the logical next step. “I think MIT has the best combination of nuclear and aerospace education, and is really strong in the field of testing nuclear fuels,” Hampson says. Facilities at the MIT Reactor enable testing of nuclear fuel under the conditions it would see in a nuclear propulsion engine. It helped that Koroush Shirvan, associate professor of NSE and Atlantic Richfield Career Development Professor in Energy Studies, was working on nuclear thermal propulsion with NASA while focusing much of his research on the testing of nuclear fuels.
At MIT, Hampson works under Shirvan’s advisement. He has had the chance to continue research on a project he began during an internship at NASA: studies of a nuclear thermal propulsion engine. “Nuclear propulsion is itself advanced, and I’m working on what comes after that. It’s almost futuristic,” he says.
Modeling the effects of nuclear thermal propulsion
While the premise of NTP sounds promising, its execution will likely not be straightforward. For one thing, an NTP rocket engine won’t start up and shut down like a simple combustion engine. Startup is complex because the rapid rise in temperature can cause material failures, and shutdown takes longer because of heat from nuclear decay: the components have to keep being cooled until enough fission products decay away that little heat remains, Hampson says.
Hampson is modeling the entirety of the rocket engine system — the tank, the pump, and more — to understand how these and many other parameters work together. Evaluating the entire engine is important because different configurations of parts (and even the fuel) can affect performance. To simplify the calculations and speed up simulations, he is working with a relatively simple one-dimensional model. Using it, he can follow how quantities like temperature and pressure evolve in each component throughout engine operation.
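To give a flavor of what a lumped, one-dimensional system model can look like, the snippet below pushes hydrogen through an idealized pump, reactor core, and nozzle, tracking pressure and temperature at each station. It is a generic sketch under simple ideal-gas assumptions, not Hampson’s model, and every number in it is an illustrative placeholder.

import math

# Generic, highly simplified one-dimensional station model of an NTP engine:
# tank -> pump -> reactor core -> nozzle. All values are illustrative assumptions;
# this is not the model described in the article.

CP_H2 = 14300.0           # J/(kg*K), approximate specific heat of hydrogen gas
RHO_LH2 = 70.0            # kg/m^3, liquid hydrogen density

def pump(p_in, dp, mdot, efficiency=0.7):
    """Raise propellant pressure; return outlet pressure (Pa) and shaft power (W)."""
    return p_in + dp, mdot * dp / (RHO_LH2 * efficiency)

def reactor_outlet_temperature(t_in, mdot, thermal_power):
    """Energy balance: outlet temperature (K) after absorbing the reactor's thermal power."""
    return t_in + thermal_power / (mdot * CP_H2)

def ideal_exhaust_velocity(t_chamber):
    """Upper-bound exhaust velocity (m/s) if all thermal energy became kinetic energy."""
    return math.sqrt(2.0 * CP_H2 * t_chamber)

mdot = 10.0                                                  # kg/s propellant flow (assumed)
p_out, pump_power = pump(3e5, 7e6, mdot)                     # 3 bar tank, 70 bar pressure rise
t_chamber = reactor_outlet_temperature(25.0, mdot, 350e6)    # 350 MW reactor power (assumed)
v_exhaust = ideal_exhaust_velocity(t_chamber)

print(f"Chamber: {t_chamber:.0f} K at {p_out / 1e5:.0f} bar, pump power {pump_power / 1e3:.0f} kW")
print(f"Ideal exhaust velocity ~{v_exhaust:.0f} m/s (Isp ~{v_exhaust / 9.81:.0f} s)")

Even this toy version shows the coupling Hampson describes: the reactor power, propellant flow rate, and pump work all feed into the chamber conditions that ultimately set the engine’s performance.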
“The challenge is in coupling the thermodynamic effects with the neutronic effects,” he says.
Ready for more challenges ahead
After years of putting off practically every academic decision until the last minute, Hampson seems to have zeroed in on what he expects to be his life’s work — inspired by the space shuttle launches he watched years ago — and hopes to pursue doctoral studies after graduation.
Hampson always welcomes a challenge, and it’s what motivates him to run. While training for the Boston Marathon, he fractured his leg, an injury that surfaced while he was running yet another race, the Beantown Marathon. He’s not bowed by the incident. “I learned that you’re a lot more capable than you think,” Hampson says, laughing, “although you have to ask yourself about the cost.” (He was on crutches for weeks afterward.)
A thirst for a challenge is also one of the many reasons he chose to research nuclear thermal propulsion. It helps that the research indulges his love for the field. “Relatively speaking, it’s a field in need of much more advancement; there are many more unsolved problems,” he says.
MIT named to prestigious 2026 honor roll for mental health services
MIT is often recognized by several publications, including U.S. News & World Report, QS World University Rankings, Times Higher Education, and Forbes, as one of the leading institutions of higher learning not only in the United States, but in the world.
Now, MIT also has the distinction of being one of just 30 colleges and universities, out of the hundreds surveyed, recognized by Princeton Review’s 2026 Mental Health Services Honor Roll for providing exemplary mental health and well-being services to its students. This is the second year in a row that MIT has received this honor.
The honor roll was created to be a resource for enrolled students and prospective students who may seek such services when applying to colleges. The survey asked more than a dozen questions about training for students, faculty, and staff; provisions for making new policies and procedures; peer-to-peer offerings; screenings and referral services available to all students; residence hall mental health resources; and other criteria, such as current online information that is updated and accessible.
Overall, the 2025 survey findings for all participating institutions are noteworthy, with Princeton Review reporting double-digit increases in campus counseling, wellness, and student support programs compared with its 2024 survey results. Earning a place on the honor roll underscores MIT’s commitment to providing exceptional services for graduate and undergraduate students alike.
Karen Singleton, deputy chief health officer and chief of mental health and counseling services at MIT Health, says, “This honor highlights the hard work and collaboration that we do here at MIT to support students in their well-being journey. This is a recognition of how we are doing those things effectively, and a recognition of MIT’s investment in these support services.”
MIT Health has 36 clinicians to meet the needs of the community, and it recently added an easy-to-use online scheduling system at the request of students.
Many mental health and well-being services are offered through several departments housed in the Division of Student Life (DSL). They often collaborate with MIT Health and partners across the Institute, including in the Division of Graduate and Undergraduate Education, to provide the best services for the best outcomes for MIT students.
Support resources in DSL are highly utilized and valued by students. For instance, 82 percent of the Class of 2025 had visited Student Support Services (S3) at least once before graduating, and on a regular satisfaction survey, 91 percent of students who visited S3 said they would return if needed.
“Student Support Services supports over 80 percent of all undergraduates by the time they graduate, and over 60 percent each year. Our offices, including ORSEL, GradSupport, S3, SMHC, the CARE Team, and Residential and Community Life work incredibly well together to support our students,” says Kate McCarthy, senior associate dean of support, wellbeing, and belonging.
“The magic in our support system is the deeply collaborative nature of it. There are many different places students can enter the support network, and each of these teams works closely together to ensure students get connected to the help they need. We always say that students shouldn’t think too much about where they turn … if they get to one of us, they get to all of us,” says David Randall, dean of student life.
Division of Student Life Vice Chancellor Suzy Nelson adds, “It is an honor to see MIT included among colleges and universities recognized for excellent mental health services. Promoting student well-being is central to our mission and guides so much of what we do. This recognition reflects the work of many in our community who are dedicated to creating a campus environment where students can thrive academically and personally.”
3 Questions: How AI could optimize the power grid
Artificial intelligence has captured headlines recently for its rapidly growing energy demands, and particularly the surging electricity usage of data centers that enable the training and deployment of the latest generative AI models. But it’s not all bad news — some AI tools have the potential to reduce some forms of energy consumption and enable cleaner grids.
One of the most promising applications is using AI to optimize the power grid, which would improve efficiency, increase resilience to extreme weather, and enable the integration of more renewable energy. To learn more, MIT News spoke with Priya Donti, the Silverman Family Career Development Professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), whose work focuses on applying machine learning to optimize the power grid.
Q: Why does the power grid need to be optimized in the first place?
A: We need to maintain an exact balance between the amount of power that is put into the grid and the amount that comes out at every moment in time. But on the demand side, we have some uncertainty. Power companies don’t ask customers to pre-register the amount of energy they are going to use ahead of time, so some estimation and prediction must be done.
Then, on the supply side, there is typically some variation in costs and fuel availability that grid managers need to be responsive to. That has become an even bigger issue because of the integration of energy from time-varying renewable sources, like solar and wind, where uncertainty in the weather can have a major impact on how much power is available. Then, at the same time, depending on how power is flowing in the grid, there is some power lost through resistive heat on the power lines. So, as a grid operator, how do you make sure all that is working all the time? That is where optimization comes in.
Q: How can AI be most useful in power grid optimization?
A: One way AI can be helpful is to use a combination of historical and real-time data to make more precise predictions about how much renewable energy will be available at a certain time. This could lead to a cleaner power grid by allowing us to handle and better utilize these resources.
AI could also help tackle the complex optimization problems that power grid operators must solve to balance supply and demand in a way that also reduces costs. These optimization problems are used to determine which power generators should produce power, how much they should produce, and when they should produce it, as well as when batteries should be charged and discharged, and whether we can leverage flexibility in power loads. These optimization problems are so computationally expensive that operators use approximations so they can solve them in a feasible amount of time. But these approximations are often wrong, and when we integrate more renewable energy into the grid, they are thrown off even further. AI can help by providing more accurate approximations in a faster manner, which can be deployed in real time to help grid operators responsively and proactively manage the grid.
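For readers unfamiliar with these optimization problems, a toy version of the dispatch problem described above, choosing how much each generator should produce to meet demand at minimum cost, can be written as a small linear program. This is a deliberately simplified sketch with made-up numbers; real operators solve vastly larger problems with network flows, losses, and security constraints.

import numpy as np
from scipy.optimize import linprog

# Toy economic dispatch: choose generator outputs to minimize cost while exactly
# meeting demand and respecting capacity limits. All numbers are made up; real
# dispatch and unit-commitment problems include many more constraints.

costs = np.array([20.0, 35.0, 90.0])       # $/MWh for three generators (illustrative)
p_max = np.array([400.0, 300.0, 500.0])    # MW capacity limits (illustrative)
demand = 650.0                             # MW of forecast load

result = linprog(
    c=costs,                               # minimize total generation cost
    A_eq=np.ones((1, 3)), b_eq=[demand],   # power balance: outputs must sum to demand
    bounds=[(0.0, cap) for cap in p_max],  # each generator stays within its capacity
    method="highs",
)

print("Dispatch (MW):", result.x)          # the cheapest units are used first
print("Total cost ($/h):", result.fun)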
AI could also be useful in the planning of next-generation power grids. Planning for power grids requires one to use huge simulation models, so AI can play a big role in running those models more efficiently. The technology can also help with predictive maintenance by detecting where anomalous behavior on the grid is likely to happen, reducing inefficiencies that come from outages. More broadly, AI could also be applied to accelerate experimentation aimed at creating better batteries, which would allow the integration of more energy from renewable sources into the grid.
Q: How should we think about the pros and cons of AI, from an energy sector perspective?
A: One important thing to remember is that AI refers to a heterogeneous set of technologies. There are different types and sizes of models that are used, and different ways that models are used. If you are using a model that is trained on a smaller amount of data with a smaller number of parameters, that is going to consume much less energy than a large, general-purpose model.
In the context of the energy sector, there are a lot of places where, if you use these application-specific AI models for the applications they are intended for, the cost-benefit tradeoff works out in your favor. In these cases, the applications are enabling benefits from a sustainability perspective — like incorporating more renewables into the grid and supporting decarbonization strategies.
Overall, it’s important to think about whether the types of investments we are making into AI are actually matched with the benefits we want from AI. On a societal level, I think the answer to that question right now is “no.” There is a lot of development and expansion of a particular subset of AI technologies, and these are not the technologies that will have the biggest benefits across energy and climate applications. I’m not saying these technologies are useless, but they are incredibly resource-intensive, while also not being responsible for the lion’s share of the benefits that could be felt in the energy sector.
I’m excited to develop AI algorithms that respect the physical constraints of the power grid so that we can credibly deploy them. This is a hard problem to solve. If an LLM says something that is slightly incorrect, as humans, we can usually correct for that in our heads. But if you make the same magnitude of a mistake when you are optimizing a power grid, that can cause a large-scale blackout. We need to build models differently, but this also provides an opportunity to benefit from our knowledge of how the physics of the power grid works.
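As one purely illustrative example of what “respecting physical constraints” can mean in practice (a generic constraint-repair step, not a description of Donti’s methods), a learned model’s raw dispatch prediction can be projected onto the feasible set before it is used, so that the final answer always balances supply and demand and stays within generator limits.

import numpy as np

# Illustrative post-processing: project a model's raw dispatch prediction onto
# the feasible set {0 <= p <= p_max, sum(p) == demand}. A generic sketch, not a
# description of any specific research method.

def project_dispatch(p_raw, p_max, demand, iters=100):
    """Bisect on a uniform shift t so that clip(p_raw + t, 0, p_max) sums to demand."""
    lo = -p_raw.max() - 1.0                 # shift low enough that everything clips to 0
    hi = (p_max - p_raw).max() + 1.0        # shift high enough that everything clips to p_max
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        total = np.clip(p_raw + t, 0.0, p_max).sum()
        if total < demand:
            lo = t
        else:
            hi = t
    return np.clip(p_raw + 0.5 * (lo + hi), 0.0, p_max)

p_raw = np.array([380.0, 310.0, 40.0])      # hypothetical model output (MW)
p_max = np.array([400.0, 300.0, 500.0])     # generator capacity limits (MW)
feasible = project_dispatch(p_raw, p_max, demand=650.0)
print(feasible, feasible.sum())             # respects limits and sums to the demand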
And more broadly, I think it’s critical that those of us in the technical community put our efforts toward fostering a more democratized system of AI development and deployment, and that it’s done in a way that is aligned with the needs of on-the-ground applications.
