MIT Latest News

At convocation, President Kornbluth greets the Class of 2029
In welcoming the undergraduate Class of 2029 to campus in Cambridge, Massachusetts, MIT President Sally Kornbluth began the Institute’s convocation on Sunday with a greeting that underscored MIT’s confidence in its new students.
“We believe in all of you, in the learning, making, discovering, and inventing that you all have come here to do,” Kornbluth said. “And in your boundless potential as future leaders who will help solve real problems that people face in their daily lives.”
She added: “If you’re out there feeling really lucky to be joining this incredible community, I want you to know that we feel even more lucky. We’re delighted and grateful that you chose to bring your talent, your energy, your curiosity, creativity, and drive here to MIT. And we’re thrilled to be starting this new year with all of you.”
The event, officially called the President’s Convocation for First-years and Families, was held at the Johnson Ice Rink on campus.
While recognizing that academic life can be “intense” at MIT, Kornbluth highlighted the many opportunities available to students outside the classroom, too. A biologist and cancer researcher herself, Kornbluth observed that students can participate in the Undergraduate Research Opportunities Program (UROP), which Kornbluth called “an unmissable opportunity to work side by side with MIT faculty at the front lines of research.” She also noted that MIT offers abundant opportunities for entrepreneurship, as well as 450 official student organizations.
“It’s okay to be a beginner,” Kornbluth said. “Join a group you wouldn’t have had time for in high school. Explore a new skill. Volunteer in the neighborhoods around campus.”
And if the transition to college feels daunting at any point, she added, MIT provides considerable resources to students for well-being and academic help.
“Sometimes the only way to succeed in facing a big challenge or solving a tough problem is to admit there’s no way you can do it all yourself,” Kornbluth observed. “You’re surrounded by a community of caring people. So please don’t be shy about asking for guidance and help.”
The large audience heard additional remarks from two faculty members who themselves have MIT degrees, reflecting on student life at the Institute.
As a student, “The most important things I had were a willingness to take risks and put hard work into the things I cared about,” said Ankur Moitra SM ’09, PhD ’11, the Norbert Wiener Professor of Mathematics.
He emphasized to students the importance of staying grounded and being true to themselves, especially in the face of, say, social media pressures.
“These are the things that make it harder to find your own way and what you really care about,” Moitra said. “Because the rest of the world’s opinion is right there staring you in the face, and it’s impossible to avoid it. And how will you discover what’s important to you, what’s worth pouring yourself into?”
Moitra also advised students to be wary of the tech tools “that want to do the thinking for you, but take away your agency” in the process. He added: “I worry about this because it’s going to become too easy to rely on these tools, and there are going to be many times you’re going to be tempted, especially late at night, with looming p-set deadlines. As educators, we don’t always have fixes for these kinds of things, and all we can do is open the door and hope you walk through it.”
Beyond that, he suggested, “Periodically remind yourself about what’s been important to you all along, what brought you here. For your next four years, you’re going to be surrounded by creative, clever, passionate people every day, who are going to challenge you. Rise to that challenge.”
Christopher Palmer PhD ’14, an associate professor of finance in the MIT Sloan School of Management, began his remarks by revealing that his MIT undergraduate application was not accepted — although he later received his doctorate at the Institute and is now a tenured professor at MIT.
“I played the long game,” he quipped, drawing laughs.
Indeed, Palmer’s remarks focused on cultivating the resilience, focus, and concentration needed to flourish in the long run.
While being at MIT is “thrilling,” Palmer advised students to “build enough slack into your system to handle both the stress and take advantage of the opportunities” on campus. Much like a bank conducts a “stress test” to see if it can withstand changes, Palmer suggested, we can try the same with our workloads: “If you build a schedule that passes the stress test, that means time for curiosity and meaningful creativity.”
Students should also avoid the “false equivalency that your worth is determined by your achievements,” he added. “You have inherent, immutable, intrinsic, eternal value. Be discerning with your commitments. Future you will be so grateful that you have built in the capacity to sleep, to catch up, to say ‘Yes’ to cool invitations, and to attend to your mental health.”
Additionally, Palmer recommended that students pursue “deep work,” involving “the hard thinking where progress actually happens” — a concept, he noted, that has been elevated by computer scientist Cal Newport SM ’06, PhD ’09. As research shows, Palmer explained, “We can’t actually multitask. What we’re really doing is switching tasks at high frequency and incurring a small cost every single time we switch our focus.”
It might help students, he added, to try some structural changes: Put the phone away, turn off alerts, pause notifications, and cultivate sleep. A healthy blend of academic work, activities, and community fun can emerge.
Concluding her own remarks, Kornbluth also emphasized that attending MIT means being part of a community that is respectful of varying viewpoints and all people, and sustains an ethos of fair-minded understanding.
“I know you have extremely high expectations for yourselves,” Kornbluth said, adding: “We have high expectations for you, too, in all kinds of ways. But I want to emphasize one that’s more important than all the others — and that’s an expectation for how we treat each other. At MIT, the work we do is so important, and so hard, that it’s essential we treat each other with empathy, understanding and compassion. That we take care to express our own ideas with clarity and respect, and make room for sharply different points of view. And above all, that we keep engaging in conversation, even when it’s difficult, frustrating or painful.”
Transforming boating, with solar power
The MIT Sailing Pavilion hosted an altogether different marine vessel recently: a prototype of a solar electric boat developed by James Worden ’89, the founder of the MIT Solar Electric Vehicle Team (SEVT). Worden visited the pavilion on a sizzling, sunny day in late July to offer students from the SEVT, the MIT Edgerton Center, MIT Sea Grant, and the broader community an inside look at the Anita, named for his late wife.
Worden’s fascination with solar power began at age 10, when he picked up a solar chip at a “hippy-like” conference in his hometown of Arlington, Massachusetts. “My eyes just lit up,” he says. He built his first solar electric vehicle in high school, fashioned out of cardboard and wood (taking first place at the 1984 Massachusetts Science Fair), and continued his journey at MIT, founding SEVT in 1986. It was through SEVT that he met his wife and lifelong business partner, Anita Rajan Worden ’90. Together, they founded two companies in the solar electric and hybrid vehicles space, and in 2022 launched a solar electric boat company.
On the Charles River, Worden took visitors for short rides on Anita, including a group of current SEVT students who peppered him with questions. The 20-foot pontoon boat, just 12 feet wide and 7 feet tall, is made of carbon fiber composites, single crystalline solar photovoltaic cells, and lithium iron phosphate battery cells. Ultimately, Worden envisions the prototype could have applications as mini-ferry boats and water taxis.
With warmth and humor, he drew parallels between the boat’s components and mechanics and those of the solar cars the students are building. “It’s fun! If you think about all the stuff you guys are doing, it’s all the same stuff,” he told them, “optimizing all the different systems and making them work.” He also explained the design considerations unique to boating applications, like refining the hull shape for efficiency and maneuverability in variable water and wind conditions, and the critical importance of protecting wiring and controls from open water and condensate.
“Seeing Anita in all its glory was super cool,” says Nicole Lin, vice captain of SEVT. “When I first saw it, I could immediately map the different parts of the solar car to its marine counterparts, which was astonishing to see how far I’ve come as an engineer with SEVT. James also explained the boat using solar car terms, as he drew on his experience with solar cars for his solar boats. It blew my mind to see the engineering we learned with SEVT in action.”
Over the years, the Wordens have been avid supporters of SEVT and the Edgerton Center, so the visit was, in part, a way to pay it forward to MIT. “There’s a lot of connections,” he says. He’s still awed by the fact that Harold “Doc” Edgerton, upon learning about his interest in building solar cars, carved out a lab space for him to use in Building 20 — as a first-year student. And a few years ago, as Worden became interested in marine vessels, he tapped Sea Grant Education Administrator Drew Bennett for a 90-minute whiteboard lecture, “MIT fire-hose style,” on hydrodynamics. “It was awesome!” he says.
Imaging tech promises deepest looks yet into living brain tissue at single-cell resolution
For both research and medical purposes, researchers have spent decades pushing the limits of microscopy to produce ever deeper and sharper images of brain activity, not only in the cortex but also in regions underneath, such as the hippocampus. In a new study, a team of MIT scientists and engineers demonstrates a new microscope system capable of peering exceptionally deep into brain tissues to detect the molecular activity of individual cells by using sound.
“The major advance here is to enable us to image deeper at single-cell resolution,” says neuroscientist Mriganka Sur, a corresponding author along with mechanical engineering professor Peter So and principal research scientist Brian Anthony. Sur is the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.
In the journal Light: Science and Applications, the team demonstrates that they could detect NAD(P)H, a molecule tightly associated with cell metabolism in general and electrical activity in neurons in particular, all the way through samples such as a 1.1-millimeter “cerebral organoid,” a 3D mini brain-like tissue generated from human stem cells, and a 0.7-millimeter-thick slice of mouse brain tissue.
In fact, says co-lead author and mechanical engineering postdoc W. David Lee, who conceived the microscope’s innovative design, the system could have peered far deeper, but the test samples weren’t big enough to demonstrate that.
“That’s when we hit the glass on the other side,” he says. “I think we’re pretty confident about going deeper.”
Still, a depth of 1.1 millimeters is more than five times the depth at which other microscope technologies can resolve NAD(P)H within dense brain tissue. The new system achieved the depth and sharpness by combining several advanced technologies to precisely and efficiently excite the molecule and then to detect the resulting energy, all without having to add any external labels, either via added chemicals or genetically engineered fluorescence.
Rather than focusing the required NAD(P)H excitation energy on a neuron with near ultraviolet light at its normal peak absorption, the scope accomplishes the excitation by focusing an intense, extremely short burst of light (a quadrillionth of a second long) at three times the normal absorption wavelength. Such “three-photon” excitation penetrates deep into tissue with less scattering by brain tissue because of the longer wavelength of the light (“like fog lamps,” Sur says). Meanwhile, although the excitation produces a weak fluorescent signal of light from NAD(P)H, most of the absorbed energy produces a localized (about 10 microns) thermal expansion within the cell, which produces sound waves that travel relatively easily through tissue compared to the fluorescence emission. A sensitive ultrasound microphone in the microscope detects those waves and, with enough sound data, software turns them into high-resolution images (much like a sonogram does). Imaging created in this way is “three-photon photoacoustic imaging.”
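To put rough numbers on this, here is a minimal back-of-envelope sketch in Python. The assumed NAD(P)H one-photon absorption peak and tissue sound speed are illustrative values rather than figures from the paper; only the factor-of-three wavelength relationship and the 1.1-millimeter sample depth come from the study described above.

    # Back-of-envelope numbers for three-photon photoacoustic imaging.
    # The absorption peak and sound speed are illustrative assumptions,
    # not values reported in the paper.
    one_photon_peak_nm = 340.0                # assumed near-UV absorption peak of NAD(P)H
    three_photon_nm = 3 * one_photon_peak_nm  # excitation at ~3x the normal wavelength

    depth_mm = 1.1                            # organoid thickness reported in the study
    sound_speed_mm_per_us = 1.5               # ~1,500 m/s in soft tissue (assumed)

    # One-way travel time for the ultrasound signal from the excited spot
    # to a detector at the far side of the sample.
    transit_us = depth_mm / sound_speed_mm_per_us

    print(f"Three-photon excitation wavelength: ~{three_photon_nm:.0f} nm")
    print(f"Acoustic transit time through {depth_mm} mm of tissue: ~{transit_us:.2f} microseconds")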
“We merged all these techniques — three-photon, label-free, photoacoustic detection,” says co-lead author Tatsuya Osaki, a research scientist in the Picower Institute in Sur’s lab. “We integrated all these cutting-edge techniques into one process to establish this ‘Multiphoton-In and Acoustic-Out’ platform.”
Lee and Osaki teamed with research scientist Xiang Zhang and postdoc Rebecca Zubajlo to lead the study, in which the team demonstrated reliable detection of the sound signal through the samples. So far, the team has produced visual images from the sound at various depths as they refine their signal processing.
In the study, the team also shows simultaneous “third-harmonic generation” imaging, which comes from the three-photon stimulation and finely renders cellular structures, alongside their photoacoustic imaging, which detects NAD(P)H. They also note that their photoacoustic method could detect other molecules, such as the genetically encoded calcium indicator GCaMP, which neuroscientists use to report neural electrical activity.
With the concept of label-free, multiphoton, photoacoustic microscopy (LF-MP-PAM) established in the paper, the team is now looking ahead to neuroscience and clinical applications.
For instance, through the company Precision Healing, Inc., which he founded and sold, Lee has already established that NAD(P)H imaging can inform wound care. In the brain, levels of the molecule are known to vary in conditions such as Alzheimer’s disease, Rett syndrome, and seizures, making it a potentially valuable biomarker. Because the new system is label-free (i.e., no added chemicals or altered genes), it could be used in humans, for instance, during brain surgeries.
The next step for the team is to demonstrate the system in a living animal, rather than just in in vitro and ex vivo tissues. The technical challenge there is that the microphone can no longer be on the opposite side of the sample from the light source (as it was in the current study). It has to be on top, just like the light source.
Lee says he expects that full imaging at depths of 2 millimeters in live brains is entirely feasible, given the results in the new study.
“In principle, it should work,” he says.
Mercedes Balcells and Elazer Edelman are also authors of the paper. Funding for the research came from sources including the National Institutes of Health, the Simons Center for the Social Brain, the lab of Peter So, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.
Marcus Stergio named ombudsperson
Marcus Stergio will join the MIT Ombuds Office on Aug. 25, bringing over a decade of experience as a mediator and conflict-management specialist. Previously an ombuds at the U.S. Department of Labor, Stergio will be part of MIT’s ombuds team, working alongside Judi Segall.
The MIT Ombuds Office provides a confidential, independent resource for all members of the MIT community to constructively manage concerns and conflicts related to their experiences at MIT.
Established in 1980, the office played a key role in the early development of the profession, helping to develop and establish standards of practice for organizational ombuds offices. The ombudspersons help MIT community members analyze concerns, clarify policies and procedures, and identify options to constructively manage conflicts.
“There’s this aura and legend around MIT’s Ombuds Office that is really exciting,” Stergio says.
Among other types of conflict resolution, the work of an ombuds is particularly appealing for its versatility, according to Stergio. “We can be creative and flexible in figuring out which types of processes work for the people seeking support, whether that’s having one-on-one, informal, confidential conversations or exploring more active and involved ways of getting their issues addressed,” he says.
Prior to coming to MIT, Stergio worked for six years at the Department of Labor, where he established a new externally facing ombuds office for the Office of Federal Contract Compliance Programs (OFCCP). There, he operated in accordance with the International Ombuds Association’s standards of practice, offering ombuds services to both external stakeholders and OFCCP employees.
He has also served as ombudsperson or in other conflict-management roles for a variety of organizations across multiple sectors. These included the Centers for Disease Control and Prevention, the United Nations Population Fund, General Motors, BMW of North America, and the U.S. Department of the Treasury, among others. From 2013 to 2019, Stergio was a mediator and the manager of commercial and corporate programs for the Boston-based dispute resolution firm MWI.
Stergio has taught conflict resolution courses and delivered mediation and negotiation workshops at multiple universities, including MIT, where he says the interest in his subject matter was palpable. “There was something about the MIT community, whether it was students or staff or faculty. People seemed really energized by the conflict management skills that I was presenting to them,” he recalls. “There was this eagerness to perfect things that was inspiring and contagious.”
“I’m honored to be joining such a prestigious institution, especially one with such a rich history in the ombuds field,” Stergio adds. “I look forward to building on that legacy and working with the MIT community to navigate challenges together.”
Stergio earned a bachelor’s degree from Northeastern University in 2008 and a master’s in conflict resolution from the University of Massachusetts at Boston in 2012. He has served on the executive committee of the Coalition of Federal Ombuds since 2022, as co-chair of the American Bar Association’s ombuds day subcommittee, and as an editor for the newsletter of the ABA’s Dispute Resolution Section. He is also a member of the International Ombuds Association.
Astronomers detect the brightest fast radio burst of all time
A fast radio burst is an immense flash of radio emission that lasts for just a few milliseconds, during which it can momentarily outshine every other radio source in its galaxy. These flares can be so bright that their light can be seen from halfway across the universe, several billion light years away.
The sources of these brief and dazzling signals are unknown. But scientists now have a chance to study a fast radio burst (FRB) in unprecedented detail. An international team of scientists, including physicists at MIT, has detected a nearby, ultrabright fast radio burst some 130 million light-years from Earth in the constellation Ursa Major. It is one of the closest FRBs detected to date. It is also the brightest — so bright that the signal has garnered the informal moniker RBFLOAT, for “radio brightest flash of all time.”
The burst’s brightness, paired with its proximity, is giving scientists the closest look yet at FRBs and the environments from which they emerge.
“Cosmically speaking, this fast radio burst is just in our neighborhood,” says Kiyoshi Masui, associate professor of physics and affiliate of MIT’s Kavli Institute for Astrophysics and Space Research. “This means we get this chance to study a pretty normal FRB in exquisite detail.”
Masui and his colleagues report their findings today in the Astrophysical Journal Letters.
Diverse bursts
The clarity of the new detection is thanks to a significant upgrade to the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a large array of halfpipe-shaped antennae based in British Columbia. CHIME was originally designed to detect and map the distribution of hydrogen across the universe, but the telescope is also sensitive to ultrafast and bright radio emissions. Since it started observations in 2018, CHIME has detected about 4,000 fast radio bursts from all parts of the sky. Until now, however, the telescope had not been able to precisely pinpoint the location of each fast radio burst.
CHIME recently got a significant boost in precision, in the form of CHIME Outriggers — three miniature versions of CHIME, each sited in different parts of North America. Together, the telescopes work as one continent-sized system that can focus in on any bright flash that CHIME detects, to pin down its location in the sky with extreme precision.
“Imagine we are in New York and there’s a firefly in Florida that is bright for a thousandth of a second, which is usually how quick FRBs are,” says MIT Kavli graduate student Shion Andrew. “Localizing an FRB to a specific part of its host galaxy is analogous to figuring out not just what tree the firefly came from, but which branch it’s sitting on.”
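To see roughly why a continent-sized baseline matters, consider a simple diffraction-limit estimate, in which angular resolution scales as wavelength divided by baseline. The short Python sketch below uses an assumed observing frequency and station separation for illustration; neither number is taken from the study.

    import math

    # Rough diffraction-limited angular resolution, theta ~ wavelength / baseline.
    # The frequency and baseline below are illustrative assumptions.
    freq_hz = 600e6                 # assumed radio observing frequency
    wavelength_m = 3.0e8 / freq_hz  # ~0.5 m
    baseline_m = 3.0e6              # assumed ~3,000 km station separation

    theta_rad = wavelength_m / baseline_m
    theta_mas = math.degrees(theta_rad) * 3600 * 1000  # radians -> milliarcseconds

    print(f"Approximate angular resolution: ~{theta_mas:.0f} milliarcseconds")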
The new fast radio burst is the first detection made using the combination of CHIME and the completed CHIME Outriggers. Together, the telescope array identified the FRB and determined not only the specific galaxy, but also the region of the galaxy from where the burst originated. It appears that the burst arose from the edge of the galaxy, just outside of a star-forming region. The precise localization of the FRB is allowing scientists to study the environment around the signal for clues to what brews up such bursts.
“As we’re getting these much more precise looks at FRBs, we’re better able to see the diversity of environments they’re coming from,” says MIT physics postdoc Adam Lanman.
Lanman, Andrew, and Masui are members of the CHIME Collaboration — which includes scientists from multiple institutions around the world — and are authors of the new paper detailing the detection.
An older edge
Each of CHIME’s Outrigger stations continuously monitors the same swath of sky as the parent CHIME array. Both CHIME and the Outriggers “listen” for radio flashes at incredibly short, millisecond timescales. Even over a few minutes, such precision monitoring generates an enormous volume of data. If CHIME detects no FRB signal, the Outriggers automatically delete the last 40 seconds of data to make room for the next span of measurements.
On March 16, 2025, CHIME detected an ultrabright flash of radio emissions, which automatically triggered the CHIME Outriggers to record the data. Initially, the flash was so bright that astronomers were unsure whether it was an FRB or simply a terrestrial event caused, for instance, by a burst of cellular communications.
That notion was put to rest as the CHIME Outrigger telescopes focused in on the flash and pinned down its location to NGC4141 — a spiral galaxy in the constellation Ursa Major about 130 million light years away, which happens to be surprisingly close to our own Milky Way. The detection is one of the closest and brightest fast radio bursts detected to date.
Follow-up observations in the same region revealed that the burst came from the very edge of an active region of star formation. While it’s still a mystery as to what source could produce FRBs, scientists’ leading hypothesis points to magnetars — young neutron stars with extremely powerful magnetic fields that can spin out high-energy flares across the electromagnetic spectrum, including in the radio band. Physicists suspect that magnetars are found in the center of star formation regions, where the youngest, most active stars are forged. The location of the new FRB, just outside a star-forming region in its galaxy, may suggest that the source of the burst is a slightly older magnetar.
“These are mostly hints,” Masui says. “But the precise localization of this burst is letting us dive into the details of how old an FRB source could be. If it were right in the middle, it would only be thousands of years old — very young for a star. This one, being on the edge, may have had a little more time to bake.”
No repeats
In addition to pinpointing where the new FRB was in the sky, the scientists also looked back through CHIME data to see whether any similar flares occurred in the same region in the past. Since the first FRB was discovered in 2007, astronomers have detected over 4,000 radio flares. Most of these bursts are one-offs. But a few percent have been observed to repeat, flashing every so often. And an even smaller fraction of these repeaters flash in a pattern, like a rhythmic heartbeat, before flaring out. A central question surrounding fast radio bursts is whether repeaters and nonrepeaters come from different origins.
The scientists looked through CHIME’s six years of data and came up empty: This new FRB appears to be a one-off, at least in the last six years. The findings are particularly exciting, given the burst’s proximity. Because it is so close and so bright, scientists can probe the environment in and around the burst for clues to what might produce a nonrepeating FRB.
“Right now we’re in the middle of this story of whether repeating and nonrepeating FRBs are different. These observations are putting together bits and pieces of the puzzle,” Masui says.
“There’s evidence to suggest that not all FRB progenitors are the same,” Andrew adds. “We’re on track to localize hundreds of FRBs every year. The hope is that a larger sample of FRBs localized to their host environments can help reveal the full diversity of these populations.”
The construction of the CHIME Outriggers was funded by the Gordon and Betty Moore Foundation and the U.S. National Science Foundation. The construction of CHIME was funded by the Canada Foundation for Innovation and the provinces of Quebec, Ontario, and British Columbia.
Study links rising temperatures and declining moods
Rising global temperatures affect human activity in many ways. Now, a new study illuminates an important dimension of the problem: Very hot days are associated with more negative moods, as shown by a large-scale look at social media postings.
Overall, the study examines 1.2 billion social media posts from 157 countries over the span of a year. The research finds that when the temperature rises above 95 degrees Fahrenheit, or 35 degrees Celsius, expressed sentiments become about 25 percent more negative in lower-income countries and about 8 percent more negative in better-off countries. Extreme heat affects people emotionally, not just physically.
“Our study reveals that rising temperatures don’t just threaten physical health or economic productivity — they also affect how people feel, every day, all over the world,” says Siqi Zheng, a professor in MIT’s Department of Urban Studies and Planning (DUSP) and Center for Real Estate (CRE), and co-author of a new paper detailing the results. “This work opens up a new frontier in understanding how climate stress is shaping human well-being at a planetary scale.”
The paper, “Unequal Impacts of Rising Temperatures on Global Human Sentiment,” is published today in the journal One Earth. The authors are Jianghao Wang, of the Chinese Academy of Sciences; Nicolas Guetta-Jeanrenaud SM ’22, a graduate of MIT’s Technology and Policy Program (TPP) and Institute for Data, Systems, and Society; Juan Palacios, a visiting assistant professor at MIT’s Sustainable Urbanization Lab (SUL) and an assistant professor at Maastricht University; Yichun Fan, of SUL and Duke University; Devika Kakkar, of Harvard University; Nick Obradovich, of SUL and the Laureate Institute for Brain Research in Tulsa; and Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability at CRE and DUSP. Zheng is also the faculty director of CRE and founded the Sustainable Urbanization Lab in 2019.
Social media as a window
To conduct the study, the researchers evaluated 1.2 billion posts from the social media platforms Twitter and Weibo, all of which appeared in 2019. They used a natural language processing technique called Bidirectional Encoder Representations from Transformers (BERT) to analyze posts in 65 languages across the 157 countries in the study.
Each social media post was given a sentiment rating from 0.0 (for very negative posts) to 1.0 (for very positive posts). The posts were then aggregated geographically to 2,988 locations and evaluated in correlation with area weather. From this method, the researchers could then deduce the connection between extreme temperatures and expressed sentiment.
“Social media data provides us with an unprecedented window into human emotions across cultures and continents,” Wang says. “This approach allows us to measure emotional impacts of climate change at a scale that traditional surveys simply cannot achieve, giving us real-time insights into how temperature affects human sentiment worldwide.”
To assess the effects of temperature on sentiment in higher-income and middle-to-lower-income settings, the scholars also applied a World Bank cutoff of $13,845 in per-capita annual gross national income, finding that in places with incomes below that level, the effects of heat on mood were triple those found in economically more robust settings.
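The kind of comparison described above can be illustrated with a minimal sketch in Python using pandas. The table, sentiment scores, and temperatures below are hypothetical placeholders rather than the study’s data or pipeline; only the 35-degree-Celsius threshold and the income-group split mirror the text.

    import pandas as pd

    # Hypothetical location-day table: mean sentiment of posts (0 = very negative,
    # 1 = very positive), daily maximum temperature, and an income-group label.
    df = pd.DataFrame({
        "location":     ["A", "A", "B", "B"],
        "sentiment":    [0.62, 0.48, 0.59, 0.55],
        "temp_c":       [28.0, 38.5, 27.0, 36.0],
        "income_group": ["lower", "lower", "higher", "higher"],
    })

    HOT_THRESHOLD_C = 35.0  # the 95 F / 35 C cutoff described in the study
    df["hot_day"] = df["temp_c"] > HOT_THRESHOLD_C

    # Average sentiment on hot vs. non-hot days within each income group,
    # and the percentage change between the two.
    summary = df.groupby(["income_group", "hot_day"])["sentiment"].mean().unstack()
    summary["pct_change"] = 100 * (summary[True] - summary[False]) / summary[False]
    print(summary)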
“Thanks to the global coverage of our data, we find that people in low- and middle-income countries experience sentiment declines from extreme heat that are three times greater than those in high-income countries,” Fan says. “This underscores the importance of incorporating adaptation into future climate impact projections.”
In the long run
Using long-term global climate models, and expecting some adaptation to heat, the researchers also produced a long-range estimate of the effects of extreme temperatures on sentiment by the year 2100. Extending the current findings to that time frame, they project a 2.3 percent worsening of people’s emotional well-being based on high temperatures alone by then — although that is a distant projection.
“It’s clear now, with our present study adding to findings from prior studies, that weather alters sentiment on a global scale,” Obradovich says. “And as weather and climates change, helping individuals become more resilient to shocks to their emotional states will be an important component of overall societal adaptation.”
The researchers note that there are many nuances to the subject, and room for continued research in this area. For one thing, social media users are not likely to be a perfectly representative portion of the population, with young children and the elderly almost certainly using social media less than other people. However, as the researchers observe in the paper, the very young and the elderly are probably particularly vulnerable to heat shocks, making the response to hot weather possibly even larger than their study can capture.
The research is part of the Global Sentiment project led by the MIT Sustainable Urbanization Lab, and the study’s dataset is publicly available. Zheng and other co-authors have previously investigated these dynamics using social media, although never before at this scale.
“We hope this resource helps researchers, policymakers, and communities better prepare for a warming world,” Zheng says.
The research was supported, in part, by Zheng’s chaired professorship research fund, and grants Wang received from the National Natural Science Foundation of China and the Chinese Academy of Sciences.
The “Mississippi Bubble” and the complex history of Haiti
Many things account for Haiti’s modern troubles. A good perspective on them comes from going back in time to 1715 or so — and grappling with a far-flung narrative involving the French monarchy, a financial speculator named John Law, and a stock-market crash called the “Mississippi Bubble.”
To condense: After the death of Louis XIV in 1715, France was mired in debt following decades of war. The country briefly turned over its economic policy to Law, a Scotsman who implemented a system in which, among other things, French debt was retired while private monopoly companies expanded overseas commerce.
This project did not go entirely as planned. Stock-market speculation created the “Mississippi Bubble” and crash of 1719-20. Amid the chaos, Law lost a short-lived fortune and left France.
Yet Law’s system had lasting effects. French expansionism helped spur Haiti’s “sugar revolution” of the early 1700s, in which the country’s economy first became oriented around labor-intensive sugar plantations. Using enslaved workers and deploying violence against political enemies, plantation owners helped define Haiti’s current-day geography and place within the global economy, creating an extractive system benefitting a select few.
While there has been extensive debate about how the Haitian Revolution of 1789-1804 (and the 1825 “indemnity” Haiti agreed to pay France) has influenced the country’s subsequent path, the events of the early 1700s help illuminate the whole picture.
“This is a moment of transformation for Haiti’s history that most people don’t know much about,” says MIT historian Malick Ghachem. “And it happened well before independence. It goes back to the 18th century when Haiti began to be enmeshed in the debtor-creditor relationships from which it has never really escaped. The 1720s was the period when those relationships crystallized.”
Ghachem examines the economic transformations and multi-sided power struggles of that time in a new book, “The Colony and the Company: Haiti after the Mississippi Bubble,” published this summer by Princeton University Press.
“How did Haiti come to be the way it is today? This is the question everybody asks about it,” says Ghachem. “This book is an intervention in that debate.”
Enmeshed in the crisis
Ghachem is both a professor and head of MIT’s program in history. A trained lawyer, his work ranges across France’s global history and American legal history. His 2012 book “The Old Regime and the Haitian Revolution,” also situated in pre-revolutionary Haiti, examines the legal backdrop of the drive for emancipation.
“The Colony and the Company” draws on original archival research while arriving at two related conclusions: Haiti was a big part of the global bubble of the 1710s, and that bubble and its aftermath is a big part of Haiti’s history.
After all, until the late 1600s, Haiti, then known as Saint Domingue, was “a fragile, mostly ungoverned, and sparsely settled place of uncertain direction,” as Ghachem writes in the book. The establishment of Haiti’s economy is not just the background of later events, but a formative event on its own.
And while the “sugar revolution” may have reached Haiti sooner or later, it was amplified by France’s quest for new sources of revenue. Louis XIV’s military agenda had been a fiscal disaster for the French. Law — a convicted murderer, and evidently a persuasive salesman — proposed a restructuring scheme that concentrated revenue-raising and other fiscal powers in a monopoly overseas trading company and bank overseen by Law himself.
France’s quest for economic growth beyond its borders led the company to Haiti, to tap its agricultural potential. For that matter, as Ghachem details, multiple countries were expanding their overseas activities — and France, Britain, and Spain also increased slave-trading activities markedly. Within a few decades, Haiti was a center of global sugar production, based on slave labor.
“When the company is seen as the answer to France’s own woes, Haiti becomes enmeshed in the crisis,” Ghachem says. “The Mississippi Bubble of 1719-20 was really a global event. And one of the theaters where it played out most dramatically was Haiti.”
As it happens, in Haiti, the dynamics of this were complex. Local planters did not want to be answerable to Law’s company, and fended it off, but, as Ghachem writes, they “internalized and privatized the financial and economic logic of the System against which they had rebelled, making of it a script for the management of plantation society.”
That society was complex. One of the main elements of “The Colony and the Company” is the exploration of its nuances. Haiti was home to a variety of people, including Jesuit missionaries, European women who had been re-settled there, and maroons (freed or escaped slaves living apart from plantations), among others. Plantation life came with violence, civic instability, and a lack of economic alternatives.
“What’s called the ‘success’ of the colony as a French economic force is really inseparable from the conditions that make it hard for Haiti to survive as an independent nation after the revolution,” Ghachem observes.
Stories in a new light
In public discourse, questions about Haiti’s past are often considered highly relevant to its present, as a near-failed state whose capital city is now substantially controlled by gangs, with no end to violence in sight. Some people draw a through line between the present and Haiti’s revolutionary-era condition. But to Ghachem, the revolution changed some political dynamics, but not the underlying conditions of life in the country.
“One [view] is that it’s the Haitian Revolution that leads to Haiti’s immiseration and violence and political dysfunction and its economic underdevelopment,” Ghachem says. “I think that argument is wrong. It’s an older problem that goes back to Haiti’s relationship with France in the late 17th and early 18th centuries. The revolution compounds that problem, and does so significantly, because of how France responds. But the terms of Haiti’s subordination are already set.”
Other scholars have praised “The Colony and the Company.” Pernille Røge of the University of Pittsburgh has called it “a multilayered and deeply compelling history rooted in a careful analysis of both familiar and unfamiliar primary sources.”
For his part, Ghachem hopes to persuade anyone interested in Haiti’s past and present to look more expansively at the subject, and consider how the deep roots of Haiti’s economy have helped structure its society.
“I’m trying to keep up with the day job of a historian,” Ghachem says. “Which includes finding stories that aren’t well-known, or are well-known and have aspects that are underappreciated, and telling them in a new light.”
Lincoln Laboratory reports on airborne threat mitigation for the NYC subway
A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.
Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."
Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.
A complex environment for testing
For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.
To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system developed by Sandia National Laboratories, designed to reduce and isolate particulate hazards in large-volume areas. The system sprays a fine water mist into the tunnels; the mist attaches to threat particulates and relies on gravity to rain out the threat material. The spray contains droplets of a particular size and concentration, with an applied electrostatic field. The original idea for the system was adapted from the coal mining industry, which used liquid sprayers to reduce the amount of inhalable soot.
The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.
"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"
At times, issues such as power outages or database errors could disrupt data capture.
"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."
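As an illustration of that kind of housekeeping, the sketch below shows a minimal daily backup-and-attribution script in Python. The file names, directory layout, and CSV columns are hypothetical stand-ins, not the team’s actual workflow.

    import csv
    import shutil
    from datetime import date
    from pathlib import Path

    # Hypothetical layout: raw sensor CSVs land in one folder each day,
    # and a running master list maps sensor identifiers to locations.
    RAW_DIR = Path("sensor_data/today")
    BACKUP_ROOT = Path("backups")
    MASTER_LIST = Path("sensor_master_list.csv")

    def backup_daily_data() -> Path:
        """Copy today's raw sensor files into a dated backup folder."""
        dest = BACKUP_ROOT / date.today().isoformat()
        dest.mkdir(parents=True, exist_ok=True)
        for f in RAW_DIR.glob("*.csv"):
            shutil.copy2(f, dest / f.name)
        return dest

    def update_master_list(sensor_id: str, location: str) -> None:
        """Append a sensor identifier and its current location to the master list."""
        new_file = not MASTER_LIST.exists()
        with MASTER_LIST.open("a", newline="") as fh:
            writer = csv.writer(fh)
            if new_file:
                writer.writerow(["date", "sensor_id", "location"])
            writer.writerow([date.today().isoformat(), sensor_id, location])

    if __name__ == "__main__":
        backup_daily_data()
        update_master_list("sensor-042", "Grand Central, Track 17")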
The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.
Calling on industry
Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.
The team ultimately fielded 16 different sensors, each with varying degrees of maturity, that operated through a range of methods, such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.
"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.
The team finished testing in 2024 and has delivered the final report to the DHS. The MTA will use the report to help expand their PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations. They expect to complete this work in 2026.
"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.
"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."
Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.
Beyond this study, Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats within multiple domains — military, space, agriculture, health, etc. — and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.
Learning from punishment
From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent — but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.
It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute for Brain Research makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.
Their work, reported Aug. 4 in the journal PNAS, explains how a single punishment can send different messages to different people, and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.
“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts — everybody knows what action happened, who punished it, and what they did to punish it — different observers of the same situation could come to different conclusions.”
For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.
People draw on their own knowledge and opinions when they evaluate these situations — but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.
Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or a competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.
“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”
For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.
Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.
To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
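The flavor of that inference can be conveyed with a toy Bayesian sketch in Python. The hypothesis space, prior, and likelihood function below are illustrative stand-ins chosen for clarity; they are not the published model.

    import itertools

    # An observer jointly infers how wrong the act was and whether the authority
    # is motivated by justice, after seeing a punishment of a given severity.
    wrongness_levels = [0.2, 0.5, 0.8]   # candidate degrees of wrongness
    just_authority = [True, False]       # is the authority justice-motivated?

    # Uniform prior over all (wrongness, justice) hypotheses.
    prior = {h: 1 / 6 for h in itertools.product(wrongness_levels, just_authority)}

    def likelihood(severity, wrongness, is_just):
        """A just authority punishes roughly in proportion to wrongness;
        an unjust one picks a severity unrelated to wrongness."""
        if is_just:
            return max(1e-3, 1 - abs(severity - wrongness))
        return 0.5

    observed_severity = 0.9  # the observer sees a harsh punishment

    posterior = {h: prior[h] * likelihood(observed_severity, *h) for h in prior}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

    p_just = sum(p for (w, j), p in posterior.items() if j)
    print(f"Posterior probability the authority is justice-motivated: {p_just:.2f}")

Starting from a different prior — say, an observer already skeptical of the authority — yields a different conclusion from the very same punishment, which is the divergence the researchers describe.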
Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes — assessed through a standard survey — tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.
“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”
“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.
This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just.
“You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.
The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”
Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”
Joining Saxe and Radkani on the paper is Joshua Tenenbaum, MIT professor of brain and cognitive sciences. The study was funded, in part, by the Patrick J. McGovern Foundation.
A boost for the precision of genome editing
The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.
CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.
Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).
“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”
The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.
LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.
Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.
The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.
The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.
Materials Research Laboratory: Driving interdisciplinary materials research at MIT
Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).
Beyond individual projects, the MIT Materials Research Laboratory (MRL) fosters broad collaboration through strategic initiatives such as the Materials Systems Laboratory and SHINE (Sustainability and Health Initiative for Net Positive Enterprise). These efforts bring together academia, government, and industry to accelerate innovation in sustainability, energy use, and advanced materials.
MRL, a hub that connects and supports the Institute’s materials research community, is at the center of these efforts. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering who became MRL director in April. “Our goal is to make it easier for our faculty to conduct their extraordinary research.”
A storied history
Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.
Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC, based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.
Enabling research through partnership and support
MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.
Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.
Behind-the-scenes support, front-line impact
MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.
This quiet but powerful support spans multiple areas:
- The finance team manages grants and helps secure new funding opportunities.
- The human resources team supports the hiring of postdocs.
- The communications team amplifies the lab’s impact through compelling stories shared with the public and funding agencies.
- The events team plans and coordinates conferences, seminars, and symposia that foster collaboration within the MIT community and with external partners.
Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.
Leadership with a vision
Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT.
“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.
Recent MRL initiatives
MRL has supported a wide range of research programs in partnership with major industry leaders, including Apple, Ford, Microsoft, Rio Tinto, IBM, Samsung, and Texas Instruments, as well as organizations such as Advanced Functional Fabrics of America, Allegheny Technologies, Ericsson, and the Semiconductor Research Corp.
MRL researchers are addressing critical global challenges in energy efficiency, environmental sustainability, and the development of next-generation material systems.
- Professor Antoine Allanore is advancing a direct process for wire production from sulfide concentrates, offering a more efficient and sustainable alternative to traditional methods.
- Professor Joe Checkelsky is leading pioneering research on scalable, high-temperature quantum materials in the realm of quantum transport.
- Professor Pablo Jarillo-Herrero is making significant progress with two-dimensional materials and their heterostructures.
- Professor Nuh Gedik explores ultrafast electronic and structural dynamics and light-matter interactions.
- Professor Gregory Rutledge spearheaded a National Institute of Standards and Technology Rapid Assistance for Coronavirus Economic Response (NIST RACER)-sponsored initiative to develop biodegradable nanofiber-based personal protective equipment, aimed at improving manufacturing automation, diversifying supply chains, and reducing environmental impact.
- Professor Elsa Olivetti serves as the lead principal investigator at MIT for REMADE: the Institute for Reducing Embodied-energy and Decreasing Emissions. Her research on fiber recovery and post-consumer resin processing directly supports REMADE’s mission to enhance material circularity and reduce energy use by 50 percent by 2027.
- Randy Kirchain is modeling metals markets under decarbonization, and developing greener construction materials.
- Anu Agarwal is spearheading efforts to build a sustainable microchip manufacturing ecosystem.
New laser “comb” can enable rapid identification of chemicals with extreme precision
Optical frequency combs are specially designed lasers that act like rulers to accurately and rapidly measure specific frequencies of light. They can be used to detect and identify chemicals and pollutants with extremely high precision.
Frequency combs would be ideal for remote sensors or portable spectrometers because they can enable accurate, real-time monitoring of multiple chemicals without complex moving parts or external equipment.
But developing frequency combs with high enough bandwidth for these applications has been a challenge. Often, researchers must add bulky components that limit scalability and performance.
Now, a team of MIT researchers has demonstrated a compact, fully integrated device that uses a carefully crafted mirror to generate a stable frequency comb with very broad bandwidth. The mirror they developed, along with an on-chip measurement platform, offers the scalability and flexibility needed for mass-producible remote sensors and portable spectrometers. This development could enable more accurate environmental monitors that can identify multiple harmful chemicals from trace gases in the atmosphere.
“The broader the bandwidth a spectrometer has, the more powerful it is, but dispersion is in the way. Here we took the hardest problem that limits bandwidth and made it the centerpiece of our study, addressing every step to ensure robust frequency comb operation,” says Qing Hu, Distinguished Professor in Electrical Engineering and Computer Science at MIT, principal investigator in the Research Laboratory of Electronics, and senior author on an open-access paper describing the work.
He is joined on the paper by lead author Tianyi Zeng PhD ’23; as well as Yamac Dikmelik of General Dynamics Mission Systems; Feng Xie and Kevin Lascola of Thorlabs Quantum Electronics; and David Burghoff SM ’09, PhD ’14, an assistant professor at the University of Texas at Austin. The research appears today in Light: Science and Applications.
Broadband combs
An optical frequency comb produces a spectrum of equally spaced laser lines, which resemble the teeth of a comb.
Scientists can generate frequency combs using several types of lasers that operate at different wavelengths. By generating the comb with a laser that produces long-wave infrared radiation, such as a quantum cascade laser, they can apply it to high-resolution sensing and spectroscopy.
In dual-comb spectroscopy (DCS), the beam of one frequency comb travels straight through the system and strikes a detector at the other end. The beam of the second frequency comb passes through a chemical sample before striking the same detector. Using the results from both combs, scientists can faithfully replicate the chemical features of the sample at much lower frequencies, where signals can be easily analyzed.
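As a rough illustration of how that downconversion works — the numbers below are arbitrary placeholders, not values from the paper — two combs with slightly different line spacings produce beat notes whose spacing equals the difference between the two, so optical-frequency features reappear at radio frequencies that ordinary electronics can digitize:

```python
# Toy illustration of dual-comb downconversion (illustrative numbers only).
# Two combs with line spacings fr and fr + dfr produce pairwise beat notes
# at n * dfr, so optical-frequency features reappear at radio frequencies.

f0 = 30e12       # comb offset frequency in Hz (placeholder)
fr = 10e9        # line spacing of comb 1 in Hz (placeholder)
dfr = 100e3      # spacing difference of comb 2 in Hz (placeholder)
n_lines = 5

comb1 = [f0 + n * fr for n in range(n_lines)]
comb2 = [f0 + n * (fr + dfr) for n in range(n_lines)]

for n, (a, b) in enumerate(zip(comb1, comb2)):
    beat = abs(b - a)  # detected beat note = n * dfr, easily digitized
    print(f"line {n}: optical ~{a/1e12:.4f} THz -> beat {beat/1e3:.1f} kHz")
```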
The frequency combs must have high bandwidth, or they will only be able to detect a small frequency range of chemical compounds, which could lead to false alarms or inaccurate results.
Dispersion is the most important factor that limits a frequency comb’s bandwidth. If there is dispersion, the laser lines are not evenly spaced, which is incompatible with the formation of frequency combs.
“With long-wave infrared radiation, the dispersion will be very high. There is no way to get around it, so we have to find a way to compensate for it or counteract it by engineering our system,” Hu says.
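A toy model — a simplification for illustration, not the authors’ analysis — shows why this matters: if dispersion makes each successive line spacing drift by a small fraction, the comb condition breaks down unless a compensating element, the role a DCM plays, contributes the opposite dispersion:

```python
# Toy model: with dispersion, adjacent comb lines are no longer equally spaced.
# A compensating mirror that contributes the opposite dispersion restores
# uniform spacing. Numbers are placeholders for illustration only.

fr = 10e9          # nominal line spacing, Hz
disp = 2e-4        # fractional change in spacing per line due to cavity dispersion
comp = -2e-4       # dispersion contributed by a compensating (DCM-like) mirror

def spacings(fractional_disp, n_lines=6):
    return [fr * (1 + fractional_disp * n) for n in range(n_lines)]

print("uncompensated spacings (GHz):",
      [round(s / 1e9, 4) for s in spacings(disp)])
print("compensated spacings   (GHz):",
      [round(s / 1e9, 4) for s in spacings(disp + comp)])
```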
Many existing approaches aren’t flexible enough to be used in different scenarios or don’t enable high enough bandwidth.
Hu’s group previously solved this problem in a different type of frequency comb, one that used terahertz waves, by developing a double-chirped mirror (DCM).
A DCM is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other. They found that this DCM, which has a corrugated structure, could effectively compensate for dispersion when used with a terahertz laser.
“We tried to borrow this trick and apply it to an infrared comb, but we ran into lots of challenges,” Hu says.
Because infrared waves are 10 times shorter than terahertz waves, fabricating the new mirror required an extreme level of precision. At the same time, they needed to coat the entire DCM in a thick layer of gold to dissipate heat during laser operation. Plus, their dispersion measurement system, designed for terahertz waves, wouldn’t work with infrared waves, which have frequencies that are about 10 times higher than terahertz.
“After more than two years of trying to implement this scheme, we reached a dead end,” Hu says.
A new solution
Ready to throw in the towel, the team realized something they had missed. They had designed the mirror with corrugation to compensate for the lossy terahertz laser, but infrared radiation sources aren’t as lossy.
This meant they could use a standard DCM design to compensate for dispersion, which is compatible with infrared radiation. However, they still needed to create curved mirror layers to capture the beam of the laser, which made fabrication much more difficult than usual.
“The adjacent layers of mirror differ only by tens of nanometers. That level of precision precludes standard photolithography techniques. On top of that, we still had to etch very deeply into the notoriously stubborn material stacks. Achieving those critical dimensions and etch depths was key to unlocking broadband comb performance,” Zeng says.
In addition to precisely fabricating the DCM, they integrated the mirror directly onto the laser, making the device extremely compact. The team also developed a high-resolution, on-chip dispersion measurement platform that doesn’t require bulky external equipment.
“Our approach is flexible. As long as we can use our platform to measure the dispersion, we can design and fabricate a DCM that compensates for it,” Hu adds.
Taken together, the DCM and on-chip measurement platform enabled the team to generate stable infrared laser frequency combs that had far greater bandwidth than can usually be achieved without a DCM.
In the future, the researchers want to extend their approach to other laser platforms that could generate combs with even greater bandwidth and higher power for more demanding applications.
“These researchers developed an ingenious nanophotonic dispersion compensation scheme based on an integrated air–dielectric double-chirped mirror. This approach provides unprecedented control over dispersion, enabling broadband comb formation at room temperature in the long-wave infrared. Their work opens the door to practical, chip-scale frequency combs for applications ranging from chemical sensing to free-space communications,” says Jacob B. Khurgin, a professor at the Johns Hopkins University Whiting School of Engineering, who was not involved with this paper.
This work is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Gordon and Betty Moore Foundation.
Graduate work with an impact — in big cities and on campus
While working to boost economic development in Detroit in the late 2010s, Nick Allen found he was running up against a problem.
The city was trying to spur more investment after long-term industrial flight to suburbs and other states. Relying more heavily on property taxes for revenue, the city was negotiating individualized tax deals with prospective businesses. That’s hardly a scenario unique to Detroit, but such deals involved lengthy approval processes that slowed investment decisions and made smaller projects seem unrealistic.
Moreover, while creating small pockets of growth, these individualized tax abatements were not changing the city’s broader fiscal structure. They also favored those with leverage and resources to work the system for a break.
“The thing you really don’t want to do with taxes is have very particular, highly procedural ways of adjusting the burdens,” says Allen, now a doctoral student in MIT’s Department of Urban Studies and Planning (DUSP). “You want a simple process that fits people’s ideas about what fairness looks like.”
So, after starting his PhD program at MIT, Allen kept studying urban fiscal policy. Along with a group of other scholars, he has produced research papers making the case for a land-value tax — a common tax rate on land that, combined with reduced property taxes, could raise more local revenue by encouraging more citywide investment, even while lowering tax burdens on residents and businesses. As a bonus, it could also reduce foreclosures.
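A stylized, revenue-neutral example — with hypothetical parcels and rates, not figures from the research — illustrates the mechanism: shifting the levy from buildings to land lowers the bill on an improved parcel and raises it on vacant land, while total revenue stays the same:

```python
# Stylized split-rate (land-value) tax example with hypothetical parcels.
# A single property-tax rate is replaced by a higher rate on land and a
# lower rate on buildings, chosen here to keep total revenue unchanged.

parcels = [
    {"name": "improved lot", "land": 50_000, "building": 200_000},
    {"name": "vacant lot",   "land": 50_000, "building": 0},
]

current_rate = 0.03                      # 3% on land + buildings (placeholder)
land_rate, building_rate = 0.06, 0.015   # split rates (placeholders)

def bill(p, lr, br):
    return lr * p["land"] + br * p["building"]

total_before = sum(bill(p, current_rate, current_rate) for p in parcels)
total_after = sum(bill(p, land_rate, building_rate) for p in parcels)

for p in parcels:
    print(p["name"],
          "before:", bill(p, current_rate, current_rate),
          "after:", bill(p, land_rate, building_rate))
print("total revenue before/after:", total_before, total_after)
```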
In the last few years, this has become a larger topic in urban policy circles. The mayor of Detroit has endorsed the idea. The New York Times has written about the work of Allen and his colleagues. The land-value tax is now a serious policy option.
It is unusual for a graduate student to have their work become part of a prominent policy debate. But then, Allen is an unusual student. At MIT, he has not just conducted influential research in his field, but thrown himself into campus-based work with substantial impact as well. Allen has served on task forces assessing student stipend policy, expanding campus housing, and generating ideas for dining program reform.
For all these efforts, in May, Allen received the Karl Taylor Compton Prize, MIT’s highest student honor. At the ceremony, MIT Chancellor Melissa Nobles observed that Allen’s work helped Institute stakeholders “fully understand complex issues, ensuring his recommendations are not only well-informed but also practical and impactful.”
Looking to revive growth
Allen is a Minnesota native who received his BA from Yale University. In 2015, he enrolled in graduate school at MIT, receiving his master’s in city planning from DUSP in 2017. At the time, Allen worked on the Malaysia Sustainable Cities Project, headed by Professor Lawrence Susskind. At one point Allen spent a couple of months in a small Malaysian village studying the effects of coastal development on local fishing and farming.
Malaysia may be different from Michigan, but the issues that Allen encountered in Asia were similar to the ones he wanted to keep studying back in the U.S.: finding ways to finance growth.
“The core interests I have are around real estate, the physical environment, and these fiscal policy questions of how this all gets funded and what the responsibilities are of the state and private markets,” Allen says. “And that brought me to Detroit.”
Specifically, that landed him at the Detroit Economic Growth Corporation, a city-chartered development agency that works to facilitate new investment. There, Allen started grappling with the city’s revenue problems. Once heralded as the richest city in America, Detroit has seen a lot of property go vacant, and has hiked property taxes on existing structures to compensate for that. Those rates then discouraged further investment and building.
To be sure, the challenges Detroit has faced stem from far more than tax policy and relate to many macroscale socioeconomic factors, including suburban flight, the shift of manufacturing to states with nonunion employees, and much more. But changing tax policy can be one lever to pull in response.
“It’s difficult to figure out how to revive growth in a place that’s been cannibalized by its losses,” Allen says.
Tasked with underwriting real estate projects, Allen started cataloguing the problems arising from Detroit’s property tax reliance, and began looking at past economics work on optimal tax policy in search of alternatives.
“There’s a real nose-to-the-ground empiricism you start with, asking why we have a system nobody would choose,” Allen says. “There were two parts to that, for me. One was initially looking at the difficulty of making individual projects work, from affordable housing to big industrial plants, along with, secondly, this wave of tax foreclosures in the city.”
Engineering, but for policy
After two years in Detroit, Allen returned to MIT, this time as a doctoral student in DUSP and with a research program oriented around the issues he had worked on. In pursuing that, Allen has worked closely with John E. Anderson, an economist at the University of Nebraska at Lincoln. With a nationwide team of economists convened by the Lincoln Institute of Land Policy, they worked to address the city’s questions on property tax reform.
One paper used current data to show that a land-value tax should lower tax-connected foreclosures in the city. Two other papers study the use of the tax in certain parts of Pennsylvania, one of the few states where it has been deployed. There, the researchers concluded, the land-value tax both leads to greater business development and raises property values.
“What we found overall, looking at past tax reduction in Detroit and other cities, is that in reducing the rate at which people in deep tax distress go through foreclosure, it has a fairly large effect,” Allen says. “It has some effect on allowing business to reinvest in properties. We are seeing a lot more attraction of investment. And it’s got the virtue of being a rules-based system.”
Those empirical results, he notes, helped confirm the sense that a policy change could help growth in Detroit.
“That really validated the hunch we were following,” Allen says.
The widespread attention the policy proposal has garnered could not really have been predicted. The tax has not yet been implemented in Detroit, although it has been a prominent part of civic debates there. Allen has been asked to consult on tax policy by officials in numerous large cities, and is hopeful the concept will gain still more traction.
Meanwhile, at MIT, Allen has one more year to go in his doctoral program. On top of his academic research, he has been an active participant in Institute matters, helping reshape graduate-school policies on multiple fronts.
For instance, Allen was part of the Graduate Housing Working Group, whose efforts helped spur MIT to build Graduate Junction, a new housing complex for 675 graduate students on Vassar Street in Cambridge, Massachusetts. The name also refers to the Grand Junction rail line that runs nearby; the complex formally opened in 2024.
“Innovative places struggle to build housing fast enough,” Allen said at the time Graduate Junction opened, also noting that “new housing for students reduces price pressure on the rest of the Cambridge community.”
Commenting on it now, he adds, “Maybe to most people graduate housing policy doesn’t sound that fun, but to me these are very absorbing questions.”
And ultimately, Allen says, the intellectual problems in either domain can be similar, whether he is working on city policy issues or campus enhancements.
“The reason I think planning fits so well here at MIT is, a lot of what I do is like policy engineering,” Allen says. “It’s really important to understand system constraints, and think seriously about finding solutions that can be built to purpose. I think that’s why I’ve felt at home here at MIT, working on these outside public policy topics, and projects for the Institute. You need to take seriously what people say about the constraints in their lives.”
Professor John Joannopoulos, photonics pioneer and Institute for Soldier Nanotechnologies director, dies at 78
John “JJ” Joannopoulos, the Francis Wright Davis Professor of Physics at MIT and director of the MIT Institute for Soldier Nanotechnologies (ISN), passed away on Aug. 17. He was 78.
Joannopoulos was a prolific researcher in the field of theoretical condensed-matter physics, and an early pioneer in the study and application of photonic crystals. Many of his discoveries, in the ways materials can be made to manipulate light, have led to transformative and life-saving technologies, from chip-based optical wave guides, to wireless energy transfer to health-monitoring textiles, to precision light-based surgical tools.
His remarkable career of over 50 years was spent entirely at MIT, where he was known as much for his generous and unwavering mentorship as for his contributions to science. He made a special point to keep up rich and meaningful collaborations with many of his former students and postdocs, dozens of whom have gone on to faculty positions at major universities, and to leadership roles in the public and private sectors. In his five decades at MIT, he made lasting connections across campus, in service of both science and friendship.
“A scientific giant, inspiring leader, and a masterful communicator, John carried a generous and loving heart,” says Yoel Fink PhD ’00, an MIT professor of materials science and engineering who was Joannopoulos’ former student and a longtime collaborator. “He chose to see the good in people, keeping his mind and heart always open. Asking little for himself, he gave everything in care of others. John lived a life of deep impact and meaning — savoring the details of truth-seeking, achieving rare discoveries and mentoring generations of students to achieve excellence. With warmth, humor, and a never-ending optimism, JJ left an indelible impact on science and on all who had the privilege to know him. Above all, he was a loving husband, father, grandfather, friend, and mentor.”
“In the end, the most remarkable thing about him was his unmatched humanity, his ability to make you feel that you were the most important thing in the world that deserved his attention, no matter who you were,” says Raul Radovitzky, ISN associate director and the Jerome C. Hunsaker Professor in MIT’s Department of Aeronautics and Astronautics. “The legacy he leaves is not only in equations and innovations, but in the lives he touched, the minds he inspired, and the warmth he spread in every room he entered.”
“JJ was a very special colleague: a brilliant theorist who was also adept at identifying practical applications; a caring and inspiring mentor of younger scientists; a gifted teacher who knew every student in his class by name,” says Deepto Chakrabarty ’88, the William A. M. Burden Professor in Astrophysics and head of MIT’s Department of Physics. “He will be deeply missed.”
Layers of light
John Joannopoulos was born in 1947 in New York City to parents who had emigrated from Greece. His father was a playwright, and his mother worked as a psychologist. From an early age, Joannopoulos knew he wanted to be a physicist — mainly because the subject was his most challenging in school. In a recent interview with MIT News, he enthusiastically shared: “You probably wouldn’t believe this, but it’s true: I wanted to be a physics professor since I was in high school! I loved the idea of being able to work with students, and being able to have ideas.”
He attended the University of California at Berkeley, where he received a bachelor’s degree in 1968, and a PhD in 1974, both in physics. That same year, he joined the faculty at MIT, where he would spend his 50-plus-year career — though at the time, the chances of gaining a long-term foothold at the Institute seemed slim, as Joannopoulos told MIT News.
“The chair of the physics department was the famous nuclear physicist, Herman Feshbach, who told me the probability that I would get tenure was something like 30 percent,” Joannopoulos recalled. “But when you’re young and just starting off, it was certainly better than zero, and I thought, that was fine — there was hope down the line.”
Starting out at MIT, Joannopoulos knew exactly what he wanted to do. He quickly set up a group to study theoretical condensed-matter physics, and specifically, ab initio physics, meaning physics “from first principles.” In this initial work, he sought to build theoretical models to predict the electronic behavior and structure of materials, based solely on the atomic numbers of the atoms in a material. Such foundational models could be applied to understand and design a huge range of materials and structures.
Then, in the early 1990s, Joannopoulos took a research turn, spurred by a paper by physicist Eli Yablonovitch at the University of California at Los Angeles, who did some preliminary work on materials that can affect the behavior of photons, or particles of light. Joannopoulos recognized a connection with his first-principles work with electrons. Along with his students, he applied that approach to predict the fundamental behavior of photons in different classes of materials. His group was one of the first to pioneer the field of photonic crystals, and the study of how materials can be manipulated at the nanoscale to control the behavior of light traveling through them. In 1995, Joannopoulos co-authored the first textbook on the subject.
And in 1998, he took on a more-than-century-old assumption about how light should reflect, and turned it on its head. That assumption held that light shining onto a structure made of multiple refractive layers could reflect back, but only over a limited range of angles. In fact, Joannopoulos and his group showed that the opposite is true: If the structure’s layers followed particular design criteria, the structure as a whole could reflect light coming from any and all angles. This structure was called the “perfect mirror.”
That insight led to another: If the structure were rolled into a tube, the resulting hollow fiber could act as a perfect optical conduit. Any light traveling through the fiber would reflect and bounce around within it, with none scattering away. Joannopoulos and his group applied this insight to develop the first precision “optical scalpel” — a fiber that can be safely handled while delivering a highly focused laser, precise and powerful enough to perform delicate surgical procedures. Joannopoulos helped to commercialize the new tool with a startup, OmniGuide, which has since provided the optical scalpel to assist in hundreds of thousands of medical procedures around the world.
Legendary mentor
In 2006, Joannopoulos took the helm as director of MIT’s Institute for Soldier Nanotechnologies — a post he steadfastly held for almost 20 years. During his dedicated tenure, he worked with ISN members across campus and in departments outside his own, getting to know and champion their work. He facilitated countless collaborations between MIT faculty, industry partners, and the U.S. Department of Defense. Among the many projects he raised support for were innovations in lightweight armor, hyperspectral imaging, energy-efficient batteries, and smart and responsive fabrics.
Joannopoulos helped to translate many basic science insights into practical applications. He was a cofounder of six spinoff companies based on his fundamental research, and helped to create dozens more, which have advanced technologies ranging from laser surgery tools to wireless electric power transmission, transparent display technologies, and optical computing. He was awarded 126 patents for his many discoveries and authored over 750 peer-reviewed papers.
In recognition of his wide impact and contributions, Joannopoulos was elected to the National Academy of Sciences and the American Academy of Arts and Sciences. He was also a fellow of both the American Physical Society and the American Association for the Advancement of Science. Over his 50-plus-year career, he was the recipient of many scientific awards and honors including the Max Born Award, and the Aneesur Rahman Prize in Computational Physics. Joannopoulos was also a gifted classroom teacher, and was recognized at MIT with the Buechner Teaching Prize in Physics and the Graduate Teaching Award in Science.
This year, Joannopoulos was the recipient of MIT’s Killian Achievement Award, which recognizes the extraordinary lifetime contributions of a member of the MIT faculty. In addition to Joannopoulos’ many accomplishments in science, the award citation emphasized his lasting impact on the generations of students he mentored:
“Professor Joannopoulos has served as a legendary mentor to generations of students, inspiring them to achieve excellence in science while at the same time facilitating the practical benefit to society through entrepreneurship,” the citation reads. “Through all of these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”
“JJ was an amazing scientist: He published hundreds of papers that have been cited close to 200,000 times. He was also a serial entrepreneur: Companies he cofounded raised hundreds of millions of dollars and employed hundreds of people,” says MIT Professor Marin Soljacic ’96, a former postdoc under Joannopoulos who cofounded the startup WiTricity with him. “He was an amazing mentor, a close friend, and like a scientific father to me. He always had time for me, any time of the day, and as much as I needed.”
Indeed, Joannopoulos strived to meaningfully support his many students. In the classroom, he “was legendary,” says friend and colleague Patrick Lee ’66, PhD ’70, who recalls that Joannopoulos would make a point of memorizing the names and faces of more than 100 students on the first day of class, then call each of them by first name from the second day onward, for the rest of the term.
What’s more, Joannopoulos encouraged graduate students and postdocs to follow their ideas, even when they ran counter to his own.
“John did not produce clones,” says Lee, who is an MIT professor emeritus of physics. “He showed them the way to do science by example, by caring and by sharing his optimism. I have never seen someone so deeply loved by his students.”
Even students who stepped off the photonics path have kept in close contact with their mentor, as former student and MIT professor Josh Winn ’94, SM ’94, PhD ’01 has done.
“Even though our work together ended more than 25 years ago, and I now work in a different field, I still feel like part of the Joannopoulos academic family,” says Winn, who is now a professor of astrophysics at Princeton University. “It’s a loyal group with branches all over the world. We even had our own series of conferences, organized by former students to celebrate John’s 50th, 60th, and 70th birthdays. Most professors would consider themselves fortunate to have even one such ‘festschrift’ honoring their legacy.”
MIT professor of mathematics Steven Johnson ’95, PhD ’01, a former student and frequent collaborator, has experienced personally, and seen many times over, Joannopoulos’ generous and open-door mentorship.
“In every collaboration, I’ve unfailingly observed him to cast a wide net to value multiple voices, to ensure that everyone feels included and valued, and to encourage collaborations across groups and fields and institutions,” Johnson says. “Kind, generous, and brimming with infectious enthusiasm and positivity, he set an example so many of his lucky students have striven to follow.”
Joannopoulos started at MIT around the same time as Marc Kastner, who had a nearby office on the second floor of Building 13.
“I would often hear loud arguments punctuated by boisterous laughter, coming from John’s office, where he and his students were debating physics,” recalls Kastner, who is the Donner Professor of Physics Emeritus at MIT. “I am sure this style of interaction is what made him such a great mentor.”
“He exuded such enthusiasm for science and good will to others that he was just good fun to be around,” adds friend and colleague Erich Ippen, MIT professor emeritus of physics.
“John was indeed a great man — a very special one. Everyone who ever worked with him understands this,” says Stanford University physics professor Robert Laughlin PhD ’79, one of Joannopoulos’ first graduate students, who went on to win the 1998 Nobel Prize in Physics. “He sprinkled a kind of transformative magic dust on people that induced them to dedicate every waking moment to the task of making new and wonderful things. You can find traces of it in lots of places around the world that matter, all of them the better for it. There’s quite a pile of it in my office.”
Joannopoulos is survived by his wife, Kyri Dunussi-Joannopoulos; their three daughters, Maria, Lena, and Alkisti; and their families. Details for funeral and memorial services are forthcoming.
A new model predicts how molecules will dissolve in different solvents
Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.
The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.
“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.
The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.
“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”
William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.
Solving solubility
The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.
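In one common form, the Abraham model is a linear free-energy relationship: the predicted log-solubility is a weighted sum of solute descriptors (themselves often estimated from a molecule’s structural fragments), with weights specific to each solvent. The sketch below uses placeholder descriptors and coefficients purely for illustration, not fitted values:

```python
# Minimal sketch of an Abraham-style linear solvation-energy relationship:
# a predicted log-solubility is a weighted sum of solute descriptors, with
# weights specific to each solvent. All numbers below are placeholders,
# not fitted coefficients.

solute_descriptors = {          # hypothetical solute (E, S, A, B, V)
    "E": 0.80, "S": 0.90, "A": 0.30, "B": 0.60, "V": 1.20,
}

solvent_coefficients = {        # hypothetical solvent (c, e, s, a, b, v)
    "c": 0.10, "e": 0.35, "s": -0.40, "a": 0.25, "b": -1.10, "v": 2.00,
}

def abraham_log_solubility(solute, solv):
    return (solv["c"]
            + solv["e"] * solute["E"]
            + solv["s"] * solute["S"]
            + solv["a"] * solute["A"]
            + solv["b"] * solute["B"]
            + solv["v"] * solute["V"])

print("predicted log solubility:",
      round(abraham_log_solubility(solute_descriptors, solvent_coefficients), 3))
```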
In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state-of-the-art model for predicting solubility was a model developed in Green’s lab in 2022.
That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.
“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.
Part of the reason that existing solubility models haven’t worked well is that there wasn’t a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including information on solubility for about 800 molecules dissolved in more than 100 organic solvents that are commonly used in synthetic chemistry.
Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.
One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.
The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.
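The distinction can be sketched generically in PyTorch. This is not the FastProp or ChemProp code — ChemProp actually learns its representation with a message-passing network over the molecular graph — and the embedding table below is only a stand-in for “learned during training,” with placeholder sizes throughout:

```python
import torch
import torch.nn as nn

# Generic sketch (not the actual FastProp/ChemProp implementations):
# both models map a molecule plus solvent/temperature features to solubility,
# but only the second one updates the molecular representation during training.

N_MOLECULES, DESCRIPTOR_DIM, CONTEXT_DIM = 1000, 64, 8  # placeholder sizes

class StaticEmbeddingModel(nn.Module):
    """Precomputed descriptors are fixed; only the regressor is trained."""
    def __init__(self, descriptors: torch.Tensor):
        super().__init__()
        self.register_buffer("descriptors", descriptors)  # frozen features
        self.regressor = nn.Sequential(
            nn.Linear(DESCRIPTOR_DIM + CONTEXT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, mol_idx, context):
        x = torch.cat([self.descriptors[mol_idx], context], dim=-1)
        return self.regressor(x)

class LearnedEmbeddingModel(nn.Module):
    """The molecular representation itself is trainable."""
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(N_MOLECULES, DESCRIPTOR_DIM)
        self.regressor = nn.Sequential(
            nn.Linear(DESCRIPTOR_DIM + CONTEXT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, mol_idx, context):
        x = torch.cat([self.embedding(mol_idx), context], dim=-1)
        return self.regressor(x)

# Example forward pass with random placeholder data.
static_model = StaticEmbeddingModel(torch.randn(N_MOLECULES, DESCRIPTOR_DIM))
learned_model = LearnedEmbeddingModel()
mol_idx = torch.tensor([0, 1, 2])
context = torch.randn(3, CONTEXT_DIM)   # e.g., solvent features + temperature
print(static_model(mol_idx, context).shape, learned_model(mol_idx, context).shape)
```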
The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.
“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.
Accurate predictions
The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.
“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”
The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.
“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.
Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.
“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”
The research was funded, in part, by the U.S. Department of Energy.
Researchers glimpse the inner workings of protein language models
Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.
These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.
In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.
“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”
Onkar Gujral, an MIT graduate student, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.
Opening the black box
In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.
Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.
In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.
However, in all of these studies, it has been impossible to know how the models were making their predictions.
“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.
In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.
The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.
Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.
When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
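A minimal sparse autoencoder of this kind can be sketched in PyTorch; the layer sizes and sparsity penalty below are placeholders rather than values from the study:

```python
import torch
import torch.nn as nn

# Sketch of a sparse autoencoder over protein-language-model representations.
# Sizes and the sparsity weight are placeholders, not values from the paper.

D_MODEL, D_LATENT, L1_WEIGHT = 480, 20_000, 1e-3

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(D_MODEL, D_LATENT)
        self.decoder = nn.Linear(D_LATENT, D_MODEL)

    def forward(self, x):
        latent = torch.relu(self.encoder(x))  # wide, mostly-zero activations
        return self.decoder(latent), latent

model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

protein_reprs = torch.randn(32, D_MODEL)      # stand-in for real embeddings
reconstruction, latent = model(protein_reprs)

# Reconstruction loss plus an L1 penalty that pushes most latent units to zero,
# so individual units tend to line up with individual, interpretable features.
loss = nn.functional.mse_loss(reconstruction, protein_reprs) \
       + L1_WEIGHT * latent.abs().mean()
loss.backward()
optimizer.step()
print("active units per protein:", (latent > 0).float().sum(dim=1).mean().item())
```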
“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it’s hard to interpret the neurons.”
Interpretable models
Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.
By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”
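The same kind of comparison can be approximated without any particular AI assistant: for each latent node, gather the proteins that activate it most strongly and check which annotations are over-represented among them. The sketch below uses toy activations and labels purely to show the idea:

```python
from collections import Counter

# Hypothetical sketch: relate sparse latent nodes to known protein annotations
# by checking which labels are enriched among a node's top-activating proteins.

# activations[i][j]: activation of latent node j for protein i (toy numbers)
activations = [
    [0.9, 0.0, 0.1],
    [0.8, 0.0, 0.0],
    [0.0, 0.7, 0.0],
    [0.1, 0.6, 0.0],
]
annotations = [                      # known features of each protein (toy labels)
    {"membrane transport"},
    {"membrane transport", "kinase"},
    {"biosynthesis"},
    {"biosynthesis", "metabolism"},
]

def describe_node(node, top_k=2):
    ranked = sorted(range(len(activations)),
                    key=lambda i: activations[i][node], reverse=True)[:top_k]
    counts = Counter(label for i in ranked for label in annotations[i])
    return counts.most_common(1)[0] if counts else None

for node in range(3):
    print(f"node {node}: most enriched label ->", describe_node(node))
```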
This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.
“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.
Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.
“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.
The research was funded by the National Institutes of Health.
A shape-changing antenna for more versatile sensing and communication
MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.
A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex, moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.
The word “antenna” may call to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.
The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices, motion tracking and sensing for augmented reality, or wireless communication across a wide range of network protocols.
In addition, the researchers developed an editing tool so users can generate customized metamaterial antennas, which can be fabricated using a laser cutter.
“Usually, when we think of antennas, we think of static antennas — they are fabricated to have specific properties and that is it. However, by using auxetic metamaterials, which can deform into three different geometric states, we can seamlessly change the properties of the antenna by changing its geometry, without fabricating a new structure. In addition, we can use changes in the antenna’s radio frequency properties, due to changes in the metamaterial geometry, as a new method of sensing for interaction design,” says lead author Marwa AlAlawi, a mechanical engineering graduate student at MIT.
Her co-authors include Regina Zheng and Katherine Yan, both MIT undergraduate students; Ticha Sethapakdi, an MIT graduate student in electrical engineering and computer science; Soo Yeon Ahn of the Gwangju Institute of Science and Technology in Korea; and co-senior authors Junyi Zhu, assistant professor at the University of Michigan; and Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the Computer Science and Artificial Intelligence Lab. The research will be presented at the ACM Symposium on User Interface Software and Technology.
Making sense of antennas
While traditional antennas radiate and receive radio signals, in this work, the researchers looked at how the devices can act as sensors. The team’s goal was to develop a mechanical element that can also be used as an antenna for sensing.
To do this, they leveraged the antenna’s “resonance frequency,” which is the frequency at which the antenna is most efficient.
An antenna’s resonance frequency will shift due to changes in its shape. (Think about extending the left “bunny ear” to reduce TV static.) Researchers can capture these shifts for sensing. For instance, a reconfigurable antenna could be used in this way to detect the expansion of a person’s chest, to monitor their respiration.
To design a versatile reconfigurable antenna, the researchers used metamaterials. These engineered materials, which can be programmed to adopt different shapes, are composed of a periodic arrangement of unit cells that can be rotated, compressed, stretched, or bent.
By deforming the metamaterial structure, one can shift the antenna’s resonance frequency.
“In order to trigger changes in resonance frequency, we either need to change the antenna’s effective length or introduce slits and holes into it. Metamaterials allow us to get those different states from only one structure,” AlAlawi says.
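A back-of-the-envelope patch-antenna approximation — with made-up dimensions and permittivity, not the device reported in the paper — shows how a small change in effective length translates into a measurable shift in resonance frequency:

```python
# Rough patch-antenna approximation: resonance frequency ~ c / (2 * L * sqrt(eps)).
# Dimensions and permittivity below are illustrative placeholders.

C = 3.0e8               # speed of light, m/s
EPS_EFF = 2.5           # effective permittivity of the dielectric (placeholder)

def resonance_hz(length_m):
    return C / (2 * length_m * EPS_EFF ** 0.5)

nominal = 0.030                    # 30 mm effective patch length
stretched = nominal * 1.026        # ~2.6% longer after deformation

f0, f1 = resonance_hz(nominal), resonance_hz(stretched)
print(f"nominal: {f0/1e9:.3f} GHz, deformed: {f1/1e9:.3f} GHz, "
      f"shift: {100*(f0-f1)/f0:.1f}%")
```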
The device, dubbed the meta-antenna, is composed of a dielectric layer of material sandwiched between two conductive layers.
To fabricate a meta-antenna, the researchers cut the dielectric layer out of a rubber sheet with a laser cutter. Then they added a patch on top of the dielectric layer using conductive spray paint, creating a resonating “patch antenna.”
But they found that even the most flexible conductive material couldn’t withstand the amount of deformation the antenna would experience.
“We did a lot of trial and error to determine that, if we coat the structure with flexible acrylic paint, it protects the hinges so they don’t break prematurely,” AlAlawi explains.
A means for makers
With the fabrication problem solved, the researchers built a tool that enables users to design and produce metamaterial antennas for specific applications.
The user can define the size of the antenna patch, choose a thickness for the dielectric layer, and set the length-to-width ratio of the metamaterial unit cells. Then the system automatically simulates the antenna’s resonance frequency range.
“The beauty of metamaterials is that, because it is an interconnected system of linkages, the geometric structure allows us to reduce the complexity of a mechanical system,” AlAlawi says.
Using the design tool, the researchers incorporated meta-antennas into several smart devices, including a curtain that dynamically adjusts household lighting and headphones that seamlessly transition between noise-cancelling and transparent modes.
For the smart headphone, for instance, when the meta-antenna expands and bends, it shifts the resonance frequency by 2.6 percent, which switches the headphone mode. The team’s experiments also showed that meta-antenna structures are durable enough to withstand more than 10,000 compressions.
Because the antenna patch can be patterned onto any surface, it could be used with more complex structures. For instance, the antenna could be incorporated into smart textiles that perform noninvasive biomedical sensing or temperature monitoring.
In the future, the researchers want to design three-dimensional meta-antennas for a wider range of applications. They also want to add more functions to the design tool, improve the durability and flexibility of the metamaterial structure, experiment with different symmetric metamaterial patterns, and streamline some manual fabrication steps.
This research was funded, in part, by the Bahrain Crown Prince International Scholarship and the Gwangju Institute of Science and Technology.
How AI could speed the development of RNA vaccines and other RNA therapies
Using artificial intelligence, MIT researchers have come up with a new way to design nanoparticles that can more efficiently deliver RNA vaccines and other types of RNA therapies.
After training a machine-learning model to analyze thousands of existing delivery particles, the researchers used it to predict new materials that would work even better. The model also enabled the researchers to identify particles that would work well in different types of cells, and to discover ways to incorporate new types of materials into the particles.
“What we did was apply machine-learning tools to help accelerate the identification of optimal ingredient mixtures in lipid nanoparticles to help target a different cell type or help incorporate different materials, much faster than previously was possible,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
This approach could dramatically speed the process of developing new RNA vaccines, as well as therapies that could be used to treat obesity, diabetes, and other metabolic disorders, the researchers say.
Alvin Chan, a former MIT postdoc who is now an assistant professor at Nanyang Technological University, and Ameya Kirtane, a former MIT postdoc who is now an assistant professor at the University of Minnesota, are the lead authors of the new open-access study, which appears today in Nature Nanotechnology.
Particle predictions
RNA vaccines, such as the vaccines for SARS-CoV-2, are usually packaged in lipid nanoparticles (LNPs) for delivery. These particles protect mRNA from being broken down in the body and help it to enter cells once injected.
Creating particles that handle these jobs more efficiently could help researchers to develop even more effective vaccines. Better delivery vehicles could also make it easier to develop mRNA therapies that encode genes for proteins that could help to treat a variety of diseases.
In 2024, Traverso’s lab launched a multiyear research program, funded by the U.S. Advanced Research Projects Agency for Health (ARPA-H), to develop new ingestible devices that could achieve oral delivery of RNA treatments and vaccines.
“Part of what we’re trying to do is develop ways of producing more protein, for example, for therapeutic applications. Maximizing the efficiency is important to be able to boost how much we can have the cells produce,” Traverso says.
A typical LNP consists of four components — cholesterol, a helper lipid, an ionizable lipid, and a lipid that is attached to polyethylene glycol (PEG). Different variants of each of these components can be swapped in to create a huge number of possible combinations. Changing up these formulations and testing each one individually is very time-consuming, so Traverso, Chan, and their colleagues decided to turn to artificial intelligence to help speed up the process.
“Most AI models in drug discovery focus on optimizing a single compound at a time, but that approach doesn’t work for lipid nanoparticles, which are made of multiple interacting components,” Chan says. “To tackle this, we developed a new model called COMET, inspired by the same transformer architecture that powers large language models like ChatGPT. Just as those models understand how words combine to form meaning, COMET learns how different chemical components come together in a nanoparticle to influence its properties — like how well it can deliver RNA into cells.”
To generate training data for their machine-learning model, the researchers created a library of about 3,000 different LNP formulations. The team tested each of these formulations in the lab to see how efficiently they could deliver their payload to cells, then fed all of this data into a machine-learning model.
After the model was trained, the researchers asked it to predict new formulations that would work better than existing LNPs. They tested those predictions by using the new formulations to deliver mRNA encoding a fluorescent protein to mouse skin cells grown in a lab dish. They found that the LNPs predicted by the model did indeed work better than the particles in the training data, and in some cases better than LNP formulations that are used commercially.
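For readers curious about what a model like this looks like in code, the sketch below is a minimal, hypothetical stand-in for COMET, not the published architecture: each formulation is treated as a short sequence of component IDs, a small transformer encoder models how the components interact, and a regression head predicts delivery efficiency. The class name, layer sizes, and random inputs are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ToyFormulationScorer(nn.Module):
    """Hypothetical COMET-style scorer: a transformer over LNP components."""

    def __init__(self, vocab_size=200, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)            # one ID per ingredient
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)   # components attend to each other
        self.head = nn.Linear(d_model, 1)                         # predicted delivery efficiency

    def forward(self, component_ids):                # shape: (batch, 4 components)
        x = self.embed(component_ids)                # (batch, 4, d_model)
        x = self.encoder(x).mean(dim=1)              # pool the interacting components
        return self.head(x).squeeze(-1)              # one score per formulation

model = ToyFormulationScorer()
candidates = torch.randint(0, 200, (8, 4))           # 8 random formulations, 4 components each
with torch.no_grad():
    scores = model(candidates)
print(scores.argsort(descending=True))               # rank candidates by predicted efficiency
```

In the actual study, a model of this general kind was trained on the roughly 3,000 measured formulations and then asked to rank formulations it had never seen.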
Accelerated development
Once the researchers showed that the model could accurately predict particles that would efficiently deliver mRNA, they began asking additional questions. First, they wondered if they could train the model on nanoparticles that incorporate a fifth component: a type of polymer known as branched poly(beta-amino esters) (PBAEs).
Research by Traverso and his colleagues has shown that these polymers can effectively deliver nucleic acids on their own, so they wanted to explore whether adding them to LNPs could improve LNP performance. The MIT team created a set of about 300 LNPs that also included these polymers, which they used to train the model. The resulting model could then predict additional PBAE-containing formulations that would work better.
Next, the researchers set out to train the model to make predictions about LNPs that would work best in different types of cells, including a type of cell called Caco-2, which is derived from colorectal cancer cells. Again, the model was able to predict LNPs that would efficiently deliver mRNA to these cells.
Lastly, the researchers used the model to predict which LNPs could best withstand lyophilization — a freeze-drying process often used to extend the shelf-life of medicines.
“This is a tool that allows us to adapt it to a whole different set of questions and help accelerate development. We did a large training set that went into the model, but then you can do much more focused experiments and get outputs that are helpful on very different kinds of questions,” Traverso says.
He and his colleagues are now working on incorporating some of these particles into potential treatments for diabetes and obesity, which are two of the primary targets of the ARPA-H-funded project. Therapeutics that could be delivered using this approach include GLP-1 mimics, which have effects similar to those of Ozempic.
This research was funded by the GO Nano Marble Center at the Koch Institute, the Karl van Tassel Career Development Professorship, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and ARPA-H.
Study sheds light on graphite’s lifespan in nuclear reactors
Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also densifies and swells in response to radiation, and the mechanism behind those changes has proven difficult to study.
Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.
“We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”
Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.
“The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the [role of] porosity in both mechanical properties and swelling. This work addresses that.”
The open-access paper appears this week in Interdisciplinary Materials. It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.
A long-studied, complex material
Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.
Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.
“The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”
But graphite also has its complexities.
“We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says. “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”
Each graphite grade has its own composite structure, but all of them contain fractals: shapes that look the same at different scales.
Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.
“Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”
For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.
The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, characteristics captured by what are known as the material’s fractal dimensions.
“When you look at the scattering intensity, you see a large range of porosity,” Fayfar says. “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores in very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”
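As a rough illustration of that kind of analysis (not the study's actual fitting procedure), the fractal character of a porous material shows up as a power law in the scattering intensity, I(q) proportional to q^(-p), and the exponent p can be read off the slope of a log-log fit. The sketch below uses synthetic data generated with an assumed exponent of 2.5:

```python
import numpy as np

def power_law_exponent(q, intensity):
    """Estimate p in I(q) ~ q**(-p) from the slope of a log-log fit.

    For a mass fractal, p equals the fractal dimension; for a surface
    fractal of dimension Ds, the exponent is commonly read as 6 - Ds.
    """
    slope, _ = np.polyfit(np.log(q), np.log(intensity), 1)
    return -slope

# Synthetic scattering curve with an assumed exponent of 2.5 plus noise.
rng = np.random.default_rng(0)
q = np.logspace(-2, -0.5, 50)                        # scattering vector magnitude
intensity = q**-2.5 * rng.lognormal(0.0, 0.05, q.size)

print(round(power_law_exponent(q, intensity), 2))    # close to 2.5
```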
Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.
“But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”
The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.
“Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”
From research to reactors
The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical tool known as the Weibull distribution could be used to predict graphite’s time until failure. The Weibull distribution is already used to describe the probability of failure in ceramics and other porous materials, as well as in metal alloys.
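To make that idea concrete, the Weibull distribution expresses the probability that a brittle part fails at or below a given stress as P = 1 - exp(-(s/s0)^m), where m is the Weibull modulus and s0 the characteristic strength. The sketch below fits those two parameters to a set of hypothetical failure stresses; the numbers are invented for illustration and are not from the study:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical failure stresses (MPa) for a batch of graphite specimens.
failure_stress = np.array([18.2, 20.1, 21.5, 22.0, 23.4, 24.8, 25.3, 26.9, 28.1, 30.0])

# Fit a two-parameter Weibull distribution (location fixed at zero).
m, loc, s0 = weibull_min.fit(failure_stress, floc=0)

applied = 22.0  # MPa
p_fail = weibull_min.cdf(applied, m, loc, s0)   # equals 1 - exp(-(applied/s0)**m)
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {s0:.1f} MPa")
print(f"Estimated failure probability at {applied} MPa: {p_fail:.2f}")
```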
Khaykovich also speculated that the findings could contribute to our understanding of why materials densify and swell under irradiation.
“There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”
The researchers describe the paper as an important step toward informing graphite production and use in nuclear reactors of the future.
“Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”
This work was supported, in part, by the U.S. Department of Energy.
Using generative AI, researchers design compounds that can kill drug-resistant bacteria
With help from artificial intelligence, MIT researchers have designed novel antibiotics that can combat two hard-to-treat pathogens: drug-resistant Neisseria gonorrhoeae and methicillin-resistant Staphylococcus aureus (MRSA).
Using generative AI algorithms, the research team designed more than 36 million possible compounds and computationally screened them for antimicrobial properties. The top candidates they discovered are structurally distinct from any existing antibiotics, and they appear to work by novel mechanisms that disrupt bacterial cell membranes.
This approach allowed the researchers to generate and evaluate theoretical compounds that have never been seen before — a strategy that they now hope to apply to identify and design compounds with activity against other species of bacteria.
“We’re excited about the new possibilities that this project opens up for antibiotics development. Our work shows the power of AI from a drug design standpoint, and enables us to exploit much larger chemical spaces that were previously inaccessible,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.
Collins is the senior author of the study, which appears today in Cell. The paper’s lead authors are MIT postdoc Aarti Krishnan, former postdoc Melis Anahtar ’08, and Jacqueline Valeri PhD ’23.
Exploring chemical space
Over the past 45 years, a few dozen new antibiotics have been approved by the FDA, but most of these are variants of existing antibiotics. At the same time, bacterial resistance to many of these drugs has been growing. Globally, drug-resistant bacterial infections are estimated to be associated with nearly 5 million deaths per year.
In hopes of finding new antibiotics to fight this growing problem, Collins and others at MIT’s Antibiotics-AI Project have harnessed the power of AI to screen huge libraries of existing chemical compounds. This work has yielded several promising drug candidates, including halicin and abaucin.
To build on that progress, Collins and his colleagues decided to expand their search into molecules that can’t be found in any chemical libraries. By using AI to generate hypothetically possible molecules that don’t exist or haven’t been discovered, they realized that it should be possible to explore a much greater diversity of potential drug compounds.
In their new study, the researchers employed two different approaches: First, they directed generative AI algorithms to design molecules based on a specific chemical fragment that showed antimicrobial activity, and second, they let the algorithms freely generate molecules, without having to include a specific fragment.
For the fragment-based approach, the researchers sought to identify molecules that could kill N. gonorrhoeae, a Gram-negative bacterium that causes gonorrhea. They began by assembling a library of about 45 million known chemical fragments, consisting of all possible combinations of 11 atoms of carbon, nitrogen, oxygen, fluorine, chlorine, and sulfur, along with fragments from Enamine’s REadily AccessibLe (REAL) space.
Then, they screened the library using machine-learning models that Collins’ lab had previously trained to predict antibacterial activity against N. gonorrhoeae. This resulted in nearly 4 million fragments. They narrowed down that pool by removing any fragments that were predicted to be cytotoxic to human cells, displayed chemical liabilities, or closely resembled existing antibiotics. This left them with about 1 million candidates.
“We wanted to get rid of anything that would look like an existing antibiotic, to help address the antimicrobial resistance crisis in a fundamentally different way. By venturing into underexplored areas of chemical space, our goal was to uncover novel mechanisms of action,” Krishnan says.
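One of those filtering steps, removing fragments that look too much like known antibiotics, is commonly done by comparing molecular fingerprints. The sketch below is a simplified, hypothetical version of such a novelty filter built with the open-source RDKit toolkit; the reference molecules, similarity threshold, and candidate SMILES are all illustrative and are not taken from the study:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# A tiny reference set of known antibiotics (a real pipeline would use many more).
reference_smiles = [
    "CC1(C)SC2C(NC(=O)Cc3ccccc3)C(=O)N2C1C(=O)O",  # penicillin G (stereochemistry omitted)
    "Nc1ccc(cc1)S(N)(=O)=O",                        # sulfanilamide
]
reference_fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
                 for s in reference_smiles]

def looks_novel(smiles, threshold=0.4):
    """Keep fragments whose Tanimoto similarity to every reference stays below the threshold."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparsable structure
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return all(DataStructs.TanimotoSimilarity(fp, ref) < threshold for ref in reference_fps)

candidates = [
    "O=C(Nc1ccncc1)C1CCNC1",           # an arbitrary amide fragment
    "CC1(C)SC2C(N)C(=O)N2C1C(=O)O",    # a penicillin-like core, likely filtered out
]
print([s for s in candidates if looks_novel(s)])
```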
Through several rounds of additional experiments and computational analysis, the researchers identified a fragment they called F1 that appeared to have promising activity against N. gonorrhoeae. They used this fragment as the basis for generating additional compounds, using two different generative AI algorithms.
One of those algorithms, known as chemically reasonable mutations (CReM), works by starting with a particular molecule containing F1 and then generating new molecules by adding, replacing, or deleting atoms and chemical groups. The second algorithm, F-VAE (fragment-based variational autoencoder), takes a chemical fragment and builds it into a complete molecule. It does so by learning patterns of how fragments are commonly modified, based on its pretraining on more than 1 million molecules from the ChEMBL database.
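As a much-simplified picture of the "replace atoms" idea behind mutation-based generators like CReM (this is a hypothetical toy, not the CReM algorithm or its actual interface), the sketch below uses RDKit to enumerate analogs of a starting molecule by swapping single carbon atoms for nitrogen and keeping only the chemically valid results:

```python
from rdkit import Chem

def single_atom_substitutions(smiles, new_atomic_num=7):
    """Enumerate analogs by replacing one carbon atom at a time with nitrogen."""
    mol = Chem.MolFromSmiles(smiles)
    analogs = set()
    for atom in mol.GetAtoms():
        if atom.GetAtomicNum() != 6:      # only mutate carbon atoms
            continue
        editable = Chem.RWMol(mol)
        editable.GetAtomWithIdx(atom.GetIdx()).SetAtomicNum(new_atomic_num)
        try:
            Chem.SanitizeMol(editable)    # rejects chemically invalid edits
            analogs.add(Chem.MolToSmiles(editable))
        except Exception:
            continue
    return sorted(analogs)

# Analogs of phenylacetic acid, used here purely as a stand-in starting molecule.
for s in single_atom_substitutions("c1ccccc1CC(=O)O"):
    print(s)
```

The real CReM tool also adds and deletes atoms and whole chemical groups, and the F-VAE model instead learns to build a complete molecule around a fixed fragment.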
Those two algorithms generated about 7 million candidates containing F1, which the researchers then computationally screened for activity against N. gonorrhoeae. This screen yielded about 1,000 compounds, and the researchers selected 80 of those to see if they could be produced by chemical synthesis vendors. Only two of these could be synthesized, and one of them, named NG1, was very effective at killing N. gonorrhoeae in a lab dish and in a mouse model of drug-resistant gonorrhea infection.
Additional experiments revealed that NG1 interacts with a protein called LptA, a novel drug target involved in the synthesis of the bacterial outer membrane. It appears that the drug works by interfering with membrane synthesis, which is fatal to cells.
Unconstrained design
In a second round of studies, the researchers explored the potential of using generative AI to freely design molecules, this time using the Gram-positive bacterium S. aureus as their target.
Again, the researchers used CReM and the VAE to generate molecules, but this time with no constraints other than the general rules of how atoms can join to form chemically plausible molecules. Together, the models generated more than 29 million compounds. The researchers then applied the same filters that they had used for the N. gonorrhoeae candidates, this time focusing on S. aureus, and eventually narrowed the pool down to about 90 compounds.
They were able to synthesize and test 22 of these molecules, and six of them showed strong antibacterial activity against multi-drug-resistant S. aureus grown in a lab dish. They also found that the top candidate, named DN1, was able to clear an MRSA skin infection in a mouse model. These molecules also appear to interfere with bacterial cell membranes, but with broader effects that are not limited to interaction with one specific protein.
Phare Bio, a nonprofit that is also part of the Antibiotics-AI Project, is now working on further modifying NG1 and DN1 to make them suitable for additional testing.
“In a collaboration with Phare Bio, we are exploring analogs, as well as working on advancing the best candidates preclinically, through medicinal chemistry work,” Collins says. “We are also excited about applying the platforms that Aarti and the team have developed toward other bacterial pathogens of interest, notably Mycobacterium tuberculosis and Pseudomonas aeruginosa.”
The research was funded, in part, by the U.S. Defense Threat Reduction Agency, the National Institutes of Health, the Audacious Project, Flu Lab, the Sea Grape Foundation, Rosamund Zander and Hansjorg Wyss for the Wyss Foundation, and an anonymous donor.