MIT Latest News
Helping AI agents search to get the best results out of large language models
Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.
AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.
But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes. Coding this up can take as much effort as implementing the original agent; if your system for translating a codebase contained thousands of lines of code, then you’d be making thousands of lines of code changes or additions to support the logic for backtracking when LLMs make mistakes.
To save programmers time and effort, researchers with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Asari AI have developed a framework called “EnCompass.”
With EnCompass, you no longer have to make these changes yourself. Instead, when EnCompass runs your program, it automatically backtracks if LLMs make mistakes. EnCompass can also make clones of the program runtime to make multiple attempts in parallel in search of the best solution. In full generality, EnCompass searches over the different possible paths your agent could take as a result of the different possible outputs of all the LLM calls, looking for the path where the LLM finds the best solution.
Then, all you have to do is annotate the locations where you may want to backtrack or clone the program runtime, and record any information that may be useful to the strategy used to search over your agent’s possible execution paths (the search strategy). You can then specify the search strategy separately: you could either use one that EnCompass provides out of the box or, if desired, implement your own custom strategy.
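To make the pattern concrete, here is a tiny, self-contained sketch of the kind of backtracking logic EnCompass automates. The function names here are invented stand-ins, not the framework’s actual API; with EnCompass, the explicit loop and recursion below would be replaced by annotations.

```python
# A minimal sketch of LLM backtracking, using invented stand-in functions.
# EnCompass automates this pattern; this is not its actual API.

def llm_translate(src: str) -> list[str]:
    """Stand-in for an LLM call that returns several candidate translations."""
    return [src.upper(), src.lower(), src.title()]  # pretend candidates

def passes_tests(candidate: str) -> bool:
    """Stand-in for the test suite run on each translated file."""
    return candidate == candidate.title()

def translate_with_backtracking(files: list[str]) -> list[str] | None:
    """Depth-first search over candidates, backtracking on failed tests."""
    if not files:
        return []
    for candidate in llm_translate(files[0]):  # a "branchpoint": one path per candidate
        if passes_tests(candidate):
            rest = translate_with_backtracking(files[1:])
            if rest is not None:
                return [candidate] + rest
    return None  # dead end; the caller backtracks to its next candidate

print(translate_with_backtracking(["hello world", "goodbye"]))
```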
“With EnCompass, we’ve separated the search strategy from the underlying workflow of an AI agent,” says lead author Zhening Li ’25, MEng ’25, who is an MIT electrical engineering and computer science (EECS) PhD student, CSAIL researcher, and research consultant at Asari AI. “Our framework lets programmers easily experiment with different search strategies to find the one that makes the AI agent perform the best.”
The researchers evaluated EnCompass on agents implemented as Python programs that call LLMs, where it demonstrated noticeable code savings: the framework reduced the coding effort of implementing search by up to 80 percent across agents, including one for translating code repositories and another for discovering transformation rules of digital grids. In the future, EnCompass could enable agents to tackle large-scale tasks, including managing massive code libraries, designing and carrying out science experiments, and creating blueprints for rockets and other hardware.
Branching out
When programming your agent, you mark particular operations — such as calls to an LLM — where results may vary. These annotations are called “branchpoints.” If you imagine your agent program as generating a single plot line of a story, then adding branchpoints turns the story into a choose-your-own-adventure story game, where branchpoints are locations where the plot branches into multiple future plot lines.
You can then specify the strategy that EnCompass uses to navigate that story game, in search of the best possible ending to the story. This can include launching parallel threads of execution or backtracking to a previous branchpoint when you get stuck in a dead end.
Users can also plug in a few common search strategies that EnCompass provides out of the box, or define their own custom strategy. For example, you could opt for Monte Carlo tree search, which builds a search tree by balancing exploration and exploitation, or beam search, which keeps only the best few outputs from every step. EnCompass makes it easy to experiment with different approaches to find the strategy that maximizes the likelihood of successfully completing your task.
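For intuition, here is a minimal, generic beam search of the kind described above; it is a self-contained sketch, not EnCompass’s built-in implementation:

```python
# Generic beam search: at each step, extend every kept partial solution with
# each candidate output, then keep only the top `beam_width` by score.

def beam_search(steps, candidates_fn, score_fn, beam_width=3):
    beam = [([], 0.0)]  # list of (partial solution, cumulative score)
    for step in range(steps):
        expanded = [(path + [c], score + score_fn(step, c))
                    for path, score in beam
                    for c in candidates_fn(step)]
        beam = sorted(expanded, key=lambda item: item[1], reverse=True)[:beam_width]
    return beam[0]  # best complete path and its score

# Toy usage: choose digits to maximize their sum, keeping the 2 best prefixes.
best_path, best_score = beam_search(steps=4,
                                    candidates_fn=lambda step: [1, 5, 9],
                                    score_fn=lambda step, c: float(c),
                                    beam_width=2)
print(best_path, best_score)  # [9, 9, 9, 9] 36.0
```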
The coding efficiency of EnCompass
So just how code-efficient is EnCompass for adding search to agent programs? According to the researchers’ findings, the framework drastically cut down how much code programmers needed to write to add search to their agent programs, helping them experiment with different strategies to find the one that performs best.
For example, the researchers applied EnCompass to an agent that translates a repository of code from the Java programming language, which is commonly used to program apps and enterprise software, to Python. They found that implementing search with EnCompass — mainly involving adding branchpoint annotations and annotations that record how well each step did — required 348 fewer lines of code (about 82 percent) than implementing it by hand. They also demonstrated how EnCompass let them easily try out different search strategies; the best turned out to be a two-level beam search algorithm, which boosted accuracy by 15 to 40 percent across five different repositories at a search budget of 16 times the LLM calls made by the agent without search.
“As LLMs become a more integral part of everyday software, it becomes more important to understand how to efficiently build software that leverages their strengths and works around their limitations,” says co-author Armando Solar-Lezama, who is an MIT professor of EECS and CSAIL principal investigator. “EnCompass is an important step in that direction.”
The researchers add that EnCompass targets agents where a program specifies the steps of the high-level workflow; the current iteration of their framework is less applicable to agents that are entirely controlled by an LLM. “In those agents, instead of having a program that specifies the steps and then using an LLM to carry out those steps, the LLM itself decides everything,” says Li. “There is no underlying programmatic workflow, so you can execute inference-time search on whatever the LLM invents on the fly. In this case, there’s less need for a tool like EnCompass that modifies how a program executes with search and backtracking.”
Li and his colleagues plan to extend EnCompass to more general search frameworks for AI agents. They also plan to test their system on more complex tasks to refine it for real-world uses, including at companies. What’s more, they’re evaluating how well EnCompass helps agents work with humans on tasks like brainstorming hardware designs or translating much larger code libraries. For now, EnCompass is a powerful building block that enables humans to tinker with AI agents more easily, improving their performance.
“EnCompass arrives at a timely moment, as AI-driven agents and search-based techniques are beginning to reshape workflows in software engineering,” says Carnegie Mellon University Professor Yiming Yang, who wasn’t involved in the research. “By cleanly separating an agent’s programming logic from its inference-time search strategy, the framework offers a principled way to explore how structured search can enhance code generation, translation, and analysis. This abstraction provides a solid foundation for more systematic and reliable search-driven approaches to software development.”
Li and Solar-Lezama wrote the paper with two Asari AI researchers: Caltech Professor Yisong Yue, an advisor at the company; and senior author Stephan Zheng, who is the founder and CEO. Their work was supported by Asari AI.
The team’s work was presented at the Conference on Neural Information Processing Systems (NeurIPS) in December.
New vaccine platform promotes rare protective B cells
A longstanding goal of immunotherapies and vaccine research is to induce antibodies in humans that neutralize deadly viruses such as HIV and influenza. Of particular interest are antibodies that are “broadly neutralizing,” meaning they can in principle eliminate multiple strains of a virus such as HIV, which mutates rapidly to evade the human immune system.
Researchers at MIT and the Scripps Research Institute have now developed a vaccine that generates a significant population of rare precursor B cells that are capable of evolving to produce broadly neutralizing antibodies. Expanding these cells is the first step toward a successful HIV vaccine.
The researchers’ vaccine design uses DNA instead of protein as a scaffold to fabricate a virus-like particle (VLP) displaying numerous copies of an engineered HIV immunogen called eOD-GT8, which was developed at Scripps. This vaccine generated substantially more precursor B cells in a humanized mouse model compared to a protein-based virus-like particle that has shown significant success in human clinical trials.
Preclinical studies showed that the DNA-VLP generated eight times more of the desired, or “on-target,” B cells than the clinical product, which was already shown to be highly potent.
“We were all surprised that this already outstanding VLP from Scripps was significantly outperformed by the DNA-based VLP,” says Mark Bathe, an MIT professor of biological engineering and an associate member of the Broad Institute of MIT and Harvard. “These early preclinical results suggest a potential breakthrough as an entirely new, first-in-class VLP that could transform the way we think about active immunotherapies, and vaccine design, across a variety of indications.”
The researchers also showed that the DNA scaffold itself doesn’t induce an immune response when displaying the engineered HIV antigen. This means the DNA VLP might be used to deliver the multiple antigens required by boosting strategies for challenging diseases such as HIV.
“The DNA-VLP allowed us for the first time to assess whether B cells targeting the VLP itself limit the development of ‘on target’ B cell responses — a longstanding question in vaccine immunology,” says Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute and a Howard Hughes Medical Institute Investigator.
Bathe and Irvine are the senior authors of the study, which appears today in Science. The paper’s lead author is Anna Romanov PhD ’25.
Priming B cells
The new study is part of a major ongoing global effort to develop active immunotherapies and vaccines that expand specific lineages of B cells. All humans have the necessary genes to produce the right B cells that can neutralize HIV, but they are exceptionally rare and require many mutations to become broadly neutralizing. If exposed to the right series of antigens, however, these cells can in principle evolve to eventually produce the requisite broadly neutralizing antibodies.
In the case of HIV, one such target antibody, called VRC01, was discovered by National Institutes of Health researchers in 2010 when they studied humans living with HIV who did not develop AIDS. This set off a major worldwide effort to develop an HIV vaccine that would induce this target antibody, but this remains an outstanding challenge.
Generating HIV-neutralizing antibodies is believed to require three stages of vaccination, each one initiated by a different antigen that helps guide B cell evolution toward the correct target, the native HIV envelope protein gp120.
In 2013, William Schief, a professor of immunology and microbiology at Scripps, reported an engineered antigen called eOD-GT6 that could be used for the first step in this process, known as priming. His team subsequently upgraded the antigen to eOD-GT8. Vaccination with eOD-GT8 arrayed on a protein VLP generated early antibody precursors to VRC01 both in mice and more recently in humans, a key first step toward an HIV vaccine.
However, the protein VLP also generated substantial “off-target” antibodies that bound the irrelevant, and potentially highly distracting, protein VLP itself. This could have unknown consequences on propagating target B cells of interest for HIV, as well as other challenging immunotherapy applications.
The Bathe and Irvine labs set out to test if they could use a particle made from DNA, instead of protein, to deliver the priming antigen. These nanoscale particles are made using DNA origami, a method that offers precise control over the structure of synthetic DNA and allows researchers to attach viral antigens at specific locations.
In 2024, Bathe and Daniel Lingwood, an associate professor at Harvard Medical School and a principal investigator at the Ragon Institute, showed this DNA VLP could be used to deliver a SARS-CoV-2 vaccine in mice to generate neutralizing antibodies. From that study, the researchers learned that the DNA scaffold does not induce antibodies to the VLP itself, unlike proteins. They wondered whether this might also enable a more focused antibody response.
Building on these results, Romanov, co-advised by Bathe and Irvine, set off to apply the DNA VLP to the Scripps HIV priming vaccine, based on eOD-GT8.
“Our earlier work with SARS-CoV-2 antigens on DNA-VLPs showed that DNA-VLPs can be used to focus the immune response on an antigen of interest. This property seemed especially useful for a case like HIV, where the B cells of interest are exceptionally rare. Thus, we hypothesized that reducing the competition among other irrelevant B cells (by delivering the vaccine on a silent DNA nanoparticle) may help these rare cells have a better chance to survive,” Romanov says.
Initial studies in mice, however, showed the vaccine did not induce sufficient early B cell response to the first, priming dose.
After redesigning the DNA VLPs, Romanov and colleagues found that a smaller-diameter version bearing 60 instead of 30 copies of the engineered antigen dramatically outperformed the clinical protein VLP construct, both in the overall number of antigen-specific B cells and in the fraction of B cells that were on-target to the specific HIV domain of interest. This was a result of improved retention of the particles in B cell follicles in lymph nodes and better collaboration with helper T cells, which promote B cell survival.
Overall, these improvements enabled the particles to generate eightfold more on-target B cells than the vaccine consisting of eOD-GT8 carried by a protein scaffold. Another key finding, elucidated by the Lingwood lab, was that the DNA particles promoted VRC01 precursor B cells toward the VRC01 antibody more efficiently than the protein VLP.
“In the field of vaccine immunology, the question of whether B cell responses to a targeted protective epitope on a vaccine antigen might be hindered by responses to neighboring off-target epitopes on the same antigen has been under intense investigation,” says Schief, who is also vice president for protein design at Moderna. “There are some data from other studies suggesting that off-target responses might not have much impact, but this study shows quite convincingly that reducing off-target responses by using a DNA VLP can improve desired on-target responses.”
“While nanoparticle formulations have been great at boosting antibody responses to various antigens, there is always this nagging question of whether competition from B cells specific for the particle’s own structural antigens won’t get in the way of antibody responses to targeted epitopes,” says Gabriel Victora, a professor of immunology, virology, and microbiology at Rockefeller University, who was not involved in the study. “DNA-based particles that leverage B cells’ natural tolerance to nucleic acids are a clever idea to circumvent this problem, and the research team’s elegant experiments clearly show that this strategy can be used to make difficult epitopes easier to target.”
A “silent” scaffold
The fact that the DNA-VLP scaffold doesn’t induce scaffold-specific antibodies means that it could be used to carry the second and potentially third antigens needed in the vaccine series, as the researchers are currently investigating. It also might yield significantly improved on-target antibody responses for the many antigens that would otherwise be outcompeted and dominated by off-target, irrelevant protein VLP scaffolds, in this and other applications.
“A breakthrough of this paper is the rigorous, mechanistic quantification of how DNA-VLPs can ‘focus’ antibody responses on target antigens of interest, which is a consequence of the silent nature of this DNA-based scaffold we’ve previously shown is stealth to the immune system,” Bathe says.
More broadly, this new type of VLP could be used to generate other kinds of protective antibody responses against pandemic threats such as flu, or potentially against chemical warfare agents, the researchers suggest. Alternatively, it might be used as an active immunotherapy to generate antibodies that target amyloid beta or tau protein to treat degenerative diseases such as Alzheimer’s, or to generate antibodies that target noxious chemicals such as opioids or nicotine to help people suffering from addiction.
The research was funded by the National Institutes of Health; the Ragon Institute of MGH, MIT, and Harvard; the Howard Hughes Medical Institute; the National Science Foundation; the Novo Nordisk Foundation; a Koch Institute Support (core) Grant from the National Cancer Institute; the National Institute of Environmental Health Sciences; the Gates Foundation Collaboration for AIDS Vaccine Discovery; the IAVI Neutralizing Antibody Center; the National Institute of Allergy and Infectious Diseases; and the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies.
“Essential” torch heralds the start of the 2026 Winter Olympics
Before the thrill of victory; before the agony of defeat; before the gold medalist’s national anthem plays, there is the Olympic torch. A symbol of unity, friendship, and the spirit of competition, the torch links today’s Olympic Games to its heritage in ancient Greece.
The torch for the 2026 Milano Cortina Olympic Games and Paralympic Games was designed by Carlo Ratti, a professor of the practice for the MIT Department of Urban Studies and Planning and the director of the Senseable City Lab in the MIT School of Architecture and Planning.
A native of Turin, Italy, and a designer and architect respected worldwide, Ratti has seen his work and that of his firm, Carlo Ratti Associati, featured at international expositions such as the French Pavilion at the Osaka Expo (World’s Fair) in 2025 and the Italian Pavilion at the Dubai Expo in 2020. Their design for The Cloud, a 400-foot-tall spherical structure that would have served as a unique observation deck, was a finalist for the 2012 Olympic Games in London but was ultimately not built.
Ratti relishes the opportunity to participate in these events.
“You can push the boundaries more at these [venues] because you are building something that is temporary,” says Ratti. “They allow for more creativity, so it’s a good moment to experiment.”
Based on his previous work, Ratti was invited to design the torch by the Olympic organizers. He approached the project much as he instructs his students working in his lab.
“It is about what the object or the design is to convey,” Ratti says. “How it can touch people, how it can relate to people, how it can transmit emotions. That’s the most important thing.”
To Ratti, the fundamental aspect of the torch is the flame. A few months before the games begin, the torch is lit in Olympia, Greece, using a parabolic mirror that concentrates the sun’s rays. In ancient Greece, the flame was considered “sacred,” and was to remain lit throughout the competition. Ratti, familiar with the history of the Olympic torch, is less impressed with designs that he deems overwrought. Many torches added superfluous ornamentation to their exteriors, he says, rather than being designed around the flame the way cars are designed around their engines. Instead, he decided to strip away everything that wasn’t essential to the flame itself.
What is “essential”
“Essential” — the official name for the 2026 Winter Olympic torch — was designed to perform regardless of the weather, wind, or altitude it would encounter on its journey from Olympia to Milan. The process took three years, with many designs created, considered, and discussed with the local and global Olympic committees and Olympic sponsor Versalis. And, as with Ratti’s work at MIT, researchers and engineers collaborated in the effort.
“Each design pushed the boundaries in different directions, but all of them with the key principle to put the flame at the center,” says Ratti, who wanted the torch to embody “an ethos of frugality.”
At the core of Ratti’s torch is a high-performance burner powered by bio-LPG, a fuel produced by the energy company Eni from 100 percent renewable feedstocks. Furthermore, each torch can be recharged 10 times; in previous years, torches were used only once. Rechargeability thus allowed a 10-fold reduction in the number of torches created.
Also unique to this torch is its internal mechanism, which is visible via a vertical opening along its side, allowing audiences to see the burner in action. This reinforces the desire to keep the emphasis on the flame instead of the object.
In keeping with the requisite for minimalism and sustainability, the torch is primarily composed of recycled aluminum. It is the lightest torch created for the Olympics, weighing just under 2.5 pounds. The body is finished with a PVD coating that is heat resistant, letting it shift colors by reflecting the environments — such as the mountains and the city lights — through which it is carried. The Olympic torch is a blue-green shade, while the Paralympic torch is gold.
The torch won an honorable mention in Italy’s most prestigious industrial design award, the Compasso d’Oro.
The Olympic Relay
The torch relay is considered an event itself, drawing thousands as it is carried to the host city by hundreds of volunteers. Its journey for the 2026 Olympics started in late November and, after visiting cities across Greece, will have covered all 110 Italian provinces before arriving in Milan for the opening ceremony on Feb. 6.
Ratti carried the torch for a portion of its journey through Turin in mid-January — another joyful invitation to this quadrennial event. He says winter sports are his favorite; he grew up skiing where these games are being held, and has since skied around the world — from Utah to the Himalayas.
In addition to a highly sustainable torch, there was another statement Ratti wanted to make: He wanted to showcase the Italy of today and of the future. It is the same issue he confronted as the curator of the 2025 Biennale Architettura in Venice, titled “Intelligens. Natural. Artificial. Collective.” That show was an architecture exhibition, but one infused with technology for the future.
“When people think about Italy, they often think about the past, from ancient Romans to the Renaissance or Baroque period,” he says. “Italy does indeed have a significant past. But the reality is that it is also the second-largest industrial powerhouse in Europe and is leading in innovation and tech in many fields. So, the 2026 torch aims to combine both past and future. It draws on Italian design from the past, but also on future-forward technologies.”
“There should be some kind of architectural design always translating into form some kind of ethical principles or ideals,” Ratti says. “It’s not just about a physical thing. Ultimately, it’s about the human dimension. That applies to the work we do at MIT or the Olympic torch.”
Brian Hedden named co-associate dean of Social and Ethical Responsibilities of Computing
Brian Hedden PhD ’12 has been appointed co-associate dean of the Social and Ethical Responsibilities of Computing (SERC) at MIT, a cross-cutting initiative in the MIT Schwarzman College of Computing, effective Jan. 16.
Hedden is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS). He joined the MIT faculty last fall from the Australian National University and the University of Sydney, where he previously served as a faculty member. He earned his BA from Princeton University and his PhD from MIT, both in philosophy.
“Brian is a natural and compelling choice for SERC, as a philosopher whose work speaks directly to the intellectual challenges facing education and research today, particularly in computing and AI. His expertise in epistemology, decision theory, and ethics addresses questions that have become increasingly urgent in an era defined by information abundance and artificial intelligence. His scholarship exemplifies the kind of interdisciplinary inquiry that SERC exists to advance,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.
Hedden’s research focuses on how we ought to form beliefs and make decisions, and it explores how philosophical thinking about rationality can yield insights into contemporary ethical issues, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization.
Joining co-associate dean Nikos Trichakis, the J.C. Penney Professor of Management at the MIT Sloan School of Management, Hedden will help lead SERC and advance the initiative’s ongoing research, teaching, and engagement efforts. He succeeds professor of philosophy Caspar Hare, who stepped down at the conclusion of his three-year term on Sept. 1, 2025.
Since its inception in 2020, SERC has launched a range of programs and activities designed to cultivate responsible “habits of mind and action” among those who create and deploy computing technologies, while fostering the development of technologies in the public interest.
The SERC Scholars Program invites undergraduate and graduate students to work alongside postdoctoral mentors to explore interdisciplinary ethical challenges in computing. The initiative also hosts an annual prize competition that challenges MIT students to envision the future of computing, publishes a twice-yearly series of case studies, and collaborates on coordinated curricular materials, including active-learning projects, homework assignments, and in-class demonstrations. In 2024, SERC introduced a new seed grant program to support MIT researchers investigating ethical technology development; to date, two rounds of grants have been awarded to 24 projects.
Antonio Torralba, three MIT alumni named 2025 ACM fellows
Antonio Torralba, Delta Electronics Professor of Electrical Engineering and Computer Science and faculty head of artificial intelligence and decision-making at MIT, has been named to the 2025 cohort of Association for Computing Machinery (ACM) Fellows. He shares the honor of an ACM Fellowship with three MIT alumni: Eytan Adar ’97, MEng ’98; George Candea ’97, MEng ’98; and Gookwon Edward Suh SM ’01, PhD ’05.
A principal investigator within both the Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds, and Machines, Torralba received his BS in telecommunications engineering from Telecom BCN, Spain, in 1994, and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. At different points in his MIT career, he has been director of both the MIT Quest for Intelligence (now the MIT Siegel Family Quest for Intelligence) and the MIT-IBM Watson AI Lab.
Torralba’s research focuses on computer vision, machine learning, and human visual perception; as he puts it, “I am interested in building systems that can perceive the world like humans do.” Alongside Phillip Isola and William Freeman, he recently co-authored “Foundations of Computer Vision,” an 800-plus page textbook exploring the foundations and core principles of the field.
Among other awards and recognitions, he is the recipient of the 2008 National Science Foundation CAREER Award; the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition; the 2017 Frank Quick Faculty Research Innovation Fellowship; the Louis D. Smullin (’39) Award for Teaching Excellence; and the 2020 PAMI Mark Everingham Prize. In 2021, he was awarded the inaugural Thomas Huang Memorial Prize by the Pattern Analysis and Machine Intelligence Technical Committee and was named a fellow of the Association for the Advancement of Artificial Intelligence. In 2022, he received an honorary doctoral degree from the Universitat Politècnica de Catalunya — BarcelonaTech (UPC).
ACM fellows, the highest honor bestowed by the professional organization, are registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.
3 Questions: Using AI to accelerate the discovery and design of therapeutic drugs
In the pursuit of solutions to complex global challenges including disease, energy demands, and climate change, scientific researchers, including at MIT, have turned to artificial intelligence, and to quantitative analysis and modeling, to design and construct engineered cells with novel properties. The engineered cells can be programmed to become new therapeutics — battling, and perhaps eradicating, diseases.
James J. Collins is one of the founders of the field of synthetic biology, and is also a leading researcher in systems biology, the interdisciplinary approach that uses mathematical analysis and modeling of complex systems to better understand biological systems. His research has led to the development of new classes of diagnostics and therapeutics, including in the detection and treatment of pathogens like Ebola, Zika, SARS-CoV-2, and antibiotic-resistant bacteria. Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering at MIT, is a core faculty member of the Institute for Medical Engineering and Science (IMES), the director of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, as well as an institute member of the Broad Institute of MIT and Harvard, and core founding faculty at the Wyss Institute for Biologically Inspired Engineering, Harvard.
In this Q&A, Collins speaks about his latest work and goals for this research.
Q: You’re known for collaborating with colleagues across MIT and at other institutions. How have these collaborations and affiliations helped you with your research?
A: Collaboration has been central to the work in my lab. At the MIT Jameel Clinic for Machine Learning in Health, I formed a collaboration with Regina Barzilay [the Delta Electronics Professor in the MIT Department of Electrical Engineering and Computer Science and affiliate faculty member at IMES] and Tommi Jaakkola [the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society] to use deep learning to discover new antibiotics. This effort combined our expertise in artificial intelligence, network biology, and systems microbiology, leading to the discovery of halicin, a potent new antibiotic effective against a broad range of multidrug-resistant bacterial pathogens. Our results were published in Cell in 2020 and showcased the power of bringing together complementary skill sets to tackle a global health challenge.
At the Wyss Institute, I’ve worked closely with Donald Ingber [the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard], leveraging his organs-on-chips technology to test the efficacy of AI-discovered and AI-generated antibiotics. These platforms allow us to study how drugs behave in human tissue-like environments, complementing traditional animal experiments and providing a more nuanced view of their therapeutic potential.
The common thread across our many collaborations is the ability to combine computational predictions with cutting-edge experimental platforms, accelerating the path from ideas to validated new therapies.
Q: Your research has led to many advances in designing novel antibiotics using generative AI and deep learning. Can you talk about some of the advances you’ve been a part of in the development of drugs that can battle multidrug-resistant pathogens, and what you see on the horizon for breakthroughs in this arena?
A: In 2025, our lab published a study in Cell demonstrating how generative AI can be used to design completely new antibiotics from scratch. We used genetic algorithms and variational autoencoders to generate millions of candidate molecules, exploring both fragment-based designs and entirely unconstrained chemical space. After computational filtering, retrosynthetic modeling, and medicinal chemistry review, we synthesized 24 compounds and tested them experimentally. Seven showed selective antibacterial activity. One lead, NG1, was highly narrow-spectrum, eradicating multidrug-resistant Neisseria gonorrhoeae, including strains resistant to first-line therapies, while sparing commensal species. Another, DN1, targeted methicillin-resistant Staphylococcus aureus (MRSA) and cleared infections in mice through broad membrane disruption. Both were non-toxic and showed low rates of resistance.
Looking ahead, we are using deep learning to design antibiotics with drug-like properties that make them stronger candidates for clinical development. By integrating AI with high-throughput biological testing, we aim to accelerate the discovery and design of antibiotics that are novel, safe, and effective, ready for real-world therapeutic use. This approach could transform how we respond to drug-resistant bacterial pathogens, moving from a reactive to a proactive strategy in antibiotic development.
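As a conceptual illustration of the generate-and-filter loop behind such pipelines, the toy genetic algorithm below evolves random strings toward a made-up fitness score. It is purely illustrative; the actual study used far richer molecular representations, models, and medicinal chemistry review.

```python
# Toy genetic algorithm over pretend "molecules" (strings of atom symbols).
# Illustrates the generate/score/select loop only; not the study's code.
import random

ALPHABET = "CNOSPF"  # pretend atom vocabulary

def mutate(mol: str) -> str:
    """Swap one random position for a random symbol."""
    i = random.randrange(len(mol))
    return mol[:i] + random.choice(ALPHABET) + mol[i + 1:]

def fitness(mol: str) -> float:
    """Made-up score standing in for predicted antibacterial activity."""
    return mol.count("N") - 0.5 * mol.count("S")

population = ["".join(random.choices(ALPHABET, k=12)) for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                  # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]                # variation
print(max(population, key=fitness))
```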
Q: You’re a co-founder of Phare Bio, a nonprofit organization that uses AI to discover new antibiotics, and the Collins Lab has helped to launch the Antibiotics-AI Project in collaboration with Phare Bio. Can you tell us more about what you hope to accomplish with these collaborations, and how they tie back to your research goals?
A: We founded Phare Bio as a nonprofit to take the most promising antibiotic candidates emerging from the Antibiotics-AI Project at MIT and advance them toward the clinic. The idea is to bridge the gap between discovery and development by collaborating with biotech companies, pharmaceutical partners, AI companies, philanthropies, other nonprofits, and even nation states. Akhila Kosaraju has been doing a brilliant job leading Phare Bio, coordinating these efforts and moving candidates forward efficiently.
Recently, we received a grant from ARPA-H to use generative AI to design 15 new antibiotics and develop them as pre-clinical candidates. This project builds directly on our lab’s research, combining computational design with experimental testing to create novel antibiotics that are ready for further development. By integrating generative AI, biology, and translational partnerships, we hope to create a pipeline that can respond more rapidly to the global threat of antibiotic resistance, ultimately delivering new therapies to patients who need them most.
3D-printed metamaterials that stretch and fail by design
Metamaterials — materials whose properties are primarily dictated by their internal microstructure, and not their chemical makeup — have been redefining the engineering materials space for the last decade. To date, however, most metamaterials have been lightweight options designed for stiffness and strength.
New research from the MIT Department of Mechanical Engineering introduces a computational design framework to support the creation of a new class of soft, compliant, and deformable metamaterials. These metamaterials, termed 3D woven metamaterials, consist of building blocks composed of intertwined fibers that self-contact and entangle, endowing the material with unique properties.
“Soft materials are required for emerging engineering challenges in areas such as soft robotics, biomedical devices, or even for wearable devices and functional textiles,” explains Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor of mechanical engineering.
In an open-access paper published Jan. 26 in the journal Nature Communications, researchers from Portela’s lab provide a universal design framework that generates complex 3D woven metamaterials with a wide range of properties. The work also provides open-source code that lets users create designs to fit their specifications and generate a file either for simulation or for printing the material on a 3D printer.
“Normal knitting and weaving have been constrained by the hardware for hundreds of years — there are only a few patterns that you can make clothes out of, for example — but that changes if hardware is no longer a limitation,” Portela says. “With this framework, you can come up with interesting patterns that completely change the way the textile is going to behave.”
Possible applications include wearable sensors that move with human skin, fabrics for aerospace or defense needs, flexible electronic devices, and a variety of other printable textiles.
The team developed general design rules — in the form of an algorithm — that first provide a graph representation of the metamaterial. The attributes of this graph eventually dictate how each fiber is placed and connected within the metamaterial. The fundamental building blocks are woven unit cells that can be functionally graded via control of various design parameters, such as the radius and pitch of the fibers that make up the woven struts.
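As a loose illustration of that kind of parameterization, the sketch below grades fiber radius across a small grid of woven unit cells. The data structure and names are hypothetical, not the authors’ released code.

```python
# Hypothetical graph description of a woven metamaterial: nodes are unit
# cells, edges are shared woven struts, and per-cell parameters (fiber
# radius, weave pitch) can be graded across the lattice. Illustrative only.
from dataclasses import dataclass

@dataclass
class WovenCell:
    fiber_radius: float  # microns
    pitch: float         # microns per twist of a woven strut

def graded_lattice(nx: int, ny: int):
    """Grid of cells, with thicker (stiffer) fibers toward one edge."""
    cells = {(i, j): WovenCell(fiber_radius=1.0 + 0.5 * i / max(nx - 1, 1),
                               pitch=10.0)
             for i in range(nx) for j in range(ny)}
    edges = [((i, j), (i + di, j + dj))
             for (i, j) in cells
             for di, dj in ((1, 0), (0, 1))
             if (i + di, j + dj) in cells]
    return cells, edges

cells, edges = graded_lattice(4, 3)
print(len(cells), "cells,", len(edges), "shared struts")  # 12 cells, 17 struts
```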
“Because this framework allows these metamaterials to be tailored to be softer in one place and stiffer in another, or to change shape as they stretch, they can exhibit an exceptional range of behaviors that would be hard to design using conventional soft materials,” says Molly Carton, lead author of the study. Carton, a former postdoc in Portela’s lab, is now an assistant research professor in mechanical engineering at the University of Maryland.
Further, the simulation framework lets users predict the deformation response of these materials, capturing complex phenomena such as fiber self-contact and entanglement, and supports designs that anticipate and resist particular deformation or tearing patterns.
“The most exciting part was being able to tailor failure in these materials and design arbitrary combinations,” says Portela. “Based on the simulations, we were able to fabricate these spatially varying geometries and experiment on them at the microscale.”
This work is the first to provide a tool for users to design, print, and simulate an emerging class of metamaterials that are extensible and tough. It also demonstrates that through tuning of geometric parameters, users can control and predict how these materials will deform and fail, and presents several new design building blocks that substantially expand the property space of woven metamaterials.
“Until now, these complex 3D lattices have been designed manually, painstakingly, which limits the number of designs that anyone has tested,” says Carton. “We’ve been able to describe how these woven lattices work and use that to create a design tool for arbitrary woven lattices. With that design freedom, we’re able to design the way that a lattice changes shape as it stretches, how the fibers entangle and knot with each other, as well as how it tears when stretched to the limit.”
Carton says she believes the framework will be useful across many disciplines. “In releasing this framework as a software tool, our hope is that other researchers will explore what’s possible using woven lattices and find new ways to use this design flexibility,” she says. “I’m looking forward to seeing what doors our work can open.”
The paper, “Design framework for programmable three-dimensional woven metamaterials,” is available now in the journal Nature Communications. Its other MIT-affiliated authors are James Utama Surjadi, Bastien F. G. Aymon, and Ling Xu.
This work was performed, in part, through the use of MIT.nano’s fabrication and characterization facilities.
Terahertz microscope reveals the motion of superconducting electrons
You can tell a lot about a material based on the type of light you shine at it: Optical light illuminates a material’s surface, while X-rays reveal its internal structures and infrared captures a material’s radiating heat.
Now, MIT physicists have used terahertz light to reveal inherent, quantum vibrations in a superconducting material, which have not been observable until now.
Terahertz light is a form of energy that lies between microwaves and infrared radiation on the electromagnetic spectrum. It oscillates over a trillion times per second — just the right pace to match how atoms and electrons naturally vibrate inside materials. Ideally, this makes terahertz light the perfect tool to probe these motions.
But while the frequency is right, the wavelength — the distance over which the wave repeats in space — is not. Terahertz waves have wavelengths hundreds of microns long. Because the smallest spot that any kind of light can be focused into is limited by its wavelength, terahertz beams cannot be tightly confined. As a result, a focused terahertz beam is physically too large to interact effectively with microscopic samples, simply washing over these tiny structures without revealing fine detail.
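The arithmetic behind that mismatch is simple: wavelength is the speed of light divided by frequency, so a 1-terahertz wave is roughly 300 microns long.

```python
# Wavelength of 1 THz light: lambda = c / f
c = 3.0e8           # speed of light, m/s
f = 1.0e12          # 1 THz, oscillations per second
wavelength_m = c / f
print(f"{wavelength_m * 1e6:.0f} microns")  # -> 300 microns, vastly larger
# than a micron-scale sample, so a focused beam mostly misses it
```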
In a paper appearing today in the journal Nature, the scientists report that they have developed a new terahertz microscope that compresses terahertz light down to microscopic dimensions. This pinpoint of terahertz light can resolve quantum details in materials that were previously inaccessible.
The team used the new microscope to send terahertz light into a sample of bismuth strontium calcium copper oxide, or BSCCO (pronounced “BIS-co”) — a material that superconducts at relatively high temperatures. With the terahertz scope, the team observed a frictionless “superfluid” of superconducting electrons that were collectively jiggling back and forth at terahertz frequencies within the BSCCO material.
“This new microscope now allows us to see a new mode of superconducting electrons that nobody has ever seen before,” says Nuh Gedik, the Donner Professor of Physics at MIT.
By using terahertz light to probe BSCCO and other superconductors, scientists can gain a better understanding of properties that could lead to long-coveted room-temperature superconductors. The new microscope can also help to identify materials that emit and receive terahertz radiation. Such materials could be the foundation of future wireless, terahertz-based communications, which could potentially transmit more data at faster rates than today’s microwave-based communications.
“There’s a huge push to take Wi-Fi or telecommunications to the next level, to terahertz frequencies,” says Alexander von Hoegen, a postdoc in MIT’s Materials Research Laboratory and lead author of the study. “If you have a terahertz microscope, you could study how terahertz light interacts with microscopically small devices that could serve as future antennas or receivers.”
In addition to Gedik and von Hoegen, the study’s MIT co-authors include Tommy Tai, Clifford Allington, Matthew Yeung, Jacob Pettine, Alexander Kossak, Byunghun Lee, and Geoffrey Beach, along with collaborators at Harvard University, the Max Planck Institute for the Structure and Dynamics of Matter, the Max Planck Institute for the Physics of Complex Systems, and Brookhaven National Laboratory.
Hitting a limit
Terahertz light is a promising yet largely untapped imaging tool. It occupies a unique spectral “sweet spot”: Like microwaves, radio waves, and visible light, terahertz radiation is nonionizing and therefore does not carry enough energy to cause harmful radiation effects, making it safe for use in humans and biological tissues. At the same time, much like X-rays, terahertz waves can penetrate a wide range of materials, including fabric, wood, cardboard, plastic, ceramics, and even thin brick walls.
Owing to these distinctive properties, terahertz light is being actively explored for applications in security screening, medical imaging, and wireless communications. In contrast, far less effort has been devoted to applying terahertz radiation to microscopy and the illumination of microscopic phenomena. The primary reason is a fundamental limitation shared by all forms of light: the diffraction limit, which restricts spatial resolution to roughly the wavelength of the radiation used.
With wavelengths on the order of hundreds of microns, terahertz radiation is far larger than atoms, molecules, and many other microscopic structures. As a result, its ability to directly resolve microscale features is fundamentally constrained.
“Our main motivation is this problem that, you might have a 10-micron sample, but your terahertz light has a 100-micron wavelength, so what you would mostly be measuring is air, or the vacuum around your sample,” von Hoegen explains. “You would be missing all these quantum phases that have characteristic fingerprints in the terahertz regime.”
Zooming in
The team found a way around the terahertz diffraction limit by using spintronic emitters — a recent technology that produces sharp pulses of terahertz light. Spintronic emitters are made from multiple ultrathin metallic layers. When a laser illuminates the multilayered structure, the light triggers a cascade of effects in the electrons within each layer, such that the structure ultimately emits a pulse of energy at terahertz frequencies.
By holding a sample close to the emitter, the team trapped the terahertz light before it had a chance to spread, essentially squeezing it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit to resolve features that were previously too small to see.
The MIT team adapted this technology to observe microscopic, quantum-scale phenomena. For their new study, the team developed a terahertz microscope using spintronic emitters interfaced with a Bragg mirror. This multilayered structure of reflective films successively filters out certain, undesired wavelengths of light while letting through others, protecting the sample from the “harmful” laser which triggers the terahertz emission.
As a demonstration, the team used the new microscope to image a small, atomically thin sample of BSCCO. They placed the sample very close to the terahertz source and imaged it at temperatures close to absolute zero — cold enough for the material to become a superconductor. To create the image, they scanned the laser beam, sending terahertz light through the sample and looking for the specific signatures left by the superconducting electrons.
“We see the terahertz field gets dramatically distorted, with little oscillations following the main pulse,” von Hoegen says. “That tells us that something in the sample is emitting terahertz light, after it got kicked by our initial terahertz pulse.”
With further analysis, the team concluded that the terahertz microscope was observing the natural, collective terahertz oscillations of superconducting electrons within the material.
“It’s this superconducting gel that we’re sort of seeing jiggle,” von Hoegen says.
This jiggling superfluid was expected, but never directly visualized until now. The team is now applying the microscope to other two-dimensional materials, where they hope to capture more terahertz phenomena.
“There are a lot of the fundamental excitations, like lattice vibrations and magnetic processes, and all these collective modes that happen at terahertz frequencies,” von Hoegen says. “We can now resonantly zoom in on these interesting physics with our terahertz microscope.”
This research was supported, in part, by the U.S. Department of Energy and by the Gordon and Betty Moore Foundation.
MIT winter club sports energized by the Olympics
With the Milano Cortina 2026 Winter Olympics officially kicking off today, several of MIT’s winter sports clubs are hosting watch parties to cheer on their favorite athletes and teams and follow their favorite events.
Members of MIT’s Curling Club are hosting a gathering to support their favorite teams. Co-presidents Polly Harrington and Gabi Wojcik are rooting for the United States.
“I’m looking forward to watching the Olympics and cheering for Team USA. I grew up in Seattle, and during the Vancouver Olympics, we took a family trip to the games. The most affordable tickets were to the curling events, and that was my first exposure to the sport. Seeing it live was really cool. I was hooked,” says Harrington.
Wojcik says, “It’s a very analytical and strategic sport, so it’s perfect for MIT students. Physicists still don't entirely agree on why the rocks behave the way they do. Everyone in the club is welcoming and open to teaching new people to play. I’d never played before and learned from scratch. The other advantage of playing is that it is a lifelong sport.”
The two say the biggest misconception about curling, other than that it is easy, is that it is played on ice skates. It’s neither easy nor played on skates. The stone, or rock, as it is often called, weighs 43 pounds and is always made from the same weathered granite from Scotland, so that the playing field, in this case the ice, is even.
Both agree that playing is a great way to meet other students from MIT whom they might not otherwise have the chance to meet.
Having seen the American team at a recent tournament, Wojcik is hoping the team does well, but admits that if Scotland wins, she’ll also be happy. Harrington met members of the U.S. men's curling team, Luc Violette and Ben Richardson, when curling in Seattle in high school, and will be cheering for them.
The Curling Club practices and competes in tournaments in the New England area from late September until mid-March and always welcomes new members; no previous experience is necessary to join.
Figure Skating Club
The MIT Figure Skating Club is also excited for the 2026 Olympics and has been watching preliminary events (nationals) leading up to the games with great anticipation. Eleanor Li, the current club president, and Amanda (Mandy) Paredes Rioboo, former president, say holding small gatherings to watch the Olympics is a great way for the team to bond further.
Li began taking skating lessons at age 14, fell in love with the sport right away, and has been skating ever since. Paredes Rioboo started lessons at age 5 and practices in the mornings with other club members, saying, “there is no better way to start the day.”
The Figure Skating Club currently has 120 members and offers a great way to meet friends who share the same passion. Any MIT student, regardless of skill level, is welcome to join the club.
Li says, “We have members ranging from former national and international competitors to people who are completely new to the ice.” She adds that her favorite part of skating is “the freeing feeling of wind coming at you when you’re gliding across the ice! And all the life lessons learned — time management, falling again and again, and getting up again and again, the artistry and expressiveness of this beautiful sport, and most of all the community.”
Paredes Rioboo agrees. “The sport taught me discipline, to work at something and struggle with it until I got good at it. It taught me to be patient with myself and to be unafraid of failure.”
“The Olympics always bring a lot of buzz and curiosity around skating, and we’re excited to hopefully see more people come to our Saturday free group lessons, try skating for the first time, and maybe even join the club,” says Li.
Li and Paredes Rioboo are ready to watch the games with other club members. Li says, “I’m especially excited for women’s singles skating. All of the athletes have trained so hard to get there, and I’m really looking forward to watching all the beautiful skating. Especially Kaori Sakamoto.”
“I’m excited to watch Alysa Liu and Ami Nakai,” adds Paredes Rioboo.
Students interested in joining the Figure Skating Club can find more information on the club’s website.
Katie Spivakovsky wins 2026 Churchill Scholarship
MIT senior Katie Spivakovsky has been selected as a 2026-27 Churchill Scholar and will undertake an MPhil in biological sciences at the Wellcome Sanger Institute at Cambridge University in the U.K. this fall.
Spivakovsky, who is double-majoring in biological engineering and artificial intelligence, with minors in mathematics and biology, aims to integrate computation and bioengineering in an academic research career focused on developing robust, scalable solutions that promote equitable health outcomes.
At MIT’s Bathe BioNanoLab, Spivakovsky investigates therapeutic applications of DNA origami (DNA-scaffolded nanoparticles for gene and mRNA delivery) and has co-authored a manuscript in press at Science. She leads the development of an immune therapy for cancer cachexia with a team supported by MIT’s BioMakerSpace; this work earned a silver medal at the international synthetic biology competition iGEM and was published in the MIT Undergraduate Research Journal. Previously, she worked on Merck’s Modeling & Informatics team, characterizing a cancer-associated protein mutation, and at the New York Structural Biology Center, where she improved cryogenic electron microscopy particle detection models.
On campus, Spivakovsky serves as director of the Undergraduate Initiative in the MIT Biotech Group. She is deeply committed to teaching and mentoring, and has served as a lecturer and co-director for class 6.S095 (Probability Problem Solving), a teaching assistant for classes 20.309 (Bioinstrumentation) and 20.A06 (Hands-on Making in Biological Engineering), a lab assistant for 6.300 (Signal Processing), and as an associate advisor.
“Katie is a brilliant researcher who has a keen intellectual curiosity that will make her a leader in biological engineering in the future. We are proud that she will be representing MIT at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships.
The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, established in 1963, honors former British Prime Minister Winston Churchill’s vision for U.S.-U.K. scientific exchange. Since 2017, two Kanders Churchill Scholarships have also been awarded each year for studies in science policy.
MIT students interested in learning more about the Churchill Scholarship should contact Kim Benard in MIT Career Advising and Professional Development.
Counter intelligence
How can artificial intelligence step out of a screen and become something we can physically touch and interact with?
That question formed the foundation of class 4.043/4.044 (Interaction Intelligence), an MIT course focused on designing a new category of AI-driven interactive objects. Known as large language objects (LLOs), these physical interfaces extend large language models into the real world. Their behaviors can be deliberately generated for specific people or applications, and their interactions can evolve from simple to increasingly sophisticated — providing meaningful support for both novice and expert users.
“I came to the realization that, while powerful, these new forms of intelligence still remain largely ignorant of the world outside of language,” says Marcelo Coelho, associate professor of the practice in the MIT Department of Architecture, who has been teaching the design studio for several years and directs the Design Intelligence Lab. “They lack real-time, contextual understanding of our physical surroundings, bodily experiences, and social relationships to be truly intelligent. In contrast, LLOs are physically situated and interact in real time with their physical environment. The course is an attempt to both address this gap and develop a new kind of design discipline for the age of AI.”
Given the assignment to design an interactive device that they would want in their lives, students Jacob Payne and Ayah Mahmoud focused on the kitchen. While they each enjoy cooking and baking, their design inspiration came from the first home computer: the Honeywell 316 Kitchen Computer, marketed by Neiman Marcus in 1969. It was priced at $10,000, and there is no record of one ever being sold.
“It was an ambitious but impractical early attempt at a home kitchen computer,” says Payne, an architecture graduate student. “It made an intriguing historical reference for the project.”
“As somebody who likes learning to cook — especially now, in college as an undergrad — the thought of designing something that makes cooking easy for those who might not have a cooking background and just want a nice meal that satisfies their cravings was a great starting point for me,” says Mahmoud, a senior design major.
“We thought about the leftover ingredients you have in the refrigerator or pantry, and how AI could help you find new creative uses for things that you may otherwise throw away,” says Payne.
Generative cuisine
The students designed their device — named Kitchen Cosmo — with instructions to function as a “recipe generator.” One challenge was prompting the LLM to consistently respect real-world cooking parameters, such as heat, timing, and temperature. Another was getting the LLM to recognize flavor profiles and spices accurate to regional and cultural dishes around the world, to support a wider range of cuisines. Troubleshooting included taste-testing recipes Kitchen Cosmo generated; not every early recipe produced a winning dish.
“There were lots of small things that AI wasn't great at conceptually understanding,” says Mahmoud. “An LLM needs to fundamentally understand human taste to make a great meal.”
They fine-tuned their device to allow for the myriad ways people approach preparing a meal. Is this breakfast, lunch, dinner, or a snack? How advanced of a cook are you? How much meal prep time do you have? How many servings will you make? Dietary preferences were also programmed, as well as the type of mood or vibe you want to achieve. Are you feeling nostalgic, or are you in a celebratory mood? There’s a dial for that.
“These selections were the focal point of the device because we were curious to see how the LLM would interpret subjective adjectives as inputs and use them to transform the type of recipe outputs we would get,” says Payne.
Unlike most AI interactions that tend to be invisible, Payne and Mahmoud wanted their device to be more of a “partner” in the kitchen. The tactile interface was intentionally designed to structure the interaction, giving users a physical control over how the AI responded.
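To make the interaction concrete, here is a minimal sketch of how dial settings like these might be assembled into an LLM prompt. The function name, dial names, and values are hypothetical illustrations, not Kitchen Cosmo’s actual code.

```python
# A hypothetical sketch of turning physical dial settings into a
# recipe-generation prompt. All names and values here are illustrative
# assumptions, not Kitchen Cosmo's implementation.
def build_prompt(ingredients, meal, skill, prep_minutes, servings, diet, mood):
    """Compose an LLM prompt from the device's dial settings."""
    return (
        f"Generate a {meal} recipe for {servings} servings using only "
        f"{', '.join(ingredients)}, plus spices and condiments common in most households. "
        f"Cook skill level: {skill}. Total time under {prep_minutes} minutes. "
        f"Dietary constraints: {diet}. Desired mood or vibe: {mood}. "
        "Give explicit temperatures and timings for every step."
    )

# Example: scanned leftovers plus a "nostalgic" dial setting
prompt = build_prompt(
    ingredients=["leftover rice", "eggs", "scallions"],
    meal="dinner", skill="beginner", prep_minutes=30,
    servings=2, diet="vegetarian", mood="nostalgic",
)
```

In a design like this, the subjective dials (mood, skill) land in the prompt alongside hard constraints (time, servings), which is one way a tactile interface can structure how the model responds.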
“While I’ve worked with electronics and hardware before, this project pushed me to integrate the components with a level of precision and refinement that felt much closer to a product-ready device,” says Payne of the course work.
Retro and red
After their electronic work was completed, the students designed a series of models using cardboard until settling on the final look, which Payne describes as “retro.” The body was designed in 3D modeling software and printed. In a nod to the original Honeywell computer, they painted it red.
A thin, rectangular device about 18 inches in height, Kitchen Cosmo has a webcam that hinges open to scan ingredients set on a counter. It translates these into a recipe that takes into consideration general spices and condiments common in most households. An integrated thermal printer delivers a printed recipe that can be torn off and stored in a plastic receptacle on the device’s base.
While Kitchen Cosmo made a modest splash in design magazines, both students have ideas where they will take future iterations.
Payne would like to see it “take advantage of a lot of the data we have in the kitchen and use AI as a mediator, offering tips for how to improve on what you’re cooking at that moment.”
Mahmoud is looking at how to optimize Kitchen Cosmo for her thesis. Classmates have given feedback to upgrade its abilities. One suggestion is to provide multi-person instructions that give several people tasks needed to complete a recipe. Another idea is to create a “learning mode” in which a kitchen tool — for example, a paring knife — is set in front of Kitchen Cosmo, and it delivers instructions on how to use the tool. Mahmoud has been researching food science history as well.
“I’d like to get a better handle on how to train AI to fully understand food so it can tailor recipes to a user’s liking,” she says.
Mahmoud began her MIT education as a geologist, and her pivot to design has been a revelation, she says. Each design class has been inspiring, and Coelho’s course was her first to include designing with AI. Referencing the often-mentioned analogy of “drinking from a fire hose” as an MIT student, Mahmoud says the course helped define a path for her in product design.
“For the first time, in that class, I felt like I was finally drinking as much as I could and not feeling overwhelmed. I see myself doing design long-term, which is something I didn’t think I would have said previously about technology.”
SMART launches new Wearable Imaging for Transforming Elderly Care research group
What if ultrasound imaging were no longer confined to hospitals? Patients with chronic conditions, such as hypertension and heart failure, could be monitored continuously in real time at home or on the move, giving health care practitioners ongoing clinical insights instead of occasional snapshots — a scan here and a check-up there. This shift from reactive, hospital-based care to preventative, community- and home-based care could enable earlier detection, timely intervention, and truly personalized care.
Bringing this vision to reality, the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, has launched a new collaborative research project: Wearable Imaging for Transforming Elderly Care (WITEC).
WITEC marks a pioneering effort spanning wearable technology, medical imaging, and materials science research. It will be dedicated to foundational research and development of the world’s first wearable ultrasound imaging system capable of 48-hour intermittent cardiovascular imaging for continuous, real-time monitoring and diagnosis of chronic conditions such as hypertension and heart failure.
This multi-million dollar, multi-year research program, supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence and Technological Enterprise program, brings together top researchers and expertise from MIT, Nanyang Technological University (NTU Singapore), and the National University of Singapore (NUS). Tan Tock Seng Hospital (TTSH) is WITEC’s clinical collaborator and will conduct patient trials to validate long-term heart imaging for chronic cardiovascular disease management.
“Addressing society’s most pressing challenges requires innovative, interdisciplinary thinking. Building on SMART’s long legacy in Singapore as a hub for research and innovation, WITEC will harness interdisciplinary expertise — from MIT and leading institutions in Singapore — to advance transformative research that creates real-world impact and benefits Singapore, the U.S., and societies all over. This is the kind of collaborative research that not only pushes the boundaries of knowledge, but also redefines what is possible for the future of health care,” says Bruce Tidor, chief executive officer and interim director of SMART, who is also an MIT professor of biological engineering and electrical engineering and computer science.
Industry-leading precision equipment and capabilities
To support this work, WITEC’s laboratory is equipped with advanced tools, including Southeast Asia’s first sub-micrometer 3D printer and the latest Verasonics Vantage NXT 256 ultrasonic imaging system, which is the first unit of its kind in Singapore.
Unlike conventional 3D printers that operate at millimeter or micrometer scales, WITEC’s 3D printer can achieve sub‑micrometer resolution, allowing components to be fabricated at the level of single cells or tissue structures. With this capability, WITEC researchers can prototype bioadhesive materials and device interfaces with unprecedented accuracy — essential to ensuring skin‑safe adhesion and stable, long‑term imaging quality.
Complementing this is the latest Verasonics ultrasonic imaging system. Equipped with a new transducer adapter and supporting a significantly larger number of probe control channels than existing systems, it gives researchers the freedom to test highly customized imaging methods. This allows more complex beamforming, higher‑resolution image capture, and integration with AI‑based diagnostic models — opening the door to long‑duration, real‑time cardiovascular imaging not possible with standard hospital equipment.
Together, these technologies allow WITEC to accelerate the design, prototyping, and testing of its wearable ultrasound imaging system, and to demonstrate imaging quality on phantoms and healthy subjects.
Transforming chronic disease care through wearable innovation
Chronic diseases are rising rapidly in Singapore and globally, especially among the aging population and individuals with multiple long-term conditions. This trend highlights the urgent need for effective home-based care and easy-to-use monitoring tools that go beyond basic wellness tracking.
Current consumer wearables, such as smartwatches and fitness bands, offer limited physiological data like heart rate or step count. While useful for general health, they lack the depth needed to support chronic disease management. Traditional ultrasound systems, although clinically powerful, are bulky, operator-dependent, deployed only episodically within hospitals, and limited to snapshots in time, making them unsuitable for long-term, everyday use.
WITEC aims to bridge this gap with its wearable ultrasound imaging system that uses bioadhesive technology to enable up to 48 hours of uninterrupted imaging. Combined with AI-enhanced diagnostics, the innovation is aimed at supporting early detection, home-based pre-diagnosis, and continuous monitoring of chronic diseases.
Beyond improving patient outcomes, this innovation could help ease labor shortages by freeing up ultrasound operators, nurses, and doctors to focus on more complex care, while reducing demand for hospital beds and resources. By shifting monitoring to homes and communities, WITEC’s technology will enable patient self-management and timely intervention, potentially lowering health-care costs and alleviating the increasing financial and manpower pressures of an aging population.
Driving innovation through interdisciplinary collaboration
WITEC is led by the following co-lead principal investigators: Xuanhe Zhao, professor of mechanical engineering and professor of civil and environmental engineering at MIT; Joseph Sung, senior vice president of health and life sciences at NTU Singapore and dean of the Lee Kong Chian School of Medicine (LKCMedicine); Cher Heng Tan, assistant dean of clinical research at LKCMedicine; Chwee Teck Lim, NUS Society Professor of Biomedical Engineering at NUS and director of the Institute for Health Innovation and Technology at NUS; and Xiaodong Chen, distinguished university professor at the School of Materials Science and Engineering within NTU.
“We’re extremely proud to bring together an exceptional team of researchers from Singapore and the U.S. to pioneer core technologies that will make wearable ultrasound imaging a reality. This endeavor combines deep expertise in materials science, data science, AI diagnostics, biomedical engineering, and clinical medicine. Our phased approach will accelerate translation into a fully wearable platform that reshapes how chronic diseases are monitored, diagnosed and managed,” says Zhao, who serves as a co-lead PI of WITEC.
Research roadmap with broad impact across health care, science, industry, and economy
Bringing together leading experts across interdisciplinary fields, WITEC will advance foundational work in soft materials, transducers, microelectronics, data science and AI diagnostics, clinical medicine, and biomedical engineering. As a deep-tech R&D group, WITEC has the potential to drive innovation in health-care technology and manufacturing, wearable ultrasonic imaging, metamaterials, diagnostics, and AI-powered health analytics. WITEC’s work is also expected to accelerate growth in high-value jobs across research, engineering, clinical validation, and health-care services, and attract strategic investments that foster biomedical innovation and industry partnerships in Singapore, the United States, and beyond.
“Chronic diseases present significant challenges for patients, families, and health-care systems, and with aging populations such as Singapore’s, those challenges will only grow without new solutions. Our research into a wearable ultrasound imaging system aims to transform daily care for those living with cardiovascular and other chronic conditions — providing clinicians with richer, continuous insights to guide treatment, while giving patients greater confidence and control over their own health. WITEC’s pioneering work marks an important step toward shifting care from episodic, hospital-based interventions to more proactive, everyday management in the community,” says Sung, who serves as co‑lead PI of WITEC.
Led by Violet Hoon, senior consultant at TTSH, clinical trials are expected to commence this year to validate long-term heart monitoring in the management of chronic cardiovascular disease. Over the next three years, WITEC aims to develop a fully integrated platform capable of 48-hour intermittent imaging through innovations in bioadhesive couplants, nanostructured metamaterials, and ultrasonic transducers.
As MIT’s research enterprise in Singapore, SMART is committed to advancing breakthrough technologies that address pressing global challenges. WITEC adds to SMART’s existing research endeavors that foster a rich exchange of ideas through collaboration with leading researchers and academics from the United States, Singapore, and around the world in key areas such as antimicrobial resistance, cell therapy development, precision agriculture, AI, and 3D-sensing technologies.
New tissue models could help researchers develop drugs for liver disease
More than 100 million people in the United States suffer from metabolic dysfunction-associated steatotic liver disease (MASLD), characterized by a buildup of fat in the liver. This condition can lead to the development of more severe liver disease that causes inflammation and fibrosis.
In hopes of discovering new treatments for these liver diseases, MIT engineers have designed a new type of tissue model that more accurately mimics the architecture of the liver, including blood vessels and immune cells.
Reporting their findings today in Nature Communications, the researchers showed that this model could accurately replicate the inflammation and metabolic dysfunction that occur in the early stages of liver disease. Such a device could help researchers identify and test new drugs to treat those conditions.
This is the latest study in a larger effort by this team to use these types of tissue models, also known as microphysiological systems, to explore human liver biology, which cannot be easily replicated in mice or other animals.
In another recent paper, the researchers used an earlier version of their liver tissue model to explore how the liver responds to resmetirom. This drug is used to treat an advanced form of liver disease called metabolic dysfunction-associated steatohepatitis (MASH), but it is only effective in about 30 percent of patients. The team found that the drug can induce an inflammatory response in liver tissue, which may help to explain why it doesn’t help all patients.
“There are already tissue models that can make good preclinical predictions of liver toxicity for certain drugs, but we really need to better model disease states, because now we want to identify drug targets, we want to validate targets. We want to look at whether a particular drug may be more useful early or later in the disease,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation at MIT, a professor of biological engineering and mechanical engineering, and the senior author of both studies.
Former MIT postdoc Dominick Hellen is the lead author of the resmetirom paper, which appeared Jan. 14 in Communications Biology. Erin Tevonian PhD ’25 and PhD candidate Ellen Kan, both in the Department of Biological Engineering, are the lead authors of today’s Nature Communications paper on the new microphysiological system.
Modeling drug response
In the Communications Biology paper, Griffith’s lab worked with a microfluidic device that she originally developed in the 1990s, known as the LiverChip. This chip offers a simple scaffold for growing 3D models of liver tissue from hepatocytes, the primary cell type in the liver.
This chip is widely used by pharmaceutical companies to test whether their new drugs have adverse effects on the liver, which is an important step in drug development because most drugs are metabolized by the liver.
For the new study, Griffith and her students modified the chip so that it could be used to study MASLD.
Patients with MASLD, a buildup of fat in the liver, can eventually develop MASH, a more severe disease that occurs when scar tissue called fibrosis forms in the liver. Currently, resmetirom and the GLP-1 drug semaglutide are the only medications that are FDA-approved to treat MASH. Finding new drugs is a priority, Griffith says.
“You’re never declaring victory with liver disease with one drug or one class of drugs, because over the long term there may be patients who can’t use them, or they may not be effective for all patients,” she says.
To create a model of MASLD, the researchers exposed the tissue to high levels of insulin, along with large quantities of glucose and fatty acids. This led to a buildup of fatty tissue and the development of insulin resistance, a trait that is often seen in MASLD patients and can lead to type 2 diabetes.
Once that model was established, the researchers treated the tissue with resmetirom, a drug that works by mimicking the effects of thyroid hormone, which stimulates the breakdown of fat.
To their surprise, the researchers found that this treatment could also lead to an increase in immune signaling and markers of inflammation.
“Because resmetirom is primarily intended to reduce hepatic fibrosis in MASH, we found the result quite paradoxical,” Hellen says. “We suspect this finding may help clinicians and scientists alike understand why only a subset of patients respond positively to the thyromimetic drug. However, additional experiments are needed to further elucidate the underlying mechanism.”
A more realistic liver model
In the Nature Communications paper, the researchers reported a new type of chip that allows them to more accurately reproduce the architecture of the human liver. The key advance was developing a way to induce blood vessels to grow into the tissue. These vessels can deliver nutrients and also allow immune cells to flow through the tissue.
“Making more sophisticated models of liver that incorporate features of vascularity and immune cell trafficking that can be maintained over a long time in culture is very valuable,” Griffith says. “The real advance here was showing that we could get an intimate microvascular network through liver tissue and that we could circulate immune cells. This helped us to establish differences between how immune cells interact with the liver cells in a type 2 diabetes state and a healthy state.”
As the liver tissue matured, the researchers induced insulin resistance by exposing the tissue to increased levels of insulin, glucose, and fatty acids.
As this disease state developed, the researchers observed changes in how hepatocytes clear insulin and metabolize glucose, as well as narrower, leakier blood vessels that reflect microvascular complications often seen in diabetic patients. They also found that insulin resistance leads to an increase in markers of inflammation that attract monocytes into the tissue. Monocytes are the precursors of macrophages, immune cells that help with tissue repair during inflammation and are also observed in the liver of patients with early-stage liver disease.
“This really shows that we can model the immune features of a disease like MASLD, in a way that is all based on human cells,” Griffith says.
The research was funded by the National Institutes of Health, the National Science Foundation Graduate Research Fellowship program, Novo Nordisk, the Massachusetts Life Sciences Center, and the Siebel Scholars Foundation.
Your future home might be framed with printed plastic
The plastic bottle you just tossed in the recycling bin could provide structural support for your future house.
MIT engineers are using recycled plastic to 3D print construction-grade beams, trusses, and other structural elements that could one day offer lighter, modular, and more sustainable alternatives to traditional wood-based framing.
In a paper published in the proceedings of the Solid Freeform Fabrication Symposium, the MIT team presents the design for a 3D-printed floor truss system made from recycled plastic.
A traditional floor truss is made from wood beams that connect via metal plates in a pattern resembling a ladder with diagonal rungs. Set on its edge and combined with other parallel trusses, the resulting structure provides support for flooring material such as plywood that lies over the trusses.
The MIT team printed four long trusses out of recycled plastic and configured them into a conventional plywood-topped floor frame, then tested the structure’s load-bearing capacity. The printed flooring held over 4,000 pounds, exceeding key building standards set by the U.S. Department of Housing and Urban Development.
The plastic-printed trusses weigh about 13 pounds each, which is lighter than a comparable wood-based truss, and they can be printed on a large-scale industrial printer in under 13 minutes. In addition to floor trusses, the group is working on printing other elements and combining them into a full frame for a modest-sized home.
The researchers envision that as global demand for housing eclipses the supply of wood in the coming years, single-use plastics such as water bottles and food containers could get a second life as recycled framing material to alleviate both a global housing crisis and the overwhelming demand for timber.
“We’ve estimated that the world needs about 1 billion new homes by 2050. If we try to make that many homes using wood, we would need to clear-cut the equivalent of the Amazon rainforest three times over,” says AJ Perez, a lecturer in the MIT School of Engineering and research scientist in the MIT Office of Innovation. “The key here is: We recycle dirty plastic into building products for homes that are lighter, more durable, and sustainable.”
Perez’s co-authors on the study are graduate students Tyler Godfrey, Kenan Sehnawi, and Arjun Chandar, and David Hardt, professor of mechanical engineering; all are members of the MIT Laboratory for Manufacturing and Productivity.
Printing dirty
In 2019, Perez and Hardt started MIT HAUS, a group within the Laboratory for Manufacturing and Productivity that aims to produce homes from recycled polymer products, using large-scale additive manufacturing, which encompasses technologies that are capable of producing big structures, layer-by-layer, in relatively short timescales.
Today, some companies are exploring large-scale additive manufacturing to 3D-print modest-sized homes. These efforts mainly focus on printing with concrete or clay — materials whose production carries a large environmental footprint. The house structures that have been printed so far are largely walls. The MIT HAUS group is among the first to consider printing structural framing elements such as foundation pilings, floor trusses, stair stringers, roof trusses, wall studs, and joists.
What’s more, they are seeking to do so not with cement, but with recycled “dirty” plastic — plastic that doesn’t have to be cleaned and preprocessed before reuse. The researchers envision that one day, used bottles and food containers could be fed directly into a shredder, pelletized, then fed into a large-scale additive manufacturing machine to become structural composite construction components. The plastic composite parts would be light enough to transport via pickup truck rather than a traditional lumber-hauling 18-wheeler. At the construction site, the elements could be quickly fitted into a lightweight yet sturdy home frame.
“We are starting to crack the code on the ability to process and print really dirty plastic,” Perez says. “The questions we’ve been asking are, what is the dirty, unwanted plastic good for, and how do we use the dirty plastic as-is?”
Weight class
The team’s new study is one step toward that overall goal of sustainable, recycled construction. In this work, they developed a design for a printed floor truss made from recycled plastic. They designed the truss with a high stiffness-to-weight ratio, meaning that it should be able to support a given amount of weight with minimal deflection, or bending. (Think of being able to walk across a floor without it sagging between the joists.)
The researchers first explored a handful of possible truss designs in simulation, and put each design through a simulated load-bearing test. Their modeling showed that one design in particular exhibited the highest stiffness-to-weight ratio and was therefore the most promising pattern to print and physically test. The design is close to the traditional wood-based floor truss pattern resembling a ladder with diagonal, triangular rungs. The team made a slight adjustment to this design, adding small reinforcing elements to each node where a “rung” met the main truss frame.
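As a rough illustration of that screening step, ranking candidate designs by stiffness-to-weight ratio can be sketched as below. The beam model (a simply supported span with a center point load), the material modulus, and the section properties are all illustrative assumptions; the team’s actual simulations were far more detailed.

```python
# Illustrative screening of truss candidates by stiffness-to-weight ratio.
# All numbers are made up; the idealized model is k = 48*E*I / L^3 for a
# simply supported beam carrying a center point load.
L_SPAN = 2.44   # span in meters (about 8 feet)
E_MOD = 3.0e9   # Pa, assumed modulus for a glass-fiber/PET composite

# name: (effective second moment of area I in m^4, mass in kg) -- assumed
candidates = {
    "plain ladder":     (1.2e-6, 6.5),
    "diagonal rungs":   (1.5e-6, 6.0),
    "reinforced nodes": (1.6e-6, 6.2),
}

def stiffness_to_weight(I, mass):
    """Stiffness k = load / deflection, divided by the design's mass."""
    k = 48.0 * E_MOD * I / L_SPAN**3   # N/m
    return k / mass

ranked = sorted(candidates,
                key=lambda name: stiffness_to_weight(*candidates[name]),
                reverse=True)
print(ranked[0])   # the stiffest design per unit weight is the one to print
```

A screen like this only ranks idealized beams; it is the kind of first pass that a detailed structural simulation, and then a physical load test, would follow.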
To print the design, Perez and his colleagues went to MIT’s Bates Research and Engineering Center, which houses the group’s industrial-scale 3D printer — a room-sized machine capable of printing large structures at a rate of up to 80 pounds of material per hour. For their preliminary study, the researchers used pellets made of a combination of recycled PET polymers and glass fibers — a mixture that improves the material’s printability and durability. They obtained the material from an aerospace materials company, and then fed the pellets into the printer as composite “ink.”
The team printed four trusses, each measuring 8 feet long, 1 foot high, and about 1 inch wide. Each truss took about 13 minutes to print. Perez and Godfrey spaced the trusses apart in a parallel configuration similar to traditional wood-based trusses, and screwed them into a sheet of plywood to mimic a 4-x-8-foot floor frame. They placed bags of sand and concrete of increasing weight in the center of the flooring system and measured the amount of deflection that the trusses experienced underneath.
The trusses easily withstood loads of 300 pounds, well above the deflection standards set by the U.S. Department of Housing and Urban Development. The researchers didn’t stop there, continuing to add weight. Only when the loads reached over 4,000 pounds did the trusses finally buckle and crack.
In terms of stiffness, the printed trusses meet existing building codes in the U.S. To make them ready for wide adoption, Perez says the cost of producing the structures will have to be brought down to compete with the price of wood. The trusses in the new study were printed using recycled plastic, but from a source that he describes as the “crème de la crème of recycled feedstocks.” The plastic is factory-discarded material, but is not quite the “dirty” plastic that he aims ultimately to shred, print, and build.
The current study demonstrates that it is possible to print structural building elements from recycled plastic. Perez is now working with dirtier plastic, such as used soda bottles that still hold a bit of liquid residue, to see how such contaminants affect the quality of the printed product.
If dirty plastics can be made into durable housing structures, Perez says “the idea is to bring shipping containers close to where you know you’ll have a lot of plastic, like next to a football stadium. Then you could use off-the-shelf shredding technology and feed that dirty shredded plastic into a large-scale additive manufacturing system, which could exist in micro-factories, just like bottling centers, around the world. You could print the parts for entire buildings that would be light enough to transport on a moped or pickup truck to where homes are most needed.”
This research was supported, in part, by the Gerstner Foundation, the Chandler Health of the Planet grant, and Cincinnati Incorporated.
Young and gifted
James Baldwin was a prodigy. That is not the first thing most people associate with a writer who once declared that he “had no childhood” and whose work often elides the details of his early life in New York, in the 1920s and 1930s. Still, by the time Baldwin was 14, he was a successful church preacher, excelling in a role otherwise occupied by adults.
Throw in the fact that Baldwin was reading Dostoyevsky by the fifth grade, wrote “like an angel” according to his elementary school principal, edited his middle school periodical, and wrote for his high school magazine, and it’s clear he was a precocious wordsmith.
These matters are complicated, of course. To MIT scholar Joshua Bennett, Baldwin’s writings reveal enough for us to conclude that his childhood was marked by a “relentless introspection” as he sought to come to terms with the world. Beyond that, Bennett thinks, some of Baldwin’s work, and even the one children’s book he wrote, yields “messages of persistence,” recognizing the need for any child to receive encouragement and education.
And if someone as precocious as Baldwin still needed cultivation, then virtually everyone does. If we act as if talent blossoms on its own, we are ignoring the vital role communities, teachers, and families play in helping artists — or anyone — develop their skills.
“We talk as if these people emerged ex nihilo,” Bennett says. “When all along the way, there were people who cultivated them, and our children deserve the same — all of the children of the world. We have a dominant model of genius that is fundamentally flawed, in that it often elides the role of communities and cultural institutions.”
Bennett explores these issues in a new book, “The People Can Fly: American Promise, Black Prodigies, and the Greatest Miracle of All Time,” published this week by Hachette. A literary scholar and poet himself, Bennett is the Distinguished Chair of the Humanities at MIT and a professor of literature.
“The People Can Fly” accomplishes many kinds of work at once: Bennett offers a series of profiles, carefully wrought to see how some prominent figures were able to flourish from childhood forward. And he closely reads their works for indications about how they understood the shape of their own lives. In so doing, Bennett underscores the significance of the social settings that prodigious talents grow up in. For good measure, he also offers reflections on his own career trajectory and encounters with these artists, driving home their influence and meaning.
Reading about these many prodigies, one by one, helps readers build a picture of the realities, and complications, of trying to sustain early promise.
“It’s part of what I tell my students — the individual is how you get to the universal,” Bennett says. “It doesn’t mean I need to share certain autobiographical impulses with, say, Hemingway. It’s just that I think those touchpoints exist in all great works of art.”
Space odyssey
For Bennett, the idea of writing about prodigies grew naturally from his research and teaching, which ranges broadly in American and global literature. Bennett began contemplating “the idea of promise as this strange, idiosyncratic quality, this thing we see through various acts, perhaps something as simple as a little riff you hear a child sing, an element of their drawings, or poems.” At the same time, he notes, people struggle with “the weight of promise. There is a peril that can come along with promise. Promise can be taken away.”
Ultimately, Bennett adds, “I started thinking a little more about what promise has meant in African American communities,” in particular. Ranging widely in the book, Bennett consistently loops back to a core focus on the ideals, communities, and obstacles many Black artists grew up with. These artists and intellectuals include Malcolm X, Gwendolyn Brooks, Stevie Wonder, and the late poet and scholar Nikki Giovanni.
Bennett’s chapter on Giovanni shows his own interest in placing an artist’s life in historical context, and picks up on motifs relating back to childhood and personal promise.
Giovanni attended Fisk University early, enrolling at 17. Later she enrolled in Columbia University’s Master of Fine Arts program, where poetry students were supposed to produce publishable work in a two-year program. In her first year, Giovanni’s poetry collection, “Black Feeling, Black Talk,” not only got published but became a hit, selling 10,000 copies. She left the program early — without a degree, since it required two years of residency. In short, she was always going places.
Giovanni went on to become one of the most celebrated poets of her time, and spent decades on the faculty at Virginia Tech. One idea that kept recurring in her work: dreams of space exploration. Giovanni’s work transmitted a clear enthusiasm for exploring the stars.
“Looking through her work, you see space travel everywhere,” Bennett says. “Even in her most prominent poem, ‘Ego Tripping (there may be a reason why),’ there is this sense of someone who’s soaring over the landscape — ‘I’m so hip even my errors are correct.’ There is this idea of an almost divine being.”
That enthusiasm was accompanied by the recognition that astronauts, at least at one time, emerged from a particular slice of society. Indeed, Giovanni many times publicly called for more opportunities for more Americans to become astronauts. A pressing issue, for her, was making dreams achievable for more people.
“Nikki Giovanni is very invested in these sorts of questions, as a writer, as an educator, and as a big thinker,” Bennett says. “This kind of thinking about the cosmos is everywhere in her work. But inside of that is a critique, that everyone should have a chance to expand the orbit of their dreaming. And dream of whatever they need to.”
And as Bennett draws out in “The People Can Fly,” stories and visions of flying have run deep in Black culture, offering a potent symbolism and a mode of “holding on to a deeper sense that the constraints of this present world are not all-powerful or everlasting. The miraculous is yet available. The people could fly, and still can.”
Children with promise, families with dreams
Other artists have praised “The People Can Fly.” The actor, producer, and screenwriter Lena Waithe has said that “Bennett’s poetic nature shines through on every page. … This book is a masterclass in literature and a necessary reminder to cherish the child in all of us.”
Certainly Bennett brings a vast sense of scope to “The People Can Fly,” ranging across centuries of history. Phillis Wheatley, a formerly enslaved woman whose 1773 poetry collection was later praised by George Washington, was an early American prodigy, studying the classics as a teenager and releasing her work at age 20. Mae Jemison, the first Black female astronaut, enrolled in Stanford University at age 16, spurred by family members who taught her about the stars. All told, Bennett weaves together a scholarly tapestry about hope, ambition, and, at times, opportunity.
Often, that hope and ambition belong to whole families, not just one gifted child. As Nikki Giovanni herself quipped, while giving the main address at MIT’s annual Martin Luther King convocation in 1990, “the reason you go to college is that it makes your mother happy.”
Bennett can relate, having come from a family where his mother was the only prior relative to have attended college. As a kid in the 1990s, growing up in Yonkers, New York, he had a Princeton University sweatshirt, inspired by his love of the television program “The Fresh Prince of Bel-Air.” The program featured a character named Phillip Banks — popularly known as “Uncle Phil” — who was, within the world of the show, a Princeton alumnus.
“I would ask my Mom, ‘How do I get into Princeton?’” Bennett recalls. “She would just say, ‘Study hard, honey.’ No one but her had even been to college in my family. No one had been to Princeton. No one had set foot on Princeton University’s campus. But the idea that it was possible in the country we lived in, for a woman who was the daughter of two sharecroppers, and herself grew up in a tenement with her brothers and sister, and nonetheless went on to play at Carnegie Hall and get a college degree and buy her mother a color TV — it’s fascinating to me.”
The postscript to that anecdote is that Bennett did go on to earn his PhD from Princeton. Behind many children with promise are families and communities with dreams for those kids.
“There’s something to it I refuse to relinquish,” Bennett says. “My mother’s vision was a powerful and persistent one — she believed that the future also belonged to her children.”
How a unique class of neurons may set the table for brain development
The way the brain develops can shape us throughout our lives, so neuroscientists are intensely curious about how it happens. A new study by researchers in The Picower Institute for Learning and Memory at MIT that focused on visual cortex development in mice reveals that an important class of neurons follows a set of rules that, while surprising, might just create the right conditions for circuit optimization.
During early brain development, multiple types of neurons emerge in the visual cortex (where the brain processes vision). Many are “excitatory,” driving the activity of brain circuits, and others are “inhibitory,” meaning they control that activity. Just like a car needs not only an engine and a gas pedal, but also a steering wheel and brakes, a healthy balance between excitation and inhibition is required for proper brain function. During a “critical period” of development in the visual cortex, soon after the eyes first open, excitatory and inhibitory neurons forge and edit millions of connections, or synapses, to adapt nascent circuits to the incoming flood of visual experience. Over many days, in other words, the brain optimizes its attunement to the world.
In the new study in The Journal of Neuroscience, a team led by MIT research scientist Josiah Boivin and Professor Elly Nedivi visually tracked somatostatin (SST)-expressing inhibitory neurons forging synapses with excitatory cells along their sprawling dendrite branches, illustrating the action before, during, and after the critical period with unprecedented resolution. Several of the rules the SST cells appeared to follow were unexpected — for instance, unlike other cell types, their activity did not depend on visual input. But now that the scientists know these neurons’ unique trajectory, they have a new idea about how it may enable sensory activity to influence development: SST cells might help usher in the critical period by establishing the baseline level of inhibition needed to ensure that only certain types of sensory input will trigger circuit refinement.
“Why would you need part of the circuit that’s not really sensitive to experience? It could be that it’s setting things up for the experience-dependent components to do their thing,” says Nedivi, the William R. and Linda R. Young Professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences.
Boivin adds: “We don’t yet know whether SST neurons play a causal role in the opening of the critical period, but they are certainly in the right place at the right time to sculpt cortical circuitry at a crucial developmental stage.”
A unique trajectory
To visualize SST-to-excitatory synapse development, Nedivi and Boivin’s team used a genetic technique that pairs expression of synaptic proteins with fluorescent molecules to resolve the appearance of the “boutons” SST cells use to reach out to excitatory neurons. They then performed a technique called eMAP, developed by Kwanghun Chung’s lab in the Picower Institute, that expands and clears brain tissue to increase magnification, allowing super-resolution visualization of the actual synapses those boutons ultimately formed with excitatory cells along their dendrites. Co-author and postdoc Bettina Schmerl helped lead the eMAP work.
These new techniques revealed that SST bouton appearance and then synapse formation surged dramatically when the eyes opened, and then as the critical period got underway. But while excitatory neurons during this time frame are still maturing, first in the deepest layers of the cortex and later in its more superficial layers, the SST boutons blanketed all layers simultaneously, meaning that, perhaps counterintuitively, they sought to establish their inhibitory influence regardless of the maturation stage of their intended partners.
Many studies have shown that eye opening and the onset of visual experience set in motion the development and elaboration of excitatory cells and another major inhibitory neuron type (parvalbumin-expressing cells). Raising mice in the dark for different lengths of time, for instance, can distinctly alter what happens with these cells. Not so for the SST neurons. The new study showed that varying lengths of darkness had no effect on the trajectory of SST bouton and synapse appearance; it remained invariant, suggesting it is preordained by a genetic program or an age-related molecular signal, rather than experience.
Moreover, after the initial frenzy of synapse formation during development, many synapses are then edited, or pruned away, so that only the ones needed for appropriate sensory responses endure. Again, the SST boutons and synapses proved to be exempt from these redactions. Although the pace of new SST synapse formation slowed at the peak of the critical period, the net number of synapses never declined, and even continued increasing into adulthood.
“While a lot of people think that the only difference between inhibition and excitation is their valence, this demonstrates that inhibition works by a totally different set of rules,” Nedivi says.
In all, while other cell types were tailoring their synaptic populations to incoming experience, the SST neurons appeared to provide an early but steady inhibitory influence across all layers of the cortex. After excitatory synapses have been pruned back by the time of adulthood, the continued upward trickle of SST inhibition may contribute to the increase in the inhibition-to-excitation ratio that still allows the adult brain to learn, but not as dramatically or as flexibly as during early childhood.
A platform for future studies
In addition to shedding light on typical brain development, Nedivi says, the study’s techniques can enable side-by-side comparisons in mouse models of neurodevelopmental disorders such as autism or epilepsy, where aberrations of excitation and inhibition balance are implicated.
Future studies using the techniques can also look at how different cell types connect with each other in brain regions other than the visual cortex, she adds.
Boivin, who will soon open his own lab as a faculty member at Amherst College, says he is eager to apply the work in new ways.
“I’m excited to continue investigating inhibitory synapse formation on genetically defined cell types in my future lab,” Boivin says. “I plan to focus on the development of limbic brain regions that regulate behaviors relevant to adolescent mental health.”
In addition to Nedivi, Boivin and Schmerl, the paper’s other authors are Kendyll Martin and Chia-Fang Lee.
Funding for the study came from the National Institutes of Health, the Office of Naval Research, and the Freedom Together Foundation.
How generative AI can help scientists synthesize complex materials
Generative artificial intelligence models have been used to create enormous libraries of theoretical materials that could help solve all kinds of problems. Now, scientists just have to figure out how to make them.
In many cases, materials synthesis is not as simple as following a recipe in the kitchen. Factors like the temperature and length of processing can yield huge changes in a material’s properties that make or break its performance. That has limited researchers’ ability to test millions of promising model-generated materials.
Now, MIT researchers have created an AI model that guides scientists through the process of making materials by suggesting promising synthesis routes. In a new paper, they showed the model delivers state-of-the-art accuracy in predicting effective synthesis pathways for a class of materials called zeolites, which could be used to improve catalysis, adsorption, and ion exchange processes. Following its suggestions, the team synthesized a new zeolite material that showed improved thermal stability.
The researchers believe their new model could break the biggest bottleneck in the materials discovery process.
“To use an analogy, we know what kind of cake we want to make, but right now we don’t know how to bake the cake,” says lead author Elton Pan, a PhD candidate in MIT’s Department of Materials Science and Engineering (DMSE). “Materials synthesis is currently done through domain expertise and trial and error.”
The paper describing the work appears today in Nature Computational Science. Joining Pan on the paper are Soonhyoung Kwon ’20, PhD ’24; DMSE postdoc Sulin Liu; chemical engineering PhD student Mingrou Xie; DMSE postdoc Alexander J. Hoffman; Research Assistant Yifei Duan SM ’25; DMSE visiting student Thorben Prein; DMSE PhD candidate Killian Sheriff; MIT Robert T. Haslam Professor in Chemical Engineering Yuriy Roman-Leshkov; Polytechnic University of Valencia Professor Manuel Moliner; MIT Paul M. Cook Career Development Professor Rafael Gómez-Bombarelli; and MIT Jerry McAfee Professor in Engineering Elsa Olivetti.
Learning to bake
Massive investments in generative AI have led companies like Google and Meta to create huge databases filled with material recipes that, at least theoretically, have properties like high thermal stability and selective adsorption of gases. But making those materials can require weeks or months of careful experiments that test specific reaction temperatures, times, precursor ratios, and other factors.
“People rely on their chemical intuition to guide the process,” Pan says. “Humans are linear. If there are five parameters, we might keep four of them constant and vary one of them linearly. But machines are much better at reasoning in a high-dimensional space.”
Synthesis is now often the most time-consuming step in a material’s journey from hypothesis to use.
To help scientists navigate that process, the MIT researchers trained a generative AI model on more than 23,000 material synthesis recipes drawn from 50 years of scientific papers. The researchers iteratively added random “noise” to the recipes during training, and the model learned to de-noise them, so that it can start from random noise and sample promising synthesis routes.
The result is DiffSyn, which uses an approach in AI known as diffusion.
“Diffusion models are basically a generative AI model like ChatGPT, but more like the DALL-E image generation model,” Pan says. “During inference, it converts noise into meaningful structure by subtracting a little bit of noise at each step. In this case, the ‘structure’ is the synthesis route for a desired material.”
When a scientist using DiffSyn enters a desired material structure, the model offers some promising combinations of reaction temperatures, reaction times, precursor ratios, and more.
“It basically tells you how to bake your cake,” Pan says. “You have a cake in mind, you feed it into the model, the model spits out the synthesis recipes. The scientist can pick whichever synthesis path they want, and there are simple ways to quantify the most promising synthesis path from what we provide, which we show in our paper.”
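For readers curious about the mechanics, the sampling loop Pan describes can be sketched generically. The following is a toy DDPM-style sampler with a placeholder denoiser, assuming a recipe is encoded as a small parameter vector; it illustrates the general diffusion technique, not DiffSyn’s actual architecture.

```python
# Toy diffusion sampler over synthesis parameters (e.g., temperature,
# time, precursor ratios). The schedule, dimensions, and `denoiser`
# stand-in are illustrative assumptions, not DiffSyn's implementation.
import numpy as np

rng = np.random.default_rng(0)
T = 200                                # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)     # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x, t, structure):
    """Stand-in for a trained network that predicts the noise present in x
    at step t, conditioned on the target material structure. (In training,
    recipes are noised as x_t = sqrt(a_bar_t)*x_0 + sqrt(1 - a_bar_t)*eps,
    and the network learns to recover eps.)"""
    return np.zeros_like(x)            # placeholder prediction

def sample_recipe(structure, dim=4):
    """Start from pure noise and subtract predicted noise step by step,
    yielding a parameter vector such as [temp, time, ratio_1, ratio_2]."""
    x = rng.standard_normal(dim)
    for t in reversed(range(T)):
        eps = denoiser(x, t, structure)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                      # add noise on all but the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(dim)
    return x

print(sample_recipe("hypothetical zeolite framework"))
```

Because each run draws different noise, repeating the loop yields many candidate recipes for the same target structure, which is what makes sampling a thousand routes in under a minute plausible.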
To test their system, the researchers used DiffSyn to suggest novel synthesis paths for a zeolite, a class of materials that is complex and slow to form into testable samples.
“Zeolites have a very high-dimensional synthesis space,” Pan says. “Zeolites also tend to take days or weeks to crystallize, so the impact [of finding the best synthesis pathway faster] is much higher than other materials that crystallize in hours.”
The researchers were able to make the new zeolite material using synthesis pathways suggested by DiffSyn. Subsequent testing revealed the material had a promising morphology for catalytic applications.
“Scientists have been trying out different synthesis recipes one by one,” Pan says. “That makes the process very time-consuming. This model can sample 1,000 of them in under a minute. It gives you a very good initial guess on synthesis recipes for completely new materials.”
Accounting for complexity
Previously, researchers have built machine-learning models that mapped a material to a single recipe. Those approaches do not take into account that there are different ways to make the same material.
DiffSyn is trained to map material structures to many different possible synthesis paths. Pan says that is better aligned with experimental reality.
“This is a paradigm shift away from one-to-one mapping between structure and synthesis to one-to-many mapping,” Pan says. “That’s a big reason why we achieved strong gains on the benchmarks.”
Moving forward, the researchers believe the approach should work to train other models that guide the synthesis of materials outside of zeolites, including metal-organic frameworks, inorganic solids, and other materials that have more than one possible synthesis pathway.
“This approach could be extended to other materials,” Pan says. “Now, the bottleneck is finding high-quality data for different material classes. But zeolites are complicated, so I can imagine they are close to the upper bound of difficulty. Eventually, the goal would be interfacing these intelligent systems with autonomous real-world experiments, and agentic reasoning on experimental feedback to dramatically accelerate the process of materials design.”
The work was supported by MIT International Science and Technology Initiatives (MISTI), the National Science Foundation, Generalitat Valenciana, the Office of Naval Research, ExxonMobil, and the Agency for Science, Technology and Research in Singapore.
A portable ultrasound sensor may enable earlier detection of breast cancer
For people who are at high risk of developing breast cancer, frequent screenings with ultrasound can help detect tumors early. MIT researchers have now developed a miniaturized ultrasound system that could make it easier for breast ultrasounds to be performed more often, either at home or at a doctor’s office.
The new system consists of a small ultrasound probe attached to an acquisition and processing module that is a little larger than a smartphone. This system can be used on the go when connected to a laptop computer to reconstruct and view wide-angle 3D images in real time.
“Everything is more compact, and that can make it easier to be used in rural areas or for people who may have barriers to this kind of technology,” says Canan Dagdeviren, an associate professor of media arts and sciences at MIT and the senior author of the study.
With this system, she says, more tumors could potentially be detected earlier, which increases the chances of successful treatment.
Colin Marcus PhD ’25 and former MIT postdoc Md Osman Goni Nayeem are the lead authors of the paper, which appears in the journal Advanced Healthcare Materials. Other authors of the paper are MIT graduate students Aastha Shah, Jason Hou, and Shrihari Viswanath; MIT summer intern and University of Central Florida undergraduate Maya Eusebio; MIT Media Lab Research Specialist David Sadat; MIT Provost Anantha Chandrakasan; and Massachusetts General Hospital breast cancer surgeon Tolga Ozmen.
Frequent monitoring
While many breast tumors are detected through routine mammograms, which use X-rays, tumors can develop in between yearly mammograms. These tumors, known as interval cancers, account for 20 to 30 percent of all breast cancer cases, and they tend to be more aggressive than those found during routine scans.
Detecting these tumors early is critical: When breast cancer is diagnosed in the earliest stages, the survival rate is nearly 100 percent. However, for tumors detected in later stages, that rate drops to around 25 percent.
For some individuals, more frequent ultrasound scanning in addition to regular mammograms could help to boost the number of tumors that are detected early. Currently, ultrasound is usually done only as a follow-up if a mammogram reveals any areas of concern. Ultrasound machines used for this purpose are large and expensive, and they require highly trained technicians to use them.
“You need skilled ultrasound technicians to use those machines, which is a major obstacle to getting ultrasound access to rural communities, or to developing countries where there aren’t as many skilled radiologists,” Viswanath says.
By creating ultrasound systems that are portable and easier to use, the MIT team hopes to make frequent ultrasound scanning accessible to many more people.
In 2023, Dagdeviren and her colleagues developed an array of ultrasound transducers that were incorporated into a flexible patch that can be attached to a bra, allowing the wearer to move an ultrasound tracker along the patch and image the breast tissue from different angles.
Those 2D images could be combined to generate a 3D representation of the tissue, but there could be small gaps in coverage, making it possible that small abnormalities could be missed. Also, that array of transducers had to be connected to a traditional, costly, refrigerator-sized processing machine to view the images.
In their new study, the researchers set out to develop a modified ultrasound array that would be fully portable and could create a 3D image of the entire breast by scanning just two or three locations.
The new system they developed is a chirped data acquisition system (cDAQ) that consists of an ultrasound probe and a motherboard that processes the data. The probe, which is a little smaller than a deck of cards, contains an ultrasound array arranged in the shape of an empty square, a configuration that allows the array to take 3D images of the tissue below.
This data is processed by the motherboard, which is a little bit larger than a smartphone and costs only about $300 to make. All of the electronics used in the motherboard are commercially available. To view the images, the motherboard can be connected to a laptop computer, so the entire system is portable.
“Traditional 3D ultrasound systems require power-hungry, expensive, and bulky electronics, which limits their use to high-end hospitals and clinics,” Chandrakasan says. “By redesigning the system to be ultra-sparse and energy-efficient, this powerful diagnostic tool can be moved out of the imaging suite and into a wearable form factor that is accessible for patients everywhere.”
This system also uses much less power than a traditional ultrasound machine, so it can be powered with a 5V DC supply (a battery or an AC/DC adapter used to plug in small electronic devices such as modems or portable speakers).
“Ultrasound imaging has long been confined to hospitals,” says Nayeem. “To move ultrasound beyond the hospital setting, we reengineered the entire architecture, introducing a new ultrasound fabrication process, to make the technology both scalable and practical.”
Earlier diagnosis
The researchers tested the new system on one human subject, a 71-year-old woman with a history of breast cysts. They found that the system could accurately image the cysts and create a 3D image of the tissue with no gaps.
The system can image as deep as 15 centimeters into the tissue, and it can image the entire breast from two or three locations. And, because the ultrasound device sits on top of the skin without having to be pressed into the tissue like a typical ultrasound probe, the images are not distorted.
“With our technology, you simply place it gently on top of the tissue and it can visualize the cysts in their original location and with their original sizes,” Dagdeviren says.
The research team is now conducting a larger clinical trial at the MIT Center for Clinical and Translational Research and at MGH.
The researchers are also working on an even smaller version of the data processing system, which will be about the size of a fingernail. They hope to connect this to a smartphone that could be used to visualize the images, making the entire system smaller and easier to use. They also plan to develop a smartphone app that would use an AI algorithm to help guide the patient to the best location to place the ultrasound probe.
While the current version of the device could be readily adapted for use in a doctor’s office, the researchers hope that in the future, a smaller version can be incorporated into a wearable sensor that could be used at home by people at high risk for developing breast cancer.
Dagdeviren is now working on launching a company to help commercialize the technology, with assistance from an MIT HEALS Deshpande Momentum Grant, the Martin Trust Center for MIT Entrepreneurship, and the MIT Media Lab WHx Women’s Health Innovation Fund.
The research was funded by a National Science Foundation CAREER Award, a 3M Non-Tenured Faculty Award, the Lyda Hill Foundation, and the MIT Media Lab Consortium.
The philosophical puzzle of rational artificial intelligence
To what extent can an artificial system be rational?
A new MIT course, 6.S044/24.S00 (AI and Rationality), doesn’t seek to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral to AI decision-making, especially as that decision-making is shaped by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn’t rational.
This inquiry is rooted in a deep relationship between computer science and philosophy, which have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one's goals.
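As a concrete, if simplified, illustration of that shared formal ground (these are standard textbook formalizations, not material drawn from the course itself), Bayesian conditionalization models rational belief update, and expected-utility maximization models rational choice:

\[
P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e)},
\qquad
a^{*} = \arg\max_{a} \sum_{s} P(s)\,U(a, s)
\]

Here h is a hypothesis, e is new evidence, and a* is the action with the greatest expected utility U over possible states s. Much of the debate the course takes up concerns when, and whether, agents, human or artificial, can or should live up to such idealizations.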
“You’d imagine computer science and philosophy are pretty far apart, but they’ve always intersected. The technical parts of philosophy really overlap with AI, especially early AI,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, calling to mind Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself holds an undergraduate degree in philosophy from Stanford University, noting that computer science wasn’t available as a major at the time.
Brian Hedden, a professor in the Department of Linguistics and Philosophy who holds an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS) and teaches the class with Kaelbling, notes that the two disciplines are more aligned than people might imagine, adding that the “differences are in emphasis and perspective.”
Tools for further theoretical thinking
Offered for the first time in fall 2025, AI and Rationality was created by Kaelbling and Hedden as part of the Common Ground for Computing Education, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.
With over two dozen students registered, AI and Rationality is one of two Common Ground classes with a foundation in philosophy, the other being 6.C40/24.C40 (Ethics of Computing).
While Ethics of Computing explores concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the disputed definition of rationality by considering several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires onto these systems.
Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy.
“It’s important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they’re making,” Kaelbling says. “Thinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.”
Both instructors stress that this isn’t a course that provides concrete answers to questions on what it means to engineer a rational agent.
Hedden says, “I see the course as building their foundations. We’re not giving them a body of doctrine to learn and memorize and then apply. We’re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they’re in research or industry or government.”
The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. “What we need to do is give them the tools at a higher level — the habits of mind, the ways of thinking — that will help them approach the stuff that we really can’t anticipate right now,” she says.
Blending disciplines and questioning assumptions
So far, the class has drawn students from a wide range of disciplines — from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study.
Throughout the semester’s readings and discussions, students grappled with different definitions of rationality and with how those definitions pushed back against assumptions in their own fields.
On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, “We’re kind of taught that math and logic are this golden standard or truth. This class showed us a variety of examples where humans act inconsistently with these mathematical and logical frameworks. We opened up this whole can of worms: Is it humans that are irrational? Is it the machine learning systems that we designed? Is it math and logic itself?”
Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, appreciated the class’s challenges and the ways in which the definition of a rational agent could change depending on the discipline. “Representing what each field means by rationality in a formal framework makes it clear exactly which assumptions are shared, and which are different, across fields.”
The co-teaching, collaborative structure of the course, as with all Common Ground endeavors, gave students and instructors opportunities to hear different perspectives in real time.
This is Paredes Rioboo’s third Common Ground course. She says, “I really like the interdisciplinary aspect. They’ve always felt like a nice mix of theoretical and applied, given that they need to cut across fields.”
According to Okoroafor, Kaelbling and Hedden demonstrated an obvious synergy between their fields; it felt, he says, as if they were engaging and learning along with the class. Seeing how computer science and philosophy can inform each other helped him understand their common ground and the invaluable perspectives each discipline brings to intersecting issues.
He adds, “Philosophy also has a way of surprising you.”
Designing the future of metabolic health through tissue-selective drug delivery
New treatments based on biological molecules like RNA give scientists unprecedented control over how cells function. But delivering those drugs to the right tissues remains one of the biggest obstacles to turning these promising yet fragile molecules into powerful new treatments.
Now Gensaic, founded by Lavi Erisson MBA ’19; Uyanga Tsedev SM ’15, PhD ’21; and Jonathan Hsu PhD ’22, is building an artificial intelligence-powered discovery engine to develop protein shuttles that can deliver therapeutic molecules like RNA to specific tissues and cells in the body. The company is using its platform to create advanced treatments for metabolic diseases and other conditions. It is also developing treatments in partnership with Novo Nordisk and exploring additional collaborations to amplify the speed and scale of its impact.
The founders believe their delivery technology — combined with advanced therapies that precisely control gene expression, like RNA interference (RNAi) and small activating RNA (saRNA) — will enable new ways of improving health and treating disease.
“I think the therapeutic space in general is going to explode with the possibilities our approach unlocks,” Erisson says. “RNA has become a clinical-grade commodity that we know is safe. It is easy to synthesize, and it has unparalleled specificity and reversibility. By taking that and combining it with our targeting and delivery, we can change the therapeutic landscape.”
Drinking from the firehose
Erisson worked on drug development at Teva Pharmaceuticals, a large global drugmaker, before coming to MIT for his Sloan Fellows MBA in 2018.
“I came to MIT in large part because I was looking to stretch the boundaries of how I apply critical thinking,” Erisson says. “At that point in my career, I had taken about 10 drug programs into clinical development, with products on the market now. But what I didn’t have were the intellectual and quantitative tools for interrogating finance strategy and other disciplines that aren’t purely scientific. I knew I’d be drinking from the firehose coming to MIT.”
Erisson met Hsu and Tsedev, then PhD students at MIT, in a class taught by professors Harvey Lodish and Andrew Lo. The group started holding weekly meetings to discuss their research and the prospect of starting a business.
After Erisson completed his MBA program in 2019, he became chief medical and business officer at the MIT spinout Iterative Health, a company using AI to improve screening for colorectal cancer and inflammatory bowel disease that has raised over $200 million to date. There, Erisson ran a 1,400-patient study and led the development and clearance of the company’s software product.
During that time, the eventual founders continued to meet at Erisson’s house to discuss promising research avenues, including Tsedev’s work in the lab of Angela Belcher, MIT’s James Mason Crafts Professor of Biological Engineering. Tsedev’s research involved using bacteriophages, fast-replicating viruses that infect bacteria, to deliver treatments into hard-to-drug places like the brain.
As Hsu and Tsedev neared completion of their PhDs, the team decided to commercialize the technology, founding Gensaic at the end of 2021. Gensaic’s approach uses a method called unbiased directed evolution to find the best protein scaffolding to reach target tissues in the body.
“Directed evolution means having a lot of different species of proteins competing together for a certain function,” Erisson says. “The proteins are competing for the ability to reach the right cell, and we are then able to look at the genetic code of the protein that has ‘won’ that competition. When we do that process repeatedly, we find extremely adaptable proteins that can achieve the function we’re looking for.”
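A minimal sketch of that select-and-diversify loop, in Python, may make the idea concrete. Everything here is a hypothetical stand-in: the in-silico `fitness` score takes the place of the wet-lab competition to reach the target cell, and none of these names come from Gensaic's actual FORGE pipeline.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq: str, rate: float = 0.05) -> str:
    """Randomly substitute residues to generate a variant protein."""
    return "".join(
        random.choice(AMINO_ACIDS) if random.random() < rate else aa
        for aa in seq
    )

def directed_evolution(population, fitness, generations=10, survivors=10, offspring=9):
    """Repeat rounds of competition and diversification.

    `fitness` is a hypothetical scoring function standing in for the
    physical experiment in which displayed proteins compete to reach
    the right cell; reading out the 'winners' corresponds to the
    ranking step below.
    """
    for _ in range(generations):
        # Rank variants by how well they perform the target function.
        winners = sorted(population, key=fitness, reverse=True)[:survivors]
        # Each winner seeds a new batch of mutated variants.
        population = winners + [
            mutate(w) for w in winners for _ in range(offspring)
        ]
    return max(population, key=fitness)
```

In the real process the selection pressure is physical rather than computed: the proteins that actually arrive at the target tissue are recovered and sequenced, and their genetic code seeds the next round.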
Initially, the founders focused on developing protein scaffolds to deliver gene therapies. Gensaic has since pivoted to focus on delivering RNA-interference molecules like siRNA, which have been hard to deliver outside of the liver.
Today Gensaic has screened more than 500 billion different proteins using phage display combined with directed evolution. It calls its platform FORGE, for Functional Optimization by Recursive Genetic Evolution.
Erisson says Gensaic’s delivery vehicles can also carry multiple RNA molecules into cells at the same time, giving doctors a novel and powerful set of tools to treat and prevent diseases.
“Today FORGE is built around the idea of multifunctional medicines,” Erisson says. “We are moving into a future where we can extract multiple therapeutic mechanisms from a single molecule. We can combine proteins with multiple tissue selectivity and multiple molecules of siRNA or other therapeutic modalities, and affect complex disease system biology with a single molecule.”
A “universe of opportunity”
The founders believe their approach will enable new ways of improving health by delivering advanced therapies directly to new places in the body. Precise delivery of drugs anywhere in the body could not only unlock new therapeutic targets but also boost the effectiveness of existing treatments and reduce side effects.
“We’ve found we can get to the brain, and we can get to specific tissues like skeletal and adipose tissue,” Erisson says. “We’re the only company, to my knowledge, that has a protein-based delivery mechanism to get to adipose tissue.”
Delivering drugs into fat and muscle cells could help people lose weight, retain muscle, and prevent conditions like fatty liver disease or osteoporosis.
Erisson says combining RNA therapeutics is another differentiator for Gensaic.
“The idea of multiplexed medicines is just emerging,” Erisson says. “There are no clinically approved drugs using dual-targeted siRNAs, especially ones that have multi-tissue targeting. We are focused on metabolic indications that have two targets at the same time and can take on unique tissues or combinations of tissues.”
Gensaic’s collaboration with Novo Nordisk, announced last year, targets cardiometabolic diseases and includes up to $354 million in upfront and milestone payments per disease target.
“We already know we can deliver multiple types of payloads, and Novo Nordisk is not limited to siRNA, so we can go after diseases in ways that aren’t available to other companies,” Erisson says. “We are too small to try to swallow this universe of opportunity on our own, but the potential of this platform is incredibly large. Patients deserve safer medicines and better outcomes than what are available now.”
