MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

AI system predicts protein fragments that can bind to or inhibit a target

Thu, 02/20/2025 - 2:35pm

All biological function depends on how different proteins interact with each other. Protein-protein interactions facilitate everything from transcribing DNA and controlling cell division to higher-level functions in complex organisms.

Much remains unclear, however, about how these functions are orchestrated at the molecular level, and how proteins interact — either with other proteins or with copies of themselves.

Recent findings have revealed that small protein fragments have a lot of functional potential. Even though they are incomplete pieces, short stretches of amino acids can still bind to interfaces of a target protein, recapitulating native interactions. Through this process, they can alter that protein’s function or disrupt its interactions with other proteins.

Protein fragments could therefore empower both basic research on protein interactions and cellular processes, and could potentially have therapeutic applications.

Recently published in Proceedings of the National Academy of Sciences, a new method developed in the Department of Biology builds on existing artificial intelligence models to computationally predict protein fragments that can bind to and inhibit full-length proteins in E. coli. Theoretically, this tool could lead to genetically encodable inhibitors against any protein.

The work was done in the lab of associate professor of biology and Howard Hughes Medical Institute investigator Gene-Wei Li in collaboration with the lab of Jay A. Stein (1968) Professor of Biology, professor of biological engineering, and department head Amy Keating.

Leveraging machine learning

The program, called FragFold, leverages AlphaFold, an AI model that has led to phenomenal advancements in biology in recent years due to its ability to predict protein folding and protein interactions.

The goal of the project was to predict fragment inhibitors, which is a novel application of AlphaFold. The researchers on this project confirmed experimentally that more than half of FragFold’s predictions for binding or inhibition were accurate, even when researchers had no previous structural data on the mechanisms of those interactions.

“Our results suggest that this is a generalizable approach to find binding modes that are likely to inhibit protein function, including for novel protein targets, and you can use these predictions as a starting point for further experiments,” says co-first and corresponding author Andrew Savinov, a postdoc in the Li Lab. “We can really apply this to proteins without known functions, without known interactions, without even known structures, and we can put some credence in these models we’re developing.”

One example is FtsZ, a protein that is key for cell division. It is well-studied but contains a region that is intrinsically disordered and, therefore, especially challenging to study. Disordered proteins are dynamic, and their functional interactions are very likely fleeting — occurring so briefly that current structural biology tools can’t capture a single structure or interaction.

The researchers leveraged FragFold to explore the activity of fragments of FtsZ, including fragments of the intrinsically disordered region, to identify several new binding interactions with various proteins. This leap in understanding confirms and expands upon previous experiments measuring FtsZ’s biological activity.

This progress is significant in part because it was made without solving the disordered region’s structure, and because it exhibits the potential power of FragFold.

“This is one example of how AlphaFold is fundamentally changing how we can study molecular and cell biology,” Keating says. “Creative applications of AI methods, such as our work on FragFold, open up unexpected capabilities and new research directions.”

Inhibition, and beyond

The researchers accomplished these predictions by computationally fragmenting each protein and then modeling how those fragments would bind to interaction partners they thought were relevant.

They compared the maps of predicted binding across the entire sequence to the effects of those same fragments in living cells, determined using high-throughput experimental measurements in which millions of cells each produce one type of protein fragment.

AlphaFold uses co-evolutionary information to predict folding, typically evaluating the evolutionary history of proteins using multiple sequence alignments (MSAs) for every single prediction run. The MSAs are critical, but are a bottleneck for large-scale predictions — they can take a prohibitive amount of time and computational power.

For FragFold, the researchers instead pre-calculated the MSA for a full-length protein once, and used that result to guide the predictions for each fragment of that full-length protein.
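The tiling-and-reuse idea can be sketched in a few lines of pure Python. The function names, window length, and step size below are illustrative assumptions, not FragFold's actual code; the point is that the expensive full-length MSA is computed once and each fragment simply takes the corresponding columns:

```python
def fragment_windows(sequence, frag_len=30, step=5):
    """Tile a protein sequence into overlapping fragments.
    Returns (start, fragment) pairs covering the sequence."""
    frags = []
    for start in range(0, max(1, len(sequence) - frag_len + 1), step):
        frags.append((start, sequence[start:start + frag_len]))
    return frags

def slice_msa(msa_rows, start, frag_len):
    """Reuse a precomputed full-length MSA for one fragment by
    taking the matching columns from every aligned row, instead
    of rebuilding an MSA per fragment."""
    return [row[start:start + frag_len] for row in msa_rows]
```

Slicing an alignment is essentially free, so the per-fragment cost collapses to the structure-prediction step itself.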

Savinov, together with Keating Lab alumnus Sebastian Swanson PhD ’23, predicted inhibitory fragments of a diverse set of proteins in addition to FtsZ. Among the interactions they explored was a complex between the lipopolysaccharide transport proteins LptF and LptG. A protein fragment of LptG inhibited this interaction, presumably disrupting the delivery of lipopolysaccharide, a component of the E. coli outer cell membrane that is essential for cellular fitness.

“The big surprise was that we can predict binding with such high accuracy and, in fact, often predict binding that corresponds to inhibition,” Savinov says. “For every protein we’ve looked at, we’ve been able to find inhibitors.”

The researchers initially focused on protein fragments as inhibitors because whether a fragment could block an essential function in cells is a relatively simple outcome to measure systematically. Looking forward, Savinov is also interested in exploring fragment function outside inhibition, such as fragments that can stabilize the protein they bind to, enhance or alter its function, or trigger protein degradation.

Design, in principle

This research is a starting point for developing a systematic understanding of cellular design principles, and of what features deep-learning models may be drawing on to make accurate predictions.

“There’s a broader, further-reaching goal that we’re building towards,” Savinov says. “Now that we can predict them, can we use the data we have from predictions and experiments to pull out the salient features to figure out what AlphaFold has actually learned about what makes a good inhibitor?”

Savinov and collaborators also delved further into how protein fragments bind, exploring other protein interactions and mutating specific residues to see how those interactions change how the fragment interacts with its target.

Experimentally examining the behavior of thousands of mutated fragments within cells, an approach known as deep mutational scanning, revealed key amino acids that are responsible for inhibition. In some cases, the mutated fragments were even more potent inhibitors than their natural sequences.
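The library-design side of a deep mutational scan can be illustrated with a short sketch. The helper below and its `M1A`-style mutation labels are assumptions for illustration, not the study's pipeline; it simply enumerates every single-residue substitution of a fragment:

```python
# The 20 standard amino acids, one-letter codes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def single_mutants(fragment):
    """Yield (label, sequence) for every single-residue substitution
    of a fragment, as in a deep mutational scanning library design.
    Labels follow the conventional wild-type/position/mutant form."""
    for i, wt in enumerate(fragment):
        for aa in AMINO_ACIDS:
            if aa != wt:
                yield f"{wt}{i + 1}{aa}", fragment[:i] + aa + fragment[i + 1:]
```

Each variant's measured effect in cells can then be mapped back to its position, pointing to the residues that drive inhibition.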

“Unlike previous methods, we are not limited to identifying fragments in experimental structural data,” says Swanson. “The core strength of this work is the interplay between high-throughput experimental inhibition data and the predicted structural models: the experimental data guides us towards the fragments that are particularly interesting, while the structural models predicted by FragFold provide a specific, testable hypothesis for how the fragments function on a molecular level.”

Savinov is excited about the future of this approach and its myriad applications.

“By creating compact, genetically encodable binders, FragFold opens a wide range of possibilities to manipulate protein function,” Li agrees. “We can imagine delivering functionalized fragments that can modify native proteins, change their subcellular localization, and even reprogram them to create new tools for studying cell biology and treating diseases.” 

MIT faculty, alumni named 2025 Sloan Research Fellows

Thu, 02/20/2025 - 1:40pm

Seven MIT faculty and 21 additional MIT alumni are among 126 early-career researchers honored with 2025 Sloan Research Fellowships by the Alfred P. Sloan Foundation.

The recipients represent the MIT departments of Biology; Chemistry; Civil and Environmental Engineering; Earth, Atmospheric and Planetary Sciences; Economics; Electrical Engineering and Computer Science; Mathematics; and Physics as well as the Music and Theater Arts Section and the MIT Sloan School of Management.

The fellowships honor exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Winners receive a two-year, $75,000 fellowship that can be used flexibly to advance the fellow’s research.

“The Sloan Research Fellows represent the very best of early-career science, embodying the creativity, ambition, and rigor that drive discovery forward,” says Adam F. Falk, president of the Alfred P. Sloan Foundation. “These extraordinary scholars are already making significant contributions, and we are confident they will shape the future of their fields in remarkable ways.”

Including this year’s recipients, a total of 333 MIT faculty have received Sloan Research Fellowships since the program’s inception in 1955. MIT and Northwestern University are tied for having the most faculty in the 2025 cohort of fellows, each with seven. The MIT recipients are: 

Ariel L. Furst is the Paul M. Cook Career Development Professor of Chemical Engineering at MIT. Her lab combines biological, chemical, and materials engineering to solve challenges in human health and environmental sustainability, with lab members developing technologies for implementation in low-resource settings to ensure equitable access to technology. Furst completed her PhD in the lab of Professor Jacqueline K. Barton at Caltech developing new cancer diagnostic strategies based on DNA charge transport. She was then an A.O. Beckman Postdoctoral Fellow in the lab of Professor Matthew Francis at the University of California at Berkeley, developing sensors to monitor environmental pollutants. She is the recipient of the NIH New Innovator Award, the NSF CAREER Award, and the Dreyfus Teacher-Scholar Award. She is passionate about STEM outreach and increasing participation of underrepresented groups in engineering.

Mohsen Ghaffari SM ’13, PhD ’17 is an associate professor in the Department of Electrical Engineering and Computer Science (EECS) as well as the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research explores the theory of distributed and parallel computation, and he has had influential work on a range of algorithmic problems, including generic derandomization methods for distributed computing and parallel computing (which resolved several decades-old open problems), improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation. His work has been recognized with best paper awards at the IEEE Symposium on Foundations of Computer Science (FOCS), the ACM-SIAM Symposium on Discrete Algorithms (SODA), the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), the ACM Symposium on Principles of Distributed Computing (PODC), and the International Symposium on Distributed Computing (DISC), as well as with a European Research Council Starting Grant and a Google Faculty Research Award, among other honors.

Marzyeh Ghassemi PhD ’17 is an associate professor within EECS and the Institute for Medical Engineering and Science (IMES). Ghassemi earned two bachelor’s degrees in computer science and electrical engineering from New Mexico State University as a Goldwater Scholar; her MS in biomedical engineering from Oxford University as a Marshall Scholar; and her PhD in computer science from MIT. Following stints as a visiting researcher with Alphabet’s Verily and an assistant professor at University of Toronto, Ghassemi joined EECS and IMES as an assistant professor in July 2021. (IMES is the home of the Harvard-MIT Program in Health Sciences and Technology.) She is affiliated with the Laboratory for Information and Decision Systems (LIDS), the MIT-IBM Watson AI Lab, the Abdul Latif Jameel Clinic for Machine Learning in Health, the Institute for Data, Systems, and Society (IDSS), and CSAIL. Ghassemi’s research in the Healthy ML Group creates a rigorous quantitative framework in which to design, develop, and place machine learning models in a way that is robust and useful, focusing on health settings. Her contributions range from socially-aware model construction to improving subgroup- and shift-robust learning methods to identifying important insights in model deployment scenarios that have implications in policy, health practice, and equity. Among other awards, Ghassemi has been named one of MIT Technology Review’s 35 Innovators Under 35 and an AI2050 Fellow, as well as receiving the 2018 Seth J. Teller Award, the 2023 MIT Prize for Open Data, a 2024 NSF CAREER Award, and the Google Research Scholar Award. She founded the nonprofit Association for Health, Inference and Learning (AHLI) and her work has been featured in popular press such as Forbes, Fortune, MIT News, and The Huffington Post.

Darcy McRose is the Thomas D. and Virginia W. Cabot Career Development Assistant Professor of Civil and Environmental Engineering. She is an environmental microbiologist who draws on techniques from genetics, chemistry, and geosciences to understand the ways microbes control nutrient cycling and plant health. Her laboratory uses small molecules, or “secondary metabolites,” made by plants and microbes as tractable experimental tools to study microbial activity in complex environments like soils and sediments. In the long term, this work aims to uncover fundamental controls on microbial physiology and community assembly that can be used to promote agricultural sustainability, ecosystem health, and human prosperity.

Sarah Millholland, an assistant professor of physics at MIT and member of the Kavli Institute for Astrophysics and Space Research, is a theoretical astrophysicist who studies extrasolar planets, including their formation and evolution, orbital dynamics, and interiors/atmospheres. She studies patterns in the observed planetary orbital architectures, referring to properties like the spacings, eccentricities, inclinations, axial tilts, and planetary size relationships. She specializes in investigating how gravitational interactions such as tides, resonances, and spin dynamics sculpt observable exoplanet properties. She is the 2024 recipient of the Vera Rubin Early Career Award for her contributions to understanding the formation and dynamics of extrasolar planetary systems. She plans to use her Sloan Fellowship to explore how tidal physics shape the diversity of orbits and interiors of exoplanets orbiting close to their stars.

Emil Verner is the Albert F. (1942) and Jeanne P. Clear Career Development Associate Professor of Global Management and an associate professor of finance at the MIT Sloan School of Management. His research lies at the intersection of finance and macroeconomics, with a particular focus on understanding the causes and consequences of financial crises over the past 150 years. Verner’s recent work examines the drivers of bank runs and insolvency during banking crises, the role of debt booms in amplifying macroeconomic fluctuations, the effectiveness of debt relief policies during crises, and how financial crises impact political polarization and support for populist parties. Before joining MIT, he earned a PhD in economics from Princeton University.

Christian Wolf, the Rudi Dornbusch Career Development Assistant Professor of Economics and a faculty research fellow at the National Bureau of Economic Research, works in macroeconomics, monetary economics, and time series econometrics. His work focuses on the development and application of new empirical methods to address classic macroeconomic questions and to evaluate how robust the answers are to a range of common modeling assumptions. His research has provided path-breaking insights on monetary transmission mechanisms and fiscal policy. In a separate strand of work, Wolf has substantially deepened our understanding of the appropriate methods macroeconomists should use to estimate impulse response functions — how key economic variables respond to policy changes or unexpected shocks.

The following MIT alumni also received fellowships: 

Jason Altschuler SM ’18, PhD ’22
David Bau III PhD ’21 
Rene Boiteau PhD ’16 
Lynne Chantranupong PhD ’17
Lydia B. Chilton ’06, ’07, MNG ’09 
Jordan Cotler ’15 
Alexander Ji PhD ’17 
Sarah B. King ’10
Allison Z. Koenecke ’14 
Eric Larson PhD ’18
Chen Lian ’15, PhD ’20
Huanqian Loh ’06 
Ian J. Moult PhD ’16
Lisa Olshansky PhD ’15
Andrew Owens SM ’13, PhD ’16 
Matthew Rognlie PhD ’16
David Rolnick ’12, PhD ’18 
Shreya Saxena PhD ’17
Mark Sellke ’18
Amy X. Zhang PhD ’19 
Aleksandr V. Zhukhovitskiy PhD ’16

Professor Anthony Sinskey, biologist, inventor, entrepreneur, and Center for Biomedical Innovation co-founder, dies at 84

Thu, 02/20/2025 - 1:00pm

Longtime MIT Professor Anthony “Tony” Sinskey ScD ’67, who was also the co-founder and faculty director of the Center for Biomedical Innovation (CBI), passed away on Feb. 12 at his home in New Hampshire. He was 84.

Deeply engaged with MIT, Sinskey left his mark on the Institute as much through the relationships he built as the research he conducted. Colleagues say that throughout his decades on the faculty, Sinskey’s door was always open.

“He was incredibly generous in so many ways,” says Graham Walker, an American Cancer Society Professor at MIT. “He was so willing to support people, and he did it out of sheer love and commitment. If you could just watch Tony in action, there was so much that was charming about the way he lived. I’ve said for years that after they made Tony, they broke the mold. He was truly one of a kind.”

Sinskey’s lab at MIT explored methods for metabolic engineering and the production of biomolecules. Over the course of his research career, he published more than 350 papers in leading peer-reviewed journals for biology, metabolic engineering, and biopolymer engineering, and filed more than 50 patents. Well-known in the biopharmaceutical industry, Sinskey contributed to the founding of multiple companies, including Metabolix, Tepha, Merrimack Pharmaceuticals, and Genzyme Corporation. Sinskey’s work with CBI also led to impactful research papers, manufacturing initiatives, and educational content since its founding in 2005.

Across all of his work, Sinskey built a reputation as a supportive, collaborative, and highly entertaining friend who seemed to have a story for everything.

“Tony would always ask for my opinions — what did I think?” says Barbara Imperiali, MIT’s Class of 1922 Professor of Biology and Chemistry, who first met Sinskey as a graduate student. “Even though I was younger, he viewed me as an equal. It was exciting to be able to share my academic journey with him. Even later, he was continually opening doors for me, mentoring, connecting. He felt it was his job to get people into a room together to make new connections.”

Sinskey grew up in the small town of Collinsville, Illinois, and spent nights after school working on a farm. For his undergraduate degree, he attended the University of Illinois, where he got a job washing dishes at the dining hall. One day, as he recalled in a 2020 conversation, he complained to his advisor about the dishwashing job, so the advisor offered him a job washing equipment in his microbiology lab.

In a development that would repeat itself throughout Sinskey’s career, he befriended the researchers in the lab and started learning about their work. Soon he was showing up on weekends and helping out. The experience inspired Sinskey to go to graduate school, and he only applied to one place.

Sinskey earned his ScD from MIT in nutrition and food science in 1967. He joined MIT’s faculty a few years later and never left.

“He loved MIT and its excellence in research and education, which were incredibly important to him,” Walker says. “I don’t know of another institution this interdisciplinary — there’s barely a speed bump between departments — so you can collaborate with anybody. He loved that. He also loved the spirit of entrepreneurship, which he thrived on. If you heard somebody wanted to get a project done, you could run around, get 10 people, and put it together. He just loved doing stuff like that.”

Working across departments would become a signature of Sinskey’s research. His original office was on the first floor of MIT’s Building 56, right next to the parking lot, so he’d leave his door open in the mornings and afternoons and colleagues would stop in and chat.

“One of my favorite things to do was to drop in on Tony when I saw that his office door was open,” says Chris Kaiser, MIT’s Amgen Professor of Biology. “We had a whole range of things we liked to catch up on, but they always included his perspectives looking back on his long history at MIT. It also always included hopes for the future, including tracking trajectories of MIT students, whom he doted on.”

Long before the internet, colleagues describe Sinskey as a kind of internet unto himself, constantly leveraging his vast web of relationships to make connections and stay on top of the latest science news.

“He was an incredibly gracious person — and he knew everyone,” Imperiali says. “It was as if his Rolodex had no end. You would sit there and he would say, ‘Call this person.’ or ‘Call that person.’ And ‘Did you read this new article?’ He had a wonderful view of science and collaboration, and he always made that a cornerstone of what he did. Whenever I’d see his door open, I’d grab a cup of tea and just sit there and talk to him.”

When the first recombinant DNA molecules were produced in the 1970s, it became a hot area of research. Sinskey wanted to learn more about recombinant DNA, so he hosted a large symposium on the topic at MIT that brought in experts from around the world.

“He got his name associated with recombinant DNA for years because of that,” Walker recalls. “People started seeing him as Mr. Recombinant DNA. That kind of thing happened all the time with Tony.”

Sinskey’s research contributions extended beyond recombinant DNA into other microbial techniques to produce amino acids and biodegradable plastics. He co-founded CBI in 2005 to improve global health through the development and dispersion of biomedical innovations. The center adopted Sinskey’s collaborative approach in order to accelerate innovation in biotechnology and biomedical research, bringing together experts from across MIT’s schools.

“Tony was at the forefront of advancing cell culture engineering principles so that making biomedicines could become a reality. He knew early on that biomanufacturing was an important step on the critical path from discovering a drug to delivering it to a patient,” says Stacy Springs, the executive director of CBI. “Tony was not only my boss and mentor, but one of my closest friends. He was always working to help everyone reach their potential, whether that was a colleague, a former or current researcher, or a student. He had a gentle way of encouraging you to do your best.”

“MIT is one of the greatest places to be because you can do anything you want here as long as it’s not a crime,” Sinskey joked in 2020. “You can do science, you can teach, you can interact with people — and the faculty at MIT are spectacular to interact with.”

Sinskey shared his affection for MIT with his family. His wife, the late ChoKyun Rha ’62, SM ’64, SM ’66, ScD ’67, was a professor at MIT for more than four decades and the first woman of Asian descent to receive tenure at MIT. His two sons also attended MIT — Tong-ik Lee Sinskey ’79, SM ’80 and Taeminn Song MBA ’95, who is the director of strategy and strategic initiatives for MIT Information Systems and Technology (IS&T).

Song recalls: “He was driven by the same goal my mother had: to advance knowledge in science and technology by exploring new ideas and pushing everyone around them to be better.”

Around 10 years ago, Sinskey began teaching a class with Walker, Course 7.21/7.62 (Microbial Physiology). Walker says their approach was to treat the students as equals and learn as much from them as they taught. The lessons extended beyond the inner workings of microbes to what it takes to be a good scientist and how to be creative. Sinskey and Rha even started inviting the class over to their home for Thanksgiving dinner each year.

“At some point, we realized the class was turning into a close community,” Walker says. “Tony had this endless supply of stories. It didn’t seem like there was a topic in biology that Tony didn’t have a story about either starting a company or working with somebody who started a company.”

Over the last few years, Walker wasn’t sure they were going to continue teaching the class, but Sinskey remarked it was one of the things that gave his life meaning after his wife’s passing in 2021. That decided it.

After finishing up this past semester with a class-wide lunch at Legal Sea Foods, Sinskey and Walker agreed it was one of the best semesters they’d ever taught.

In addition to his two sons, Sinskey is survived by his daughter-in-law, Hyunmee Elaine Song; five grandchildren; and two great-grandsons. He is also survived by his brother Timothy Sinskey and his sister, Christine Sinskey Braudis; his brother Terry Sinskey died in 1975.

Gifts in Sinskey’s memory can be made to the ChoKyun Rha (1962) and Anthony J Sinskey (1967) Fund.

MIT biologists discover a new type of control over RNA splicing

Thu, 02/20/2025 - 5:00am

RNA splicing is a cellular process that is critical for gene expression. After genes are copied from DNA into messenger RNA, portions of the RNA that don’t code for proteins, called introns, are cut out and the coding portions are spliced back together.

This process is controlled by a large protein-RNA complex called the spliceosome. MIT biologists have now discovered a new layer of regulation that helps to determine which sites on the messenger RNA molecule the spliceosome will target.

The research team discovered that this type of regulation, which appears to influence the expression of about half of all human genes, is found throughout the animal kingdom, as well as in plants. The findings suggest that the control of RNA splicing, a process that is fundamental to gene expression, is more complex than previously known.

“Splicing in more complex organisms, like humans, is more complicated than it is in some model organisms like yeast, even though it’s a very conserved molecular process. There are bells and whistles on the human spliceosome that allow it to process specific introns more efficiently. One of the advantages of a system like this may be that it allows more complex types of gene regulation,” says Connor Kenny, an MIT graduate student and the lead author of the study.

Christopher Burge, the Uncas and Helen Whitaker Professor of Biology at MIT, is the senior author of the study, which appears today in Nature Communications.

Building proteins

RNA splicing, a process discovered in the late 1970s, allows cells to precisely control the content of the mRNA transcripts that carry the instructions for building proteins.

Each mRNA transcript contains coding regions, known as exons, and noncoding regions, known as introns. They also include sites that act as signals for where splicing should occur, allowing the cell to assemble the correct sequence for a desired protein. This process enables a single gene to produce multiple proteins; over evolutionary timescales, splicing can also change the size and content of genes and proteins, when different exons become included or excluded.
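As a toy illustration of that outcome (not of the spliceosome's mechanism), removing intron spans from a transcript and joining the remaining exons might look like the following sketch, where intron coordinates are assumed to be given as half-open `(start, end)` pairs:

```python
def splice(transcript, introns):
    """Remove intron spans (start, end half-open) from a pre-mRNA
    string and join the remaining exons in order, mimicking the
    end result of spliceosome-mediated splicing."""
    mature, pos = [], 0
    for start, end in sorted(introns):
        mature.append(transcript[pos:start])  # keep the exon before the intron
        pos = end                             # skip over the intron itself
    mature.append(transcript[pos:])           # keep the final exon
    return "".join(mature)
```

Including or excluding different spans from the same transcript is what lets one gene yield multiple mature mRNAs, and hence multiple proteins.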

The spliceosome, which forms on introns, is composed of proteins and noncoding RNAs called small nuclear RNAs (snRNAs). In the first step of spliceosome assembly, an snRNA molecule known as U1 snRNA binds to the 5’ splice site at the beginning of the intron. Until now, it had been thought that the binding strength between the 5’ splice site and the U1 snRNA was the most important determinant of whether an intron would be spliced out of the mRNA transcript.

In the new study, the MIT team discovered that a family of proteins called LUC7 also helps to determine whether splicing will occur, but only for a subset of introns — in human cells, up to 50 percent.

Before this study, it was known that LUC7 proteins associate with U1 snRNA, but the exact function wasn’t clear. There are three different LUC7 proteins in human cells, and Kenny’s experiments revealed that two of these proteins interact specifically with one type of 5’ splice site, which the researchers called “right-handed.” A third human LUC7 protein interacts with a different type, which the researchers call “left-handed.”

The researchers found that about half of human introns contain a right- or left-handed site, while the other half do not appear to be controlled by interaction with LUC7 proteins. This type of control appears to add another layer of regulation that helps remove specific introns more efficiently, the researchers say.

“The paper shows that these two different 5’ splice site subclasses exist and can be regulated independently of one another,” Kenny says. “Some of these core splicing processes are actually more complex than we previously appreciated, which warrants more careful examination of what we believe to be true about these highly conserved molecular processes.”

“Complex splicing machinery”

Previous work has shown that mutation or deletion of one of the LUC7 proteins that bind to right-handed splice sites is linked to blood cancers, including about 10 percent of acute myeloid leukemias (AMLs). In this study, the researchers found that AMLs that lost a copy of the LUC7L2 gene have inefficient splicing of right-handed splice sites. These cancers also developed the same type of altered metabolism seen in earlier work.

“Understanding how the loss of this LUC7 protein in some AMLs alters splicing could help in the design of therapies that exploit these splicing differences to treat AML,” Burge says. “There are also small molecule drugs for other diseases such as spinal muscular atrophy that stabilize the interaction between U1 snRNA and specific 5’ splice sites. So the knowledge that particular LUC7 proteins influence these interactions at specific splice sites could aid in improving the specificity of this class of small molecules.”

Working with a lab led by Sascha Laubinger, a professor at Martin Luther University Halle-Wittenberg, the researchers found that introns in plants also have right- and left-handed 5’ splice sites that are regulated by Luc7 proteins.

The researchers’ analysis suggests that this type of splicing arose in a common ancestor of plants, animals, and fungi, but it was lost from fungi soon after they diverged from plants and animals.

“A lot of what we know about how splicing works and what the core components are actually comes from relatively old yeast genetics work,” Kenny says. “What we see is that humans and plants tend to have more complex splicing machinery, with additional components that can regulate different introns independently.”

The researchers now plan to further analyze the structures formed by the interactions of Luc7 proteins with mRNA and the rest of the spliceosome, which could help them figure out in more detail how different forms of Luc7 bind to different 5’ splice sites.

The research was funded by the U.S. National Institutes of Health and the German Research Foundation.

Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience

Thu, 02/20/2025 - 12:00am

There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.

In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.

Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.

In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.

In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. The subscribers could then be compensated through the market, depending on their participation.
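The dispatch step described above can be sketched as a simple greedy allocation. This is an illustrative sketch under stated assumptions, not the authors' actual algorithm (which also handles trust assessment and market compensation); every device name, capacity, and cost below is invented.

```python
def mitigate_shortfall(devices, shortfall_kw):
    """Greedy allocation: draw on trusted devices, cheapest first, until
    the shortfall is covered. Each device offers `capacity_kw` of either
    power injection (battery, solar) or demand reduction (HVAC, EV)."""
    plan = {}
    remaining = shortfall_kw
    trusted = sorted((d for d in devices if d["trusted"]),
                     key=lambda d: d["cost_per_kw"])
    for d in trusted:
        if remaining <= 0:
            break
        contribution = min(d["capacity_kw"], remaining)
        plan[d["id"]] = contribution
        remaining -= contribution
    return plan, remaining  # remaining > 0 means the shortfall persists

# Hypothetical subscriber devices in one local electricity market.
devices = [
    {"id": "battery-17", "trusted": True,  "capacity_kw": 5.0, "cost_per_kw": 0.10},
    {"id": "ev-42",      "trusted": True,  "capacity_kw": 7.0, "cost_per_kw": 0.15},
    {"id": "hvac-08",    "trusted": False, "capacity_kw": 3.0, "cost_per_kw": 0.05},
    {"id": "solar-03",   "trusted": True,  "capacity_kw": 4.0, "cost_per_kw": 0.02},
]

plan, unmet = mitigate_shortfall(devices, shortfall_kw=10.0)
print(plan, "unmet:", unmet)
```

Note that the untrusted device is excluded even though it is cheapest, mirroring the framework's requirement that only devices judged trustworthy participate in the response.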

The team illustrated the new framework through a number of grid-attack scenarios, considering failures at different levels of a power grid caused by sources such as a cyber attack or a natural disaster. Applying their algorithm, they showed that networks of grid-edge devices were able to mitigate each attack.

The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.

“All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”

The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.

Power boost

The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.

“These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. “How do you make up for the shortfall?”

The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.

“There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”

While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.

Attack mode

In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.

For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices. 

The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.

If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.

The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.

“In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”

In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that to put in place such a network of grid-edge devices will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.

“This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”

This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.

Chip-based system for terahertz waves could enable more efficient, sensitive electronics

Thu, 02/20/2025 - 12:00am

The use of terahertz waves, which have shorter wavelengths and higher frequencies than radio waves, could enable faster data transmission, more precise medical imaging, and higher-resolution radar.

But effectively generating terahertz waves using a semiconductor chip, which is essential for incorporation into electronic devices, is notoriously difficult.

Many current techniques can’t generate waves with enough radiating power for useful applications unless they utilize bulky and expensive silicon lenses. Higher radiating power allows terahertz signals to travel farther. Such lenses, which are often larger than the chip itself, make it hard to integrate the terahertz source into an electronic device.

To overcome these limitations, MIT researchers developed a terahertz amplifier-multiplier system that achieves higher radiating power than existing devices without the need for silicon lenses.

By affixing a thin, patterned sheet of material to the back of the chip and utilizing higher-power Intel transistors, the researchers produced a more efficient, yet scalable, chip-based terahertz wave generator.

This compact chip could be used to make terahertz arrays for applications like improved security scanners for detecting hidden objects or environmental monitors for pinpointing airborne pollutants.

“To take full advantage of a terahertz wave source, we need it to be scalable. A terahertz array might have hundreds of chips, and there is no place to put silicon lenses because the chips are combined with such high density. We need a different package, and here we’ve demonstrated a promising approach that can be used for scalable, low-cost terahertz arrays,” says Jinchen Wang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper on the terahertz radiator.

He is joined on the paper by EECS graduate students Daniel Sheen and Xibi Chen; Steven F. Nagle, managing director of the T.J. Rodgers RLE Laboratory; and senior author Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group. The research will be presented at the IEEE International Solid-State Circuits Conference.

Making waves

Terahertz waves sit on the electromagnetic spectrum between radio waves and infrared light. Their higher frequencies enable them to carry more information per second than radio waves, while they can safely penetrate a wider range of materials than infrared light.

One way to generate terahertz waves is with a CMOS chip-based amplifier-multiplier chain that increases the frequency of radio waves until they reach the terahertz range. To achieve the best performance, waves go through the silicon chip and are eventually emitted out the back into the open air.

But a property known as the dielectric constant gets in the way of a smooth transmission.

The dielectric constant influences how electromagnetic waves interact with a material. It affects the amount of radiation that is absorbed, reflected, or transmitted. Because the dielectric constant of silicon is much higher than that of air, most terahertz waves are reflected at the silicon-air boundary rather than being cleanly transmitted out the back.

Since most signal strength is lost at this boundary, current approaches often use silicon lenses to boost the power of the remaining signal. 

The MIT researchers approached this problem differently.

They drew on an electromagnetic concept known as matching. The goal of matching is to bridge the gap between the dielectric constants of silicon and air, which minimizes the amount of signal reflected at the boundary.

They accomplish this by affixing a thin sheet of material, whose dielectric constant lies between those of silicon and air, to the back of the chip. With this matching sheet in place, most waves are transmitted out the back rather than reflected.

A scalable approach

They chose a low-cost, commercially available substrate material with a dielectric constant very close to what they needed for matching. To improve performance, they used a laser cutter to punch tiny holes into the sheet until its dielectric constant was exactly right.

“Since the dielectric constant of air is 1, if you just cut some subwavelength holes in the sheet, it is equivalent to injecting some air, which lowers the overall dielectric constant of the matching sheet,” Wang explains.
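As a rough illustration of the physics described above, one can estimate the power reflected at a bare silicon-air boundary and the air-hole fraction needed to tune a sheet toward the ideal quarter-wave matching value. The permittivity values and the linear mixing rule below are textbook simplifications and assumed numbers, not the paper's design figures.

```python
import math

EPS_SILICON = 11.7  # typical relative permittivity of silicon
EPS_AIR = 1.0

def power_reflectance(eps1: float, eps2: float) -> float:
    """Fraction of power reflected at a boundary between two lossless
    media at normal incidence: R = ((n1 - n2) / (n1 + n2))**2."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return ((n1 - n2) / (n1 + n2)) ** 2

# Without matching, the wave exits directly from silicon into air.
r_bare = power_reflectance(EPS_SILICON, EPS_AIR)

# An ideal quarter-wave matching layer has eps = sqrt(eps_si * eps_air).
eps_ideal = math.sqrt(EPS_SILICON * EPS_AIR)

def air_fraction_for_target(eps_sheet: float, eps_target: float) -> float:
    """Volume fraction of sub-wavelength air holes needed to lower a
    sheet's permittivity to eps_target, using a simple linear mixing
    rule: eps_eff = f*eps_air + (1 - f)*eps_sheet (an approximation)."""
    return (eps_sheet - eps_target) / (eps_sheet - EPS_AIR)

# A hypothetical off-the-shelf substrate slightly above the ideal value.
f = air_fraction_for_target(eps_sheet=4.0, eps_target=eps_ideal)

print(f"Reflected power at bare silicon-air boundary: {r_bare:.0%}")
print(f"Ideal matching-layer permittivity: {eps_ideal:.2f}")
print(f"Required air-hole volume fraction: {f:.0%}")
```

Under these assumptions, roughly 30 percent of the power is reflected at an unmatched silicon-air boundary, which is why lens-free designs lose so much signal there.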

In addition, they designed their chip with special transistors developed by Intel that have a higher maximum frequency and breakdown voltage than traditional CMOS transistors.

“These two things taken together, the more powerful transistors and the dielectric sheet, plus a few other small innovations, enabled us to outperform several other devices,” he says.

Their chip generated terahertz signals with a peak radiation power of 11.1 decibel-milliwatts, the best among state-of-the-art techniques. Moreover, since the low-cost chip can be fabricated at scale, it could be integrated into real-world electronic devices more readily.

One of the biggest challenges of developing a scalable chip was determining how to manage the power and temperature when generating terahertz waves.

“Because the frequency and the power are so high, many of the standard ways to design a CMOS chip are not applicable here,” Wang says.

The researchers also needed to devise a technique for installing the matching sheet that could be scaled up in a manufacturing facility.

Moving forward, they want to demonstrate this scalability by fabricating a phased array of CMOS terahertz sources, enabling them to steer and focus a powerful terahertz beam with a low-cost, compact device.

This research is supported, in part, by NASA’s Jet Propulsion Laboratory and Strategic University Research Partnerships Program, as well as the MIT Center for Integrated Circuits and Systems. The chip was fabricated through the Intel University Shuttle Program.

Reducing carbon emissions from residential heating: A pathway forward

Wed, 02/19/2025 - 3:25pm

In the race to reduce climate-warming carbon emissions, the buildings sector is falling behind. While carbon dioxide (CO2) emissions in the U.S. electric power sector dropped by 34 percent between 2005 and 2021, emissions in the building sector declined by only 18 percent in that same time period. Moreover, in extremely cold locations, burning natural gas to heat houses can make up a substantial share of the emissions portfolio. Therefore, steps to electrify buildings in general, and residential heating in particular, are essential for decarbonizing the U.S. energy system.

But that change will increase demand for electricity and decrease demand for natural gas. What will be the net impact of those two changes on carbon emissions and on the cost of decarbonizing? And how will the electric power and natural gas sectors handle the new challenges involved in their long-term planning for future operations and infrastructure investments?

A new study by MIT researchers with support from the MIT Energy Initiative (MITEI) Future Energy Systems Center unravels the impacts of various levels of electrification of residential space heating on the joint power and natural gas systems. A specially devised modeling framework enabled them to estimate not only the added costs and emissions for the power sector to meet the new demand, but also any changes in costs and emissions that result for the natural gas sector.

The analyses brought some surprising outcomes. For example, they show that — under certain conditions — switching 80 percent of homes to heating by electricity could cut carbon emissions and at the same time significantly reduce costs over the combined natural gas and electric power sectors relative to the case in which there is only modest switching. That outcome depends on two changes: Consumers must install high-efficiency heat pumps plus take steps to prevent heat losses from their homes, and planners in the power and the natural gas sectors must work together as they make long-term infrastructure and operations decisions. Based on their findings, the researchers stress the need for strong state, regional, and national policies that encourage and support the steps that homeowners and industry planners can take to help decarbonize today’s building sector.

A two-part modeling approach

To analyze the impacts of electrification of residential heating on costs and emissions in the combined power and gas sectors, a team of MIT experts in building technology, power systems modeling, optimization techniques, and more developed a two-part modeling framework. Team members included Rahman Khorramfar, a senior postdoc in MITEI and the Laboratory for Information and Decision Systems (LIDS); Morgan Santoni-Colvin SM ’23, a former MITEI graduate research assistant, now an associate at Energy and Environmental Economics, Inc.; Saurabh Amin, a professor in the Department of Civil and Environmental Engineering and principal investigator in LIDS; Audun Botterud, a principal research scientist in LIDS; Leslie Norford, a professor in the Department of Architecture; and Dharik Mallapragada, a former MITEI principal research scientist, now an assistant professor at New York University, who led the project. They describe their new methods and findings in a paper published in the journal Cell Reports Sustainability on Feb. 6.

The first model in the framework quantifies how various levels of electrification will change end-use demand for electricity and for natural gas, and the impacts of possible energy-saving measures that homeowners can take to help. “To perform that analysis, we built a ‘bottom-up’ model — meaning that it looks at electricity and gas consumption of individual buildings and then aggregates their consumption to get an overall demand for power and for gas,” explains Khorramfar. By assuming a wide range of building “archetypes” — that is, groupings of buildings with similar physical characteristics and properties — coupled with trends in population growth, the team could explore how demand for electricity and for natural gas would change under each of five assumed electrification pathways: “business as usual” with modest electrification, medium electrification (about 60 percent of homes are electrified), high electrification (about 80 percent of homes make the change), and medium and high electrification with “envelope improvements,” such as sealing up heat leaks and adding insulation.
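A bottom-up aggregation of the kind described above can be sketched in a few lines. The archetype counts, consumption figures, and heat-pump coefficient of performance below are invented placeholders, not the study's inputs; the point is only the mechanism of scaling per-building consumption by stock counts while shifting heating load from gas to electricity.

```python
# Hypothetical building archetypes: housing stock count, annual heating
# demand currently met by gas, and annual non-heating electricity use.
ARCHETYPES = [
    {"count": 400_000, "gas_heat_kwh": 15_000, "elec_kwh": 7_000},
    {"count": 250_000, "gas_heat_kwh": 22_000, "elec_kwh": 9_000},
]

def aggregate_demand(electrified_fraction: float, heat_pump_cop: float = 3.0,
                     envelope_savings: float = 0.0):
    """Bottom-up aggregation: scale each archetype's consumption by its
    stock count, moving a fraction of heating from gas to electricity.
    A heat pump with COP ~3 needs 1 kWh of electricity per 3 kWh of heat."""
    gas = elec = 0.0
    for a in ARCHETYPES:
        heat = a["gas_heat_kwh"] * (1.0 - envelope_savings)
        gas += a["count"] * heat * (1.0 - electrified_fraction)
        elec += a["count"] * (a["elec_kwh"]
                              + heat * electrified_fraction / heat_pump_cop)
    return gas, elec  # annual kWh for the whole region

for frac, env in [(0.0, 0.0), (0.6, 0.0), (0.8, 0.3)]:
    gas, elec = aggregate_demand(frac, envelope_savings=env)
    print(f"electrified={frac:.0%} envelope={env:.0%} "
          f"gas={gas/1e9:.1f} TWh elec={elec/1e9:.1f} TWh")
```

Even this toy version reproduces the qualitative pattern in the study: electrification shifts demand from gas to electricity, and envelope improvements shrink the total that either sector must serve.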

The second part of the framework consists of a model that takes the demand results from the first model as inputs and “co-optimizes” the overall electricity and natural gas system to minimize annual investment and operating costs while adhering to any constraints, such as limits on emissions or on resource availability. The modeling framework thus enables the researchers to explore the impact of each electrification pathway on the infrastructure and operating costs of the two interacting sectors.
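The co-optimization step can be illustrated with a toy version: choose infrastructure capacities that meet given demands at minimum annual investment plus operating cost, treating unmet demand as infeasible. This is a deliberately tiny brute-force stand-in for the paper's optimization model, and every cost and demand figure is invented.

```python
from itertools import product

# Hypothetical regional demands produced by a demand model.
ELEC_DEMAND_GWH, GAS_DEMAND_GWH = 120.0, 60.0
ELEC_PEAK_GW = 0.05  # peak electric load the grid must be built for

def total_cost(elec_cap_gw: float, gas_cap_gwh: float) -> float:
    """Annualized investment plus operating cost ($M/yr), with demand
    constraints enforced by returning infinity for infeasible builds."""
    if elec_cap_gw < ELEC_PEAK_GW or gas_cap_gwh < GAS_DEMAND_GWH:
        return float("inf")  # infeasible: peak or gas demand unmet
    invest = 90.0 * elec_cap_gw + 0.5 * gas_cap_gwh
    operate = 0.04 * ELEC_DEMAND_GWH + 0.03 * GAS_DEMAND_GWH
    return invest + operate

# Exhaustively search a small menu of candidate capacities.
candidates = product([0.04, 0.05, 0.06, 0.08], [50.0, 60.0, 70.0])
best = min(candidates, key=lambda c: total_cost(*c))
print("least-cost build:", best, f"cost={total_cost(*best):.1f} $M/yr")
```

The actual framework solves a far larger optimization over both sectors jointly, but the structure is the same: demands from the first model enter as constraints, and the objective is combined investment and operating cost.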

The New England case study: A challenge for electrification

As a case study, the researchers chose New England, a region where the weather is sometimes extremely cold and where burning natural gas to heat houses contributes significantly to overall emissions. “Critics will say that electrification is never going to happen [in New England]. It’s just too expensive,” comments Santoni-Colvin. But he notes that most studies focus on the electricity sector in isolation. The new framework considers the joint operation of the two sectors and then quantifies their respective costs and emissions. “We know that electrification will require large investments in the electricity infrastructure,” says Santoni-Colvin. “But what hasn’t been well quantified in the literature is the savings that we generate on the natural gas side by doing that — so, the system-level savings.”

Using their framework, the MIT team performed model runs aimed at an 80 percent reduction in building-sector emissions relative to 1990 levels — a target consistent with regional policy goals for 2050. The researchers defined parameters including details about building archetypes, the regional electric power system, existing and potential renewable generating systems, battery storage, availability of natural gas, and other key factors describing New England.

They then performed analyses assuming various scenarios with different mixes of home improvements. While most studies assume typical weather, they instead developed 20 projections of annual weather data based on historical weather patterns and adjusted for the effects of climate change through 2050. They then analyzed their five levels of electrification.

Relative to business-as-usual projections, results from the framework showed that high electrification of residential heating could more than double the demand for electricity during peak periods and increase overall electricity demand by close to 60 percent. Assuming that building-envelope improvements are deployed in parallel with electrification reduces the magnitude and weather sensitivity of peak loads and creates overall efficiency gains that reduce the combined demand for electricity plus natural gas for home heating by up to 30 percent relative to the present day. Notably, a combination of high electrification and envelope improvements resulted in the lowest average cost for the overall electric power-natural gas system in 2050.

Lessons learned

Replacing existing natural gas-burning furnaces and boilers with heat pumps reduces overall energy consumption. Santoni-Colvin calls it “something of an intuitive result” that could be expected because heat pumps are “just that much more efficient than old, fossil fuel-burning systems. But even so, we were surprised by the gains.”

Other unexpected results include the importance of homeowners making more traditional energy efficiency improvements, such as adding insulation and sealing air leaks — steps supported by recent rebate policies. Those changes are critical to reducing costs that would otherwise be incurred for upgrading the electricity grid to accommodate the increased demand. “You can’t just go wild dropping heat pumps into everybody’s houses if you’re not also considering other ways to reduce peak loads. So it really requires an ‘all of the above’ approach to get to the most cost-effective outcome,” says Santoni-Colvin.

Testing a range of weather outcomes also provided important insights. Demand for heating fuel is very weather-dependent, yet most studies are based on a limited set of weather data — often a “typical year.” The researchers found that electrification can lead to extended peak electric load events that can last for a few days during cold winters. Accordingly, the researchers conclude that there will be a continuing need for a “firm, dispatchable” source of electricity; that is, a power-generating system that can be relied on to produce power any time it’s needed — unlike solar and wind systems. As examples, they modeled some possible technologies, including power plants fired by a low-carbon fuel or by natural gas equipped with carbon capture equipment. But they point out that there’s no way of knowing what types of firm generators will be available in 2050. It could be a system that’s not yet mature, or perhaps doesn’t even exist today.

In presenting their findings, the researchers note several caveats. For one thing, their analyses don’t include the estimated cost to homeowners of installing heat pumps. While that cost is widely discussed and debated, that issue is outside the scope of their current project.

In addition, the study doesn’t specify what happens to existing natural gas pipelines. “Some homes are going to electrify and get off the gas system and not have to pay for it, leaving other homes with increasing rates because the gas system cost now has to be divided among fewer customers,” says Khorramfar. “That will inevitably raise equity questions that need to be addressed by policymakers.”

Finally, the researchers note that policies are needed to drive residential electrification. Current financial support for installation of heat pumps and steps to make homes more thermally efficient are a good start. But such incentives must be coupled with a new approach to planning energy infrastructure investments. Traditionally, electric power planning and natural gas planning are performed separately. However, to decarbonize residential heating, the two sectors should coordinate when planning future operations and infrastructure needs. Results from the MIT analysis indicate that such cooperation could significantly reduce both emissions and costs for residential heating — a change that would yield a much-needed step toward decarbonizing the buildings sector as a whole.

J-WAFS: Supporting food and water research across MIT

Wed, 02/19/2025 - 2:40pm

MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has transformed the landscape of water and food research at MIT, driving faculty engagement and catalyzing new research and innovation in these critical areas. With philanthropic, corporate, and government support, J-WAFS’ strategic approach spans the entire research life cycle, from support for early-stage research to commercialization grants for more advanced projects.

Over the past decade, J-WAFS has invested approximately $25 million in direct research funding to support MIT faculty pursuing transformative research with the potential for significant impact. “Since awarding our first cohort of seed grants in 2015, it’s remarkable to look back and see that over 10 percent of the MIT faculty have benefited from J-WAFS funding,” observes J-WAFS Executive Director Renee J. Robins ’83. “Many of these professors hadn’t worked on water or food challenges before their first J-WAFS grant.” 

By fostering interdisciplinary collaborations and supporting high-risk, high-reward projects, J-WAFS has amplified the capacity of MIT faculty to pursue groundbreaking research that addresses some of the world’s most pressing challenges facing our water and food systems.

Drawing MIT faculty to water and food research

J-WAFS open calls for proposals enable faculty to explore bold ideas and develop impactful approaches to tackling critical water and food system challenges. Professor Patrick Doyle’s work in water purification exemplifies this impact. “Without J-WAFS, I would have never ventured into the field of water purification,” Doyle reflects. While previously focused on pharmaceutical manufacturing and drug delivery, exposure to J-WAFS-funded peers led him to apply his expertise in soft materials to water purification. “Both the funding and the J-WAFS community led me to be deeply engaged in understanding some of the key challenges in water purification and water security,” he explains.

Similarly, Professor Otto Cordero of the Department of Civil and Environmental Engineering (CEE) leveraged J-WAFS funding to pivot his research into aquaculture. Cordero explains that his first J-WAFS seed grant “has been extremely influential for my lab because it allowed me to take a step in a new direction, with no preliminary data in hand.” Cordero’s expertise is in microbial communities. He was previously unfamiliar with aquaculture, but he saw the relevance of microbial communities to the health of farmed aquatic organisms.

Supporting early-career faculty

New assistant professors at MIT have particularly benefited from J-WAFS funding and support. J-WAFS has played a transformative role in shaping the careers and research trajectories of many new faculty members by encouraging them to explore novel research areas, and in many instances providing their first MIT research grant.

Professor Ariel Furst reflects on how pivotal J-WAFS’ investment has been in advancing her research. “This was one of the first grants I received after starting at MIT, and it has truly shaped the development of my group’s research program,” Furst explains. With J-WAFS’ backing, her lab has achieved breakthroughs in chemical detection and remediation technologies for water. “The support of J-WAFS has enabled us to develop the platform funded through this work beyond the initial applications to the general detection of environmental contaminants and degradation of those contaminants,” she elaborates. 

Karthish Manthiram, now a professor of chemical engineering and chemistry at Caltech, explains how J-WAFS’ early investment enabled him and other young faculty to pursue ambitious ideas. “J-WAFS took a big risk on us,” Manthiram reflects. His research on breaking the nitrogen triple bond to make ammonia for fertilizer was initially met with skepticism. However, J-WAFS’ seed funding allowed his lab to lay the groundwork for breakthroughs that later attracted significant National Science Foundation (NSF) support. “That early funding from J-WAFS has been pivotal to our long-term success,” he notes. 

These stories underscore the broad impact of J-WAFS’ support for early-career faculty, and its commitment to empowering them to address critical global challenges and innovate boldly.

Fueling follow-on funding 

J-WAFS seed grants enable faculty to explore nascent research areas, but external funding for continued work is usually necessary to achieve the full potential of these novel ideas. “It’s often hard to get funding for early stage or out-of-the-box ideas,” notes J-WAFS Director Professor John H. Lienhard V. “My hope, when I founded J-WAFS in 2014, was that seed grants would allow PIs [principal investigators] to prove out novel ideas so that they would be attractive for follow-on funding. And after 10 years, J-WAFS-funded research projects have brought more than $21 million in subsequent awards to MIT.”

Professor Retsef Levi led a seed study on how agricultural supply chains affect food safety, with a team of faculty spanning the MIT schools of Engineering and Science as well as the MIT Sloan School of Management. The team parlayed their seed grant research into a multi-million-dollar follow-on initiative. Levi reflects, “The J-WAFS seed funding allowed us to establish the initial credibility of our team, which was key to our success in obtaining large funding from several other agencies.”

Dave Des Marais was an assistant professor in CEE when he received his first J-WAFS seed grant. The funding supported his research on how plant growth and physiology are controlled by genes and interact with the environment. The seed grant helped launch his lab’s work on enhancing climate change resilience in agricultural systems, which led to his Faculty Early Career Development (CAREER) Award from the NSF, a prestigious honor for junior faculty members. Now an associate professor, Des Marais continues to investigate the mechanisms and consequences of genomic and environmental interactions, supported by the five-year, $1,490,000 NSF grant. “J-WAFS provided essential funding to get my new research underway,” comments Des Marais.

Stimulating interdisciplinary collaboration

Des Marais’ seed grant was also key to developing new collaborations. He explains, “the J-WAFS grant supported me to develop a collaboration with Professor Caroline Uhler in EECS/IDSS [the Department of Electrical Engineering and Computer Science/Institute for Data, Systems, and Society] that really shaped how I think about framing and testing hypotheses. One of the best things about J-WAFS is facilitating unexpected connections among MIT faculty with diverse yet complementary skill sets.”

Professors A. John Hart of the Department of Mechanical Engineering and Benedetto Marelli of CEE also launched a new interdisciplinary collaboration with J-WAFS funding. They partnered to join expertise in biomaterials, microfabrication, and manufacturing, to create printed silk-based colorimetric sensors that detect food spoilage. “The J-WAFS Seed Grant provided a unique opportunity for multidisciplinary collaboration,” Hart notes.

Professors Stephen Graves in the MIT Sloan School of Management and Bishwapriya Sanyal in the Department of Urban Studies and Planning (DUSP) partnered to pursue new research on agricultural supply chains. With field work in Senegal, their J-WAFS-supported project brought together international development specialists and operations management experts to study how small firms and government agencies influence access to and uptake of irrigation technology by poorer farmers. “We used J-WAFS to spur a collaboration that would have been improbable without this grant,” they explain. Being part of the J-WAFS community also introduced them to researchers in Professor Amos Winter’s lab in the Department of Mechanical Engineering working on irrigation technologies for low-resource settings. DUSP doctoral candidate Mark Brennan notes, “We got to share our understanding of how irrigation markets and irrigation supply chains work in developing economies, and then we got to contrast that with their understanding of how irrigation system models work.”

Timothy Swager, professor of chemistry, and Rohit Karnik, professor of mechanical engineering and J-WAFS associate director, collaborated on a sponsored research project supported by Xylem, Inc. through the J-WAFS Research Affiliate program. The cross-disciplinary research, which targeted the development of ultra-sensitive sensors for toxic PFAS chemicals, was conceived following a series of workshops hosted by J-WAFS. Swager and Karnik were two of the participants, and their involvement led to the collaborative proposal that Xylem funded. “J-WAFS funding allowed us to combine Swager lab’s expertise in sensing with my lab’s expertise in microfluidics to develop a cartridge for field-portable detection of PFAS,” says Karnik. “J-WAFS has enriched my research program in so many ways,” adds Swager, who is now working to commercialize the technology.

Driving global collaboration and impact

J-WAFS has also helped MIT faculty establish and advance international collaboration and impactful global research. By funding and supporting projects that connect MIT researchers with international partners, J-WAFS has not only advanced technological solutions, but also strengthened cross-cultural understanding and engagement.

Professor Matthew Shoulders leads the inaugural J-WAFS Grand Challenge project. In response to the first J-WAFS call for “Grand Challenge” proposals, Shoulders assembled an interdisciplinary team based at MIT to enhance the climate resilience of agriculture by improving the most inefficient aspect of photosynthesis: the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk/high-reward project following a competitive process that engaged external reviewers through several rounds of iterative proposal development. The technical feedback led the team to researchers with complementary expertise from the Australian National University. “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

Professor Leon Glicksman and Research Engineer Eric Verploegen’s team designed a low-cost cooling chamber to preserve fruits and vegetables harvested by smallholder farmers with no access to cold chain storage. J-WAFS’ guidance motivated the team to prioritize practical considerations informed by local collaborators, ensuring market competitiveness. “As our new idea for a forced-air evaporative cooling chamber was taking shape, we continually checked that our solution was evolving in a direction that would be competitive in terms of cost, performance, and usability to existing commercial alternatives,” explains Verploegen, who is currently an MIT D-Lab affiliate. Following its initial seed grant, the team secured a J-WAFS Solutions commercialization grant, which Verploegen says “further motivated us to establish partnerships with local organizations capable of commercializing the technology earlier in the project than we might have done otherwise.” The team has since shared an open-source design as part of its commercialization strategy to maximize accessibility and impact.

Bringing corporate sponsored research opportunities to MIT faculty

J-WAFS also plays a role in driving private partnerships, enabling collaborations that bridge industry and academia. Through its Research Affiliate Program, for example, J-WAFS provides opportunities for faculty to collaborate with industry on sponsored research, helping to convert scientific discoveries into licensable intellectual property (IP) that companies can turn into commercial products and services.

J-WAFS introduced professor of mechanical engineering Alex Slocum to a challenge presented by its research affiliate company, Xylem: how to design a more energy-efficient pump for fluctuating flows. With centrifugal pumps consuming an estimated 6 percent of U.S. electricity annually, Slocum and his then-graduate student Hilary Johnson SM '18, PhD '22 developed an innovative variable volute mechanism that reduces energy usage. “Xylem envisions this as the first in a new category of adaptive pump geometry,” comments Johnson. The research produced a pump prototype and related IP that Xylem is working on commercializing. Johnson notes that these outcomes “would not have been possible without J-WAFS support and facilitation of the Xylem industry partnership.” Slocum adds, “J-WAFS enabled Hilary to begin her work on pumps, and Xylem sponsored the research to bring her to this point … where she has an opportunity to do far more than the original project called for.”

Swager speaks highly of the impact of corporate research sponsorship through J-WAFS on his research and technology translation efforts. His PFAS project with Karnik described above was also supported by Xylem. “Xylem was an excellent sponsor of our research. Their engagement and feedback were instrumental in advancing our PFAS detection technology, now on the path to commercialization,” Swager says.

Looking forward

What J-WAFS has accomplished is more than a collection of research projects; a decade of impact demonstrates how J-WAFS’ approach has been transformative for many MIT faculty members. As Professor Mathias Kolle puts it, his engagement with J-WAFS “had a significant influence on how we think about our research and its broader impacts.” He adds that it “opened my eyes to the challenges in the field of water and food systems and the many different creative ideas that are explored by MIT.” 

This thriving ecosystem of innovation, collaboration, and academic growth around water and food research has not only helped faculty build interdisciplinary and international partnerships, but has also led to the commercialization of transformative technologies with real-world applications. C. Cem Taşan, the POSCO Associate Professor of Metallurgy who is leading a J-WAFS Solutions commercialization team that is about to launch a startup company, sums it up by noting, “Without J-WAFS, we wouldn’t be here at all.”  

As J-WAFS looks to the future, its continued commitment — supported by the generosity of its donors and partners — builds on a decade of success enabling MIT faculty to advance water and food research that addresses some of the world’s most pressing challenges.

MIT community members elected to the National Academy of Engineering for 2025

Wed, 02/19/2025 - 1:15pm

Eight MIT researchers are among the 128 new members and 22 international members recently elected to the National Academy of Engineering (NAE) for 2025. Thirteen additional MIT alumni were also elected as new members.

One of the highest professional distinctions for engineers, membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education, including, where appropriate, significant contributions to the engineering literature” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”

The eight MIT electees this year include:

Martin Zdenek Bazant, the E.G. Roos (1944) Chair Professor in the Department of Chemical Engineering, was honored for contributions to nonlinear electrochemical and electrokinetic phenomena, including induced charge electroosmosis, shock electrodialysis, capacitive desalination, and energy storage applications.

Moshe E. Ben-Akiva SM ’71, PhD ’73, the Edmund K. Turner Professor in the Department of Civil and Environmental Engineering, was honored for advances in transportation and infrastructure systems modeling and demand analysis. 

Charles L. Cooney SM ’67, PhD ’70, professor emeritus of the Department of Chemical Engineering, was honored for contributions to biochemical and pharmaceutical manufacturing that propelled the establishment and growth of the global biotechnology industry. 

Yoel Fink PhD ’00, a professor in the Department of Materials Science and Engineering and Department of Electrical Engineering and Computer Science (EECS), was honored for the design and production of structured photonic fibers, enabling surgeries and the invention of fabrics that sense and communicate. 

Tomás Lozano-Pérez ’73, SM ’77, PhD ’80, the School of Engineering Professor of Teaching Excellence in the Department of EECS and a principal investigator in the Computer Science and Artificial Intelligence Laboratory, was honored for contributions to robot motion planning and molecular design. 

Kristala L. Prather ’94, the Arthur Dehon Little Professor and head of the Department of Chemical Engineering, was honored for the development of innovative approaches to regulate metabolic flux in engineered microorganisms with applications to specialty chemicals production. 

Eric Swanson SM ’84, research affiliate at the Research Laboratory of Electronics and mentor for the MIT Deshpande Center for Technological Innovation, was honored for contributions and entrepreneurship in biomedical imaging and optical communications. 

Evelyn N. Wang ’00, MIT’s vice president for climate and Ford Professor of Engineering in the Department of Mechanical Engineering, was honored for contributions to clean energy, water technology, and nanostructure-based phase change heat transfer, and for service to the nation.

“I am thrilled that eight MIT researchers, along with many others from our broader MIT community, have been elected to the National Academy of Engineering this year,” says Anantha P. Chandrakasan, dean of the School of Engineering, MIT’s chief innovation and strategy officer, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “This is a well-deserved recognition of their outstanding contributions to the field of engineering, and I extend my heartfelt congratulations to them all.”

Thirteen additional alumni were elected to the National Academy of Engineering this year. They are: Gregg T. Beckham SM ’04, PhD ’08; Douglas C. Cameron PhD ’87; Long-Qing Chen PhD ’90; Jennifer R. Cochran PhD ’01; Christopher Richard Doerr ’89, ’90, SM ’90, PhD ’95; Justin Hanes PhD ’96; Elizabeth Ann Holm SM ’89; Denise C. Johnson SM ’97; Wayne R. Johnson ’68, SM ’68, ScD ’70; Concetta LaMarca ’81; Maja J. Matarić SM ’90, PhD ’94; David V. Schaffer PhD ’98; and Lixia Zhang PhD ’89.

Cynthia Barnhart to step down as provost

Wed, 02/19/2025 - 11:00am

Cynthia Barnhart SM ’86, PhD ’88 will step down as MIT’s provost, effective July 1, President Sally Kornbluth announced today. Barnhart, who served as MIT’s chancellor for more than seven years before becoming provost in 2022, will return to the faculty following a sabbatical.

Barnhart led a variety of efforts to enhance academics and research at the Institute during her tenure as provost, which bridged a transition between two MIT presidents. She drew from deep-rooted experience in the MIT community, first as a graduate student and then as a faculty member for more than 30 years, serving in the Department of Civil and Environmental Engineering and in the MIT Sloan School of Management, with affiliations in the Operations Research Center and the Center for Transportation and Logistics.

Barnhart is only the second MIT administrator, following Julius Stratton, to serve as both chancellor and provost. She is also the first woman to serve in each of these roles.

“It has been a privilege to serve and to support our community’s efforts to lead in education, research, and impact in the world,” Barnhart says.

MIT’s provost is the Institute’s chief academic and budget officer, responsible for leading efforts to establish academic priorities, managing financial planning and research support, and overseeing MIT’s international engagements. The provost works closely with the president, school and college deans, vice provosts, executive vice president and treasurer (EVPT), faculty officers, and many other leaders to recruit and retain the best talent and then, as Barnhart puts it, “create the conditions for them to thrive and do their best work at MIT.”

“Cindy has been a wonderful partner in thinking and doing, and I will be forever grateful for having been able to tap her knowledge of the Institute’s people, culture, practices and institutional systems,” Kornbluth wrote in an email to the MIT community.

The next chapter

L. Rafael Reif named Barnhart provost as he was stepping down as MIT’s 17th president, citing her “values, skills, vision, and collaborative spirit.” Her appointment provided the Institute with continuity and helped to sustain momentum during the transition and the first years of Kornbluth’s presidency.

“A good provost is constantly solving problems over the widest range of scales, thinking at the level of systems and structures but also digging deep when needed. Cindy has brought discipline, commitment, and heart to this role, and it’s been a privilege to work with her,” says Faculty Chair Mary Fuller.

As she proceeds with a long-planned sabbatical, Barnhart will focus on a project she has been working on as provost. The effort centers on creating a flexible, affordable educational experience and curriculum in order to dramatically increase the size of the nation’s science- and technology-based workforce.

Creating conditions for faculty to thrive

As chief budget officer, Barnhart worked closely with EVPT Glen Shor, Vice President for Finance Katie Hammer, and colleagues across the Institute to provide essential resources for the Institute’s education and research enterprises. She cites as points of pride implementing new central support for MIT’s under-recovery system, closing the NASA and U.S. National Science Foundation (NSF) fellowship tuition and stipend shortfalls, and expanding the number of Office of the Provost professorship chairs to 230, an increase of more than 20 percent.

“The review of the Institute’s budget process that we launched last year is critical to the Institute’s effective response to emerging financial constraints,” Barnhart says.

“It’s about providing our community with access to the tools needed to develop strategic, creative ways to deploy our resources so that, even in the face of budget challenges, we can help people do what they came here to do — discover, invent, innovate, and problem solve, all in service to the nation and the world,” she says.

Responding to calls for more faculty involvement in searches to identify faculty leaders in the Office of the Provost, Barnhart established best practices for search advisory committees. Over the course of her tenure, these faculty-led groups assisted with the appointments of Vice Provost for Faculty Paula Hammond, Vice Provost for Open Learning Dimitris Bertsimas, Vice Provost for International Activities Duane Boning, MIT Sloan School of Management Dean Rick Locke, and Interim CEO and Director of the Singapore-MIT Alliance for Research and Technology (SMART) Bruce Tidor. In addition, Barnhart convened a search committee to identify a new leader to steward MIT’s research enterprise, which informed President Kornbluth’s appointment of Vice President for Research Ian A. Waitz last year.

“MIT’s next provost is going to be working with an exceptional team,” says Barnhart. “The Provost’s Office is positioned to ensure the faculty have what they need to make big impacts in research and education.”

Barnhart took several steps to provide her leadership team and faculty more broadly with professional development and career advancement opportunities. She standardized appointment and reappointment terms, created 360-degree feedback mechanisms, and formalized reappointment review processes for deans and vice provosts. In partnership with Hammond, new programs were launched on establishing positive department climates, generating impactful research funding proposals, and fostering effective graduate student mentoring.

“Cindy has always cared greatly about the faculty experience, and this deep regard is evident in all that she has done,” Hammond says. “She has sought to understand how we might better build a supportive environment that fosters faculty success and has invested in meaningful programs and policies that help to address faculty needs while developing tools for faculty to accomplish their professional and leadership goals.”

Barnhart and Hammond also partnered with Fuller, the MIT Institutional Research team, and other colleagues to assess MIT’s progress on addressing the findings of two landmark reports: the 1999 Study on the Status of Women Faculty in Science at MIT and the 2010 Report on the Initiative for Faculty Race and Diversity.

Barnhart shared the group’s analysis and corresponding response plan with the entire faculty. In her message, Barnhart highlighted how “the original reports’ power sprang from the rigorous analysis the authors conducted and from how openly our community reflected on the problems they identified. For MIT to foster the diverse breadth of faculty excellence that is critical to our mission, we need that same collective embrace of data and transparency, dialogue and action again.”

Advancing 21st-century education and research

Barnhart is committed to making MIT’s education accessible and affordable to a much broader set of learners. With Bertsimas, Barnhart launched the next phase of MIT Open Learning, which involves ambitious plans to extend MIT’s commitment to providing global access to the Institute’s brand of education.

“With her good judgement, open mindedness, passion for quality education in the world, and love and deep knowledge of MIT, Cindy has been a great partner in reshaping the strategy for open learning,” Bertsimas says. “I look forward to continuing our partnership in the years to come.”

As computing has become increasingly integral to many disciplines, the creation of interdisciplinary computing courses through The Common Ground, degrees that blend computing with another field, and interdisciplinary computing faculty hires have expanded the forefront of MIT education and research. With Barnhart as a strong champion, the MIT Schwarzman College of Computing has been at the center of these efforts.

“Embracing the college’s efforts to broaden and deepen MIT’s world-leading strengths in interdisciplinary education and research is, simply put, in Barnhart’s DNA,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing.

By design, the Institute’s strategic initiatives in climate, humanities, and life sciences also lean into this interdisciplinary approach. Barnhart worked alongside Kornbluth, Chief Innovation and Strategy Officer Anantha Chandrakasan, and many other faculty on the development of these efforts throughout her tenure.

“It has been a privilege working with Provost Barnhart and President Kornbluth to advance the Institute’s wide range of strategic initiatives,” says Chandrakasan. “With a sense of urgency that these initiatives demand, Provost Barnhart was instrumental in defining the vision for these missions, promoting broad engagement from the MIT community and beyond while paving critical pathways for seed funding and fundraising. It would have been impossible to launch these initiatives without her inspiring ideas, creative solutions, and incredible support.”

A systems thinker

After earning her PhD in transportation systems in 1988 at MIT, Barnhart joined the operations research faculty at the School of Industrial and Systems Engineering at Georgia Tech. She returned to MIT four years later in 1992 and has been at the Institute ever since. Her research, which she has continued throughout her time in leadership roles, focuses on the development of optimization models and methods for designing, planning, and operating transportation systems.

“I’m a systems thinker, an optimizer, and a problem solver,” Barnhart says. “That is one of the reasons I have enjoyed serving as provost, a role in which there certainly is no shortage of opportunity to apply my decision-making and problem-solving mindset.”

Barnhart became associate dean of the MIT School of Engineering in 2007 and served as acting dean in 2010-2011. As chancellor, she was responsible for “all things students” at MIT, including student life, undergraduate admissions, graduate student support, the first-year educational experience, and more. She also participated in strategic planning, faculty appointments, resource development, and campus planning as chancellor.

Barnhart has been an undergraduate adviser and has supervised graduate and undergraduate theses of students across the Institute, including in the departments of Civil and Environmental Engineering, Aeronautics and Astronautics, Mechanical Engineering, and Electrical Engineering and Computer Science; in the Engineering Systems Division; in the MIT Sloan School of Management; in the Operations Research Center; and in the Center for Transportation and Logistics. She has taught subjects jointly listed in these units on optimization and operations research, with applications to transportation operations, planning, and control.

Kornbluth will work with a group of faculty members drawn from each school and the college to help her in selecting the next provost. 

MIT Human Insight Collaborative launches SHASS Faculty Fellows program

Wed, 02/19/2025 - 9:25am

A new initiative will offer faculty in the MIT School of Humanities, Arts, and Social Sciences (SHASS) the opportunity to participate in a semester-long internal fellows program.

The SHASS Faculty Fellows program, administered by the MIT Human Insight Collaborative (MITHIC), will provide faculty with time to focus on their research, writing, or artistic production, and to receive collegial support for that work; foster social and intellectual community within SHASS, including between faculty and students beyond the classroom; and provide informal opportunities to develop intergenerational professional mentorships.

“SHASS faculty have been eager for a supportive, vibrant internal community for the nearly 35 years I’ve been at MIT,” says Anne McCants, the Ann F. Friedlaender Professor of History, and Faculty Fellows Program committee chair. “By providing participants with UROPs [Undergraduate Research Opportunities Program projects] and other opportunities to interact with students, we’re demonstrating our commitment to fostering an environment in which faculty can recharge and sustain the high-quality teaching and service our community has come to expect and appreciate.”

The creation of the program was one of the recommendations included in a May 2024 SHASS Programming Initiative Report, an effort led by Keeril Makan, SHASS associate dean for strategic initiatives, and the Michael (1949) and Sonja Koerner Music Composition Professor.

The inaugural group of fellows for Spring 2026 includes:

Tenure-line faculty are eligible to apply, with a maximum of 12 members selected per year, or roughly six participants per term.

Selected faculty will spend a semester outside the classroom while still holding time for sustained interaction with a small cohort of colleagues. Fellows can work with dedicated UROP students to advance their research projects while investing in a unique, cross-disciplinary set of conversations.

“I was honored to help design the Fellows Program and to serve on the review committee,” says Arthur Bahr, a professor in the Literature Section and a member of the Faculty Fellows Program Selection Committee. “I was fortunate to have wonderful mentors within Literature, but would have loved the opportunity to get to know and learn from colleagues in other fields, which the Fellows Program will offer.”

“What excites me about the Faculty Fellows Program — beyond the opportunity for faculty to connect with each other across disciplines and units — is that it will spotlight the excellence and centrality of the humanities, arts, and social sciences at MIT,” says Heather Paxson, SHASS associate dean for faculty, and the William R. Kenan, Jr. Professor of Anthropology. “I look forward to hearing about new ideas sparked, and new friendships made, through participation in the program.”

Organizers say the program signals that MIT takes its investment in the humanities, arts, and social sciences as seriously as its peer institutions, most of which have internal fellows programs.

“Given the strong demand for something like this, getting the program up and running is an important signal to SHASS faculty that Dean [Agustín] Rayo hears their concerns and is committed to supporting this type of community development,” McCants notes.

Like human brains, large language models reason about diverse data in a general way

Wed, 02/19/2025 - 12:00am

While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.   

MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.

Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.

These findings could help scientists train future LLMs that are better able to handle diverse data.

“LLMs are big black boxes. They have achieved very impressive performance, but we have very little knowledge about their internal working mechanisms. I hope this can be an early step to better understand how they work so we can improve upon them and better control them when needed,” says Zhaofeng Wu, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this research.

His co-authors include Xinyan Velocity Yu, a graduate student at the University of Southern California (USC); Dani Yogatama, an associate professor at USC; Jiasen Lu, a research scientist at Apple; and senior author Yoon Kim, an assistant professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.

Integrating diverse data

The researchers built the new study on prior work, which hinted that English-centric LLMs use English to perform reasoning processes on various languages.

Wu and his collaborators expanded this idea, launching an in-depth study into the mechanisms LLMs use to process diverse data.

An LLM, which is composed of many interconnected layers, splits input text into words or sub-words called tokens. The model assigns a representation to each token, which enables it to explore the relationships between tokens and generate the next word in a sequence. In the case of images or audio, these tokens correspond to particular regions of an image or sections of an audio clip.
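The token-to-representation pipeline described above can be illustrated with a toy sketch. This is plain Python for illustration only, not the models studied in the paper; the five-entry vocabulary and the one-hot "embeddings" are invented, whereas real LLMs learn vocabularies of tens of thousands of sub-words and dense vectors with thousands of dimensions:

```python
# Toy sketch of sub-word tokenization and embedding lookup.
# VOCAB and EMBED are invented for illustration.
VOCAB = ["un", "believ", "able", "token", "s"]

def tokenize(word):
    """Greedily split a word into the longest known sub-word tokens."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

# Each token maps to a vector representation; the model's layers then
# operate on these vectors to predict the next token in the sequence.
EMBED = {tok: [float(k == n) for k in range(len(VOCAB))]
         for n, tok in enumerate(VOCAB)}

print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
```

For image or audio inputs, the same idea applies, except each "token" corresponds to an image patch or an audio segment rather than a sub-word.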

The researchers found that the model’s initial layers process data in its specific language or modality, like the modality-specific spokes in the human brain. Then, the LLM converts tokens into modality-agnostic representations as it reasons about them throughout its internal layers, akin to how the brain’s semantic hub integrates diverse information.

The model assigns similar representations to inputs with similar meanings, regardless of their data type, including images, audio, computer code, and arithmetic problems. Even though an image and its text caption are distinct data types, because they share the same meaning, the LLM would assign them similar representations.

For instance, an English-dominant LLM “thinks” about a Chinese-text input in English before generating an output in Chinese. The model has a similar reasoning tendency for non-text inputs like computer code, math problems, or even multimodal data.

To test this hypothesis, the researchers passed a pair of sentences with the same meaning but written in two different languages through the model. They measured how similar the model’s representations were for each sentence.

Then they conducted a second set of experiments where they fed an English-dominant model text in a different language, like Chinese, and measured how similar its internal representation was to English versus Chinese. The researchers conducted similar experiments for other data types.

They consistently found that the model’s representations were similar for sentences with similar meanings. In addition, across many data types, the tokens the model processed in its internal layers were more similar to English-centric tokens than to tokens of the input data type.
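Representation similarity in experiments like these is commonly quantified with a vector comparison such as cosine similarity between hidden-state vectors. A minimal sketch, where the three-dimensional vectors are made up for illustration rather than taken from any real model:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical hidden-state vectors for the same sentence in two languages,
# plus an unrelated sentence. In a hub-like model, translations of the same
# sentence land close together in the intermediate layers.
h_english = [0.9, 0.1, 0.4]
h_chinese = [0.8, 0.2, 0.5]
h_unrelated = [-0.5, 0.9, -0.1]

print(cosine_similarity(h_english, h_chinese))    # high (near 1)
print(cosine_similarity(h_english, h_unrelated))  # low (near or below 0)
```

Comparing a model's internal tokens to English-centric versus input-language tokens can be framed the same way: compute the similarity to each reference and see which is larger.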

“A lot of these input data types seem extremely different from language, so we were very surprised that we can probe out English tokens when the model processes, for example, mathematical or coding expressions,” Wu says.

Leveraging the semantic hub

The researchers think LLMs may learn this semantic hub strategy during training because it is an economical way to process varied data.

“There are thousands of languages out there, but a lot of the knowledge is shared, like commonsense knowledge or factual knowledge. The model doesn’t need to duplicate that knowledge across languages,” Wu says.

The researchers also tried intervening in the model’s internal layers using English text when it was processing other languages. They found that they could predictably change the model outputs, even though those outputs were in other languages.

Scientists could leverage this phenomenon to encourage the model to share as much information as possible across diverse data types, potentially boosting efficiency.

But on the other hand, there could be concepts or knowledge that are not translatable across languages or data types, like culturally specific knowledge. Scientists might want LLMs to have some language-specific processing mechanisms in those cases.

“How do you maximally share whenever possible but also allow languages to have some language-specific processing mechanisms? That could be explored in future work on model architectures,” Wu says.

In addition, researchers could use these insights to improve multilingual models. Often, an English-dominant model that learns to speak another language will lose some of its accuracy in English. A better understanding of an LLM’s semantic hub could help researchers prevent this language interference, he says.

“Understanding how language models process inputs across languages and modalities is a key question in artificial intelligence. This paper makes an interesting connection to neuroscience and shows that the proposed ‘semantic hub hypothesis’ holds in modern language models, where semantically similar representations of different data types are created in the model’s intermediate layers,” says Mor Geva Pipek, an assistant professor in the School of Computer Science at Tel Aviv University, who was not involved with this work. “The hypothesis and experiments nicely tie and extend findings from previous works and could be influential for future research on creating better multimodal models and studying links between them and brain function and cognition in humans.”

This research is funded, in part, by the MIT-IBM Watson AI Lab.

MIT spinout maps the body’s metabolites to uncover the hidden drivers of disease

Wed, 02/19/2025 - 12:00am

Biology is never simple. As researchers make strides in reading and editing genes to treat disease, for instance, a growing body of evidence suggests that the proteins and metabolites surrounding those genes can’t be ignored.

The MIT spinout ReviveMed has created a platform for measuring metabolites — products of metabolism like lipids, cholesterol, sugars, and carbohydrates — at scale. The company is using those measurements to uncover why some patients respond to treatments when others don’t and to better understand the drivers of disease.

“Historically, we’ve been able to measure a few hundred metabolites with high accuracy, but that’s a fraction of the metabolites that exist in our bodies,” says ReviveMed CEO Leila Pirhaji PhD ’16, who founded the company with Professor Ernest Fraenkel. “There’s a massive gap between what we’re accurately measuring and what exists in our body, and that’s what we want to tackle. We want to tap into the powerful insights from underutilized metabolite data.”

ReviveMed’s progress comes as the broader medical community is increasingly linking dysregulated metabolites to diseases like cancer, Alzheimer’s, and cardiovascular disease. ReviveMed is using its platform to help some of the largest pharmaceutical companies in the world find patients that stand to benefit from their treatments. It’s also offering software to academic researchers for free to help gain insights from untapped metabolite data.

“With the field of AI booming, we think we can overcome data problems that have limited the study of metabolites,” Pirhaji says. “There’s no foundation model for metabolomics, but we see how these models are changing various fields such as genomics, so we’re starting to pioneer their development.”

Finding a challenge

Pirhaji was born and raised in Iran before coming to MIT in 2010 to pursue her PhD in biological engineering. She had previously read Fraenkel’s research papers and was excited to contribute to the network models he was building, which integrated data from sources like genomes, proteomes, and other molecules.

“We were thinking about the big picture in terms of what you can do when you can measure everything — the genes, the RNA, the proteins, and small molecules like metabolites and lipids,” says Fraenkel, who currently serves on ReviveMed’s board of directors. “We’re probably only able to measure something like 0.1 percent of small molecules in the body. We thought there had to be a way to get as comprehensive a view of those molecules as we have for the other ones. That would allow us to map out all of the changes occurring in the cell, whether it's in the context of cancer or development or degenerative diseases.”

About halfway through her PhD, Pirhaji sent some samples to a collaborator at Harvard University to collect data on the metabolome — the small molecules that are the products of metabolic processes. The collaborator sent back a huge Excel spreadsheet with thousands of rows of data — but told her she’d be better off ignoring everything beyond the top 100 rows, because they had no idea what the rest of the data meant. She took that as a challenge.

“I started thinking maybe we could use our network models to solve this problem,” Pirhaji recalls. “There was a lot of ambiguity in the data, and it was very interesting to me because no one had tried this before. It seemed like a big gap in the field.”

Pirhaji developed a huge knowledge graph that included millions of interactions between proteins and metabolites. The data was rich but messy — Pirhaji called it a “hair ball” that couldn’t tell researchers anything about disease. To make it more useful, she created a new way to characterize metabolic pathways and features. In a 2016 paper in Nature Methods, she described the system and used it to analyze metabolic changes in a model of Huntington’s disease.
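In miniature, such a knowledge graph can be represented as an adjacency structure and queried for chains of interactions linking one molecule to another. The node names and edges below are invented for illustration; the real graph contains millions of measured protein–metabolite interactions and far more sophisticated pathway analysis.

```python
from collections import deque

# Hypothetical miniature protein-metabolite interaction graph.
# All names and edges are made up for illustration.
interactions = {
    "enzyme_A": {"metabolite_1", "metabolite_2"},
    "metabolite_1": {"enzyme_A", "enzyme_B"},
    "metabolite_2": {"enzyme_A"},
    "enzyme_B": {"metabolite_1", "metabolite_3"},
    "metabolite_3": {"enzyme_B"},
}

def shortest_pathway(graph, start, goal):
    """Breadth-first search for the shortest chain of interactions."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no chain of interactions connects the two

print(shortest_pathway(interactions, "enzyme_A", "metabolite_3"))
# ['enzyme_A', 'metabolite_1', 'enzyme_B', 'metabolite_3']
```

At the scale of millions of edges, naive traversal like this yields the tangled "hair ball" described above — which is why characterizing meaningful pathways, rather than just connectivity, was the key contribution.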

Initially, Pirhaji had no intention of starting a company, but she started realizing the technology’s commercial potential in the final years of her PhD.

“There’s no entrepreneurial culture in Iran,” Pirhaji says. “I didn’t know how to start a company or turn science into a startup, so I leveraged everything MIT offered.”

Pirhaji began taking classes at the MIT Sloan School of Management, including Course 15.371 (Innovation Teams), where she teamed up with classmates to think about how to apply her technology. She also used the MIT Venture Mentoring Service and MIT Sandbox, and took part in the Martin Trust Center for MIT Entrepreneurship’s delta v startup accelerator.

When Pirhaji and Fraenkel officially founded ReviveMed, they worked with MIT’s Technology Licensing Office to access the patents around their work. Pirhaji has since further developed the platform to solve other problems she discovered from talks with hundreds of leaders in pharmaceutical companies.

ReviveMed began by working with hospitals to uncover how lipids are dysregulated in a disease known as metabolic dysfunction-associated steatohepatitis. In 2020, ReviveMed worked with Bristol Myers Squibb to predict how subsets of cancer patients would respond to the company’s immunotherapies.

Since then, ReviveMed has worked with several companies, including four of the top 10 global pharmaceutical companies, to help them understand the metabolic mechanisms behind their treatments. Those insights help identify the patients that stand to benefit the most from different therapies more quickly.

“If we know which patients will benefit from every drug, it would really decrease the complexity and time associated with clinical trials,” Pirhaji says. “Patients will get the right treatments faster.”

Generative models for metabolomics

Earlier this year, ReviveMed collected a dataset based on 20,000 patient blood samples that it used to create digital twins of patients and generative AI models for metabolomics research. ReviveMed is making its generative models available to nonprofit academic researchers, which could accelerate our understanding of how metabolites influence a range of diseases.

“We’re democratizing the use of metabolomic data,” Pirhaji says. “It’s impossible for us to have data from every single patient in the world, but our digital twins can be used to find patients that could benefit from treatments based on their demographics, for instance, by finding patients that could be at risk of cardiovascular disease.”
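One way to picture a generative model of metabolomic data: fit a statistical model to a measured cohort, then sample synthetic patient profiles ("digital twins") from it. The sketch below uses a multivariate Gaussian purely for illustration — ReviveMed's actual models, data, and variables are far richer, and everything here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "measured" metabolite levels for a small cohort, with some
# correlation structure between metabolites. Purely illustrative.
n_patients, n_metabolites = 200, 5
mixing = rng.normal(size=(n_metabolites, n_metabolites))
measured = rng.normal(size=(n_patients, n_metabolites)) @ mixing

# Fit a simple generative model: cohort mean and covariance.
mean = measured.mean(axis=0)
cov = np.cov(measured, rowvar=False)

# Sample synthetic patient profiles from the fitted model.
twins = rng.multivariate_normal(mean, cov, size=1000)

# The synthetic cohort reproduces cohort-level statistics without
# containing any individual patient's actual measurements.
print(twins.shape)
```

The appeal of such a model, however simple, is the one Pirhaji describes: a synthetic cohort can be queried and shared where real patient-level data cannot.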

The work is part of ReviveMed’s mission to create metabolic foundation models that researchers and pharmaceutical companies can use to understand how diseases and treatments change the metabolites of patients.

“Leila solved a lot of really hard problems you face when you’re trying to take an idea out of the lab and turn it into something that’s robust and reproducible enough to be deployed in biomedicine,” Fraenkel says. “Along the way, she also realized the software that she’s developed is incredibly powerful by itself and could be transformational.”

Unlocking the secrets of fusion’s core with AI-enhanced simulations

Tue, 02/18/2025 - 3:45pm

Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.

Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. The group’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure accuracy, these models are continuously validated using data from previous experiments, keeping the simulations grounded in reality.

In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.

The biggest and best of what’s never been built

Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power and a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly powerful simulations like the modeling framework Howard used.

In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by Howard’s collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations of how plasma behaves at different locations within a fusion device.

The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. This predict-first approach allows us to create more efficient plasmas in a device like ITER.”

After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.
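The check-and-refine loop described above can be sketched in miniature: fit a cheap surrogate to a handful of expensive runs, compare it against the full model, and add new runs wherever the disagreement is worst. Everything here — the stand-in "high-fidelity" function, the polynomial surrogate, the tolerance — is an illustrative assumption, not the PORTALS implementation.

```python
import numpy as np

# Stand-in for a CGYRO-style high-fidelity simulation: expensive in
# reality, a cheap nonlinear function here for illustration.
def high_fidelity(x):
    return np.sin(3 * x) + 0.5 * x

def fit_surrogate(xs, ys):
    """Cheap surrogate: a low-degree polynomial fit to expensive runs."""
    degree = min(len(xs) - 1, 9)
    return np.poly1d(np.polyfit(xs, ys, degree))

# Begin with a few expensive runs, then iterate: check the surrogate
# against the full model and refit with a new run placed where the
# agreement was worst, mirroring the refinement loop described above.
xs = list(np.linspace(0.0, 2.0, 4))
ys = [high_fidelity(x) for x in xs]
grid = np.linspace(0.0, 2.0, 101)

history = []  # worst-case surrogate error at each iteration
for _ in range(20):
    surrogate = fit_surrogate(np.array(xs), np.array(ys))
    errors = np.abs(surrogate(grid) - high_fidelity(grid))
    history.append(errors.max())
    if errors.max() < 1e-2:       # surrogate now mimics the full model
        break
    worst = grid[errors.argmax()]
    xs.append(worst)              # one more "expensive" run, placed
    ys.append(high_fidelity(worst))  # where the disagreement was worst

print(f"{len(xs)} expensive runs, error {history[0]:.3f} -> {history[-1]:.4f}")
```

The payoff is the one Rodriguez-Fernandez describes: once the surrogate tracks the expensive model closely, new operating conditions can be explored at a fraction of the cost, reserving full-fidelity runs for confirmation.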

“Just dropped in to see what condition my condition was in”

Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”

The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It only took three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.

Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.” 

Viewing the universe through ripples in space

Tue, 02/18/2025 - 12:00am

In early September 2015, Salvatore Vitale, who was then a research scientist at MIT, stopped home in Italy for a quick visit with his parents after attending a meeting in Budapest. The meeting had centered on the much-anticipated power-up of Advanced LIGO — a system scientists hoped would finally detect a passing ripple in space-time known as a gravitational wave.

Albert Einstein had predicted the existence of these cosmic reverberations nearly 100 years earlier and thought they would be impossible to measure. But scientists including Vitale believed they might have a shot with their new ripple detector, which was scheduled, finally, to turn on in a few days. At the meeting in Budapest, team members were excited, albeit cautious, acknowledging that it could be months or years before the instruments picked up any promising signs.

However, the day after he arrived for his long-overdue visit with his family, Vitale received a huge surprise.

“The next day, we detect the first gravitational wave, ever,” he remembers. “And of course I had to lock myself in a room and start working on it.”

Vitale and his colleagues had to work in secrecy to prevent the news from getting out before they could scientifically confirm the signal and characterize its source. That meant that no one — not even his parents — could know what he was working on. Vitale departed for MIT and promised that he would come back to visit for Christmas.

“And indeed, I fly back home on the 25th of December, and on the 26th we detect the second gravitational wave! At that point I had to swear them to secrecy and tell them what happened, or they would strike my name from the family record,” he says, only partly in jest.

With the family peace restored, Vitale could focus on the path ahead, which suddenly seemed bright with gravitational discoveries. He and his colleagues, as part of the LIGO Scientific Collaboration, announced the detection of the first gravitational wave in February 2016, confirming Einstein’s prediction. For Vitale, the moment also solidified his professional purpose.

“Had LIGO not detected gravitational waves when it did, I would not be where I am today,” Vitale says. “For sure I was very lucky to be doing this at the right time, for me, and for the instrument and the science.”

A few months later, Vitale joined the MIT faculty as an assistant professor of physics. Today, as a recently tenured associate professor, he is working with his students to analyze a bounty of gravitational signals, from Advanced LIGO as well as Virgo (a similar detector in Italy) and KAGRA, in Japan. The combined power of these observatories is enabling scientists to detect at least one gravitational wave a week, which has revealed a host of extreme sources, from merging black holes to colliding neutron stars.

“Gravitational waves give us a different view of the same universe, which could teach us about things that are very hard to see with just photons,” Vitale says.

Random motion

Vitale is from Reggio di Calabria, a small coastal city in the south of Italy, right at “the tip of the boot,” as he says. His family owned and ran a local grocery store, where he spent so much time as a child that he could recite the names of nearly all the wines in the store.

When he was 9 years old, he remembers stopping in at the local newsstand, which also sold used books. He gathered all the money he had in order to purchase two books, both by Albert Einstein. The first was a collection of letters from the physicist to his friends and family. The second was his theory of relativity.

“I read the letters, and then went through the second book and remember seeing these weird symbols that didn’t mean anything to me,” Vitale recalls.

Nevertheless, the kid was hooked, and continued reading up on physics, and later, quantum mechanics. Toward the end of high school, it wasn’t clear if Vitale could go on to college. Large grocery chains had run his parents’ store out of business, and in the process, the family lost their home and were struggling to recover their losses. But with his parents’ support, Vitale applied and was accepted to the University of Bologna, where he went on to earn a bachelor’s and a master’s in theoretical physics, specializing in general relativity and approximating ways to solve Einstein’s equations. He went on to pursue his PhD in theoretical physics at the Pierre and Marie Curie University in Paris.

“Then, things changed in a very, very random way,” he says.

Vitale’s PhD advisor was hosting a conference, and Vitale volunteered to hand out badges and flyers and help guests get their bearings. That first day, one guest drew his attention.

“I see this guy sitting on the floor, kind of banging his head against his computer because he could not connect his Ubuntu computer to the Wi-Fi, which back then was very common,” Vitale says. “So I tried to help him, and failed miserably, but we started chatting.”

The guest happened to be a professor from Arizona who specialized in analyzing gravitational-wave signals. Over the course of the conference, the two got to know each other, and the professor invited Vitale to Arizona to work with his research group. The unexpected opportunity opened a door to gravitational-wave physics that Vitale might have passed by otherwise.

“When I talk to undergrads about how they can plan their career, I say I don’t know that you can,” Vitale says. “The best you can hope for is a random motion that, overall, goes in the right direction.”

High risk, high reward

Vitale spent two months at Embry-Riddle Aeronautical University in Prescott, Arizona, where he analyzed simulated data of gravitational waves. At that time, around 2009, no one had detected actual signals of gravitational waves. The first iteration of the LIGO detectors began observations in 2002 but had so far come up empty.

“Most of my first few years was working entirely with simulated data because there was no real data in the first place. That led a lot of people to leave the field because it was not an obvious path,” Vitale says.

Nevertheless, the work he did in Arizona only piqued his interest, and Vitale chose to specialize in gravitational-wave physics, returning to Paris to finish up his PhD, then going on to a postdoc position at NIKHEF, the Dutch National Institute for Subatomic Physics at the University of Amsterdam. There, he joined as a member of the Virgo collaboration, making further connections among the gravitational-wave community.

In 2012, he made the move to Cambridge, Massachusetts, where he started as a postdoc at MIT’s LIGO Laboratory. At that time, scientists there were focused on fine-tuning Advanced LIGO’s detectors and simulating the types of signals that they might pick up. Vitale helped to develop an algorithm to search for signals likely to be gravitational waves.

Just before the detectors turned on for the first observing run, Vitale was promoted to research scientist. And as luck would have it, he was working with MIT students and colleagues on one of the two algorithms that picked up what would later be confirmed to be the first ever gravitational wave.

“It was exciting,” Vitale recalls. “Also, it took us several weeks to convince ourselves that it was real.”

In the whirlwind that followed the official announcement, Vitale became an assistant professor in MIT’s physics department. In 2017, in recognition of the discovery, the Nobel Prize in Physics was awarded to three pivotal members of the LIGO team, including MIT’s Rainer Weiss. Vitale and other members of the LIGO-Virgo collaboration attended the Nobel ceremony later on, in Stockholm, Sweden — a moment that was captured in a photograph displayed proudly in Vitale’s office.

Vitale was promoted to associate professor in 2022 and earned tenure in 2024. Unfortunately, his father passed away shortly before the tenure announcement. “He would have been very proud,” Vitale reflects.

Now, in addition to analyzing gravitational-wave signals from LIGO, Virgo, and KAGRA, Vitale is pushing ahead on plans for an even bigger, better LIGO successor. He is part of the Cosmic Explorer Project, which aims to build a gravitational-wave detector that is similar in design to LIGO but 10 times bigger. At that scale, scientists believe such an instrument could pick up signals from sources that are much farther away in space and time, even close to the beginning of the universe.

Then, scientists could look for never-before-detected sources, such as the very first black holes formed in the universe. They could also search within the same neighborhood as LIGO and Virgo, but with higher precision. Then, they might see gravitational signals that Einstein didn’t predict.

“Einstein developed the theory of relativity to explain everything from the motion of Mercury, which circles the sun every 88 days, to objects such as black holes that are 30 times the mass of the sun and move at half the speed of light,” Vitale says. “There’s no reason the same theory should work for both cases, but so far, it seems so, and we’ve found no departure from relativity. But you never know, and you have to keep looking. It’s high risk, for high reward.” 

Engineers turn the body’s goo into new glue

Mon, 02/17/2025 - 3:00pm

Within the animal kingdom, mussels are masters of underwater adhesion. The marine mollusks cluster atop rocks and along the bottoms of ships, and hold fast against the ocean’s waves thanks to a gluey plaque they secrete through their foot. These tenacious adhesive structures have prompted scientists in recent years to design similar bioinspired, waterproof adhesives.

Now engineers from MIT and Freie Universität Berlin have developed a new type of glue that combines the waterproof stickiness of the mussels’ plaques with the germ-proof properties of another natural material: mucus.

Every surface in our bodies not covered in skin is lined with a protective layer of mucus — a slimy network of proteins that acts as a physical barrier against bacteria and other infectious agents. In their new work, the engineers combined sticky, mussel-inspired polymers with mucus-derived proteins, or mucins, to form a gel that strongly adheres to surfaces.

The new mucus-derived glue prevented the buildup of bacteria while keeping its sticky hold, even on wet surfaces. The researchers envision that once the glue’s properties are optimized, it could be applied as a liquid by injection or spray, which would then solidify into a sticky gel. The material might be used to coat medical implants, for example, to prevent infection and bacteria buildup.

The team’s new glue-making approach could also be adjusted to incorporate other natural materials, such as keratin — a fibrous substance found in feathers and hair, with certain chemical features resembling those of mucus.

“The applications of our materials design approach will depend on the specific precursor materials,” says George Degen, a postdoc in MIT’s Department of Mechanical Engineering. “For example, mucus-derived or mucus-inspired materials might be used as multifunctional biomedical adhesives that also prevent infections. Alternatively, applying our approach to keratin might enable development of sustainable packaging materials.”

A paper detailing the team’s results appears this week in the Proceedings of the National Academy of Sciences. Degen’s MIT co-authors include Corey Stevens, Gerardo Cárcamo-Oyarce, Jake Song, Katharina Ribbeck, and Gareth McKinley, along with Raju Bej, Peng Tang, and Rainer Haag of Freie Universität Berlin.

A sticky combination

Before coming to MIT, Degen was a graduate student at the University of California at Santa Barbara, where he worked in a research group that studied the adhesive mechanisms of mussels.

“Mussels are able to deposit materials that adhere to wet surfaces in seconds to minutes,” Degen says. “These natural materials do better than existing commercialized adhesives, specifically at sticking to wet and underwater surfaces, which has been a longstanding technical challenge.”

To stick to a rock or a ship, mussels secrete a protein-rich fluid. Chemical bonds, or cross-links, act as connection points between proteins, enabling the secreted substance to simultaneously solidify into a gel and stick to a wet surface.

As it happens, similar cross-linking features are found in mucin — a large protein that is the primary non-water component of mucus. When Degen came to MIT, he worked with both McKinley, a professor of mechanical engineering and an expert in materials science and fluid flow, and Katharina Ribbeck, a professor of biological engineering and a leader in the study of mucus, to develop a cross-linking glue that would combine the adhesive qualities of mussel plaques with the bacteria-blocking properties of mucus.

Mixing links

The MIT researchers teamed up with Haag and colleagues in Berlin who specialize in synthesizing bioinspired materials. Haag and Ribbeck are members of a collaborative research group that develops dynamic hydrogels for biointerfaces. Haag’s group has made mussel-like adhesives, as well as mucus-inspired liquids by producing microscopic, fiber-like polymers that are similar in structure to the natural mucin proteins.

For their new work, the researchers focused on a chemical motif that appears in mussel adhesives: a bond between two chemical groups known as “catechols” and “thiols.” In the mussel’s natural glue, or plaque, these groups combine to form catechol–thiol cross-links that contribute to the cohesive strength of the plaque. Catechols also enhance a mussel’s adhesion by binding to surfaces such as rocks and ship hulls.

Interestingly, thiol groups are also prevalent in mucin proteins. Degen wondered whether mussel-inspired polymers could link with mucin thiols, enabling the mucins to quickly turn from a liquid to a sticky gel.

To test this idea, he combined solutions of natural mucin proteins with synthetic mussel-inspired polymers and observed how the resulting mixture solidified and stuck to surfaces over time.

“It’s like a two-part epoxy. You combine two liquids together, and chemistry starts to occur so that the liquid solidifies while the substance is simultaneously gluing itself to the surface,” Degen says.

“Depending on how much cross-linking you have, we can control the speed at which the liquids gelate and adhere,” Haag adds. “We can do this all on wet surfaces, at room temperature, and under very mild conditions. This is what is quite unique.”

The team deposited a range of compositions between two surfaces and found that the resulting adhesive held the surfaces together, with forces comparable to the commercial medical adhesives used for bonding tissue. The researchers also tested the adhesive’s bacteria-blocking properties by depositing the gel onto glass surfaces and incubating them with bacteria overnight.

“We found if we had a bare glass surface without our coating, the bacteria formed a thick biofilm, whereas with our coating, biofilms were largely prevented,” Degen notes.

The team says that with a bit of tuning, they can further improve the adhesive’s hold. Then, the material could be a strong and protective alternative to existing medical adhesives.

“We are excited to have established a biomaterials design platform that gives us these desirable properties of gelation and adhesion, and as a starting point we’ve demonstrated some key biomedical applications,” Degen says. “We are now ready to expand into different synthetic and natural systems and target different applications.”

This research was funded, in part, by the U.S. National Institutes of Health, the U.S. National Science Foundation, and the U.S. Army Research Office.

Mixing beats, history, and technology

Mon, 02/17/2025 - 10:00am

In a classroom on the third floor of the MIT Media Lab, it’s quiet; the disc jockey is setting up. At the end of a conference table ringed with chairs, there are two turntables on either side of a mixer and a worn crossfader. A MacBook sits to the right of the setup.

Today’s class — CMS.303/803/21M.365 (DJ History, Technique, and Technology) — takes students to the 1970s, which means disco, funk, rhythm and blues, and the breaks that form the foundation of early hip-hop are in the mix. Instructor Philip Tan ’01, SM ’03 starts with a needle drop. Class is about to begin.

Tan is a research scientist with the MIT Game Lab — part of the Institute’s Comparative Media Studies/Writing (CMS/W) program. An accomplished DJ and founder of a DJ crew at MIT, he’s been teaching students classic turntable and mixing techniques since 1998. Tan is also a seasoned game designer whose specialties include digital, live-action, and tabletop games, in both production and management. But today’s focus is on two turntables, a mixer, and music.

“DJ’ing is about using the platter as a musical instrument,” Tan says as students begin filing into the classroom, “and creating a program for audiences to enjoy.”

Originally from Singapore, Tan arrived in the United States — first as a high school student in 1993, and later as an MIT student in 1997 — to study the humanities. He brought his passion for DJ culture with him.

“A high school friend in Singapore introduced DJ’ing to me in 1993,” he recalls. “We DJ’d a couple of school dances together and entered the same DJ competitions. Before that, though, I made mix tapes, pausing the cassette recorder while cuing up the next song on cassette, compact disc, or vinyl.”

Later, Tan wondered if his passion could translate into a viable course, exploring the idea over several years. “I wanted to find and connect with other folks on campus who might also be interested in DJ’ing,” he says. During MIT’s Independent Activities Period (IAP) in 2019, he led a four-week “Discotheque” lecture series at the Lewis Music Library, talking about vinyl records, DJ mixers, speakers, and digital audio. He also ran meetups for campus DJs in the MIT Music Production Collaborative.

“We couldn’t really do meetups and in-person performances during the pandemic, but I had the opportunity to offer a spring Experiential Learning Opportunity for MIT undergraduates, focused on DJ’ing over livestreams,” he says. The CMS/W program eventually let Tan expand the IAP course to a full-semester, full-credit course in spring 2023.

Showing students the basics

In the class, students learn the foundational practices necessary for live DJ mixing. They also explore a chosen contemporary or historical dance scene from around the world. The course investigates the evolution of DJ’ing and the technology used to make it possible. Students are asked to write and present their findings to the class based on historical research and interviews; create a mix tape showcasing their research into a historical development in dance music, mixing technique, or DJ technology; and end the semester with a live DJ event for the MIT community. Access to the popular course is granted via lottery.

“From circuits to signal processing, we have been able to see real-life uses of our course subjects in a fun and exciting way,” says Madeline Leano, a second-year student majoring in computer science and engineering and minoring in mathematics. “I’ve also always had a great love for music, and this class has already broadened my music taste as well as widened my appreciation for how music is produced.”

Leano lauded the class’s connections with her work in engineering and computer science. “[Tan] would always emphasize how all the parts of the mixing board work technically, which would come down to different electrical engineering and physics topics,” she notes. “It was super fun to see the overlap of our technical coursework with this class.”

During today’s class, Tan walks students through the evolution of the DJ’s tools, explaining how DJ’ing shifted alongside technological advances by the companies producing the equipment. Tan delves into the differences in hardware for disco and hip-hop DJs: the Bozak CMA-10-2DL mixer lacked a crossfader, for example, while the UREI 1620 music mixer was all knobs. Needs changed as the culture changed, Tan explains, and so did the DJ’s tools.

He’s also immersing the class in music and cultural history, discussing the foundations of disco and hip-hop in the early 1970s and the former’s reign throughout the decade while the latter grew alongside it. Club culture for members of the LGBTQ+ community, safe spaces for marginalized groups to dance and express themselves, and previously unheard stories from these folks are carefully excavated and examined at length.

“Studying meter, reviewing music history, and learning new skills”

Toward the end of the class, each student takes their place behind the turntables. They’re searching by feel for the ease with which Tan switches back and forth between two tracks, trying to get the right blend of beats so they don’t lose the crowd. You can see their confidence growing in real time as he patiently walks them through the process: find the groove, move between the two tracks, blend the beat. They come to understand that it’s harder than it might appear.

“I’m not looking for students to become expert scratchers,” Tan says. “We’re studying meter, reviewing music history, and learning new skills.”

“Philip is one of the coolest teachers I have had here at MIT!” Leano exclaims. “You can just tell from the way he holds himself in class how both knowledgeable and passionate he is about DJ history and technology.”

Watching Tan demonstrate techniques to students, it’s easy to appreciate the skill and dexterity necessary to both DJ well and to show others how it’s done. He’s steeped in the craft of DJ’ing, as comfortable with two turntables and a mixer as he is with a digital setup favored by DJs from other genres, like electronic dance music. Students, including Leano, note his skill, ability, and commitment.

“Any question that any classmate may have is always answered in such depth he seems like a walking dictionary,” she says. “Not to mention, he makes the class so interactive with us coming to the front and using the board, making sure everyone gets what is happening.”

Body of knowledge

Fri, 02/14/2025 - 9:45am

Inside MIT’s Zesiger Sports and Fitness Center, on the springy blue mat of the gymnastics room, an unconventional anatomy lesson unfolded during an October meeting of class STS.024/CMS.524 (Thinking on Your Feet: Dance as a Learning Science).

Supported by a grant from the MIT Center for Art, Science & Technology (CAST), Thinking on Your Feet was developed and offered for the first time in Fall 2024 by Jennifer S. Light, the Bern Dibner Professor of the History of Science and Technology and a professor of Urban Studies and Planning. Light’s vision for the class included a varied lineup of guest instructors. During the last week of October, she handed the reins to Middlebury College Professor Emerita Andrea Olsen, whose expertise bridges dance and science.

Olsen organized the class into small groups. Placing hands on each other’s shoulders conga-line style, participants shuffled across the mat personifying the layers of the nervous system as Olsen had just explained them: the supportive spinal cord and bossy brain of the central nervous system; the sympathetic nervous system responsible for fight-or-flight and its laid-back parasympathetic counterpart; and the literal “gut feelings” of the enteric nervous system. The groups giggled and stumbled as they attempted to stay in character and coordinate their movements.

Unusual as this exercise was, it perfectly suited a class dedicated to movement as a tool for teaching and learning. One of the class’s introductory readings, an excerpt from Annie Murphy Paul’s book “The Extended Mind,” suggests why this was a more effective primer on the nervous system than a standard lecture: “Our memory for what we have heard is remarkably weak. Our memory for what we have done, however — for physical actions we have undertaken — is much more robust.”

Head-to-toe education

Thinking on Your Feet is the third course spun out from Light’s Project on Embodied Education (the other two, developed in collaboration with MIT Director of Physical Education and Wellness Carrie Sampson Moore, examine the history of exercise in relation to schools and medicine, respectively). A historian of science and technology and historian of education for much of her career, Light refocused her scholarship on movement and learning after she’d begun training at Somerville’s Esh Circus Arts to counteract the stress of serving as department head. During her sabbatical a few years later, as part of Esh’s pre-professional program for aspiring acrobats, she took a series of dance classes spanning genres from ballet to hip-hop to Afro modern.

“I started playing with the idea that this is experiential learning — could I bring something like this back to MIT?” she recalls. “There’s a ton of interesting contemporary scientific research on cognition and learning as not just neck-up processes, but whole-body processes.”

Thinking on Your Feet provides an overview of recent scientific studies indicating the surprising extent to which physical activity enhances attention, memory, executive function, and other aspects of mental acuity. Other readings consider dance’s role in the transmission of knowledge throughout human history — from the Native Hawaiian tradition of hula to early forms of ballet in European courts — and describe the ways movement-based instruction can engage underserved populations and neurodiverse learners.

“You can argue for embodied learning on so many dimensions,” says Light. “I want my students to understand that what they’ve been taught about learning is only part of the story, and that contemporary science, ancient wisdom, and non-Western traditions all have a lot to tell us about how we might rethink education to maximize the benefits for all different kinds of students.”

Learning to dance

If you scan the new class’s syllabus, you’re unlikely to miss the word “fun.” It appears twice — bolded, in all caps, and garnished by an exclamation point.

“I’m trying to bring a playful, experimental, ‘you don’t have to be perfect, just be creative’ vibe,” says Light. A dance background is not a prerequisite. The 18 students who registered this fall ranged from experienced dancers to novices.

“I initially took this class just to fulfill my arts requirement,” admits junior physics major Matson Garza, one of the latter group. He was surprised at how much he enjoyed it. “I have an interest in physics education, and I’ve found that beyond introductory courses it’s often lacking intuition. Integrating movement may be one way to solve this problem.”

Similarly, second-year biological engineering major Annabel Tiong found her entry point through an interest in hands-on education, deepened after volunteering with a program that aims to spark curiosity about health-care careers by engaging kids in medical simulations. “While I don’t have an extensive background in dance,” she says, “I was curious how dance, with its free-form and creative nature, could be used to teach STEM topics that appear to be quite concrete and technical.”

To build on each Tuesday’s lectures and discussions, Thursday “lab” sessions focused on overcoming inhibitions, teaching different styles of movement, and connecting dance with academic content. McKersin of Lakaï Arts, a lecturer in dance for the MIT Music and Theater Arts section, led a lab on Haitian harvest dances; Guy Steele PhD ’80 and Clark Baker SM ’80 of the MIT Tech Squares club provided an intro to square dancing and some of its connections to math and programming. Light invited some of her own dance instructors from the circus community, including Johnny Blazes, who specializes (according to their website) in working with “people who have been told implicitly and explicitly that they don’t belong in movement and fitness spaces.” Another, Reba Rosenberg, led the students through basic partner acrobatics that Light says did wonders for the class’s sense of confidence and community.

“Afterwards, several students asked, ‘Could we do this again?’” remembers Light. “None of them thought they could do the thing that by the end of class they were able to do: balance on each other, stand on each other. You can imagine how the need to physically trust someone with your safety yields incredible benefits when we’re back in the classroom.”

Dancing to learn

The culmination of Thinking on Your Feet — a final project constituting 40 percent of students’ grades — required each student to create a dance-based lesson plan on a STEM topic of their choice. Students were exposed throughout the semester to examples of such pedagogy. Olsen’s nervous-system parade was one. Others came courtesy of Lewis Hou of Science Ceilidh, an organization that uses Scottish highland dance to illustrate concepts across the natural and physical sciences, and MIT alumna Yamilée Toussaint ’08, whose nonprofit STEM from Dance helps young women of color create performances with technical components.

As a stepping stone, Light had planned a midterm assignment asking students to adapt existing choreography. But her students surprised her by wanting to jump directly into creating their own dances from scratch. Those first forays weren’t elaborate, but Light was impressed enough by their efforts that she plans to amend the syllabus accordingly.

“One group was doing differential calculus and imagining the floor as a graph,” she recalls, “having dancers think about where they were in relation to each other.” Another group, comprising members of the MIT Ballroom Dance team, choreographed the computer science concept of pipelined processors. “They were giving commands to each other like ‘load’ and ‘execute’ and ‘write back,’” Light says. “The beauty of this is that the students could offer each other feedback on the technical piece of it. Like, ‘OK, I see that you’re trying to explain a clock cycle. Maybe try to do it this way.’”

Among the pipelined processing team was senior Kateryna Morhun, a competitive ballroom dancer since age 4 who is earning her degree in artificial intelligence and decision-making. “We wanted to challenge ourselves to teach a specialized, more technical topic that isn’t usually a target of embodied learning initiatives,” Morhun says.

How useful can dance really be in teaching advanced academic content? This was a lively topic of debate among the Thinking on Your Feet cohort. It’s a question Light intends to investigate further with mechanical engineering lecturer Benita Comeau, who audited the class and offered a lab exploring the connections among dance, physics, and martial arts.

“This class sparked many ideas for me, across multiple subject matters and movement styles,” says Comeau. “As an example, the square dance class reminded me of the symmetry groups that are used to describe molecular symmetry in chemistry, and it occurred to me that students could move through symmetry groups and learn about chirality” — a geometric property relevant to numerous branches of science.

For their final presentation, Garza and Tiong’s group tackled substitution mechanisms, a topic from organic chemistry (“notoriously viewed as a very difficult and dreaded class,” according to their write-up). Their lesson plan specified that learners would first need to familiarize themselves with key points through conventional readings and discussion. But then, to bring that material alive, groups of learners representing atoms would take the floor. One, portraying a central carbon atom, would hold out an arm indicating readiness to accept an electron. Another would stand to the side with two balls representing electrons, bonded by a ribbon. Others would rotate in a predetermined order around the central carbon to portray a model’s initial stereochemistry. And so a dance would begin: a three-dimensional, human-scale visualization of a complex chemical process.

The group was asked to summarize what they hoped learners would discover through their dance. “Chemistry is very dynamic!” they wrote. “It’s not mixing chemicals to magically make new ones — it’s a dynamic process of collision, bonding, and molecule-breaking that causes some structures to vanish and others to appear.”

In addition to evaluating the impact of movement in her classes in collaboration with Raechel Soicher from the MIT Teaching + Learning Lab, Light is working on a book about how modern science has rediscovered the ancient wisdom of embodied learning. She hopes her class will kick off a conversation at MIT about incorporating such movement-assisted insights into the educational practices of the future. In fact, she believes MIT’s heritage of innovative pedagogy makes it ripe for these explorations.

As her syllabus puts it: “For all of us, as part of the MIT community, this class invites us to reconsider how our ‘mind and hand’ approach to experiential learning — a product of the 19th century — might be expanded to ‘mind and body’ for the 21st century.”

AI model deciphers the code in proteins that tells them where to go

Thu, 02/13/2025 - 5:10pm

Proteins are the workhorses that keep our cells running, and there are many thousands of types of proteins in our cells, each performing a specialized function. Researchers have long known that the structure of a protein determines what it can do. More recently, researchers are coming to appreciate that a protein’s localization is also critical for its function.

Cells are full of compartments that help to organize their many denizens. Along with the well-known organelles that adorn the pages of biology textbooks, these spaces also include a variety of dynamic, membrane-less compartments that concentrate certain molecules together to perform shared functions. Knowing where a given protein localizes, and who it co-localizes with, can therefore be useful for better understanding that protein and its role in the healthy or diseased cell, but researchers have lacked a systematic way to predict this information.

Meanwhile, protein structure has been studied for over half a century, culminating in the artificial intelligence tool AlphaFold, which can predict protein structure from a protein’s amino acid code, the linear string of building blocks within it that folds to create its structure. AlphaFold and models like it have become widely used tools in research.

Proteins also contain regions of amino acids that do not fold into a fixed structure, but are instead important for helping proteins join dynamic compartments in the cell. MIT Professor Richard Young and colleagues wondered whether the code in those regions could be used to predict protein localization in the same way that other regions are used to predict structure. Other researchers have discovered some protein sequences that code for protein localization, and some have begun developing predictive models for protein localization. However, researchers did not know whether a protein’s localization to any dynamic compartment could be predicted based on its sequence, nor did they have a tool comparable to AlphaFold for predicting localization.

Now, Young, also a member of the Whitehead Institute for Biological Research; Young lab postdoc Henry Kilgore; Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in MIT's Department of Electrical Engineering and Computer Science and principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and colleagues have built such a model, which they call ProtGPS. In a paper published on Feb. 6 in the journal Science, with first authors Kilgore and Barzilay lab graduate students Itamar Chinn, Peter Mikhael, and Ilan Mitnikov, the cross-disciplinary team debuts their model. The researchers show that ProtGPS can predict to which of 12 known types of compartments a protein will localize, as well as whether a disease-associated mutation will change that localization. Additionally, the research team developed a generative algorithm that can design novel proteins to localize to specific compartments.

“My hope is that this is a first step towards a powerful platform that enables people studying proteins to do their research,” Young says, “and that it helps us understand how humans develop into the complex organisms that they are, how mutations disrupt those natural processes, and how to generate therapeutic hypotheses and design drugs to treat dysfunction in a cell.”

The researchers also validated many of the model’s predictions with experimental tests in cells.

“It really excited me to be able to go from computational design all the way to trying these things in the lab,” Barzilay says. “There are a lot of exciting papers in this area of AI, but 99.9 percent of those never get tested in real systems. Thanks to our collaboration with the Young lab, we were able to test, and really learn how well our algorithm is doing.”

Developing the model

The researchers trained and tested ProtGPS on two batches of proteins with known localizations. They found that it could correctly predict where proteins end up with high accuracy. The researchers also tested how well ProtGPS could predict changes in protein localization based on disease-associated mutations within a protein. Many mutations — changes to the sequence for a gene and its corresponding protein — have been found to contribute to or cause disease based on association studies, but the ways in which the mutations lead to disease symptoms remain unknown.

Figuring out the mechanism for how a mutation contributes to disease is important because then researchers can develop therapies to fix that mechanism, preventing or treating the disease. Young and colleagues suspected that many disease-associated mutations might contribute to disease by changing protein localization. For example, a mutation could make a protein unable to join a compartment containing essential partners.

They tested this hypothesis by feeding ProtGPS more than 200,000 proteins with disease-associated mutations, asking it to predict where each mutated protein would localize, and then measuring how much its prediction changed from the normal to the mutated version of a given protein. A large shift in the prediction indicates a likely change in localization.

The researchers found many cases in which a disease-associated mutation appeared to change a protein’s localization. They tested 20 examples in cells, using fluorescence to compare where in the cell a normal protein and the mutated version of it ended up. The experiments confirmed ProtGPS’s predictions. Altogether, the findings support the researchers’ suspicion that mis-localization may be an underappreciated mechanism of disease, and demonstrate the value of ProtGPS as a tool for understanding disease and identifying new therapeutic avenues.
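The screening logic described above can be sketched in a few lines: compare a model’s predicted compartment distribution for the normal and mutated versions of a protein, and flag mutations that cause a large shift. The function names, the three-compartment list, the mutation IDs, and the threshold below are illustrative stand-ins, not the actual ProtGPS interface (the paper’s model covers 12 compartment types).

```python
# Hypothetical sketch of mutation screening by predicted-localization shift.
# Compartment list, names, and threshold are illustrative, not ProtGPS's API.
COMPARTMENTS = ["nucleolus", "nuclear_speckle", "stress_granule"]

def localization_shift(p_normal, p_mutant):
    """Total-variation distance between two predicted compartment distributions."""
    return 0.5 * sum(abs(p_normal[c] - p_mutant[c]) for c in COMPARTMENTS)

def flag_relocalizing_mutations(predictions, threshold=0.5):
    """predictions maps a mutation ID to (normal, mutant) predicted distributions."""
    return [mut for mut, (p_n, p_m) in predictions.items()
            if localization_shift(p_n, p_m) > threshold]

# Toy predictions: the first mutation moves the protein's predicted home
# from the nucleolus to stress granules; the second barely changes it.
preds = {
    "mutA": ({"nucleolus": 0.8, "nuclear_speckle": 0.1, "stress_granule": 0.1},
             {"nucleolus": 0.1, "nuclear_speckle": 0.2, "stress_granule": 0.7}),
    "mutB": ({"nucleolus": 0.7, "nuclear_speckle": 0.2, "stress_granule": 0.1},
             {"nucleolus": 0.65, "nuclear_speckle": 0.25, "stress_granule": 0.1}),
}
print(flag_relocalizing_mutations(preds))  # → ['mutA']
```

In this toy run, only the first mutation crosses the shift threshold and would be carried forward for experimental follow-up, mirroring the 20 candidates the team tested in cells.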

“The cell is such a complicated system, with so many components and complex networks of interactions,” Mitnikov says. “It’s super interesting to think that with this approach, we can perturb the system, see the outcome of that, and so drive discovery of mechanisms in the cell, or even develop therapeutics based on that.”

The researchers hope that others begin using ProtGPS in the same way that they use predictive structural models like AlphaFold, advancing various projects on protein function, dysfunction, and disease.

Moving beyond prediction to novel generation

The researchers were excited about the possible uses of their prediction model, but they also wanted their model to go beyond predicting localizations of existing proteins, and allow them to design completely new proteins. The goal was for the model to make up entirely new amino acid sequences that, when formed in a cell, would localize to a desired location. Generating a novel protein that can actually accomplish a function — in this case, the function of localizing to a specific cellular compartment — is incredibly difficult. In order to improve their model’s chances of success, the researchers constrained their algorithm to only design proteins like those found in nature. This is an approach commonly used in drug design, for logical reasons; nature has had billions of years to figure out which protein sequences work well and which do not.

Because of the collaboration with the Young lab, the machine learning team was able to test whether their protein generator worked. The model had good results. In one round, it generated 10 proteins intended to localize to the nucleolus. When the researchers tested these proteins in the cell, they found that four of them strongly localized to the nucleolus, and others may have had slight biases toward that location as well.

“The collaboration between our labs has been so generative for all of us,” Mikhael says. “We’ve learned how to speak each other’s languages, in our case learned a lot about how cells work, and by having the chance to experimentally test our model, we’ve been able to figure out what we need to do to actually make the model work, and then make it work better.”

Being able to generate functional proteins in this way could improve researchers’ ability to develop therapies. For example, if a drug must interact with a target that localizes within a certain compartment, then researchers could use this model to design a drug to also localize there. This should make the drug more effective and decrease side effects, since the drug will spend more time engaging with its target and less time interacting with other molecules, which can cause off-target effects.

The machine learning team members are enthused about the prospect of using what they have learned from this collaboration to design novel proteins with other functions beyond localization, which would expand the possibilities for therapeutic design and other applications.

“A lot of papers show they can design a protein that can be expressed in a cell, but not that the protein has a particular function,” Chinn says. “We actually had functional protein design, and a relatively huge success rate compared to other generative models. That’s really exciting to us, and something we would like to build on.”

All of the researchers involved see ProtGPS as an exciting beginning. They anticipate that their tool will be used to learn more about the roles of localization in protein function and mis-localization in disease. In addition, they are interested in expanding the model’s localization predictions to include more types of compartments, testing more therapeutic hypotheses, and designing increasingly functional proteins for therapies or other applications.

“Now that we know that this protein code for localization exists, and that machine learning models can make sense of that code and even create functional proteins using its logic, that opens up the door for so many potential studies and applications,” Kilgore says.

Engineers enable a drone to determine its position in the dark and indoors

Thu, 02/13/2025 - 12:00am

In the future, autonomous drones could be used to shuttle inventory between large warehouses. A drone might fly into a semi-dark structure the size of several football fields, zipping along hundreds of identical aisles before docking at the precise spot where its shipment is needed.

Most of today’s drones would likely struggle to complete this task, since drones typically navigate outdoors using GPS, which doesn’t work in indoor environments. For indoor navigation, some drones employ computer vision or lidar, but both techniques are unreliable in dark environments or rooms with plain walls or repetitive features.

MIT researchers have introduced a new approach that enables a drone to self-localize, or determine its position, in indoor, dark, and low-visibility environments. Self-localization is a key step in autonomous navigation.

The researchers developed a system called MiFly, in which a drone uses radio frequency (RF) waves, reflected by a single tag placed in its environment, to autonomously self-localize.

Because MiFly enables self-localization with only one small tag, which could be affixed to a wall like a sticker, it would be cheaper and easier to implement than systems that require multiple tags. In addition, since the MiFly tag reflects signals sent by the drone, rather than generating its own signal, it can be operated with very low power.

Two off-the-shelf radars mounted on the drone enable it to localize in relation to the tag. Those measurements are fused with data from the drone’s onboard computer, which enables it to estimate its trajectory.

The researchers conducted hundreds of flight experiments with real drones in indoor environments, and found that MiFly consistently localized the drone to within 7 centimeters.

“As our understanding of perception and computing improves, we often forget about signals that are beyond the visible spectrum. Here, we’ve looked beyond GPS and computer vision to millimeter waves, and by doing so, we’ve opened up new capabilities for drones in indoor environments that were not possible before,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of a paper on MiFly.

Adib is joined on the paper by co-lead authors and research assistants Maisy Lam and Laura Dodds; Aline Eid, a former postdoc who is now an assistant professor at the University of Michigan; and Jimmy Hester, CTO and co-founder of Atheraxon, Inc. The research will be presented at the IEEE Conference on Computer Communications.

Backscattered signals

To enable drones to self-localize within dark, indoor environments, the researchers decided to utilize millimeter wave signals. Millimeter waves, which are commonly used in modern radars and 5G communication systems, work in the dark and can travel through everyday materials like cardboard, plastic, and interior walls.

They set out to create a system that could work with just one tag, so it would be cheaper and easier to implement in commercial environments. To ensure the device remained low power, they designed a backscatter tag that reflects millimeter wave signals sent by a drone’s onboard radar. The drone uses those reflections to self-localize.

But the drone’s radar would receive signals reflected from all over the environment, not just the tag. The researchers surmounted this challenge with a technique called modulation: they configured the tag to shift the signal it scatters back to the drone by a small, known frequency.

“Now, the reflections from the surrounding environment come back at one frequency, but the reflections from the tag come back at a different frequency. This allows us to separate the responses and just look at the response from the tag,” Dodds says.
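Dodds’s point can be illustrated numerically: if the tag offsets its reflection by a known modulation frequency, the tag’s response lands in an isolated spectral bin, well away from the static clutter near 0 Hz. The sample rate, modulation frequency, and amplitudes below are made-up illustration values, not MiFly’s actual radar parameters.

```python
# Toy simulation of backscatter modulation: the tag's reflection is offset
# by f_mod, so a Fourier transform cleanly separates it from static clutter.
# All signal parameters are illustrative, not MiFly's radar settings.
import numpy as np

np.random.seed(0)                 # reproducible noise
fs = 10_000                       # sample rate, Hz
f_mod = 1_000                     # tag's modulation frequency, Hz
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of received signal

clutter = 2.0 * np.ones_like(t)                # static environment reflections (near DC)
tag = 0.5 * np.cos(2 * np.pi * f_mod * t)      # tag reflection, offset by f_mod
received = clutter + tag + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# The strongest non-DC component sits at the tag's modulation frequency,
# so the tag's response can be read out while ignoring clutter at 0 Hz.
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(peak)  # → 1000.0
```

Even with the clutter forty times stronger than the tag’s echo, the modulated response stands out unambiguously in the spectrum, which is the separation Dodds describes.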

However, with just one tag and one radar, the researchers could only calculate distance measurements. They needed multiple signals to compute the drone’s location.

Rather than using more tags, they added a second radar to the drone, mounting one horizontally and one vertically. The horizontal radar sends horizontally polarized signals, meaning the waves’ electric field oscillates horizontally, while the vertical radar sends vertically polarized signals.

They incorporated polarization into the tag’s antennas so it could isolate the separate signals sent by each radar.

“Polarized sunglasses receive a certain polarization of light and block out other polarizations. We applied the same concept to millimeter waves,” Lam explains.

In addition, they applied different modulation frequencies to the vertical and horizontal signals, further reducing interference.

Precise location estimation

This dual-polarization and dual-modulation architecture gives the drone’s spatial location. But drones also move at an angle and rotate, so to enable a drone to navigate, it must estimate its position in space with respect to six degrees of freedom — with trajectory data including pitch, yaw, and roll in addition to the usual forward/backward, left/right, and up/down.

“The drone rotation adds a lot of ambiguity to the millimeter wave estimates. This is a big problem because drones rotate quite a bit as they are flying,” Dodds says.

They overcame these challenges by utilizing the drone’s onboard inertial measurement unit, a sensor that measures acceleration as well as changes in altitude and attitude. By fusing this information with the millimeter wave measurements reflected by the tag, they enable MiFly to estimate the full six-degree-of-freedom pose of the drone in only a few milliseconds.
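The fusion idea can be illustrated with a toy one-dimensional example: dead-reckoning with a biased accelerometer drifts over time, while blending in the radar-tag range measurements keeps the estimate anchored. This complementary-filter sketch, with made-up gains and rates, illustrates only the general principle, not MiFly’s actual six-degree-of-freedom algorithm.

```python
# Toy 1-D sensor fusion: integrate IMU acceleration for a fast estimate,
# and gently correct the accumulated drift with radar-tag range fixes.
# Gains, rates, and the bias value are illustrative, not MiFly's.
def fuse(imu_accels, radar_ranges, dt=0.01, alpha=0.1):
    """Blend dead-reckoned IMU motion with radar range measurements."""
    pos, vel = radar_ranges[0], 0.0
    estimates = []
    for a, z in zip(imu_accels, radar_ranges):
        vel += a * dt             # integrate IMU acceleration
        pos += vel * dt           # dead-reckoned position (drifts if 'a' is biased)
        pos += alpha * (z - pos)  # nudge the estimate toward the radar fix
        estimates.append(pos)
    return estimates

# Drone hovering 2 m from the tag with a biased accelerometer: integrating
# the IMU alone would drift by more than a meter over these 5 seconds.
imu = [0.1] * 500                 # constant 0.1 m/s^2 accelerometer bias
radar = [2.0] * 500
est = fuse(imu, radar)
print(round(est[-1], 2))          # stays close to 2 m despite the bias
```

The IMU supplies the fast, smooth updates between radar fixes, while the tag measurements prevent unbounded drift, which is the complementary role the two sensors play in MiFly’s pose estimation.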

They tested a MiFly-equipped drone in several indoor environments, including their lab, the flight space at MIT, and the dim tunnels beneath the campus buildings. The system achieved high accuracy consistently across all environments, localizing the drone to within 7 centimeters in many experiments.

In addition, the system was nearly as accurate in situations where the tag was blocked from the drone’s view. They achieved reliable localization estimates up to 6 meters from the tag.

That distance could be extended in the future with the use of additional hardware, such as high-power amplifiers, or by improving the radar and antenna design. The researchers also plan to conduct further research by incorporating MiFly into an autonomous navigation system. This could enable a drone to decide where to fly and execute a flight path using millimeter wave technology.

“The infrastructure and localization algorithms we build up for this work are a strong foundation to go on and make them more robust to enable diverse commercial applications,” Lam says.

This research is funded, in part, by the National Science Foundation and the MIT Media Lab.
