MIT Latest News

New postdoctoral fellowship program to accelerate innovation in health care
The MIT Health and Life Sciences Collaborative (MIT HEALS) is launching the Biswas Postdoctoral Fellowship Program to advance the work of outstanding early-career researchers in health and life sciences. Supported by a gift from the Biswas Family Foundation, the program aims to help apply cutting-edge research to improve health care and the lives of millions.
The program will support exceptional postdocs dedicated to innovation in human health care across a full range of pathways, such as leveraging AI in health-related research, developing low-cost diagnostics, and exploring the convergence of life sciences with areas such as economics, business, policy, and the humanities. With initial funding of $12 million, five four-year fellowships will be awarded in each of the next four years, starting in early 2026.
“An essential goal of MIT HEALS is to find new ways and opportunities to deliver health care solutions at scale, and the Biswas Family Foundation shares our commitment to scalable innovation and broad impact. MIT is also in the talent business, and the foundation’s gift allows us to bring exceptional scholars to campus to explore some of the most pressing issues in human health and build meaningful connections across academia and industry. We look forward to welcoming the first cohort of Biswas Fellows to MIT,” says MIT President Sally Kornbluth.
“We are deeply honored to launch this world-class postdoctoral fellows program,” adds Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer and head of MIT HEALS. “We fully expect to attract top candidates from around the globe to lead innovative cross-cutting projects in AI and health, cancer therapies, diagnostics, and beyond. These fellows will be selected through a rigorous process overseen by a distinguished committee, and will have the opportunity to collaborate with our faculty on the most promising and impactful ideas.”
Angela Koehler, faculty lead of MIT HEALS, professor in MIT’s Department of Biological Engineering, and associate director of the Koch Institute for Integrative Cancer Research, emphasized that the objectives of MIT HEALS align well with a stated goal of the Biswas Family Foundation: to leverage “scientific and technological advancements to revolutionize health care and make a lasting impact on global public health.”
“Health care is a team sport,” Koehler says. “MIT HEALS seeks to create connections involving investigators with diverse expertise across the Institute to tackle the most transformative problems impacting human health. Members of the MIT community are well poised to participate in teams and make an impact.”
MIT HEALS also seeks to maximize its effectiveness by expanding collaboration with medical schools and hospitals, starting with defining important problems that can be approached through research, and continuing all the way to clinical studies, Koehler says.
The Biswas Family Foundation has already demonstrated a similar strategy.
“The Biswas family has a history of enabling connections and partnerships between institutions that each bring a piece to the puzzle,” Koehler says. “This could be a dataset, an algorithm, an agent, a technology platform, or patients.”
Hope Biswas, co-founder of the Biswas Family Foundation with her husband, MIT alumnus Sanjit Biswas SM ’05, also highlighted the synergies between the foundation and MIT.
“The Biswas Family Foundation is proud to support the MIT HEALS initiative, which reimagines how scientific discovery can translate into real-world health impact. Its focus on promoting interdisciplinary collaboration to find new solutions to challenges in health care aligns closely with our mission to advance science and technology to improve health outcomes at scale,” Biswas says.
“As part of this commitment,” Biswas adds, “we are especially proud to support outstanding postdoctoral scholars focused on high-impact cross-disciplinary work in fields such as computational biology, nanoscale therapeutics, women’s health, and fundamental, curiosity-driven life sciences research. We are excited to contribute to an effort that brings together cutting-edge science and a deep commitment to translating knowledge into action.”
AI and machine-learning systems present a new universe of opportunities to investigate disease, biological mechanisms, therapeutics, and health care delivery using huge datasets.
“AI and computational systems biology can improve the accuracy of diagnostic approaches, enable the development of precision medicines, improve choices related to individualized treatment strategy, and improve operational efficiency within health care systems,” says Koehler. “Sanjit and Hope’s support of broad initiatives in AI and computational systems biology will help MIT researchers explore a variety of paths to impact human health on a large scale.”
Frontiers in health-related research are increasingly found where diverse fields converge, and Koehler provides the example of how advances in high-throughput experimentation to develop large datasets “may couple well with the development of new computation or AI tools.” She adds that the four-year funding term provided by the postdoctoral fellowship is “long enough to enable fellows to think big and take on projects at interfaces, emerging as bilingual researchers at the end of the program.”
Chandrakasan sees potential in the program for the Biswas Fellows to make revolutionary progress in health research.
“I’m incredibly grateful to the Biswas Family Foundation for their generous support in enabling transformative research at MIT,” Chandrakasan says.
Exploring data and its influence on political behavior
Data and politics are becoming increasingly intertwined. Today’s political campaigns and voter mobilization efforts are thoroughly data-driven, and voters, pollsters, and elected officials rely on data to make choices with local, regional, and national impacts.
A Department of Political Science course offers students tools to help make sense of these choices and their outcomes.
In class 17.831 (Data and Politics), students are introduced to the principles and practices necessary to understand electoral and other types of political behavior. Taught by associate professor of political science Daniel Hidalgo, the course has students use real-world datasets to explore topics like election polling and prediction, voter turnout, voter targeting, and shifts in public opinion over time.
The course aims to equip students to describe why and how the use of data and statistical methods has changed electoral politics, to understand the basic principles of social science statistics, and to analyze data using modern statistical computing tools. The capstone is an original project involving the collection, analysis, and interpretation of survey data of the kind used in modern campaigns.
“I wanted to create an applied, practice-based course that would appeal to undergraduates and provide a foundation for parsing, understanding, and reporting on large datasets in politics,” says Hidalgo, who redesigned the course for the spring 2025 semester.
Hidalgo, who also works in the Political Methodology Lab at MIT, investigates the political economy of elections, campaigns, and representation in developing democracies, especially in Latin America, as well as quantitative methods in the social sciences.
Politics and modernity
The influence of, and access to, artificial intelligence and large language models make a course like Data and Politics even more important, Hidalgo says. “You have to understand the people at the other end of the data,” he argues.
The course also centers the human element in politics, exploring conflict and bias, their structures, and their impacts, while also working to improve information literacy and coherent storytelling.
“Data analysis and collection will never be perfect,” Hidalgo says. “But analyzing and understanding who holds which ideas, and why, and using the information to tell a coherent story is valuable in politics and elsewhere.”
The “always on” nature of news and related content, coupled with the variety of communications channels available to voters, has increased the complexity of the data collection process in polling and campaigns. “In the past, people would answer the phone when you called their homes,” Hidalgo notes, describing analog methods previously used to collect voter data. Now, political scientists, data analysts, and others must contend with the availability of streaming content, mobile devices, and other channels comprising a vast, fractured media ecosystem.
The course opens a window into what happens behind the scenes of local and national political campaigns, which appealed to second-year political science major Jackson Hamilton. “I took this class hoping to expand my ability to use coding for political science applications, and in order to better understand how political models and predictions work,” he says.
“We tailor-made our own sets of questions and experimental designs that we thought would be interesting,” Hamilton adds. “I found that political issues that get a lot of media coverage are not necessarily the same issues which divide lawmakers, at least locally.”
Transparency and accountability in politics and other areas
Teaching students to use tools like polling and data analysis effectively can improve their ability to identify and combat disinformation and misinformation. “As a political scientist, I’m substantively engaged,” Hidalgo says, “and I’d like to help others be engaged, too.”
“There’s lots of data available, and this course provides a foundation and the resources necessary to understand and visualize it,” Hidalgo continues. “The ability to design, implement, and understand surveys has value inside and outside the classroom.”
In politics, Hidalgo believes equipping students to navigate these spaces effectively can potentially improve and increase civic engagement. Data, he says, can help defend ideas. “There’s so much information, it’s important to develop the skills and abilities necessary to understand and visualize it,” he says. “This has value for everyone.”
Second-year physics major Sean Wilson, who also took the class this spring, notes the value of data visualization and analysis both as a potential physicist and a voter. “Data analysis in both politics and in physics is essential work given that voting tendencies, public opinion, and government leadership change so often in the United States,” he says, “and that modeling can be used to support physical hypotheses and improve our understanding of how things work.”
For Wilson, the course can help anyone interested in understanding large groups’ behaviors. “Political scientists are constantly working to better understand how and why certain events occur in U.S. politics, and data analysis is an effective tool for doing so,” he says. “Members of a representative democracy can make better decisions with this kind of information.”
Hamilton, meanwhile, learned more about the behind-the-scenes machinery at work in electoral politics. “I had the opportunity to create a couple of budget trade-off questions, to get a sense of what people actually thought the government should spend money on when they had to make choices,” he says.
“Computer science and data science aren’t just useful for STEM applications; data science approaches can also be extremely useful in many social sciences,” Hamilton argues.
“[Hidalgo helped me realize] that I needed to understand and use data science approaches to gain a deeper understanding of my areas of interest,” Hamilton says. “He focuses on how different approaches in coding can be applied to different types of problems in political science.”
Study shows how a common fertilizer ingredient benefits plants
Lanthanides are a class of rare earth elements that in many countries are added to fertilizer as micronutrients to stimulate plant growth. But little is known about how they are absorbed by plants or influence photosynthesis, potentially leaving their benefits untapped.
Now, researchers from MIT have shed light on how lanthanides move through and operate within plants. These insights could help farmers optimize their use to grow some of the world’s most popular crops.
Published today in the Journal of the American Chemical Society, the study shows that a single nanoscale dose of lanthanides applied to seeds can make some of the world’s most common crops more resilient to UV stress. The researchers also uncovered the chemical processes by which lanthanides interact with the chlorophyll pigments that drive photosynthesis, showing that different lanthanide elements strengthen chlorophyll by replacing the magnesium at its center.
“This is a first step to better understand how these elements work in plants, and to provide an example of how they could be better delivered to plants, compared to simply applying them in the soil,” says Associate Professor Benedetto Marelli, who conducted the research with postdoc Giorgio Rizzo. “This is the first example of a thorough study showing the effects of lanthanides on chlorophyll, and their beneficial effects to protect plants from UV stress.”
Inside plant connections
Certain lanthanides are used as contrast agents in MRI and for applications including light-emitting diodes, solar cells, and lasers. Over the last 50 years, lanthanides have become increasingly used in agriculture to enhance crop yields, with China alone applying lanthanide-based fertilizers to nearly 4 million hectares of land each year.
“Lanthanides have been considered for a long time to be biologically irrelevant, but that’s changed in agriculture, especially in China,” says Rizzo, the paper’s first author. “But we largely don’t know how lanthanides work to benefit plants — nor do we understand their uptake mechanisms from plant tissues.”
Recent studies have shown that low concentrations of lanthanides can promote plant growth, root elongation, hormone synthesis, and stress tolerance, but higher doses can harm plants. Striking the right balance has been hard because of our limited understanding of how lanthanides are absorbed by plants and how they interact with the soil around roots.
For the study, the researchers leveraged seed coating and treatment technologies they previously developed to investigate the way the plant pigment chlorophyll interacts with lanthanides, both inside and outside of plants. Up until now, researchers haven’t been sure whether chlorophyll interacts with lanthanide ions at all.
Chlorophyll drives photosynthesis, but the pigments lose their ability to efficiently absorb light when the magnesium ion at their core is removed. The researchers discovered that lanthanides can fill that void, helping chlorophyll pigments partially recover their optical properties in a process known as re-greening.
“We found that lanthanides can boost several parameters of plant health,” Marelli says. “They mostly accumulate in the roots, but a small amount also makes its way to the leaves, and some of the new chlorophyll molecules made in leaves have lanthanides incorporated in their structure.”
This study also offers the first experimental evidence that lanthanides can increase plant resilience to UV stress, something the researchers say was completely unexpected.
“Chlorophylls are very sensitive pigments,” Rizzo says. “They can convert light to energy in plants, but when they are isolated from the cell structure, they rapidly hydrolyze and degrade. However, in the form with lanthanides at their center, they are pretty stable, even after extracting them from plant cells.”
Using several spectroscopic techniques, the researchers found that the benefits held across a range of staple crops, including chickpea, barley, corn, and soybeans.
The findings could be used to boost crop yield and increase the resilience of some of the world’s most popular crops to extreme weather.
“As we move into an environment where extreme heat and extreme climate events are more common, and particularly where we can have prolonged periods of sun in the field, we want to provide new ways to protect our plants,” Marelli says. “There are existing agrochemicals that can be applied to leaves for protecting plants from stressors such as UV, but they can be toxic, increase microplastics, and can require multiple applications. This could be a complementary way to protect plants from UV stress.”
Identifying new applications
The researchers also found that larger lanthanide elements like lanthanum were more effective at strengthening chlorophyll pigments than smaller ones. Lanthanum is considered a low-value byproduct of rare earths mining, and can become a burden to the rare earth element (REE) supply chain due to the need to separate it from more desirable rare earths. Increasing the demand for lanthanum could diversify the economics of REEs and improve the stability of their supply chain, the scientists suggest.
“This study shows what we could do with these lower-value metals,” Marelli says. “We know lanthanides are extremely useful in electronics, magnets, and energy. In the U.S., there’s a big push to recycle them. That’s why for the plant studies, we focused on lanthanum, being the most abundant, cheapest lanthanide ion.”
Moving forward, the team plans to explore how lanthanides work with other biological molecules, including proteins in the human body.
In agriculture, the team hopes to scale up its research to include field and greenhouse studies to continue testing the results of UV resilience on different crop types and in experimental farm conditions.
“Lanthanides are already widely used in agriculture,” Rizzo says. “We hope this study provides evidence that allows more conscious use of them and also a new way to apply them through seed treatments.”
The research was supported by the MIT Climate Grand Challenge and the Office of Naval Research.
Robotic probe quickly measures key properties of new materials
Scientists are striving to discover new semiconductor materials that could boost the efficiency of solar cells and other electronics. But the pace of innovation is bottlenecked by the speed at which researchers can manually measure important material properties.
A fully autonomous robotic system developed by MIT researchers could speed things up.
Their system uses a robotic probe to measure an important electrical property known as photoconductance: how electrically responsive a material is to the presence of light.
The researchers inject materials-science-domain knowledge from human experts into the machine-learning model that guides the robot’s decision making. This enables the robot to identify the best places to contact a material with the probe to gain the most information about its photoconductance, while a specialized planning procedure finds the fastest way to move between contact points.
During a 24-hour test, the fully autonomous robotic probe took more than 125 unique measurements per hour, with more precision and reliability than other artificial intelligence-based methods.
By dramatically increasing the speed at which scientists can characterize important properties of new semiconductor materials, this method could spur the development of solar panels that produce more electricity.
“I find this paper to be incredibly exciting because it provides a pathway for autonomous, contact-based characterization methods. Not every important property of a material can be measured in a contactless way. If you need to make contact with your sample, you want it to be fast and you want to maximize the amount of information that you gain,” says Tonio Buonassisi, professor of mechanical engineering and senior author of a paper on the autonomous system.
His co-authors include lead author Alexander (Aleks) Siemenn, a graduate student; postdocs Basita Das and Kangyu Ji; and graduate student Fang Sheng. The work appears today in Science Advances.
Making contact
Since 2018, researchers in Buonassisi’s laboratory have been working toward a fully autonomous materials discovery laboratory. They’ve recently focused on discovering new perovskites, which are a class of semiconductor materials used in photovoltaics like solar panels.
In prior work, they developed techniques to rapidly synthesize and print unique combinations of perovskite material. They also designed imaging-based methods to determine some important material properties.
But photoconductance is most accurately characterized by placing a probe onto the material, shining a light, and measuring the electrical response.
“To allow our experimental laboratory to operate as quickly and accurately as possible, we had to come up with a solution that would produce the best measurements while minimizing the time it takes to run the whole procedure,” says Siemenn.
Doing so required integrating machine learning, robotics, and materials science into one autonomous system.
To begin, the robotic system uses its onboard camera to take an image of a slide with perovskite material printed on it.
Then it uses computer vision to cut that image into segments, which are fed into a neural network model that has been specially designed to incorporate domain expertise from chemists and materials scientists.
“These robots can improve the repeatability and precision of our operations, but it is important to still have a human in the loop. If we don’t have a good way to implement the rich knowledge from these chemical experts into our robots, we are not going to be able to discover new materials,” Siemenn adds.
The model uses this domain knowledge to determine the optimal points for the probe to contact based on the shape of the sample and its material composition. These contact points are fed into a path planner that finds the most efficient way for the probe to reach all points.
The adaptability of this machine-learning approach is especially important because the printed samples have unique shapes, from circular drops to jellybean-like structures.
“It is almost like measuring snowflakes — it is difficult to get two that are identical,” Buonassisi says.
Once the path planner finds the shortest path, it sends signals to the robot’s motors, which manipulate the probe and take measurements at each contact point in rapid succession.
Key to the speed of this approach is the self-supervised nature of the neural network model. The model determines optimal contact points directly on a sample image — without the need for labeled training data.
The researchers also accelerated the system by enhancing the path planning procedure. They found that adding a small amount of noise, or randomness, to the algorithm helped it find the shortest path.
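The paper’s exact planner isn’t reproduced here, but the core idea described above — perturbing a greedy route with a small amount of randomness and keeping the best result — can be sketched as follows (hypothetical contact points and function names; a sketch, not the authors’ implementation):

```python
import math
import random

def tour_length(points, order):
    """Total Euclidean length of visiting the points in the given order."""
    return sum(math.dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def noisy_greedy_tour(points, restarts=200, noise=0.1, seed=0):
    """Nearest-neighbor tour over contact points; each restart adds a
    little random noise to the distance comparisons, and the shortest
    tour found across all restarts is kept."""
    rng = random.Random(seed)
    best_order, best_len = None, float("inf")
    for _ in range(restarts):
        unvisited = set(range(1, len(points)))
        order = [0]
        while unvisited:
            cur = points[order[-1]]
            # Noise breaks ties and near-ties differently on each restart,
            # letting the planner escape the purely greedy choice.
            nxt = min(unvisited,
                      key=lambda j: math.dist(cur, points[j])
                      * (1 + rng.uniform(-noise, noise)))
            order.append(nxt)
            unvisited.remove(nxt)
        length = tour_length(points, order)
        if length < best_len:
            best_order, best_len = order, length
    return best_order, best_len

# Hypothetical contact points on a printed sample
rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(12)]
order, length = noisy_greedy_tour(pts)
```

A purely deterministic greedy planner always makes the same locally best choice; injecting noise explores nearby alternatives cheaply, which is the intuition behind the speedup the researchers describe.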
“As we progress in this age of autonomous labs, you really do need all three of these expertise — hardware building, software, and an understanding of materials science — coming together into the same team to be able to innovate quickly. And that is part of the secret sauce here,” Buonassisi says.
Rich data, rapid results
Once they had built the system from the ground up, the researchers tested each component. Their results showed that the neural network model found better contact points with less computation time than seven other AI-based methods. In addition, the path planning algorithm consistently found shorter path plans than other methods.
When they put all the pieces together to conduct a 24-hour fully autonomous experiment, the robotic system conducted more than 3,000 unique photoconductance measurements at a rate exceeding 125 per hour.
In addition, the level of detail provided by this precise measurement approach enabled the researchers to identify hotspots with higher photoconductance as well as areas of material degradation.
“Being able to gather such rich data that can be captured at such fast rates, without the need for human guidance, starts to open up doors to be able to discover and develop new high-performance semiconductors, especially for sustainability applications like solar panels,” Siemenn says.
The researchers want to continue building on this robotic system as they strive to create a fully autonomous lab for materials discovery.
This work is supported, in part, by First Solar, Eni through the MIT Energy Initiative, MathWorks, the University of Toronto’s Acceleration Consortium, the U.S. Department of Energy, and the U.S. National Science Foundation.
MIT and Mass General Hospital researchers find disparities in organ allocation
In 1954, the world’s first successful organ transplant took place at Brigham and Women’s Hospital, in the form of a kidney donated from one twin to the other. At the time, a group of doctors and scientists had correctly theorized that the recipient’s antibodies were unlikely to reject an organ from an identical twin. One Nobel Prize and a few decades later, advancements in immune-suppressing drugs increased the viability of and demand for organ transplants. Today, over 1 million organ transplants have been performed in the United States, more than any other country in the world.
The impressive scale of this achievement was made possible due to advances in organ matching systems: The first computer-based organ matching system was released in 1977. Despite continued innovation in computing, medicine, and matching technology over the years, over 100,000 people in the U.S. are currently on the national transplant waiting list and 13 people die each day waiting for an organ transplant.
Most computational research in organ allocation is focused on the initial stages, when waitlisted patients are being prioritized for organ transplants. In a new paper presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Athens, Greece, researchers from MIT and Massachusetts General Hospital focused on the final, less-studied stage: when an offer is made and the physician at the transplant center decides on behalf of the patient whether to accept or reject the offered organ.
“I don’t think we were terribly surprised, but we were obviously disappointed,” co-first author and recent MIT PhD graduate Hammaad Adam says. Using computational models to analyze transplantation data from over 160,000 transplant candidates in the Scientific Registry of Transplant Recipients (SRTR) between 2010 and 2020, the researchers found that physicians were overall less likely to accept liver and lung offers on behalf of Black candidates, resulting in additional barriers for Black patients in the organ allocation process.
For livers, Black patients had 7 percent lower odds of offer acceptance than white patients. For lungs, the disparity was even larger: Black patients had 20 percent lower odds of offer acceptance than white patients with similar characteristics.
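For readers unfamiliar with the statistic, “7 percent lower odds” corresponds to an odds ratio of roughly 0.93. A minimal sketch of how odds ratios relate to acceptance probabilities, using hypothetical rates rather than the study’s actual data:

```python
def odds(p):
    """Convert an acceptance probability to odds."""
    return p / (1 - p)

# Hypothetical acceptance rates, NOT the study's actual figures:
p_white = 0.30
p_black = 0.285

# An odds ratio of ~0.93 is what "7 percent lower odds" refers to.
odds_ratio = odds(p_black) / odds(p_white)
print(f"odds ratio: {odds_ratio:.2f}")
```

In the study itself, such ratios come from regression models that adjust for candidate characteristics, not from raw rates as in this toy example.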
The data don’t necessarily point to clinician bias as the main influence. “The bigger takeaway is that even if there are factors that justify clinical decision-making, there could be clinical conditions that we didn’t control for, that are more common for Black patients,” Adam explains. If the wait-list system fails to account for certain patterns in decision-making, those patterns could create obstacles in the process even if the process itself is “unbiased.”
The researchers also point out that high variability in offer acceptance and risk tolerances among transplant centers is a potential factor complicating the decision-making process. Their FAccT paper references a 2020 paper published in JAMA Cardiology, which concluded that wait-list candidates listed at transplant centers with lower offer acceptance rates have a higher likelihood of mortality.
Another key finding was that an offer was more likely to be accepted if the donor and candidate were of the same race. The paper describes this trend as “concerning,” given the historical inequities in organ procurement that have limited donation from racial and ethnic minority groups.
Previous work from Adam and his collaborators has aimed to address this gap. Last year, they compiled and released Organ Retrieval and Collection of Health Information for Donation (ORCHID), the first multi-center dataset describing the performance of organ procurement organizations (OPOs). ORCHID contains 10 years’ worth of OPO data, and is intended to facilitate research that addresses bias in organ procurement.
“Being able to do good work in this field takes time,” says Adam, who notes that the organ allocation project took years to complete. To his knowledge, only one paper to date has studied the association between offer acceptance and race.
While the bureaucratic and highly interdisciplinary nature of clinical AI projects can dissuade computer science graduate students from pursuing them, Adam committed to the project for the duration of his PhD in the lab of associate professor of electrical engineering Marzyeh Ghassemi, an affiliate of the MIT Jameel Clinic and the Institute for Medical Engineering and Science.
To graduate students interested in pursuing clinical AI research projects, Adam recommends that they “free [themselves] from the cycle of publishing every four months.”
“I found it freeing, to be honest — it’s OK if these collaborations take a while,” he says. “It’s hard to avoid that. I made the conscious choice a few years ago and I was happy doing that work.”
This work was supported with funding from the MIT Jameel Clinic. This research was supported, in part, by Takeda Development Center Americas Inc. (successor in interest to Millennium Pharmaceuticals Inc.), an NIH Ruth L. Kirschstein National Research Service Award, a CIFAR AI Chair at the Vector Institute, and by the National Institutes of Health.
Study: Babies’ poor vision may help organize visual brain pathways
Incoming information from the retina is channeled into two pathways in the brain’s visual system: one that’s responsible for processing color and fine spatial detail, and another that’s involved in spatial localization and detecting high temporal frequencies. A new study from MIT provides an account of how these two pathways may be shaped by developmental factors.
Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. This means that early in life, they are seeing blurry, color-reduced imagery. The MIT team proposes that such blurry, color-limited vision may result in some brain cells specializing in low spatial frequencies and low color tuning, corresponding to the so-called magnocellular system. Later, with improved vision, cells may tune to finer details and richer color, consistent with the other pathway, known as the parvocellular system.
To test their hypothesis, the researchers trained computational models of vision on a trajectory of input similar to what human babies receive early in life — low-quality images early on, followed by full-color, sharper images later. They found that these models developed processing units with receptive fields exhibiting some similarity to the division of magnocellular and parvocellular pathways in the human visual system. Vision models trained on only high-quality images did not develop such distinct characteristics.
“The findings potentially suggest a mechanistic account of the emergence of the parvo/magno distinction, which is one of the key organizing principles of the visual pathway in the mammalian brain,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and the senior author of the study.
MIT postdocs Marin Vogelsang and Lukas Vogelsang are the lead authors of the study, which appears today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also authors of the paper.
Sensory input
The idea that low-quality visual input might be beneficial for development grew out of studies of children who were born blind but later had their sight restored. An effort from Sinha’s laboratory, Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision loss such as cataracts are relatively common. After their sight is restored, many of these children volunteer to participate in studies in which Sinha and his colleagues track their visual development.
In one of these studies, the researchers found that children who had cataracts removed exhibited a marked drop in object-recognition performance when the children were presented with black and white images, compared to colored ones. Those findings led the researchers to hypothesize that reduced color input characteristic of early typical development, far from being a hindrance, allows the brain to learn to recognize objects even in images that have impoverished or shifted colors.
“Denying access to rich color at the outset seems to be a powerful strategy to build in resilience to color changes and make the system more robust against color loss in images,” Sinha says.
In that study, the researchers also found that when computational models of vision were initially trained on grayscale images, followed by color images, their ability to recognize objects was more robust than that of models trained only on color images. Similarly, another study from the lab found that models performed better when they were trained first on blurry images, followed by sharper images.
To build on those findings, the MIT team wanted to explore what might be the consequences of both of those features — color and visual acuity — being limited at the outset of development. They hypothesized that these limitations might contribute to the development of the magnocellular and parvocellular pathways.
In addition to being highly attuned to color, cells in the parvocellular pathway have small receptive fields, meaning that they receive input from more compact clusters of retinal ganglion cells. This helps them to process fine detail. Cells in the magnocellular pathway pool information across larger areas, allowing them to process more global spatial information.
To test their hypothesis that developmental progressions could contribute to the magno and parvo cell selectivities, the researchers trained models on two different sets of images. One model was presented with a standard dataset of images that are used to train models to categorize objects. The other dataset was designed to roughly mimic the input that the human visual system receives from birth. This “biomimetic” data consists of low-resolution, grayscale images in the first half of the training, followed by high-resolution, colorful images in the second half.
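The curriculum described above can be sketched as a simple input transform applied on a training schedule. This is an illustrative sketch only — the function name, the `blur_factor` parameter, and the block-averaging blur are assumptions for demonstration, not details from the paper:

```python
import numpy as np

def biomimetic_transform(image, progress, blur_factor=4):
    """Illustrative sketch of the developmental curriculum: for the
    first half of training (progress < 0.5), return a grayscale,
    heavily blurred version of the image; afterward, return the
    full-color, full-resolution image unchanged.

    image: (H, W, 3) float array; progress: fraction of training
    completed, in [0, 1]. blur_factor is a hypothetical parameter."""
    if progress >= 0.5:
        return image  # second half of training: full-quality input
    h, w, _ = image.shape
    assert h % blur_factor == 0 and w % blur_factor == 0
    # Grayscale: average the color channels, replicate across channels.
    gray = image.mean(axis=2, keepdims=True).repeat(3, axis=2)
    # Crude low-pass filter: average over blocks, then upsample.
    blocks = gray.reshape(h // blur_factor, blur_factor,
                          w // blur_factor, blur_factor, 3).mean(axis=(1, 3))
    return blocks.repeat(blur_factor, axis=0).repeat(blur_factor, axis=1)
```

A model trained with such a transform would see blurry, color-free images early in training and sharp, colorful ones later, roughly mirroring the biomimetic dataset the researchers describe.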
After the models were trained, the researchers analyzed the models’ processing units — nodes within the network that bear some resemblance to the clusters of cells that process visual information in the brain. They found that the models trained on the biomimetic data developed a distinct subset of units that are jointly responsive to low-color and low-spatial-frequency inputs, similar to the magnocellular pathway. Additionally, these biomimetic models exhibited groups of more heterogeneous parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. No such distinction emerged in the models trained on full-color, high-resolution images from the start.
“This provides some support for the idea that the ‘correlation’ we see in the biological system could be a consequence of the types of inputs that are available at the same time in normal development,” Lukas Vogelsang says.
Object recognition
The researchers also performed additional tests to reveal what strategies the differently trained models were using for object recognition tasks. In one, they asked the models to categorize images of objects where the shape and texture did not match — for example, an animal with the shape of a cat but the texture of an elephant.
This is a technique several researchers in the field have employed to determine which image attributes a model is using to categorize objects: the overall shape or the fine-grained textures. The MIT team found that models trained on biomimetic input were markedly more likely to use an object’s shape to make those decisions, just as humans usually do. Moreover, when the researchers systematically removed the magnocellular-like units from the models, the models quickly lost their tendency to use shape to make categorizations.
In another set of experiments, the researchers trained the models on videos instead of images, which introduces a temporal dimension. In addition to low spatial resolution and color sensitivity, the magnocellular pathway responds to high temporal frequencies, allowing it to quickly detect changes in the position of an object. When models were trained on biomimetic video input, the units most tuned to high temporal frequencies were indeed the ones that also exhibited magnocellular-like properties in the spatial domain.
Overall, the results support the idea that low-quality sensory input early in life may contribute to the organization of sensory processing pathways of the brain, the researchers say. The findings do not rule out innate specification of the magno and parvo pathways, but provide a proof of principle that visual experience over the course of development could also play a role.
“The general theme that seems to be emerging is that the developmental progression that we go through is very carefully structured in order to give us certain kinds of perceptual proficiencies, and it may also have consequences in terms of the very organization of the brain,” Sinha says.
The research was funded by the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.
A new platform for developing advanced metals at scale
Companies building next-generation products and breakthrough technologies are often limited by the physical constraints of traditional materials. In aerospace, defense, energy, and industrial tooling, pushing those constraints introduces possible failure points into the system, but companies don’t have better options, given that producing new materials at scale involves multiyear timelines and huge expenses.
Foundation Alloy wants to break the mold. The company, founded by a team from MIT, is capable of producing a new class of ultra-high-performance metal alloys using a novel production process that doesn’t rely on melting raw materials. The company’s solid-state metallurgy technology, which simplifies development and manufacturing of next-generation alloys, was developed over many years of research by former MIT professor Chris Schuh and collaborators.
“This is an entirely new approach to making metals,” says CEO Jake Guglin MBA ’19, who co-founded Foundation Alloy with Schuh, Jasper Lienhard ’15, PhD ’22, and Tim Rupert PhD ’11. “It gives us a broad set of rules on the materials engineering side that allows us to design a lot of different compositions with previously unattainable properties. We use that to make products that work better for advanced industrial applications.”
Foundation Alloy says its metal alloys can be made twice as strong as traditional metals, with 10 times faster product development, allowing companies to test, iterate, and deploy new metals into products in months instead of years.
The company is already designing metals and shipping demonstration parts to companies manufacturing components for things like planes, bikes, and cars. It’s also making test parts for partners in industries with longer development cycles, such as defense and aerospace.
Moving forward, the company believes its approach enables companies to build higher-performing, more reliable systems, from rockets to cars, nuclear fusion reactors, and artificial intelligence chips.
“For advanced systems like rocket and jet engines, if you can run them hotter, you can get more efficient use of fuel and a more powerful system,” Guglin says. “The limiting factor is whether or not you have structural integrity at those higher temperatures, and that is fundamentally a materials problem. Right now, we’re also doing a lot of work in advanced manufacturing and tooling, which is the unsexy but super critical backbone of the industrial world, where being able to push properties up without multiplying costs can unlock efficiencies in operations, performance, and capacity, all in a way that’s only possible with different materials.”
From MIT to the world
Schuh joined MIT’s faculty in 2002 to study the processing, structure, and properties of metal and other materials. He was named head of the Department of Materials Science and Engineering in 2011 before becoming dean of engineering at Northwestern University in 2023, after more than 20 years at MIT.
“Chris wanted to look at metals from different perspectives and make things more economically efficient and higher performance than what’s possible with traditional processes,” Guglin says. “It wasn’t just for academic papers — it was about making new methods that would be valuable for the industrial world.”
Rupert and Lienhard earned their PhDs in Schuh’s lab. As a professor at the University of California at Irvine, Rupert invented technologies complementary to the solid-state processes developed by Schuh and his collaborators.
Guglin came to MIT’s Sloan School of Management in 2017 eager to work with high-impact technologies.
“I wanted to go somewhere where I could find the types of fundamental technological breakthroughs that create asymmetric value — the types of things where if they didn’t happen here, they weren’t going to happen anywhere else,” Guglin recalls.
In one of his classes, a PhD student in Schuh’s lab practiced his thesis defense by describing his research on a new way to create metal alloys.
“I didn’t understand any of it — I have a philosophy background,” Guglin says. “But I heard ‘stronger metals’ and I saw the potential of this incredible platform Chris’ lab was working on, and it tied into exactly why I wanted to come to MIT.”
Guglin connected with Schuh, and the pair stayed in touch over the next several years as Guglin graduated and went to work for aerospace companies SpaceX and Blue Origin, where he saw firsthand the problems being caused by the metal parts supply chain.
In 2022, the pair finally decided to launch a company, adding Rupert and Lienhard and licensing technology from MIT and UC Irvine.
The founders’ first challenge was scaling up the technology.
“There’s a lot of process engineering to go from doing something once at 5 grams to doing it 100 times a week at 100 kilograms per batch,” Guglin says.
Today, Foundation Alloy starts with its customers’ material requirements and decides on a precise mixture of the powdered raw materials that every metal starts out as. From there, it uses a specialized industrial mixer — Guglin calls it an industrial KitchenAid blender — to create a metal powder that is homogeneous down to the atomic level.
“In our process, from raw material all the way through to the final part, we never melt the metal,” Guglin says. “That is uncommon if not unknown in traditional metal manufacturing.”
From there, the company’s material can be formed using traditional methods like metal injection molding, pressing, or 3D printing. The final step is sintering in a furnace, which fuses the powder into a dense solid part.
“We also do a lot of work around how the metal reacts in the sintering furnace,” Guglin says. “Our materials are specifically designed to sinter at relatively low temperatures, relatively quickly, and all the way to full density.”
The advanced sintering process uses an order of magnitude less heat, saving on costs while allowing the company to forgo secondary processes for quality control. It also gives Foundation Alloy more control over the microstructure of the final parts.
“That’s where we get a lot of our performance boost from,” Guglin says. “And by not needing those secondary processing steps, we’re saving days if not weeks in addition to the costs and energy savings.”
A foundation for industry
Foundation Alloy is currently piloting its metals across the industrial base and has also received grants to develop parts for critical components of nuclear fusion reactors.
“The name Foundation Alloy in a lot of ways came from wanting to be the foundation for the next generation of industry,” Guglin says.
Unlike in traditional metals manufacturing, where new alloys require huge investments to scale, Guglin says the company’s process for developing new alloys is nearly the same as its production processes, allowing it to scale new materials production far more quickly.
“At the core of our approach is looking at problems like material scientists with a new technology,” Guglin says. “We’re not beholden to the idea that this type of steel must solve this type of problem. We try to understand why that steel is failing and then use our technology to solve the problem in a way that produces not a 10 percent improvement, but a two- or five-times improvement in terms of performance.”
Confronting the AI/energy conundrum
The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.
“We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” Data center energy demand and the potential benefits of AI to the energy transition are research priorities for MITEI.
AI’s startling energy demands
From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation’s electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12-15 percent by 2030, largely driven by artificial intelligence applications.
Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”
Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven by both casual and institutional use of large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”
“The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and the former director at the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.
Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.
Strategies for clean energy solutions
The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.
Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources. However, achieving zero-emission power would require massive battery deployments — five to 10 times more than moderate carbon scenarios — driving costs two to three times higher.
“If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.
Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how their needs for both reliability and carbon-free electricity are reshaping the power industry.
Can AI accelerate the energy transition?
Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT's Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”
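Embedding physics in a learned power-flow solver can be done several ways; one simple, related variant is a soft penalty on the physics residual added to the training loss. The sketch below shows only that penalty variant — the function names and the weight value are illustrative assumptions, not details from the talk:

```python
import numpy as np

def physics_informed_loss(pred, target, residual_fn, weight=10.0):
    """Hedged sketch: combine ordinary prediction error with a penalty
    on a physics residual (e.g., nodal power-balance mismatch).
    residual_fn maps a prediction to its constraint violation; the
    function names and weight are illustrative assumptions."""
    mse = np.mean((pred - target) ** 2)          # data-fit term
    violation = np.mean(residual_fn(pred) ** 2)  # physics penalty
    return mse + weight * violation
```

Hard-constraint approaches of the kind described in the talk go further, guaranteeing feasibility by construction rather than merely penalizing violations.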
AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said. Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.
AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.
Securing growth with sustainability
Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article that suggested that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.
Jevons’ paradox, in which “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.
Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that have valuable grid connections already in place. These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.
Navigating the AI-energy paradox
The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge.
Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the broader research of the MIT Climate Project. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.
Participants in the symposium were polled about priorities for MIT’s research by Randall Field, MITEI director of research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”
In addition, polling revealed that most attendees view AI’s potential regarding power as a “promise” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.
3 Questions: How MIT’s venture studio is partnering with MIT labs to solve “holy grail” problems
MIT Proto Ventures is the Institute’s in-house venture studio — a program designed not to support existing startups, but to create entirely new ones from the ground up. Operating at the intersection of breakthrough research and urgent real-world problems, Proto Ventures proactively builds startups that leverage MIT technologies, talent, and ideas to address high-impact industry challenges.
Each venture-building effort begins with a “channel” — a defined domain such as clean energy, fusion, or AI in health care — where MIT is uniquely positioned to lead, and where there are pressing real-world problems needing solutions. Proto Ventures hires full-time venture builders, deeply technical entrepreneurs who embed in MIT labs, connect with faculty, scout promising inventions, and explore unmet market needs. These venture builders work alongside researchers and aspiring founders from across MIT who are accepted into Proto Ventures’ fellowship program to form new teams, shape business concepts, and drive early-stage validation. Once a venture is ready to spin out, Proto Ventures connects it with MIT’s broader innovation ecosystem, including incubation programs, accelerators, and technology licensing.
David Cohen-Tanugi SM ’12, PhD ’15, has been the venture builder for the fusion and clean energy channel since 2023.
Q: What are the challenges of launching startups out of MIT labs? In other words, why does MIT need a venture studio?
A: MIT regularly takes on the world’s “holy grail” challenges, such as decarbonizing heavy industry, preventing future pandemics, or adapting to climate extremes. Yet despite its extraordinary depth in research, too few of MIT’s technical breakthroughs evolve into successful startups targeting these highest-impact problems.
There are a few reasons for this. Right now, it takes a great deal of serendipity for a technology or idea in the lab to evolve into a startup project within the Institute’s ecosystem. Great startups don’t just emerge from great technology alone — they emerge from combinations of great technology, unmet market needs, and committed people.
A second reason is that many MIT researchers don’t have the time, professional incentives, or skill set to commercialize a technology. They often lack someone that they can partner with, someone who is technical enough to understand the technology but who also has experience bringing technologies to market.
Finally, while MIT excels at supporting entrepreneurial teams that are already in motion — thanks to world-class accelerators, mentorship services, and research funding programs — what’s missing is actually further upstream: a way to deliberately uncover and develop venture opportunities that haven’t even taken shape yet.
MIT needs a venture studio because we need a new, proactive model for research translation — one that breaks down silos and that bridges deep technical talent with validated market needs.
Q: How do you add value for MIT researchers?
A: As a venture builder, I act as a translational partner for researchers — someone who can take the lead on exploring commercial pathways in partnership with the lab. Many faculty and researchers believe their work could have real-world applications but don’t have the time, entrepreneurial expertise, or interested graduate students to pursue them. Proto Ventures fills that gap.
Having done my PhD studies at MIT a decade ago, I’ve seen firsthand how many researchers are interested in impact beyond academia but don’t know where to start. I help them think strategically about how their work fits into the real market, I break down tactical blockers such as intellectual property conversations or finding a first commercial partner, and I roll up my sleeves to do customer discovery, identify potential co-founders, or locate new funding opportunities. Even when the outcome isn’t a startup, the process often reveals new collaborators, use cases, or research directions. We’re not just scouting for IP — we’re building a deeper culture of tech translation at MIT, one lab at a time.
Q: What counts as a success?
A: We’ve launched five startups across two channels so far, including one that will provide energy-efficient propulsion systems for satellites and another that is developing advanced power supply units for data centers.
But counting startups is not the only way to measure impact. While embedded at the MIT Plasma Science and Fusion Center, I have engaged with 75 researchers in translational activities — many for the first time. For example, I’ve helped research scientist Dongkeun Park craft funding proposals for next-generation MRI and aircraft engines enabled by high-temperature superconducting magnets. Working with Mike Nour from the MIT Sloan Executive MBA program, we’ve also developed an innovative licensing strategy for Professor Michael P. Short and his antifouling coating technology. Sometimes it takes an outsider like me to connect researchers across departments, suggest a new collaboration, or unearth an overlooked idea. Perhaps most importantly, we’ve validated that this model works: embedding entrepreneurial scientists in labs changes how research is translated.
We’ve also seen that researchers are eager to translate their work — they just need a structure and a partner to help them do it. That’s especially true in the hard tech in which MIT excels. That’s what Proto Ventures offers. And based on our early results, we believe this model could be transformative not just for MIT, but for research institutions everywhere.
Study finds better services dramatically help children in foster care
Being placed in foster care is a necessary intervention for some children. But many advocates worry that kids can languish in foster care too long, with harmful effects for children living apart from a permanent family.
A new study co-authored by an MIT economist shows that an innovative Chilean program providing legal aid to children shortens the length of foster-care stays, returning them to families faster. In the process, it improves long-term social outcomes for kids and even reduces government spending on the foster care system.
“It was amazingly successful because the program got kids out of foster care about 30 percent faster,” says Joseph Doyle, an economist at the MIT Sloan School of Management, who helped lead the research. “Because foster care is expensive, that paid for the program by itself about four times over. If you improve the case management of kids in foster care, you can improve a child’s well-being and save money.”
The paper, “Effects of Enhanced Legal Aid in Child Welfare: Evidence from a Randomized Trial of Mi Abogado,” is published in the American Economic Review.
The authors are Ryan Cooper, a professor and director of government innovation at the University of Chicago; Doyle, who is the Erwin H. Schell Professor of Management at MIT Sloan; and Andrés P. Hojman, a professor at the Pontifical Catholic University of Chile.
Rigorous design
To conduct the study, the scholars examined the Chilean government’s new program “Mi Abogado” — meaning “My Lawyer” — which provided enhanced legal support to children in foster care, as well as access to psychologists and social workers. Legal advocates in the program were given reduced caseloads, for one thing, to help them focus more closely on each individual case.
Chile introduced Mi Abogado in 2017, with a feature that made it ripe for careful study: As part of the rollout, most participants were selected at random from the pool of children in the foster care system. That randomization makes it easier to identify the program’s causal impact on later outcomes.
“Very few foster-care redesigns are evaluated in such a rigorous way, and we need more of this innovative approach to policy improvement,” Doyle notes.
The experiment included 1,781 children who were in Chile’s foster care program in 2019, with 581 selected for the Mi Abogado services; it tracked their trajectories over more than two years. Almost all the participants were in group foster-care homes.
In addition to reduced time spent in foster care, the Chilean data showed that children in the Mi Abogado program had a 30 percent reduction in subsequent contact with the criminal justice system and a 5 percent increase in school attendance, compared to children in foster care who did not participate in the program.
“They were getting involved with crime less and attending school more,” Doyle says.
As powerful as the results appear, Doyle acknowledges that he would like to be able to analyze further which elements of the Mi Abogado program had the biggest impact — legal help, counseling and therapy, or other factors.
“We would like to see more about what exactly they are doing for children to speed their exit from care,” Doyle says. “Is it mostly about therapy? Is it working with judges and cutting through red tape? We think the lawyer is a very important part. But the results suggest it is not just the lawyer that improves outcomes.”
More programs in other places?
The current paper is one of many studies Doyle has developed during his career that relate to foster care and related issues. In another forthcoming paper, Doyle and some co-authors find that about 5 percent of U.S. children spend some time in foster care — a rate that appears fairly typical internationally, too.
“People don’t appreciate how common child protective services and foster care are,” Doyle says. Moreover, he adds, “Children involved in these systems are particularly vulnerable.”
With a variety of U.S. jurisdictions running their own foster-care systems, Doyle notes that many policymakers have an opportunity to learn from the Mi Abogado program and consider whether its principles are worth testing. And while that requires some political will, Doyle expresses optimism that policymakers might be open to new ideas.
“It’s not really a partisan issue,” Doyle says. “Most people want to help protect kids, and, if an intervention is needed for kids, have an interest in making the intervention run well.”
After all, he notes, the impact of the Mi Abogado program appears to be both substantial and lasting, making it an interesting example to consider.
“Here we have a case where the child outcomes are improved and the government saved money,” Doyle observes. “I’d like to see more experimentation with programs like this in other places.”
Support for the research was provided in part by the MIT Sloan Latin America Office. The Studies Department of Chile’s Ministry of Education made data from the education system available.
The high-tech wizardry of integrated photonics
Inspired by the “Harry Potter” stories and the Disney Channel show “Wizards of Waverly Place,” 7-year-old Sabrina Corsetti emphatically declared to her parents one afternoon that she was, in fact, a wizard.
“My dad turned to me and said that, if I really wanted to be a wizard, then I should become a physicist. Physicists are the real wizards of the world,” she recalls.
That conversation stuck with Corsetti throughout her childhood, all the way up to her decision to double-major in physics and math in college, which set her on a path to MIT, where she is now a graduate student in the Department of Electrical Engineering and Computer Science.
While her work may not involve incantations or magic wands, Corsetti’s research centers on an area that often produces astonishing results: integrated photonics. A relatively young field, integrated photonics involves building computer chips that route light instead of electricity, enabling compact and scalable solutions for applications ranging from communications to sensing.
Corsetti and her collaborators in the Photonics and Electronics Research Group, led by Professor Jelena Notaros, develop chip-sized devices which enable innovative applications that push the boundaries of what is possible in optics.
For instance, Corsetti and the team developed a chip-based 3D printer, small enough to sit in the palm of one’s hand, that emits a reconfigurable beam of light into resin to create solid shapes. Such a device could someday enable a user to rapidly fabricate customized, low-cost objects on the go.
She also contributed to creating a miniature “tractor beam” that uses a beam of light to capture and manipulate biological particles using a chip. This could help biologists study DNA or investigate the mechanisms of disease without contaminating tissue samples.
More recently, Corsetti has been working on a project in collaboration with MIT Lincoln Laboratory, focused on trapped-ion quantum computing, which involves the manipulation of ions to store and process quantum information.
“Our team has a strong focus on designing devices and systems that interact with the environment. The opportunity to join a new research group, led by a supportive and engaged advisor, that works on projects with a lot of real-world impacts, is primarily what drew me to MIT,” Corsetti says.
Embracing challenges
Years before she set foot in a research lab, Corsetti was a science- and math-focused kid growing up with her parents and younger brother in the suburbs of Chicago, where her family operates a structural steelwork company.
Throughout her childhood, her teachers fostered her love of learning, from her early years in the Frankfort 157-C school district through her time at the Lincoln-Way East High School.
She enjoyed working on science experiments outside the classroom and relished the chance to tackle complex conundrums during independent study projects curated by her teachers (like calculating the math behind the brachistochrone curve, the path of fastest descent between two points, a problem famously solved by Isaac Newton).
Corsetti decided to double-major in physics and math at the University of Michigan after graduating from high school a year early.
“When I went to the University of Michigan, I couldn’t wait to get started. I enrolled in the toughest math and physics track right off the bat,” she recalls.
But Corsetti soon found that she had bitten off a bit more than she could chew. A lot of her tough undergraduate courses assumed students had prior knowledge from AP physics and math classes, which Corsetti hadn’t taken because she graduated early.
She met with professors, attended office hours, and tried to pick up the lessons she had missed, but felt so discouraged she contemplated switching majors. Before she made the switch, Corsetti decided to try working in a physics lab to see if she liked a day in the life of a researcher.
After joining Professor Wolfgang Lorenzon’s lab at Michigan, Corsetti spent hours working with grad students and postdocs on a hands-on project to build cells that would hold liquid hydrogen for a particle physics experiment.
As they collaborated for hours at a time to roll material into tubes, she peppered the older students with questions about their experiences in the field.
“Being in the lab made me fall in love with physics. I really enjoyed that environment, working with my hands, and working with people as part of a bigger team,” she says.
Her affinity for hands-on lab work was amplified a few years later when she met Professor Tom Schwarz, her research advisor for the rest of her time at Michigan.
Following a chance conversation with Schwarz, she applied to a research abroad program at CERN in Switzerland, where she was mentored by Siyuan Sun. There, she had the opportunity to join thousands of physicists and engineers on the ATLAS project, writing code and optimizing circuits for new particle-detector technologies.
“That was one of the most transformative experiences of my life. After I came back to Michigan, I was ready to spend my career focusing on research,” she says.
Hooked on photonics
Corsetti began applying to graduate schools but decided to shift focus from the more theoretical particle physics to electrical engineering, with an interest in conducting hands-on chip-design and testing research.
She applied to MIT with a focus on standard electronic-chip design, so it came as a surprise when Notaros reached out to her to schedule a Zoom call. At the time, Corsetti was completely unfamiliar with integrated photonics. However, after one conversation with the new professor, she was hooked.
“Jelena has an infectious enthusiasm for integrated photonics,” she recalls. “After those initial conversations, I took a leap of faith.”
Corsetti joined Notaros’ team as it was just getting started. Closely mentored by a senior student, Milica Notaros, she and her cohort grew immersed in integrated photonics.
Over the years, she’s particularly enjoyed the collaborative and close-knit nature of the lab and how the work involves so many different aspects of the experimental process, from design to simulation to analysis to hardware testing.
“An exciting challenge that we’re always running up against is new chip-fabrication requirements. There is a lot of back-and-forth between new application areas that demand new fabrication technologies, followed by improved fabrication technologies motivating additional application areas. That cycle is constantly pushing the field forward,” she says.
Corsetti plans to stay at the cutting edge of the field after graduation as an integrated-photonics researcher in industry or at a national lab. She would like to focus on trapped-ion quantum computing, which scientists are rapidly scaling up toward commercially viable systems, or other high-performance computing applications.
“You really need accelerated computing for any modern research area. It would be exciting and rewarding to contribute to high-performance computing that can enable a lot of other interesting research areas,” she says.
Paying it forward
In addition to making an impact with research, Corsetti is focused on making a personal impact in the lives of others. Through her involvement in MIT Graduate Hillel, she joined the Jewish Big Brothers Big Sisters of Boston, where she volunteers for the friend-to-friend program.
Participating in the program, which pairs adults who have disabilities with friends in the community for fun activities like watching movies or painting, has been an especially uplifting and gratifying experience for Corsetti.
She’s also enjoyed the opportunity to support, mentor, and bond with her fellow MIT EECS students, drawing on the advice she’s received throughout her own academic journey.
“Don’t trust feelings of imposter syndrome,” she advises others. “Keep moving forward, ask for feedback and help, and be confident that you will reach a point where you can make meaningful contributions to a team.”
Outside the lab, she enjoys playing classical music on the clarinet (her favorite piece is Leonard Bernstein’s famous overture to “Candide”), reading, and caring for a family of fish in her aquarium.
MIT Open Learning bootcamp supports effort to bring invention for long-term fentanyl recovery to market
Evan Kharasch, professor of anesthesiology and vice chair for innovation at Duke University, has developed two approaches that may aid in fentanyl addiction recovery. After attending MIT’s Substance Use Disorders (SUD) Ventures Bootcamp, he’s committed to bringing them to market.
Illicit fentanyl addiction remains a national emergency in the United States, fueled by years of opioid misuse. As opioid prescriptions fell by 50 percent over 15 years, many turned to street drugs. Among those drugs, fentanyl stands out for its potency — just 2 milligrams can be fatal — and its low production cost. Often mixed with other drugs, it contributed to a large share of the more than 80,000 overdose deaths in 2024. It has been particularly challenging to treat with currently available medications for opioid use disorder.
As an anesthesiologist, Kharasch is highly experienced with opioids, including methadone, one of only three drugs approved in the United States for treating opioid use disorder. Methadone is a key option for managing fentanyl use. It’s employed to transition patients off fentanyl and to support ongoing maintenance, but access is limited, with only 20 percent of eligible patients receiving it. Initiating and adjusting methadone treatment can take weeks due to its clinical characteristics, often causing withdrawal and requiring longer hospital stays. Maintenance demands daily visits to one of just over 2,000 clinics, disrupting work or study and leading most patients to drop out after a few months.
To tackle these challenges, Kharasch developed two novel methadone formulations: one for faster absorption to cut initiation time from weeks to days — or even hours — and one to slow elimination, thereby potentially requiring only weekly, rather than daily, dosing. As a clinician, scientist, and entrepreneur, he sees the science as demanding, but bringing these treatments to patients presents an even greater challenge. Kharasch learned about the SUD Ventures Bootcamp, part of MIT Open Learning, as a recipient of research funding from the National Institute on Drug Abuse (NIDA). He decided to apply to bridge the gap in his expertise and was selected to attend as a fellow.
Each year, the SUD Ventures Bootcamp unites innovators — including scientists, entrepreneurs, and medical professionals — to develop bold, cross-disciplinary solutions to substance use disorders. Through online learning and an intensive one-week in-person bootcamp, teams tackle challenges in different “high priority” areas. Guided by experts in science, entrepreneurship, and policy, they build and pitch ventures aimed at real-world impact. Beyond the multidisciplinary curriculum, the program connects people deeply committed to this space and equipped to drive progress.
Throughout the program, Kharasch’s concepts were validated by the invited industry experts, who highlighted the potential impact of a longer-acting methadone formulation, particularly in correctional settings. Encouragement from MIT professors, coaches, and peers energized Kharasch to fully pursue commercialization. He has already begun securing intellectual property rights, validating the regulatory pathway through the U.S. Food and Drug Administration, and gathering market and patient feedback.
The SUD Ventures Bootcamp, he says, both activated and validated his passion for bringing these innovations to patients. “After many years of basic, translational, and clinical research on methadone — all supported by NIDA — I experienced that aha moment of recognizing a potential opportunity to apply the findings to benefit patients at scale,” Kharasch says. “The NIDA-sponsored participation in the MIT SUD Ventures Bootcamp was the critical catalyst which ignited the inspiration and commitment to pursue commercializing our research findings into better treatments for opioid use disorder.”
As next steps, Kharasch is seeking an experienced co-founder and finalizing IP protections. He remains engaged with the SUD Ventures network, whose mentors, industry experts, and peers are helping him advance this needed solution to market. For example, one of the program’s mentors, Nat Sims, the Newbower/Eitan Endowed Chair in Biomedical Technology Innovation at Massachusetts General Hospital (MGH) and a fellow anesthesiologist, has helped Kharasch arrange technology validation conversations within the MGH ecosystem and the drug development community.
“Evan’s collaboration with the MGH ecosystem can help define an optimum process for commercializing these innovations — identifying who would benefit, how they would benefit, and who is willing to pilot the product once it’s available,” says Sims.
Kharasch has also presented his project in the program’s webinar series. Looking ahead, Kharasch hopes to involve MIT Sloan School of Management students in advancing his project through health care entrepreneurship classes, continuing the momentum that began with the SUD Ventures Bootcamp.
The program and its research are supported by the NIDA of the National Institutes of Health. Cynthia Breazeal, a professor of media arts and sciences at the MIT Media Lab and dean for digital learning at MIT Open Learning, serves as the principal investigator on the grant.
MIT student wins first-ever Stephen Hawking Junior Medal for Science Communication
Gitanjali Rao, a rising junior at MIT majoring in biological engineering, has been named the first-ever recipient of the Stephen Hawking Junior Medal for Science Communication. This award, presented by the Starmus Festival, is a new category of the already prestigious award created by the late theoretical physicist, cosmologist, and author Stephen Hawking and the Starmus Festival.
“I spend a lot of time in labs,” says Rao, highlighting her Undergraduate Research Opportunities Program project in the Langer Lab. Along with her curiosity to explore, she also has a passion for helping others understand what happens inside the lab. “We very rarely discuss why science communication is important,” she says. “Stephen Hawking was incredible at that.”
Rao is the inventor of Epione, a device for early diagnosis of prescription opioid addiction, and Kindly, an anti-cyber-bullying service powered by AI and natural language processing. Kindly is now a United Nations Children's Fund “Digital Public Good” service and is accessible worldwide. These efforts, among others, brought her to the attention of the Starmus team.
The award ceremony was held last April at the Kennedy Center in Washington, where Rao gave a speech and met acclaimed scientists, artists, and musicians. “It was one for the books,” she says. “I met Brian May from Queen — he's a physicist.” Rao is also a musician in her own right — she plays bass guitar and piano, and she's been learning to DJ at MIT. “Starmus” is a portmanteau of “stars” and “music.”
Originally from Denver, Colorado, Rao attended a STEM-focused school before MIT. Looking ahead, she's open to graduate school, and dreams of launching a biotech startup when the right idea comes.
The medal comes with an internship opportunity that Rao hopes to use for fieldwork or experience in the pharmaceutical industry. She’s already secured a summer internship at Moderna, and is considering spending Independent Activities Period abroad. “Hopefully, I'll have a better idea in the next few months.”
VAMO proposes an alternative to architectural permanence
The International Architecture Exhibition of La Biennale di Venezia holds up a mirror to the industry — not only reflecting current priorities and preoccupations, but also projecting an agenda for what might be possible.
Curated by Carlo Ratti, MIT professor of practice of urban technologies and planning, this year’s exhibition (“Intelligens. Natural. Artificial. Collective”) proposes a “Circular Economy Manifesto” with the goal to support the “development and production of projects that utilize natural, artificial, and collective intelligence to combat the climate crisis.”
Designers and architects will quickly recognize the paradox of this year’s theme. Global architecture festivals have historically had a high carbon footprint, using vast amounts of energy, resources, and materials to build and transport temporary structures that are later discarded. This year’s unprecedented emphasis on waste elimination and carbon neutrality challenges participants to reframe apparent limitations into creative constraints. In this way, the Biennale acts as a microcosm of current planetary conditions — a staging ground to envision and practice adaptive strategies.
VAMO (Vegetal, Animal, Mineral, Other)
When Ratti approached John Ochsendorf, MIT professor and founding director of MIT Morningside Academy of Design (MAD), with the invitation to interpret the theme of circularity, the project became the premise for a convergence of ideas, tools, and know-how from multiple teams at MIT and the wider MIT community.
The Digital Structures research group, directed by Professor Caitlin Mueller, applied expertise in designing efficient structures of tension and compression. The Circular Engineering for Architecture research group, led by MIT alumna Catherine De Wolf at ETH Zurich, explored how digital technologies and traditional woodworking techniques could make optimal use of reclaimed timber. Early-stage startups — including companies launched by the venture accelerator MITdesignX — contributed innovative materials harnessing natural byproducts from vegetal, animal, mineral, and other sources.
The result is VAMO (Vegetal, Animal, Mineral, Other), an ultra-lightweight, biodegradable, and transportable canopy designed to circle around a brick column in the Corderie of the Venice Arsenale — a historic space originally used to manufacture ropes for the city’s naval fleet.
“This year’s Biennale marks a new radicalism in approaches to architecture,” says Ochsendorf. “It’s no longer sufficient to propose an exciting idea or present a stylish installation. The conversation on material reuse must have relevance beyond the exhibition space, and we’re seeing a hunger among students and emerging practices to have a tangible impact. VAMO isn’t just a temporary shelter for new thinking. It’s a material and structural prototype that will evolve into multiple different forms after the Biennale.”
Tension and compression
The choice to build the support structure from reclaimed timber and hemp rope called for a highly efficient design to maximize the inherent potential of comparatively humble materials. Working purely in tension (the spliced cable net) or compression (the oblique timber rings), the structure appears to float — yet is capable of supporting substantial loads across large distances. The canopy weighs less than 200 kilograms and spans more than 6 meters in diameter, highlighting the incredible lightness that equilibrium forms can achieve. VAMO simultaneously showcases a series of sustainable claddings and finishes made from surprising upcycled materials — from coconut husks, spent coffee grounds, and pineapple peel to wool, glass, and scraps of leather.
The Digital Structures research group led the design of structural geometries conditioned by materiality and gravity. “We knew we wanted to make a very large canopy,” says Mueller. “We wanted it to have anticlastic curvature suggestive of naturalistic forms. We wanted it to tilt up to one side to welcome people walking from the central corridor into the space. However, these effects are almost impossible to achieve with today's computational tools that are mostly focused on drawing rigid materials.”
In response, the team applied two custom digital tools, Ariadne and Theseus, developed in-house to enable a process of inverse form-finding: a way of discovering forms that achieve the experiential qualities of an architectural project based on the mechanical properties of the materials. These tools allowed the team to model three-dimensional design concepts and automatically adjust geometries to ensure that all elements were held in pure tension or compression.
“Using digital tools enhances our creativity by allowing us to choose between multiple different options and short-circuit a process that would have otherwise taken months,” says Mueller. “However, our process is also generative of conceptual thinking that extends beyond the tool — we’re constantly thinking about the natural and historic precedents that demonstrate the potential of these equilibrium structures.”
Digital efficiency and human creativity
Lightweight enough to be carried as standard luggage, the hemp rope structure was spliced by hand and transported from Massachusetts to Venice. Meanwhile, the heavier timber structure was constructed in Zurich, where it could be transported by train — thereby significantly reducing the project’s overall carbon footprint.
The wooden rings were fabricated using salvaged beams and boards from two temporary buildings in Switzerland — the Huber and Music Pavilions — following a pedagogical approach that De Wolf has developed for the Digital Creativity for Circular Construction course at ETH Zurich. Each year, her students are tasked with disassembling a building due for demolition and using the materials to design a new structure. In the case of VAMO, the goal was to upcycle the wood while avoiding the use of chemicals, high-energy methods, or non-biodegradable components (such as metal screws or plastics).
“Our process embraces all three types of intelligence celebrated by the exhibition,” says De Wolf. “The natural intelligence of the materials selected for the structure and cladding; the artificial intelligence of digital tools empowering us to upcycle, design, and fabricate with these natural materials; and the crucial collective intelligence that unlocks possibilities of newly developed reused materials, made possible by the contributions of many hands and minds.”
For De Wolf, true creativity in digital design and construction requires a context-sensitive approach to identifying when and how such tools are best applied in relation to hands-on craftsmanship.
Through a process of collective evaluation, it was decided that the 20-foot lower ring would be assembled with eight scarf joints using wedges and wooden pegs, thereby removing the need for metal screws. The scarf joints were crafted through five-axis CNC milling; the smaller, dual-jointed upper ring was shaped and assembled by hand by Nicolas Petit-Barreau, founder of the Swiss woodwork company Anku, who applied his expertise in designing and building yurts, domes, and furniture to the VAMO project.
“While digital tools suited the repetitive joints of the lower ring, the upper ring’s two unique joints were more efficiently crafted by hand,” says Petit-Barreau. “When it comes to designing for circularity, we can learn a lot from time-honored building traditions. These methods were refined long before we had access to energy-intensive technologies — they also allow for the level of subtlety and responsiveness necessary when adapting to the irregularities of reused wood.”
A material palette for circularity
The structural system is often the most energy-intensive part of a building, an impact dramatically mitigated here by the collaborative design and fabrication process developed by MIT Digital Structures and ETH Circular Engineering for Architecture. The structure also serves to showcase panels made of biodegradable and low-energy materials — many of which were advanced through ventures supported by MITdesignX, a program dedicated to design innovation and entrepreneurship at MAD.
“In recent years, several MITdesignX teams have proposed ideas for new sustainable materials that might at first seem far-fetched,” says Gilad Rosenzweig, executive director of MITdesignX. “For instance, using spent coffee grounds to create a leather-like material (Cortado), or creating compostable acoustic panels from coconut husks and reclaimed wool (Kokus). This reflects a major cultural shift in the architecture profession toward rethinking the way we build, but it’s not enough just to have an inventive idea. To achieve impact — to convert invention into innovation — teams have to prove that their concept is cost-effective, viable as a business, and scalable.”
Aligned with the ethos of MAD, MITdesignX assesses profit and productivity in terms of environmental and social sustainability. In addition to presenting the work of R&D teams involved in MITdesignX, VAMO also exhibits materials produced by collaborating teams at the University of Pennsylvania’s Stuart Weitzman School of Design, Politecnico di Milano, and other partners, such as Manteco.
The result is a composite structure that encapsulates multiple life spans within a diverse material palette of waste materials from vegetal, animal, and mineral forms. Panels of Ananasse, a material made from pineapple peels developed by Vérabuccia, preserve the fruit’s natural texture as a surface pattern, while rehub repurposes fragments of multicolored Murano glass into a flexible terrazzo-like material; COBI creates breathable shingles from coarse wool and beeswax, and DumoLab produces fuel-free 3D-printable wood panels.
A purpose beyond permanence
Adriana Giorgis, a designer and teaching fellow in architecture at MIT, played a crucial role in bringing the parts of the project together. Her research explores the diverse network of factors that influence whether a building stands the test of time, and her insights helped to shape the collective understanding of long-term design thinking.
“As a point of connection between all the teams, helping to guide the design as well as serving as a project manager, I had the chance to see how my research applied at each level of the project,” Giorgis reflects. “Braiding these different strands of thinking and ultimately helping to install the canopy on site brought forth a stronger idea about what it really means for a structure to have longevity. VAMO isn’t limited to its current form — it’s a way of carrying forward a powerful idea into contemporary and future practice.”
What’s next for VAMO? Neither the attempt at architectural permanence associated with built projects, nor the relegation to waste common to temporary installations. After the Biennale, VAMO will be disassembled, possibly reused for further exhibitions, and finally relocated to a natural reserve in Switzerland, where the parts will be researched as they biodegrade. In this way, the lifespan of the project is extended beyond its initial purpose for human habitation and architectural experimentation, revealing the gradual material transformations constantly taking place in our built environment.
To quote Carlo Ratti’s Circular Economy Manifesto, the “lasting legacy” of VAMO is to “harness nature’s intelligence, where nothing is wasted.” Through a regenerative symbiosis of natural, artificial, and collective intelligence, could architectural thinking and practice expand to planetary proportions?
How repetition helps art speak to us
Often when we listen to music, we just instinctually enjoy it. Sometimes, though, it’s worth dissecting a song or other composition to figure out how it’s built.
Take the 1953 jazz standard “Satin Doll,” written by Duke Ellington and Billy Strayhorn, whose subtle structure rewards a close listening. As it happens, MIT Professor Emeritus Samuel Jay Keyser, a distinguished linguist and an avid trombonist on the side, has given the song careful scrutiny.
To Keyser, “Satin Doll” is a glittering example of what he calls the “same/except” construction in art. A basic rhyme, like “rent” and “tent,” is another example of this construction, given the shared rhyming sound and the different starting consonants.
In “Satin Doll,” Keyser observes, both the music and words feature a “same/except” structure. For instance, the rhythm of the first two bars of “Satin Doll” is the same as the second two bars, but the pitch goes up a step in bars three and four. This intricate pattern prevails throughout the entire body of “Satin Doll,” producing what Keyser calls “a musical rhyme scheme.”
When lyricist Johnny Mercer wrote words for “Satin Doll,” he matched the musical rhyme scheme. One lyric for the first four bars is, “Cigarette holder / which wigs me / Over her shoulder / she digs me.” Other verses follow the same pattern.
“Both the lyrics and the melody have the same rhyme scheme in their separate mediums, words and music, namely, A-B-A-B,” says Keyser. “That’s how you write lyrics. If you understand the musical rhyme scheme, and write lyrics to match that, you are introducing a whole new level of repetition, one that enhances the experience.”
Now, Keyser has a new book out about repetition in art and its cognitive impact on us, scrutinizing “Satin Doll” along with many other works of music, poetry, painting, and photography. The volume, “Play It Again, Sam: Repetition in the Arts,” is published by the MIT Press. The title is partly a play on Keyser’s name.
Inspired by the Margulis experiment
The genesis of “Play It Again, Sam” dates back several years, when Keyser encountered an experiment conducted by musicologist Elizabeth Margulis, described in her 2014 book, “On Repeat.” Margulis found that when she altered modern atonal compositions to add repetition to them, audiences ranging from ordinary listeners to music theorists preferred these edited versions to the original works.
“The Margulis experiment really caused the ideas to materialize,” Keyser says. He then examined repetition in art forms where research on the associated cognitive activity exists, especially music, poetry, and the visual arts. For instance, the brain has distinct locations dedicated to the recognition of faces, places, and bodies. Keyser suggests this is why, prior to the advent of modernism, painting was overwhelmingly mimetic.
Ideally, he suggests, it will be possible to more comprehensively study how our brains process art — to see if encountering repetition triggers an endorphin release, say. For now, Keyser postulates that repetition involves what he calls the 4 Ps: priming, parallelism, prediction, and pleasure. Essentially, hearing or seeing a motif sets the stage for it to be repeated, providing audiences with satisfaction when they discover the repetition.
With remarkable range, Keyser vigorously analyzes how artists deploy repetition and have thought about it, from “Beowulf” to Leonard Bernstein, from Gustave Caillebotte to Italo Calvino. Some artworks do deploy identical repetition of elements, such as the Homeric epics; others use the “same/except” technique.
Keyser is deeply interested in visual art displaying the “same/except” concept, such as Andy Warhol’s famous “Campbell Soup Cans” painting. It features four rows of eight soup cans, which are all the same — except for the kind of soup on each can.
“Discovering this ‘same/except’ repetition in a work of art brings pleasure,” Keyser says.
But why is this? Multiple experimental studies, Keyser notes, suggest that repeated exposure of a subject to an image — such as an infant’s exposure to its mother’s face — helps create a bond of affection. This is the “mere exposure” phenomenon, posited by social psychologist Robert Zajonc, who as Keyser notes in the book, studied in detail “the repetition of an arbitrary stimulus and the mild affection that people eventually have for it.”
This tendency also helps explain why manufacturers create ads featuring nothing but a product’s name: seen often enough, the name becomes familiar, and the viewer bonds with it. However the mechanism connecting repetition with pleasure works, and whatever its original function, Keyser argues that many artists have successfully tapped into it, grasping that audiences like repetition in poetry, painting, and music.
A shadow dog in Albuquerque
In the book, Keyser’s emphasis on repetition generates some distinctive interpretive positions. In one chapter, he digs into Lee Friedlander’s well-known photo, “Albuquerque, New Mexico,” a street scene with a jumble of signs, wires, and buildings, often interpreted in symbolic terms: It’s the American West frontier being submerged under postwar concrete and commerce.
Keyser, however, takes a quite different view of the Friedlander photo. There is a dog sitting near the middle of it; to the right is the shadow of a street sign. Keyser believes the shadow resembles the dog, and thinks it creates a playful repetition in the photo.
“This particular photograph is really two photographs that rhyme,” Keyser says. “They’re the same, except one is the dog and one is the shadow. And that’s why that photograph is pleasurable, because you see that, even if you may not be fully aware of it. Sensing repetition in a work of art brings pleasure.”
“Play It Again, Sam” has received praise from arts practitioners, among others. George Darrah, principal drummer and arranger of the Boston Pops Orchestra, has called the book “extraordinary” in its “demonstration of the ways that poetry, music, painting, and photography engender pleasure in their audiences by exploiting the ability of the brain to detect repetition.” He adds that “Keyser has an uncanny ability to simplify complex ideas so that difficult material is easily understandable.”
In certain ways “Play It Again, Sam” contains the classic intellectual outlook of an MIT linguist. For decades, MIT-linked linguistics research has identified the universal structures of human language, revealing important similarities despite the seemingly wild variation of global languages. And here too, Keyser finds patterns that help organize an apparently boundless world of art. “Play It Again, Sam” is a hunt for structure.
Asked about this, Keyser acknowledges the influence of his longtime field on his current intellectual explorations, while noting that his insights about art are part of a greater investigation into our works and minds.
“I’m bringing a linguistic habit of mind to art,” Keyser says. “But I’m also pointing an analytical lens in the direction of natural predilections of the brain. The idea is to investigate how our aesthetic sense depends on the way the mind works. I’m trying to show how art can exploit the brain’s capacity to produce pleasure from non-art related functions.”
MIT engineers develop electrochemical sensors for cheap, disposable diagnostics
Using an inexpensive electrode coated with DNA, MIT researchers have designed disposable diagnostics that could be adapted to detect a variety of diseases, including cancer or infectious diseases such as influenza and HIV.
These electrochemical sensors make use of a DNA-chopping enzyme found in the CRISPR gene-editing system. When a target such as a cancerous gene is detected by the enzyme, it begins shearing DNA from the electrode nonspecifically, like a lawnmower cutting grass, altering the electrical signal produced.
One of the main limitations of this type of sensing technology is that the DNA that coats the electrode breaks down quickly, so the sensors can’t be stored for very long and their storage conditions must be tightly controlled, limiting where they can be used. In a new study, MIT researchers stabilized the DNA with a polymer coating, allowing the sensors to be stored for up to two months, even at high temperatures. After storage, the sensors were able to detect a prostate cancer gene that is often used to diagnose the disease.
The DNA-based sensors, which cost only about 50 cents to make, could offer a cheaper way to diagnose many diseases in low-resource regions, says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT and the senior author of the study.
“Our focus is on diagnostics that many people have limited access to, and our goal is to create a point-of-use sensor. People wouldn’t even need to be in a clinic to use it. You could do it at home,” Furst says.
MIT graduate student Xingcheng Zhou is the lead author of the paper, published June 30 in the journal ACS Sensors. Other authors of the paper are MIT undergraduate Jessica Slaughter, Smah Riki ’24, and graduate student Chao Chi Kuo.
An inexpensive sensor
Electrochemical sensors work by measuring changes in the flow of an electric current when a target molecule interacts with an enzyme. This is the same technology that glucose meters use to detect concentrations of glucose in a blood sample.
The electrochemical sensors developed in Furst’s lab consist of DNA adhered to an inexpensive gold leaf electrode, which is laminated onto a sheet of plastic. The DNA is attached to the electrode using a sulfur-containing molecule known as a thiol.
In a 2021 study, Furst’s lab showed that they could use these sensors to detect genetic material from HIV and human papillomavirus (HPV). The sensors detect their targets using a guide RNA strand, which can be designed to bind to nearly any DNA or RNA sequence. The guide RNA is linked to an enzyme called Cas12, which cleaves DNA nonspecifically when it is turned on and is in the same family of proteins as the Cas9 enzyme used for CRISPR genome editing.
If the target is present, it binds to the guide RNA and activates Cas12, which then cuts the DNA adhered to the electrode. That alters the current produced by the electrode, which can be measured using a potentiostat (the same technology used in handheld glucose meters).
“If Cas12 is on, it’s like a lawnmower that cuts off all the DNA on your electrode, and that turns off your signal,” Furst says.
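The readout described above is a signal-off scheme: when activated Cas12 cleaves the reporter DNA off the electrode, the measured current drops. As a toy illustration only (the current values, units, and threshold below are invented and are not the study’s actual readout), the decision logic amounts to comparing the measured current against an intact-DNA baseline:

```python
# Toy decision rule for a signal-off electrochemical sensor.
# Cas12 activation cleaves reporter DNA, so the current drops when
# the target is present. Numbers and threshold are hypothetical.
def target_detected(baseline_current_nA: float,
                    measured_current_nA: float,
                    drop_threshold: float = 0.5) -> bool:
    """Flag a positive result when the current falls by more than
    drop_threshold (as a fraction) relative to the intact-DNA baseline."""
    drop = 1.0 - measured_current_nA / baseline_current_nA
    return drop > drop_threshold

print(target_detected(100.0, 30.0))  # large signal drop -> True
print(target_detected(100.0, 95.0))  # small signal drop -> False
```

In practice the threshold would be calibrated against negative controls, since baseline currents vary between electrodes.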
In previous versions of the device, the DNA had to be added to the electrode just before it was used, because DNA doesn’t remain stable for very long. In the new study, the researchers found that they could increase the stability of the DNA by coating it with a polymer called polyvinyl alcohol (PVA).
This polymer, which costs less than 1 cent per coating, acts like a tarp that protects the DNA below it. Once deposited onto the electrode, the polymer dries to form a protective thin film.
“Once it’s dried, it seems to make a very strong barrier against the main things that can harm DNA, such as reactive oxygen species that can either damage the DNA itself or break the thiol bond with the gold and strip your DNA off the electrode,” Furst says.
Successful detection
The researchers showed that this coating could protect DNA on the sensors for at least two months, and it could also withstand temperatures up to about 150 degrees Fahrenheit. After two months, they rinsed off the polymer and demonstrated that the sensors could still detect PCA3, a prostate cancer gene that can be found in urine.
This type of test could be used with a variety of samples, including urine, saliva, or nasal swabs. The researchers hope to use this approach to develop cheaper diagnostics for infectious diseases, such as HPV or HIV, that could be used in a doctor’s office or at home. This approach could also be used to develop tests for emerging infectious diseases, the researchers say.
A group of researchers from Furst’s lab was recently accepted into delta v, MIT’s student venture accelerator, where they hope to launch a startup to further develop this technology. Now that the researchers can create tests with a much longer shelf-life, they hope to begin shipping them to locations where they could be tested with patient samples.
“Our goal is to continue to test with patient samples against different diseases in real world environments,” Furst says. “Our limitation before was that we had to make the sensors on site, but now that we can protect them, we can ship them. We don’t have to use refrigeration. That allows us to access a lot more rugged or non-ideal environments for testing.”
The research was funded, in part, by the MIT Research Support Committee and a MathWorks Fellowship.
New imaging technique reconstructs the shapes of hidden objects
A new imaging technique developed by MIT researchers could enable quality-control robots in a warehouse to peer through a cardboard shipping box and see that the handle of a mug buried under packing peanuts is broken.
Their approach leverages millimeter wave (mmWave) signals, the same type of signals used in Wi-Fi, to create accurate 3D reconstructions of objects that are blocked from view.
The waves can travel through common obstacles like plastic containers or interior walls, and reflect off hidden objects. The system, called mmNorm, collects those reflections and feeds them into an algorithm that estimates the shape of the object’s surface.
This new approach achieved 96 percent reconstruction accuracy on a range of everyday objects with complex, curvy shapes, like silverware and a power drill. State-of-the-art baseline methods achieved only 78 percent accuracy.
In addition, mmNorm does not require additional bandwidth to achieve such high accuracy. This efficiency could allow the method to be utilized in a wide range of settings, from factories to assisted living facilities.
For instance, mmNorm could enable robots working in a factory or home to distinguish between tools hidden in a drawer and identify their handles, so they could more efficiently grasp and manipulate the objects without causing damage.
“We’ve been interested in this problem for quite a while, but we’ve been hitting a wall because past methods, while they were mathematically elegant, weren’t getting us where we needed to go. We needed to come up with a very different way of using these signals than what has been used for more than half a century to unlock new types of applications,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of a paper on mmNorm.
Adib is joined on the paper by research assistants Laura Dodds, the lead author, and Tara Boroushaki, and former postdoc Kaichen Zhou. The research was recently presented at the Annual International Conference on Mobile Systems, Applications and Services.
Reflecting on reflections
Traditional radar techniques send mmWave signals and receive reflections from the environment to detect hidden or distant objects, a technique called back projection.
This method works well for large objects, like an airplane obscured by clouds, but the image resolution is too coarse for small items like kitchen gadgets that a robot might need to identify.
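The back projection idea can be sketched in a few lines: for each candidate pixel, undo the phase delay that a reflection from that pixel would have produced at each antenna, then sum. The image peaks where the hypothesized delays match the received signal. This is a minimal single-scatterer simulation with an assumed 77 GHz carrier and invented geometry, not the researchers’ system:

```python
import numpy as np

# Toy delay-and-sum back projection with one point scatterer.
c = 3e8                   # speed of light (m/s)
f = 77e9                  # assumed mmWave carrier frequency (Hz)
wavelength = c / f

# 32 antennas along a 10 cm linear aperture; hidden target 30 cm away
antennas = np.stack([np.linspace(-0.05, 0.05, 32), np.zeros(32)], axis=1)
target = np.array([0.01, 0.30])

# Simulated received phase at each antenna (round-trip path length)
d_true = 2 * np.linalg.norm(antennas - target, axis=1)
signals = np.exp(-2j * np.pi * d_true / wavelength)

# Back-project: compensate the expected phase for every candidate pixel
xs = np.linspace(-0.05, 0.05, 101)
ys = np.linspace(0.25, 0.35, 101)
image = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        d = 2 * np.linalg.norm(antennas - np.array([x, y]), axis=1)
        image[i, j] = np.abs(np.sum(signals * np.exp(2j * np.pi * d / wavelength)))

# The brightest pixel coincides with the true scatterer location
iy, jx = np.unravel_index(np.argmax(image), image.shape)
print(round(xs[jx], 2), round(ys[iy], 2))
```

The resolution of such an image is set by the aperture and wavelength, which is why back projection struggles with small household objects.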
In studying this problem, the MIT researchers realized that existing back projection techniques ignore an important property known as specularity. When a radar system transmits mmWaves, almost every surface the waves strike acts like a mirror, generating specular reflections.
If a surface is pointed toward the antenna, the signal will reflect off the object to the antenna, but if the surface is pointed in a different direction, the reflection will travel away from the radar and won’t be received.
“Relying on specularity, our idea is to try to estimate not just the location of a reflection in the environment, but also the direction of the surface at that point,” Dodds says.
They developed mmNorm to estimate what is called a surface normal, which is the direction of a surface at a particular point in space, and use these estimations to reconstruct the curvature of the surface at that point.
Combining surface normal estimations at each point in space, mmNorm uses a special mathematical formulation to reconstruct the 3D object.
The researchers created an mmNorm prototype by attaching a radar to a robotic arm, which continually takes measurements as it moves around a hidden item. The system compares the strength of the signals it receives at different locations to estimate the curvature of the object’s surface.
For instance, the antenna will receive the strongest reflections from a surface pointed directly at it and weaker signals from surfaces that don’t directly face the antenna.
Because multiple antennas on the radar receive some amount of reflection, each antenna “votes” on the direction of the surface normal based on the strength of the signal it received.
“Some antennas might have a very strong vote, some might have a very weak vote, and we can combine all votes together to produce one surface normal that is agreed upon by all antenna locations,” Dodds says.
In addition, because mmNorm estimates the surface normal from all points in space, it generates many possible surfaces. To zero in on the right one, the researchers borrowed techniques from computer graphics, creating a 3D function that chooses the surface most representative of the signals received. They use this to generate a final 3D reconstruction.
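The voting step described above can be sketched as a weighted average: each antenna nominates the direction toward itself as the surface normal, weighted by the reflection strength it measured, since specular surfaces reflect strongly back only toward antennas they face. This is a simplified illustration with a synthetic specular-lobe model, not the actual mmNorm algorithm:

```python
import numpy as np

# Hedged sketch of strength-weighted normal voting at one surface point.
rng = np.random.default_rng(0)

true_normal = np.array([0.0, 0.0, 1.0])    # assumed ground-truth normal

# 200 antenna viewing directions on the hemisphere above the surface
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
dirs[:, 2] = np.abs(dirs[:, 2])            # keep antennas above the surface

# Synthetic specular model: strong return only near the normal,
# plus a little measurement noise
alignment = dirs @ true_normal
strength = np.exp(-(1 - alignment) / 0.05) + 0.01 * rng.random(200)

# Each antenna "votes" for its own direction, weighted by signal strength
votes = strength[:, None] * dirs
estimate = votes.sum(axis=0)
estimate /= np.linalg.norm(estimate)

print(np.round(estimate, 2))  # should point close to the true normal
```

Antennas facing the surface head-on cast strong votes; oblique ones cast weak votes, so the weighted sum converges toward the true normal even with noisy measurements.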
Finer details
The team tested mmNorm’s ability to reconstruct more than 60 objects with complex shapes, like the handle and curve of a mug. It generated reconstructions with about 40 percent less error than state-of-the-art approaches, while also estimating the position of an object more accurately.
Their new technique can also distinguish between multiple objects, like a fork, knife, and spoon hidden in the same box. It also performed well for objects made from a range of materials, including wood, metal, plastic, rubber, and glass, as well as combinations of materials, but it does not work for objects hidden behind metal or very thick walls.
“Our qualitative results really speak for themselves. And the amount of improvement you see makes it easier to develop applications that use these high-resolution 3D reconstructions for new tasks,” Boroushaki says.
For instance, a robot can distinguish between multiple tools in a box, determine the precise shape and location of a hammer’s handle, and then plan to pick it up and use it for a task. One could also use mmNorm with an augmented reality headset, enabling a factory worker to see lifelike images of fully occluded objects.
It could also be incorporated into existing security and defense applications, generating more accurate reconstructions of concealed objects in airport security scanners or during military reconnaissance.
The researchers want to explore these and other potential applications in future work. They also want to improve the resolution of their technique, boost its performance for less reflective objects, and enable the mmWaves to effectively image through thicker occlusions.
“This work really represents a paradigm shift in the way we are thinking about these signals and this 3D reconstruction process. We’re excited to see how the insights that we’ve gained here can have a broad impact,” Dodds says.
This work is supported, in part, by the National Science Foundation, the MIT Media Lab, and Microsoft.
New method combines imaging and sequencing to study gene function in intact tissue
Imagine that you want to know the plot of a movie, but you only have access to either the visuals or the sound. With visuals alone, you’ll miss all the dialogue. With sound alone, you will miss the action. Understanding our biology can be similar. Measuring one kind of data — such as which genes are being expressed — can be informative, but it only captures one facet of a multifaceted story. For many biological processes and disease mechanisms, the entire “plot” can’t be fully understood without combining data types.
However, capturing both the “visuals and sound” of biological data, such as gene expression and cell structure data, from the same cells requires researchers to develop new approaches. They also have to make sure that the data they capture accurately reflects what happens in living organisms, including how cells interact with each other and their environments.
Whitehead Institute for Biomedical Research and Harvard University researchers have taken on these challenges and developed Perturb-Multimodal (Perturb-Multi), a powerful new approach that simultaneously measures how genetic changes such as turning off individual genes affect both gene expression and cell structure in intact liver tissue. The method, described in Cell on June 12, aims to accelerate discovery of how genes control organ function and disease.
The research team, led by Whitehead Institute Member Jonathan Weissman and then-graduate student in his lab Reuben Saunders, along with Xiaowei Zhuang, the David B. Arnold Professor of Science at Harvard University, and then-postdoc in her lab Will Allen, created a system that can test hundreds of different genetic modifications within a single mouse liver while capturing multiple types of data from the same cells.
“Understanding how our organs work requires looking at many different aspects of cell biology at once,” Saunders says. “With Perturb-Multi, we can see how turning off specific genes changes not just what other genes are active, but also how proteins are distributed within cells, how cellular structures are organized, and where cells are located in the tissue. It’s like having multiple specialized microscopes all focused on the same experiment.”
“This approach accelerates discovery by both allowing us to test the functions of many different genes at once, and then for each gene, allowing us to measure many different functional outputs or cell properties at once — and we do that in intact tissue from animals,” says Zhuang, who is also a Howard Hughes Medical Institute (HHMI) investigator.
A more efficient approach to genetic studies
Traditional genetic studies in mice often turn off one gene and then observe what changes in that gene’s absence to learn about what the gene does. The researchers designed their approach to turn off hundreds of different genes across a single liver, while still only turning off one gene per cell — using what is known as a mosaic approach. This allowed them to study the roles of hundreds of individual genes at once in a single individual. The researchers then collected diverse types of data from cells across the same liver to get a full picture of the consequences of turning off the genes.
“Each cell serves as its own experiment, and because all the cells are in the same animal, we eliminate the variability that comes from comparing different mice,” Saunders says. “Every cell experiences the same physiological conditions, diet, and environment, making our comparisons much more precise.”
“The challenge we faced was that tissues, to perform their functions, rely on thousands of genes, expressed in many different cells, working together. Each gene, in turn, can control many aspects of a cell’s function. Testing these hundreds of genes in mice using current methods would be extremely slow and expensive — near impossible, in practice,” Allen says.
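The analysis logic of a mosaic screen can be sketched simply: each cell carries one perturbation label plus its measurements, and grouping cells by label and comparing to controls yields a per-gene effect estimate. The data below are synthetic, with an invented “fat-droplet score” standing in for an imaging readout; this is not the authors’ pipeline:

```python
import numpy as np

# Synthetic mosaic-screen analysis: one perturbation per cell,
# effect estimated by comparing each perturbed group to controls.
rng = np.random.default_rng(1)

labels = np.array(["control"] * 50 + ["geneA"] * 50)
fat_score = np.concatenate([
    rng.normal(1.0, 0.1, 50),   # control cells
    rng.normal(2.0, 0.1, 50),   # cells with hypothetical geneA knocked out
])

def effect(label: str) -> float:
    """Mean shift in the imaging readout relative to control cells."""
    return (fat_score[labels == label].mean()
            - fat_score[labels == "control"].mean())

print(round(effect("geneA"), 1))  # knockout raises the fat-droplet score
```

Because all cells share one animal, the control comparison is internal, which is exactly the variability advantage Saunders describes.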
Revealing new biology through combined measurements
The team applied Perturb-Multi to study genetic controls of liver physiology and function. Their study led to discoveries in three important aspects of liver biology: fat accumulation in liver cells — a precursor to liver disease; stress responses; and hepatocyte zonation (how liver cells specialize, assuming different traits and functions, based on their location within the liver).
One striking finding emerged from studying genes that, when disrupted, cause fat accumulation in liver cells. The imaging data revealed that four different genes all led to similar fat droplet accumulation, but the sequencing data showed they did so through three completely different mechanisms.
“Without combining imaging and sequencing, we would have missed this complexity entirely,” Saunders says. “The imaging told us which genes affect fat accumulation, while the sequencing revealed whether this was due to increased fat production, cellular stress, or other pathways. This kind of mechanistic insight could be crucial for developing targeted therapies for fatty liver disease.”
The researchers also discovered new regulators of liver cell zonation. Unexpectedly, the newly discovered regulators include genes involved in modifying the extracellular matrix — the scaffolding between cells. “We found that cells can change their specialized functions without physically moving to a different zone,” Saunders says. “This suggests that liver cell identity is more flexible than previously thought.”
Technical innovation enables new science
Developing Perturb-Multi required solving several technical challenges. The team created new methods for preserving the content of interest in cells — RNA and proteins — during tissue processing, for collecting many types of imaging data and single-cell gene expression data from tissue samples that have been fixed with a preservative, and for integrating multiple types of data from the same cells.
“Overcoming the inherent complexity of biology in living animals required developing new tools that bridge multiple disciplines — including, in this case, genomics, imaging, and AI,” Allen says.
The two components of Perturb-Multi — the imaging and sequencing assays — together, applied to the same tissue, provide insights that are unattainable through either assay alone.
“Each component had to work perfectly while not interfering with the others,” says Weissman, who is also a professor of biology at MIT and an HHMI investigator. “The technical development took considerable effort, but the payoff is a system that can reveal biology we simply couldn’t see before.”
Expanding to new organs and other contexts
The researchers plan to expand Perturb-Multi to other organs, including the brain, and to study how genetic changes affect organ function under different conditions like disease states or dietary changes.
“We’re also excited about using the data we generate to train machine learning models,” adds Saunders. “With enough examples of how genetic changes affect cells, we could eventually predict the effects of mutations without having to test them experimentally — a ‘virtual cell’ that could accelerate both research and drug development.”
“Perturbation data are critical for training such AI models and the paucity of existing perturbation data represents a major hindrance in such ‘virtual cell’ efforts,” Zhuang says. “We hope Perturb-Multi will fill this gap by accelerating the collection of perturbation data.”
The approach is designed to be scalable, with the potential for genome-wide studies that test thousands of genes simultaneously. As sequencing and imaging technologies continue to improve, the researchers anticipate that Perturb-Multi will become even more powerful and accessible to the broader research community.
“Our goal is to keep scaling up. We plan to do genome-wide perturbations, study different physiological conditions, and look at different organs,” says Weissman. “That we can now collect so many types of data from so many cells, at speed, is going to be critical for building AI models like virtual cells, and I think it’s going to help us answer previously unsolvable questions about health and disease.”
President Emeritus Reif reflects on successes as a technical leader
As an electrical engineering student at Stanford University in the late 1970s, L. Rafael Reif was not only working on his PhD but also learning a new language.
“I didn’t speak English. And I saw that it was easy to ignore somebody who doesn’t speak English well,” Reif recalled. To him, that meant speaking with conviction.
“If you have tremendous technical skills, but you cannot communicate, if you cannot persuade others to embrace that, it’s not going to go anywhere. Without the combination, you cannot persuade the powers-that-be to embrace whatever ideas you have.”
Now MIT president emeritus, Reif recently joined Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering (SoE), for a fireside chat. Their focus: the importance of developing engineering leadership skills — such as persuasive communication — to solve the world’s most challenging problems.
SoE’s Technical Leadership and Communication Programs (TLC) sponsored the chat. TLC teaches engineering leadership, teamwork, and technical communication skills to students, from undergrads to postdocs, through its four programs: Undergraduate Practice Opportunities Program (UPOP), Gordon-MIT Engineering Leadership Program (GEL), Communication Lab (Comm Lab), and Riccio-MIT Graduate Engineering Leadership Program (GradEL).
About 175 students, faculty, and guests attended the fireside chat. Relaxed, engaging, and humorous, Reif shared anecdotes and insights about technical leadership from his decades in leadership roles at MIT.
Reif had a transformational impact on MIT. Beginning as an assistant professor of electrical engineering in 1980, he rose to head of the Department of Electrical Engineering and Computer Science (EECS), then served as provost from 2005 to 2012 and MIT president from 2012 to 2022.
He was instrumental in creating the MIT Schwarzman College of Computing in 2018, as well as establishing and growing MITx online open learning and MIT Microsystems Technology Laboratories.
With an ability to peer over the horizon and anticipate what’s coming, Reif used an array of leadership skills to develop and implement clear visions for those programs.
“One of the things that I learned from you is that as a leader, you have to envision the future and make bets,” said Chandrakasan. “And you don’t just wait around for that. You have to drive it.”
Turning new ideas into reality often meant overcoming resistance. When Reif first proposed the College of Computing to some fellow MIT leaders, “they looked at me and they said, no way. This is too hard. It’s not going to happen. It’s going to take too much money. It’s too complicated. OK, then starts the argument.”
Reif seems to have relished “the argument,” or art of persuasion, during his time at MIT. Though hearing different perspectives never hurt.
“All of us have blind spots. I always try to hear all points of view. Obviously, you can’t integrate all of it. You might say, ‘Anantha, I heard you, but I disagree with you because of this.’ So, you make the call knowing all the options. That is something non-technical that I used in my career.”
On the technical side, Reif’s background as an electrical engineer shaped his approach to leadership.
“What’s beautiful about a technical education is that you understand that you can solve anything if you start with first principles. There are first principles in just about anything that you do. If you start with those, you can solve any problem.”
Also, applying systems-level thinking is critical — understanding that organizations are really systems with interconnected parts.
“That was really useful to me. Some of you in the audience have studied this. In a system, when you start tinkering with something over here, something over there will be affected. And you have to understand that. At a place like MIT, that’s all the time!”
Reif was asked: If he were assembling a dream team to tackle the world’s biggest challenges, what skills or capabilities would he want them to have?
“I think we need people who can see things from different directions. I think we need people who are experts in different disciplines. And I think we need people who are experts in different cultures. Because to solve the big problems of the planet, we need to understand how different cultures address different things.”
Reif’s upbringing in Venezuela strongly influenced his leadership approach, particularly when it comes to empathy, a key trait he values.
“My parents were immigrants. They didn’t have an education, and they had to do whatever they could to support the family. And I remember as a little kid seeing how people humiliated them because they were doing menial jobs. And I remember how painful it was to me. It is part of my fabric to respect every individual, to notice them. I have a tremendous respect for every individual, and for the ability of every individual that didn’t have the same opportunity that all of us here have to be somebody.”
Reif’s advice to students who will be the next generation of engineering leaders is to keep learning because the challenges ahead are multidisciplinary. He also reminded them that they are the future.
“What are our assets? The people in this room. When it comes to the ecosystem of innovation in America, what we work on is to create new roadmaps, expand the roadmaps, create new industries. Without that, we have nothing. Companies do a great job of taking what you come up with and making wonderful things with it. But the ideas, whether it’s AI, whether it’s deep learning, it comes from places like this.”