MIT Latest News

Could LLMs help design our next medicines and materials?
The process of discovering molecules that have the properties needed to create new medicines and materials is cumbersome and expensive, consuming vast computational resources and months of human labor to narrow down the enormous space of potential candidates.
Large language models (LLMs) like ChatGPT could streamline this process, but enabling an LLM to understand and reason about the atoms and bonds that form a molecule, the same way it does with words that form sentences, has presented a scientific stumbling block.
Researchers from MIT and the MIT-IBM Watson AI Lab created a promising approach that augments an LLM with other machine-learning models known as graph-based models, which are specifically designed for generating and predicting molecular structures.
Their method employs a base LLM to interpret natural language queries specifying desired molecular properties. It automatically switches between the base LLM and graph-based AI modules to design the molecule, explain the rationale, and generate a step-by-step plan to synthesize it. It interleaves text, graph, and synthesis step generation, combining words, graphs, and reactions into a common vocabulary for the LLM to consume.
When compared to existing LLM-based approaches, this multimodal technique generated molecules that better matched user specifications and were more likely to have a valid synthesis plan, improving the success ratio from 5 percent to 35 percent.
It also outperformed LLMs that are more than 10 times its size and that design molecules and synthesis routes only with text-based representations, suggesting multimodality is key to the new system’s success.
“This could hopefully be an end-to-end solution where, from start to finish, we would automate the entire process of designing and making a molecule. If an LLM could just give you the answer in a few seconds, it would be a huge time-saver for pharmaceutical companies,” says Michael Sun, an MIT graduate student and co-author of a paper on this technique.
Sun’s co-authors include lead author Gang Liu, a graduate student at the University of Notre Dame; Wojciech Matusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the Computer Science and Artificial Intelligence Laboratory (CSAIL); Meng Jiang, associate professor at the University of Notre Dame; and senior author Jie Chen, a senior research scientist and manager in the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Learning Representations.
Best of both worlds
Large language models aren’t built to understand the nuances of chemistry, which is one reason they struggle with inverse molecular design, a process of identifying molecular structures that have certain functions or properties.
LLMs convert text into representations called tokens, which they use to sequentially predict the next word in a sentence. But molecules are “graph structures,” composed of atoms and bonds with no particular ordering, making them difficult to encode as sequential text.
On the other hand, powerful graph-based AI models represent atoms and molecular bonds as interconnected nodes and edges in a graph. While these models are popular for inverse molecular design, they require complex inputs, can’t understand natural language, and yield results that can be difficult to interpret.
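To make the contrast concrete, here is a minimal illustrative sketch (not from the paper) of the same small molecule, ethanol, written both as the kind of token sequence an LLM consumes and as the kind of graph a graph-based model works with:

```python
# Illustrative only: the same molecule, ethanol, in two representations.

# Sequential, text-style representation (a SMILES string) that an LLM
# would tokenize and predict left to right.
smiles = "CCO"  # carbon - carbon - oxygen

# Graph-style representation: atoms as nodes, bonds as edges, with no
# inherent ordering -- the form graph-based models are designed around.
atoms = {0: "C", 1: "C", 2: "O"}
bonds = [(0, 1), (1, 2)]  # single bonds between the atom indices above
```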
The MIT researchers combined an LLM with graph-based AI models into a unified framework that gets the best of both worlds.
Llamole, which stands for large language model for molecular discovery, uses a base LLM as a gatekeeper to understand a user’s query — a plain-language request for a molecule with certain properties.
For instance, perhaps a user seeks a molecule that can penetrate the blood-brain barrier and inhibit HIV, given that it has a molecular weight of 209 and certain bond characteristics.
As the LLM predicts text in response to the query, it switches between graph modules.
One module uses a graph diffusion model to generate the molecular structure conditioned on input requirements. A second module uses a graph neural network to encode the generated molecular structure back into tokens for the LLM to consume. The final graph module is a graph reaction predictor, which takes an intermediate molecular structure as input and predicts a reaction step, searching for the exact set of steps to make the molecule from basic building blocks.
The researchers created a new type of trigger token that tells the LLM when to activate each module. When the LLM predicts a “design” trigger token, it switches to the module that sketches a molecular structure, and when it predicts a “retro” trigger token, it switches to the retrosynthetic planning module that predicts the next reaction step.
“The beauty of this is that everything the LLM generates before activating a particular module gets fed into that module itself. The module is learning to operate in a way that is consistent with what came before,” Sun says.
In the same manner, the output of each module is encoded and fed back into the generation process of the LLM, so it understands what each module did and will continue predicting tokens based on those data.
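In code, the control flow described above might look roughly like the following sketch. It is only an illustration of the trigger-token idea: `llm`, `design_module`, `encoder`, `retro_module`, and the token strings are hypothetical placeholders, not Llamole's actual components or API.

```python
# Hypothetical sketch of trigger-token switching, loosely following the
# description above; every name here is a placeholder, not Llamole's API.
def generate(llm, design_module, encoder, retro_module, query):
    context = [query]
    while True:
        token = llm.next_token(context)       # base LLM predicts the next token
        if token == "<design>":
            # Graph diffusion module sketches a molecular structure,
            # conditioned on everything generated so far.
            molecule = design_module.generate(context)
            # Graph neural network encodes the structure back into tokens
            # so the LLM can keep predicting on top of it.
            context += encoder.to_tokens(molecule)
        elif token == "<retro>":
            # Retrosynthetic module predicts the next reaction step toward
            # basic building blocks; its output is fed back in as well.
            step = retro_module.predict_step(context)
            context += encoder.to_tokens(step)
        elif token == "<end>":
            return context                    # text, structure, and synthesis plan
        else:
            context.append(token)             # ordinary text token
```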
Better, simpler molecular structures
In the end, Llamole outputs an image of the molecular structure, a textual description of the molecule, and a step-by-step synthesis plan that provides the details of how to make it, down to individual chemical reactions.
In experiments involving designing molecules that matched user specifications, Llamole outperformed 10 standard LLMs, four fine-tuned LLMs, and a state-of-the-art domain-specific method. At the same time, it boosted the retrosynthetic planning success rate from 5 percent to 35 percent by generating molecules that are higher-quality, which means they had simpler structures and lower-cost building blocks.
“On their own, LLMs struggle to figure out how to synthesize molecules because it requires a lot of multistep planning. Our method can generate better molecular structures that are also easier to synthesize,” Liu says.
To train and evaluate Llamole, the researchers built two datasets from scratch since existing datasets of molecular structures didn’t contain enough details. They augmented hundreds of thousands of patented molecules with AI-generated natural language descriptions and customized description templates.
The dataset they built to fine-tune the LLM includes templates related to 10 molecular properties, so one limitation of Llamole is that it is trained to design molecules considering only those 10 numerical properties.
In future work, the researchers want to generalize Llamole so it can incorporate any molecular property. In addition, they plan to improve the graph modules to boost Llamole’s retrosynthesis success rate.
And in the long run, they hope to use this approach to go beyond molecules, creating multimodal LLMs that can handle other types of graph-based data, such as interconnected sensors in a power grid or transactions in a financial market.
“Llamole demonstrates the feasibility of using large language models as an interface to complex data beyond textual description, and we anticipate them to be a foundation that interacts with other AI algorithms to solve any graph problems,” says Chen.
This research is funded, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and the Office of Naval Research.
Exploring the impacts of technology on everyday citizens
Give Dwai Banerjee credit: He doesn’t pick easy topics to study.
Banerjee is an MIT scholar who in a short time has produced a wide-ranging body of work about the impact of technology on society — and who, as a trained anthropologist, has a keen eye for people’s lived experience.
In one book, “Enduring Cancer,” from 2020, Banerjee studies the lives of mostly poor cancer patients in Delhi, digging into their psychological horizons and interactions with the world of medical care. Another book, “Hematologies,” also from 2020, co-authored with anthropologist Jacob Copeman, examines common ideas about blood in Indian society.
And in still another book, forthcoming later this year, Banerjee explores the history of computing in India — including the attempt by some to generate growth through domestic advances, even as global computer firms were putting the industry on rather different footing.
“I enjoy having the freedom to explore new topics,” says Banerjee, an associate professor in MIT’s Program in Science, Technology, and Society (STS). “For some people, building on their previous work is best, but I need new ideas to keep me going. For me, that feels more natural. You get invested in a subject for a time and try to get everything out of it.”
What largely links these disparate topics together is that Banerjee, in his work, is a people person: He aims to illuminate the lives and thoughts of everyday citizens as they interact with the technologies and systems of contemporary society.
After all, a cancer diagnosis can be life-changing not just in physical terms, but psychologically. For some, having cancer creates “a sense of being unmoored from prior certainties about oneself and one’s place in the world,” as Banerjee writes in “Enduring Cancer.”
The technology that enables diagnoses does not meet all our human needs, so the book traces the complicated inner lives of patients, and a medical system shifting to meet psychological and palliative-care challenges. Technology and society interact beyond blockbuster products, as the book deftly implies.
For his research and teaching, Banerjee was awarded tenure at MIT last year.
Falling for the humanities
Banerjee grew up in Delhi, and as a university student he expected to work in computing, before changing course.
“I was going to go to graduate school for computer engineering,” Banerjee says. “Then I just fell in love with the humanities, and studied the humanities and social sciences.” He received an MPhil and an MA in sociology from the Delhi School of Economics, then enrolled as a PhD student at New York University.
At NYU, Banerjee undertook doctoral studies in cultural anthropology, while performing some of the fieldwork that formed the basis of “Enduring Cancer.” At the same time, he found the people he was studying were surrounded by history — shaping the technologies and policies they encountered, and shaping their own thought. Ultimately even Banerjee’s anthropological work has a strong historical dimension.
After earning his PhD, Banerjee became a Mellon Fellow in the Humanities at Dartmouth College, then joined the MIT faculty in STS. It is a logical home for someone who thinks broadly and uses multiple research methods, from the field to the archives.
“I sometimes wonder if I am an anthropologist or if I am an historian,” Banerjee allows. “But it is an interdisciplinary program, so I try to make the most of that.”
Indeed, the STS program draws on many fields and methods, with its scholars and students linked by a desire to rigorously examine the factors shaping the development and application of technology — and, if necessary, to initiate difficult discussions about technology’s effects.
“That’s the history of the field and the department at MIT, that it’s a kind of moral backbone,” Banerjee says.
Finding inspiration
As for where Banerjee’s book ideas come from, he is not simply looking for large issues to write about, but things that spark his intellectual and moral sensibilities — like disadvantaged cancer patients in Delhi.
“‘Enduring Cancer,’ in my mind, is a sort of a traditional medical anthropology text, which came out of finding inspiration from these people, and running with it as far as I could,” Banerjee says.
Alternately, “‘Hematologies’ came out of a collaboration, a conversation with Jacob Copeman, with us talking about things and getting excited about it,” Banerjee adds. “The intellectual friendship became a driving force.” Copeman is now an anthropologist on the faculty at the University of Santiago de Compostela, in Spain.
As for Banerjee’s forthcoming book about computing in India, the spark was partly his own remembered enjoyment of seeing the internet reach the country, facilitated though it was by spotty dial-up modems and other now-quaint-seeming tools.
“It’s coming from an old obsession,” Banerjee says. “When the internet had just arrived, at that time when something was just blowing up, it was exciting. This project is [partly about] recovering my early enjoyment of what was then a really exciting time.”
The subject of the book itself, however, predates the commercial internet. Rather, Banerjee chronicles the history of computing during India’s first few decades after achieving independence from Britain, in 1947. Even into the 1970s, India’s government was interested in creating a strong national IT sector, designing and manufacturing its own machines. Eventually those efforts faded, and the multinational computing giants took hold of India’s markets.
The book details how and why this happened, in the process recasting what we think we know about India and technology. Today, Banerjee notes, India is an exporter of skilled technology talent and an importer of tech tools, but that wasn’t predestined. It is more that the idea of an autonomous tech sector in the country ran into the prevailing forces of globalization.
“The book traces this moment of this high confidence in the country’s ability to do these things, producing manufacturing and jobs and economic growth, and then it traces the decline of that vision,” Banerjee says.
“One of the aims is for it to be a book anyone can read,” Banerjee adds. In that sense, the principle guiding his interests is now guiding his scholarly output: People first.
The spark of innovation and the commercialization journey
To Vanessa Chan PhD ’00, effective engineers don’t just solve technical problems. To make an impact with a new product or technology, they need to bring it to market, deploy it, and make it mainstream. Yet this is precisely what they aren’t trained to do.
In fact, 97 percent of patents fail to make it over the “commercialization wall.”
“Only 3 percent of patents succeed, and one of the biggest challenges is we are not training our PhDs, our undergrads, our faculty, to commercialize technologies,” said Chan, vice dean of innovation and entrepreneurship at the University of Pennsylvania’s School of Engineering and Applied Science. She delivered the Department of Materials Science and Engineering (DMSE)’s spring 2025 Wulff Lecture at MIT on March 10. “Instead, we’re focused on the really hard technical issues that we have to overcome, versus everything that needs to be addressed for something to make it to market.”
Chan spoke from deep experience, having led McKinsey & Co.’s innovation practice, helping Fortune 100 companies commercialize technologies. She also invented the tangle-free headphones Loopit at re.design, the firm she founded, and served as chief commercialization officer and director of the Office of Technology Transitions at the U.S. Department of Energy (DoE) during the Biden administration.
From invention to impact
A DMSE alumna, Chan addressed a near-capacity crowd about the importance of materials innovation. She highlighted how new materials — or existing materials used in new ways — could solve key challenges, from energy sustainability to health care delivery. For example, carbon fiber composites have replaced aluminum in the airline industry, leading to reduced fuel consumption, lower emissions, and enhanced safety. Modern lithium-ion and solid-state batteries use optimized electrode materials for higher efficiency and faster charging. And biodegradable polymer stents, which dissolve over time, have replaced traditional metallic stents that remain in arteries and can cause complications.
The Wulff Lecture is a twice-yearly talk aimed at educating students, especially first-years, about materials science and engineering and its impact on society.
Inventing a groundbreaking technology is just the beginning, Chan said. She gave the example of Thomas Edison, often thought of as the father of the electric light bulb. But Edison didn’t invent the carbonized filament — that was Joseph Swan.
“Thomas Edison was the father of the deployed light bulb,” Chan said. “He took Swan’s patents and figured out, how do we actually pull a vacuum on this? How do we manufacture this at scale?”
For an invention to make an impact, it needs to successfully traverse the commercialization journey from research to development, demonstration, and deployment in the market. “An invention without deployment is a tragedy, because you’ve invented something where you may have a lot of paper publications, but it is not making a difference at all in the real world.”
Materials commercialization is difficult, Chan explained, because new materials are at the very beginning of a value chain — the full range of activities in producing a product or service. To make it to market, the materials invention must be adopted by others along the chain, and in some cases, companies must navigate how each part of the chain gets paid. A new material for hip replacements, for example, designed to reduce the risk of infection and rehospitalization, might be a materials breakthrough, but getting it to market is complicated by the way insurance works.
“They will not pay more to avoid hospitalization,” Chan said. “If your material is more expensive than what is currently being used today, the providers will not reimburse for that.”
Beyond technology
But engineers can increase their odds in commercialization if they know the right language. “Adoption readiness levels” (ARLs), developed in Chan’s Office of Technology Transitions, help assess the nontechnical risks technologies face on their journey to commercialization. These risks cover value proposition — whether a technology can perform at a price customers will pay — market acceptance, and other potential barriers, such as infrastructure and regulations.
In 2022, the Bipartisan Infrastructure Law and the Inflation Reduction Act allocated $370 billion toward clean energy deployment — 10 times the Department of Energy’s annual budget — to advance clean energy technologies such as carbon management, clean hydrogen, and geothermal heating and cooling. But Chan emphasized that the real prize was unlocking an estimated $23 trillion from private-sector investors.
“Those are the ones who are going to bring the technologies to market. So, how do we do that? How do we convince them to actually commercialize these technologies which aren’t quite there?” Chan asked.
Chan’s team spearheaded “Pathways to Commercial Liftoff,” a roadmap to bridge the gap between innovation and commercial adoption, helping identify scaling requirements, key players, and the acceptable risk levels for early adoption.
She shared an example from the DoE initiative, which received $8 billion from Congress to create a market for clean hydrogen technologies. She tied the money to specific pathways, explaining, “the private sector will start listening because they want the money.”
Her team also gathered data on where the industry was headed, identifying sectors that would likely adopt hydrogen, the infrastructure needed to support it, and what policies or funding could accelerate commercialization.
“There’s also community perception, because when we talk to people about hydrogen, what's the first thing people think about? The Hindenburg,” Chan said, referencing the 1937 dirigible explosion. “So these are the kinds of things that we had to grapple with if we’re actually going to create a hydrogen economy.”
“What do you love?”
Chan concluded her talk by offering students professional advice. She encouraged them to do what they love. On a slide, she shared a Venn diagram of her passions for technology, business, and making things — she recently started a pottery studio called Rebel Potters — illustrating the motivations behind her career journey.
“So I need you to ask yourself, What is your Venn diagram? What is it that you love?” Chan asked. “And you may say, ‘I don’t know. I’m 18 right now, and I just need to figure out what classes I want to take.’ Well, guess what? Get outside your comfort zone. Go do something new. Go try new things.”
Attendee Delia Harms, a DMSE junior, found the exercise particularly useful. “I think I’m definitely lacking a little bit of direction in where I want to go after undergrad and what I want my career path to look like,” Harms said. “So I’ll definitely try that exercise later — thinking about what my circles are, and how they come together.”
Jeannie She, a junior majoring in artificial intelligence and bioengineering, found inspiration in Chan’s public sector experience.
“I have always seen government as bureaucracy, red tape, slow — but I’m also really interested in policy and policy change,” She said. “So learning from her and the things that she’s accomplished during her time as an appointee has been really inspiring, and makes me see that there are careers in policy where things can actually get done.”
Enabling energy innovation at scale
Enabling and sustaining a clean energy transition depends not only on groundbreaking technology to redefine the world’s energy systems, but also on that innovation happening at scale. As a part of an ongoing speaker series, the MIT Energy Initiative (MITEI) hosted Emily Knight, the president and CEO of The Engine, a nonprofit incubator and accelerator dedicated to nurturing technology solutions to the world’s most urgent challenges. She explained how her organization is bridging the gap between research breakthroughs and scalable commercial impact.
“Our mission from the very beginning was to support and accelerate what we call ‘tough tech’ companies — [companies] who had this vision to solve some of the world’s biggest problems,” Knight said.
The Engine, a spinout of MIT, coined the term “tough tech” to represent not only the durability of the technology, but also the complexity and scale of the problems it will solve. “We are an incubator and accelerator focused on building a platform and creating what I believe is an open community for people who want to build tough tech, who want to fund tough tech, who want to work in a tough tech company, and ultimately be a part of this community,” said Knight.
According to Knight, The Engine creates “an innovation orchard” where early-stage research teams have access to the infrastructure and resources needed to take their ideas from lab to market while maximizing impact. “We use this pathway — from idea to investment, then investment to impact — in a lot of the work that we do,” explained Knight.
She said that tough tech exists at the intersection of several risk factors: technology, market and customer, regulatory, and scaling. Knight highlighted MIT spinout Commonwealth Fusion Systems (CFS) — one of many MIT spinouts within The Engine’s ecosystem that focus on energy — as an example of how The Engine encourages teams to work through these risks.
In the early days, the CFS team was told to assume their novel fusion technology would work. “If you’re only ever worried that your technology won’t work, you won’t pick your head up and have the right people on your team who are building the public affairs relationships so that, when you need it, you can get your first fusion reactor sited and done,” explained Knight. “You don’t know where to go for the next round of funding, and you don’t know who in government is going to be your advocates when you need them to be.”
“I think [CFS’s] eighth employee was a public affairs person,” Knight said. With the significant regulatory, scaling, and customer risks associated with fusion energy, building their team wisely was essential. Bringing on a public affairs person helped CFS build awareness and excitement around fusion energy in the local community and build the community programs necessary for grant funding.
The Engine’s growing ecosystem of entrepreneurs, researchers, institutions, and government agencies is a key component of the support offered to early-stage researchers. The ecosystem creates a space for sharing knowledge and resources, which Knight believes is critical for navigating the unique challenges associated with tough tech.
This support can be especially important for new entrepreneurs: “This leader that is going from PhD student to CEO — that is a really, really big journey that happens the minute you get funding,” said Knight. “Knowing that you’re in a community of people who are on that same journey is really important.”
The Engine also extends this support to the broader community through educational programs that walk participants through the process of translating their research from lab to market. Knight highlighted two climate and energy startups that joined The Engine through one such program geared toward graduate students and postdocs: Lithios, which is producing sustainable, low-cost lithium, and Lydian, which is developing sustainable aviation fuels.
The Engine also offers access to capital from investors with an intimate understanding of tough tech ventures. Knight said that government agency partners can offer additional support through public funding opportunities and highlighted that grants from the U.S. Department of Energy were key in the early funding of another MIT spinout within The Engine’s ecosystem, Sublime Systems.
In response to the current political shift away from climate investments, as well as uncertainty surrounding government funding, Knight believes that the connections within their ecosystem are more important than ever as startups explore alternative funding. “We’re out there thinking about funding mechanisms that could be more reliable. That’s our role as an incubator.”
Being able to convene the right people to address a problem is something that Knight attributes to her education at Cornell University’s School of Hotel Administration. “My ethos across all of this is about service,” stated Knight. “We’re constantly evolving our resources and how we help our teams based on the gaps they’re facing.”
MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The next seminar in this series will be April 30 with Manish Bapna, president and CEO of the Natural Resources Defense Council. Visit MITEI’s Events page for more information on this and additional events.
Study: Burning heavy fuel oil with scrubbers is the best available option for bulk maritime shipping
When the International Maritime Organization enacted a mandatory cap on the sulfur content of marine fuels in 2020, with an eye toward reducing harmful environmental and health impacts, it left shipping companies with three main options.
They could burn low-sulfur fossil fuels, like marine gas oil, or install cleaning systems to remove sulfur from the exhaust gas produced by burning heavy fuel oil. Biofuels with lower sulfur content offer another alternative, though their limited availability makes them a less feasible option.
While installing exhaust gas cleaning systems, known as scrubbers, is the most feasible and cost-effective option, there has been a great deal of uncertainty among firms, policymakers, and scientists as to how “green” these scrubbers are.
Through a novel lifecycle assessment, researchers from MIT, Georgia Tech, and elsewhere have now found that burning heavy fuel oil with scrubbers in the open ocean can match or surpass using low-sulfur fuels, when a wide variety of environmental factors is considered.
The scientists combined data on the production and operation of scrubbers and fuels with emissions measurements taken onboard an oceangoing cargo ship.
They found that, when the entire supply chain is considered, burning heavy fuel oil with scrubbers was the least harmful option in terms of nearly all 10 environmental impact factors they studied, such as greenhouse gas emissions, terrestrial acidification, and ozone formation.
“In our collaboration with Oldendorff Carriers to broadly explore reducing the environmental impact of shipping, this study of scrubbers turned out to be an unexpectedly deep and important transitional issue,” says Neil Gershenfeld, an MIT professor, director of the Center for Bits and Atoms (CBA), and senior author of the study.
“Claims about environmental hazards and policies to mitigate them should be backed by science. You need to see the data, be objective, and design studies that take into account the full picture to be able to compare different options from an apples-to-apples perspective,” adds lead author Patricia Stathatou, an assistant professor at Georgia Tech, who began this study as a postdoc in the CBA.
Stathatou is joined on the paper by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering at MIT, as well as others at the National Technical University of Athens in Greece and at the maritime shipping firm Oldendorff Carriers. The research appears today in Environmental Science & Technology.
Slashing sulfur emissions
Heavy fuel oil, traditionally burned by bulk carriers that make up about 30 percent of the global maritime fleet, usually has a sulfur content around 2 to 3 percent. This is far higher than the International Maritime Organization’s 2020 cap of 0.5 percent in most areas of the ocean and 0.1 percent in areas near population centers or environmentally sensitive regions.
Sulfur oxide emissions contribute to air pollution and acid rain, and can damage the human respiratory system.
In 2018, fewer than 1,000 vessels employed scrubbers. After the cap went into place, higher prices of low-sulfur fossil fuels and limited availability of alternative fuels led many firms to install scrubbers so they could keep burning heavy fuel oil.
Today, more than 5,800 vessels utilize scrubbers, the majority of which are wet, open-loop scrubbers.
“Scrubbers are a very mature technology. They have traditionally been used for decades in land-based applications like power plants to remove pollutants,” Stathatou says.
A wet, open-loop marine scrubber is a huge, metal, vertical tank installed in a ship’s exhaust stack, above the engines. Inside, seawater drawn from the ocean is sprayed through a series of nozzles downward to wash the hot exhaust gases as they exit the engines.
The seawater interacts with sulfur dioxide in the exhaust, converting it to sulfates — water-soluble, environmentally benign compounds that naturally occur in seawater. The washwater is released back into the ocean, while the cleaned exhaust escapes to the atmosphere with little to no sulfur dioxide emissions.
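In outline, the washwater chemistry is standard acid-gas scrubbing (a simplification not drawn from the study itself): dissolved sulfur dioxide is oxidized to sulfate, and seawater’s natural bicarbonate buffers the resulting acidity.

```latex
% Simplified overall reactions (textbook open-loop scrubber chemistry,
% not taken from the paper):
\begin{align*}
\mathrm{SO_2 + H_2O} &\rightarrow \mathrm{H^+ + HSO_3^-} \\
\mathrm{HSO_3^- + \tfrac{1}{2}\,O_2} &\rightarrow \mathrm{SO_4^{2-} + H^+} \\
\mathrm{H^+ + HCO_3^-} &\rightarrow \mathrm{CO_2 + H_2O}
\end{align*}
```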
But the acidic washwater can contain other combustion byproducts like heavy metals, so scientists wondered if scrubbers were comparable, from a holistic environmental point of view, to burning low-sulfur fuels.
Several studies explored toxicity of washwater and fuel system pollution, but none painted a full picture.
The researchers set out to fill that scientific gap.
A “well-to-wake” analysis
The team conducted a lifecycle assessment using a global environmental database on production and transport of fossil fuels, such as heavy fuel oil, marine gas oil, and very-low sulfur fuel oil. Considering the entire lifecycle of each fuel is key, since producing low-sulfur fuel requires extra processing steps in the refinery, causing additional emissions of greenhouse gases and particulate matter.
“If we just look at everything that happens before the fuel is bunkered onboard the vessel, heavy fuel oil is significantly more low-impact, environmentally, than low-sulfur fuels,” Stathatou says.
The researchers also collaborated with a scrubber manufacturer to obtain detailed information on all materials, production processes, and transportation steps involved in marine scrubber fabrication and installation.
“If you consider that the scrubber has a lifetime of about 20 years, the environmental impacts of producing the scrubber over its lifetime are negligible compared to producing heavy fuel oil,” she adds.
For the final piece, Stathatou spent a week onboard a bulk carrier vessel in China to measure emissions and gather seawater and washwater samples. The ship burned heavy fuel oil with a scrubber and low-sulfur fuels under similar ocean conditions and engine settings.
Collecting these onboard data was the most challenging part of the study.
“All the safety gear, combined with the heat and the noise from the engines on a moving ship, was very overwhelming,” she says.
Their results showed that scrubbers reduce sulfur dioxide emissions by 97 percent, putting heavy fuel oil on par with low-sulfur fuels according to that measure. The researchers saw similar trends for emissions of other pollutants like carbon monoxide and nitrous oxide.
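A rough back-of-the-envelope calculation using only the figures quoted in this article shows why a 97 percent cut puts heavy fuel oil in the same range as the cap (in practice, compliance for scrubber-equipped ships is assessed on measured exhaust SO2/CO2 ratios rather than this simple equivalence):

```python
# Back-of-the-envelope check using figures quoted above; illustrative only.
fuel_sulfur = 0.025        # heavy fuel oil: roughly 2-3% sulfur by mass
scrubber_removal = 0.97    # scrubbers cut SO2 emissions by about 97%

equivalent_sulfur = fuel_sulfur * (1 - scrubber_removal)
print(f"Effective sulfur emissions: {equivalent_sulfur:.3%} of fuel mass")
# ~0.075% -- well under the 0.5% global cap, and close to the 0.1% limit
# that applies near population centers and sensitive areas.
```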
In addition, they tested washwater samples for more than 60 chemical parameters, including nitrogen, phosphorus, polycyclic aromatic hydrocarbons, and 23 metals.
The concentrations of chemicals regulated by the IMO were far below the organization’s requirements. For unregulated chemicals, the researchers compared the concentrations to the strictest limits for industrial effluents from the U.S. Environmental Protection Agency and European Union.
Most chemical concentrations were at least an order of magnitude below these requirements.
In addition, since washwater is diluted thousands of times as it is dispersed by a moving vessel, the concentrations of such chemicals would be even lower in the open ocean.
These findings suggest that the use of scrubbers with heavy fuel oil can be considered as equal to or more environmentally friendly than low-sulfur fuels across many of the impact categories the researchers studied.
“This study demonstrates the scientific complexity of the waste stream of scrubbers. Having finally conducted a multiyear, comprehensive, and peer-reviewed study, commonly held fears and assumptions are now put to rest,” says Scott Bergeron, managing director at Oldendorff Carriers and co-author of the study.
“This first-of-its-kind study on a well-to-wake basis provides very valuable input to ongoing discussion at the IMO,” adds Thomas Klenum, executive vice president of innovation and regulatory affairs at the Liberian Registry, emphasizing the need “for regulatory decisions to be made based on scientific studies providing factual data and conclusions.”
Ultimately, this study shows the importance of incorporating lifecycle assessments into future environmental impact reduction policies, Stathatou says.
“There is all this discussion about switching to alternative fuels in the future, but how green are these fuels? We must do our due diligence to compare them equally with existing solutions to see the costs and benefits,” she adds.
This study was supported, in part, by Oldendorff Carriers.
MIT graduate engineering and business programs ranked highly by U.S. News for 2025-26
U.S. News and World Report has again placed MIT’s graduate program in engineering at the top of its annual rankings, released today. The Institute has held the No. 1 spot since 1990, when the magazine first ranked such programs.
The MIT Sloan School of Management also placed highly, in rankings announced April 8. It occupies the No. 5 spot for the best graduate business programs.
Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering, chemical engineering, computer engineering (tied with the University of California at Berkeley), electrical/electronic/communications engineering (tied with Stanford University and Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering and third in biomedical engineering/bioengineering.
In the rankings of individual MBA specialties, MIT placed first in four areas: information systems, production/operations, project management, and supply chain/logistics. It placed second in business analytics and third in entrepreneurship.
U.S. News bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of graduate programs in the sciences, social sciences, and humanities are based solely on reputational surveys. Among the peer-review disciplines ranked this year, MIT placed first in computer science, and its doctoral program in economics also placed first (tied with Harvard University, Stanford, Berkeley, and the University of Chicago).
Supersize me
Well into the late 19th century, the U.S. retail sector was overwhelmingly local, consisting of small, independent merchants throughout the country. That started changing after Sears and Roebuck’s famous catalog became popular, allowing the firm to grow, while a rival, Montgomery Ward, also expanded. By the 1930s, the U.S. had 130,000 chain stores, topped by Atlantic and Pacific supermarkets (the A&P), with over 15,000 stores.
A century onward, the U.S. retail landscape is dominated by retail giants. Today, 90 percent of Americans live within 10 miles of a Walmart, while five of the country’s 10 biggest employers — Walmart, Amazon, Home Depot, Kroger, and Target — are retailers. Two others in the top 10, UPS and FedEx, are a major part of the retail economy.
The ubiquity of these big retailers, and the sheer extent of the U.S. shopping economy as a whole, is unusual compared to the country’s European counterparts. Domestic consumption plays an outsized role in driving growth in the United States, and credit plays a much larger role in supporting that consumption than in Europe. The U.S. has five times as much retail space per capita as Japan and the U.K., and 10 times as much as Germany. Unlike in Europe, shopping hours are largely unregulated.
How did this happen? To be sure, Walmart, Amazon, Target, and other massive chains have plenty of business acumen. But the full story involves a century or more of political tectonics and legal debates, which helped shape the size of U.S. retailing and the prominence of its large discount chains.
“The markets that we take as given, that we think of as the natural outcome of supply and demand, are heavily shaped by policy and by politics,” says MIT political scientist Kathleen Thelen.
Thelen examines the subject in a new book, “Attention, Shoppers! American Retail Capitalism and the Origins of the Amazon Economy,” published today by Princeton University Press. In it, she examines the growth of the particular model of supersized, low-cost, low-wage retailing that now features so prominently in the U.S. economy.
Prioritizing prices
While a great deal has been written about specific American companies, Thelen’s book has some distinctive features. One is a comparison to the economies of Europe, where she has focused much of her scholarship. Another is her historical lens, extending back to the start of chain retailing.
“It seems like every time I set out to explain something in the present, I’m thrown back to the 19th century,” Thelen says.
For instance, as both Sears and Montgomery Ward grew, producers and consumers were still experimenting with alternative commercial arrangements, like cooperatives, which pooled suppliers together. But those arrangements ultimately ran into economic and, especially at the time, legal headwinds.
“Antitrust laws in the United States were very forbearing toward big multidivisional corporations and very punitive toward alternative types of arrangements like cooperatives, so big retailers got a real boost in that period,” Thelen says. Separately, the U.S. Postal Service was also crucial, since big mail order houses like Sears relied not just on its delivery services but also on its money order system to sell goods to the company’s many customers who lacked bank accounts.
Smaller retailers fought large chains during the Depression, especially in the South and the West, which forms another phase of the story. But low-cost discounters worked around some laws through regulatory arbitrage, finding friendlier regulations in some states — and sometimes through outright rule-breaking. Ultimately, larger retailers have thrived again in the last half century, especially as antitrust law increasingly prioritized consumer prices as its leading measuring stick.
Most antitrust theorizing since the 1960s “valorizes consumer welfare, which is basically defined as price, so anything that delivers the lowest price to consumers is A-OK,” Thelen says. “We’re in this world where the large, low-cost retailers are delivering consumer welfare in the way the courts are defining it.”
That emphasis on prices, she notes, then spills over into other areas of the economy, especially wages and labor relations.
“If you prioritize prices, one of the main ways to reduce prices is to reduce labor costs,” Thelen says. “It’s no coincidence that low-cost discounters are often low-wage employers. Indeed, they often squeeze their vendors to deliver goods at ever-lower prices, and by extension they’re pressing down on wages in their supplier networks as well.”
As Thelen’s book explains, legal views supporting large chains were also common during the first U.S. wave of chain-retail growth. She writes, “large, low-cost retailers have almost always enjoyed a privileged position in the American antitrust regime.”
In the “deep equilibrium”
“Attention, Shoppers!” makes clear that this tendency toward lower prices, lower employee pay, and high consumer convenience is particularly pronounced in the U.S., where 22.6 percent of employees count as low-wage workers (making two-thirds or less of the country’s median wage). In the other countries that belong to the Organization for Economic Cooperation and Development, 13.9 percent of workers fit that description. About three-quarters of U.S. retail workers are in the low-wage category.
In other OECD countries, on aggregate, manufacturers and producers make up bigger chunks of the economy and, correspondingly, often have legal frameworks more friendly to manufacturers and to labor. But in the U.S., large retailers have gained more leverage, if anything, in the last half-century, Thelen notes.
“You might think mass retailers and manufacturers would have a symbiotic relationship, but historically there has been great tension between them, especially on price,” Thelen says. “In the postwar period, the balance of power became tilted toward retailers, and away from manufacturers and labor. Retailers also had consumers on their side, and had more power over data to dictate the terms on which their vendors would supply goods to them.”
Currently, as Thelen writes in the book, the U.S. is in a “deep equilibrium” on this front, in that many low-wage workers now rely on these low-cost retailers to make ends meet — and because Americans as a whole now find it normal to have their purchases delivered at lightning speed. Things might be different, Thelen suggests, if there are changes to U.S. antitrust enforcement, or, especially, major reforms to labor law, such as allowing workers to organize for higher wages across companies, not just at individual stores. Short of that, the equilibrium is likely to hold.
“Attention, Shoppers!” has received praise from other scholars. Louis Hyman, a historian at Johns Hopkins University, has called it a “pathbreaking study that provides insight into not only the past but also the future of online retail.”
For her part, Thelen hopes readers will learn more about an economic landscape we might take for granted, even while we shop at big chains, around us and online.
“The triumph of these types of retailers was not inevitable,” Thelen says. “It was a function of politics and political choice.”
A new way to bring personal items to mixed reality
Think of your most prized belongings. In an increasingly virtual world, wouldn’t it be great to save a copy of that precious item and all the memories it holds?
In mixed-reality settings, you can create a digital twin of a physical item, such as an old doll. But it’s hard to replicate interactive elements, like the way it moves or the sounds it makes — the sorts of unique interactive features that made the toy distinct in the first place.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) sought to change that, and they have a potential solution. Their “InteRecon” program enables users to recapture real-world objects in a mobile app, and then animate them in mixed-reality environments.
The prototype can recreate an object’s physical-world interactions, such as the head motions of your favorite bobblehead or the playback of a classic video on a digital version of your vintage TV. It creates more lifelike and personal digital surroundings while preserving a memory.
InteRecon’s ability to reconstruct the interactive experience of different items could make it a useful tool for teachers explaining important concepts, like demonstrating how gravity pulls an object down. It could also add a new visual component to museum exhibits, such as animating a painting or bringing a historical mannequin to life (without the scares of characters from “Night at the Museum”). Eventually, InteRecon may be able to teach a doctor’s apprentice organ surgery or a cosmetic procedure by visualizing each motion needed to complete the task.
The exciting potential of InteRecon comes from its ability to add motions or interactive functions to many different objects, according to CSAIL visiting researcher Zisu Li, lead author of a paper introducing the tool.
“While taking a picture or video is a great way to preserve a memory, those digital copies are static,” says Li, who is also a PhD student at the Hong Kong University of Science and Technology. “We found that users wanted to reconstruct personal items while preserving their interactivity to enrich their memories. With the power of mixed reality, InteRecon can make these memories live longer in virtual settings as interactive digital items.”
Li and her colleagues will present InteRecon at the 2025 ACM CHI conference on Human Factors in Computing Systems.
Making a virtual world more realistic
To make digital interactivity possible, the team first developed an iPhone app. Using your camera, you scan the item all the way around three times to ensure it’s fully captured. The 3D model can then be imported into the InteRecon mixed reality interface, where you can mark (“segment”) individual areas to select which parts of the model will be interactive (like a doll’s arms, head, torso, and legs). Alternatively, you can use the function provided by InteRecon for automatic segmentation.
The InteRecon interface can be accessed via a mixed-reality headset (such as the HoloLens 2 or Quest). Once your model is segmented, it allows you to choose a programmable motion for each part of the item you want to animate.
Movement options are presented as motion demonstrations, allowing you to play around with them before deciding on one — say, a flopping motion that emulates how a bunny doll’s ears move. You can even pinch a specific part and explore different ways to animate it, like sliding, dangling, and pendulum-like turns.
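Put together, the capture-segment-animate workflow reads roughly like the outline below. All of the names are hypothetical placeholders for illustration; they are not InteRecon’s actual classes or API.

```python
# Hypothetical outline of the workflow described above; the functions are
# stand-ins for illustration, not InteRecon's actual API.

def scan_item(passes: int = 3) -> dict:
    """Stand-in for the phone app's capture step: three passes around
    the object yield a 3D model that can be imported into the interface."""
    return {"mesh": "bunny_doll.obj", "parts": {}}

def segment(model: dict, mode: str = "automatic") -> dict:
    """Stand-in for segmentation: mark parts manually in the mixed-reality
    interface, or use the automatic option."""
    model["parts"] = {name: {"motion": None} for name in ("ears", "head", "torso")}
    return model["parts"]

def attach_motion(part: dict, motion: str) -> None:
    """Stand-in for previewing motion demonstrations and assigning one."""
    part["motion"] = motion

model = scan_item(passes=3)                  # scan the physical item
parts = segment(model, mode="automatic")     # split it into animatable parts
attach_motion(parts["ears"], "pendulum")     # flopping bunny ears
attach_motion(parts["head"], "dangling")     # bobblehead-style nod
```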
Your old iPod, digitized
The team showed that InteRecon can also recapture the interface of physical electronic devices, like a vintage TV. After making a digital copy of the item, you can customize the 3D model with different interfaces.
Users can play with example widgets from different interfaces before choosing a motion: a screen (either a TV display or camera’s viewfinder), a rotating knob (for, say, adjusting the volume), an “on/off”-style button, and a slider (for changing settings on something like a DJ booth).
Li and colleagues presented an application that recreates the interactivity of a vintage TV by incorporating virtual widgets such as an “on/off” button, a screen, and a channel switch on a TV model, along with embedding old videos into it. This makes the TV model come to life. You could also upload MP3 files and add a “play button” to a 3D model of an iPod to listen to your favorite songs in mixed reality.
The researchers believe InteRecon opens up intriguing new avenues in designing lifelike virtual environments. A user study confirmed that people from different fields share this enthusiasm, viewing it as easy to learn and diverse in its ability to express the richness of users’ memories.
“One thing I really appreciate is that the items that users remember are imperfect,” says Faraz Faruqi SM ’22, another author on the paper who is also a CSAIL affiliate and MIT PhD student in electrical engineering and computer science. “InteRecon brings those imperfections into mixed reality, accurately recreating what made a personal item like a teddy bear missing a few buttons so special.”
In a related study, users imagined how this technology could be applied to professional scenarios, from teaching medical students how to perform surgeries to helping travelers and researchers log their trips, and even assisting fashion designers in experimenting with materials.
Before InteRecon is used in more advanced settings, though, the team would like to upgrade their physical simulation engine to something more precise. This would enable applications such as helping a doctor’s apprentice to learn the pinpoint accuracy needed to do certain surgical maneuvers.
Li and Faruqi may also incorporate large language models and generative models that can recreate lost personal items into 3D models via language descriptions, as well as explain the interface’s features.
As for the researchers’ next steps, Li is working toward a more automatic and powerful pipeline that can make interactivity-preserved digital twins of larger physical environments in mixed reality for end users, such as a virtual office space. Faruqi is looking to build an approach that can physically recreate lost items via 3D printers.
“InteRecon represents an exciting new frontier in the field of mixed reality, going beyond mere visual replication to capture the unique interactivity of physical objects,” says Hanwang Zhang, an associate professor at Nanyang Technological University's College of Computing and Data Science, who wasn’t involved in the research. “This technology has the potential to revolutionize education, health care, and cultural exhibitions by bringing a new level of immersion and personal connection to virtual environments.”
Li and Faruqi wrote the paper with the Hong Kong University of Science and Technology (HKUST) master’s student Jiawei Li, PhD student Shumeng Zhang, Associate Professor Xiaojuan Ma, and assistant professors Mingming Fan and Chen Liang from HKUST; ETH Zurich PhD student Zeyu Xiong; and Stefanie Mueller, the TIBCO Career Development Associate Professor in the MIT departments of Electrical Engineering and Computer Science and Mechanical Engineering, and leader of the HCI Engineering Group. Their work was supported by the APEX Lab of The Hong Kong University of Science and Technology (Guangzhou) in collaboration with the HCI Engineering Group.
The human body, its movement, and music
Watching and listening to a pianist’s performance is an immersive and enjoyable experience. The pianist and the instrument, with a blend of skill, training, and presence, create a series of memorable moments for themselves and the audience. But is there a way to improve the performance and our understanding of how the performer and their instrument work together to create this magic, while also minimizing performance-related injuries?
Mi-Eun Kim, director of keyboard studies in MIT’s Music and Theater Arts Section, and Praneeth Namburi PhD ’16, a research scientist in MIT’s Institute for Medical Engineering and Science, are investigating how the body works when pianists play. Their joint project, The Biomechanics of Assimilating a New Piano Skill, aims to develop mechanistic insights that could transform how we understand and teach piano technique, reduce performance-related injuries, and bridge the gap between artistic expression and biomechanical efficiency.
Their project is among those recently selected for a SHASS+ Connectivity Fund grant through the MIT Human Insight Collaborative.
“The project emerged from a convergence of interests and personal experiences,” Namburi says. “Mi-Eun witnessed widespread injuries among fellow pianists and saw how these injuries could derail careers.”
Kim is a renowned pianist who has performed on stages throughout the United States, in Europe, and in Asia. She earned the Liszt-Garrison Competition’s Liszt Award and the Corpus Christi solo prize, among other honors. She teaches piano and chamber music through MIT Music’s Emerson/Harris Program and chamber music through MIT’s Chamber Music Society. She earned advanced degrees from the University of Michigan and holds a bachelor of arts degree in history from Columbia University.
Namburi’s work focuses on the biomechanics of efficient, expressive, and coordinated movement. He draws inspiration from artists and athletes in specialized movement disciplines, such as dancing and fencing, to investigate skilled movement. He earned a PhD in experimental neuroscience from MIT and a bachelor of engineering degree in electrical and electronic engineering from Singapore’s Nanyang Technological University.
Pursuing the project
Kim and Namburi arrived at their project by taking different roads into the arts. While Kim was completing her studies at the University of Michigan, Namburi was taking dance lessons as a hobby in Boston. He learned that both expressive and sustainable movements might share a common denominator. “A key insight was that elastic tissues play a crucial role in coordinated, expressive, and sustainable movements in dance — a principle that could extend beyond dancing,” he notes.
“We recognized that studying elastic tissues could shed light on reducing injury risk, as well as understanding musical expression and embodiment in the context of piano playing,” Kim says.
Kim and Namburi began collaborating on what would become their project in October 2023, though the groundwork was in place months before. “A visiting student working with me on a research project studying pianists in the MIT.nano Immersion Lab reached out to Mi-Eun in summer 2023,” Namburi recalls. A shared Instagram video showing their setup with motion capture sensors and a pianist playing Chopin on a digital keyboard sparked Kim’s interest. The Immersion Lab is an open-access, shared facility for MIT and beyond dedicated to visualizing, understanding, and interacting with large, multidimensional data.
“I couldn't make sense of all the sensors, but immediately noticed they were using a digital keyboard,” she says.
Kim wanted to elevate these studies’ quality by pairing the musicians with the proper equipment and instrument. While the digital pianos they’d previously used are portable and provide musical instrument digital interface (MIDI) data, they don’t offer the same experience as a real piano. “Pianists dream of playing on an ideal instrument — a 9-foot concert grand with perfectly regulated 24-inch keys that responds to every musical intention without resistance,” Kim says.
The researchers brought a Spirio grand piano to the Immersion Lab and observed that the instrument could both capture pianists’ hammerstrike velocities and reproduce them to play back the performance. Monitoring Kim’s performance on the concert grand piano, for example, both noted marked differences in her playing style.
“Despite all the sensors, lighting, and observers, playing felt so natural that I forgot I was in a lab,” she says. “I could focus purely on the music, without worrying about adapting to a smaller keyboard or digital sound.”
This setup allowed them to observe pianists’ natural movements, which was exactly what Kim wanted to study.
During Independent Activities Period 2025, Kim and Namburi hosted a new course, Biomechanics of Piano Playing, in the Immersion Lab. Students and faculty from MIT, Harvard University, the University of Michigan, the University of Toronto, and the University of Hartford took part. Participants learned how to use motion capture, accelerometers, and ultrasound imaging to visualize signals from the body during piano playing.
Observations and outcomes
If the efficiency and perceived fluency of an expert pianist’s movements come from harnessing the body’s inherent elastic mechanisms, Kim and Namburi believe, it’s possible to redesign how piano playing is taught. Both want to reduce playing-related injuries and improve how musicians learn their craft.
“I want us to bridge the gap between artistic expression and biomechanical efficiency,” Namburi says.
Through their exploratory sessions at the Immersion Lab, Kim and Namburi found common ground, using sensor technology, including ultrasound, to document what they observed and experienced in piano playing and dance.
Beyond these, Kim saw potential for transforming piano pedagogy. “Traditional teaching relies heavily on subjective descriptions and metaphors passed down through generations,” she says. “While valuable, these approaches could be enhanced with objective, scientific understanding of the physical mechanisms behind skilled piano performance — evidence-driven piano pedagogy, if you will.”
Remembering Juanita Battle: “Everything about her was just happy”
MIT Health Student Health Plan Research and Resolution Specialist Juanita Battle passed away on Jan. 14. She was 70.
Battle was best known throughout the MIT community as one of the friendly faces and voices that students encountered whenever they had a question about their health insurance. For more than 17 years, Juanita was there to help students navigate the complexities of the U.S. health-care system.
“Juanita really cared about the students,” remembers Affiliate Health Plan Representative Lawanda Santiago. Whenever Battle was on a call with a student, you knew that call could take 20 minutes. “She would always go above and beyond.”
Sheila Sanchez, lead student health plan research and resolution specialist, agrees. “There was nothing she wouldn’t do to make sure that the student had a good experience when it came to some insurance question. She made sure that the student was always heard, always happy.”
“At the end of any conversation, she knew the student’s name, where they were from, what their mother’s name was, and even their favorite color,” says Sanchez.
“Juanita was the outward face of the MIT Student Health Insurance Plan,” adds David Tytell, MIT Health’s director of marketing and communications. “Whenever there was a call for volunteers to help promote student insurance, like Campus Preview Weekend, Juanita was always the first to raise her hand.” Her detailed, clear explanations of difficult insurance concepts were featured in multiple MIT Health videos.
“She also had a ‘crush’ on Tim the Beaver,” says Tytell. “She would instantly become a kid again whenever Tim entered the room, and she never missed an opportunity to take a selfie with him.”
Battle’s friends also recall her passion for dining out. “Juanita loved food! When we would go out to eat, Juanita would have the menu memorized before we even got there,” says Sanchez. “She had already done her research, read Yelp reviews, looked at pictures, figured out her top three favorite things, and even had recommendations for everybody else!”
“She especially loved tiramisu,” says Santiago.
Battle’s laugh was infectious. She was known for always looking at the bright side of things and had the uncanny ability to make a joke out of just about anything. Halloween was her favorite holiday, and she would always dress up and pose for pictures. “One of my last encounters with Juanita was last Halloween,” says Tytell. “I came back from a meeting to find a trick-or-treat bag filled with candy and a note from Juanita on my desk.”
“She didn’t let anything affect her attitude,” says Sanchez. “Everything about her was just happy.”
3Q: MIT’s Lonnie Petersen on the first medical X-ray taken in space
Many of us have gotten an X-ray at one time or another, either at the dentist’s or the doctor’s office. Now, astronauts orbiting Earth have shown it’s possible to take an X-ray in space. The effort will help future space explorers diagnose and monitor medical conditions, from fractures and sprains to signs of bone decalcification, while in orbit.
Last week, crew members aboard the Fram2 mission posted to social media and shared the first-ever medical X-ray image taken in space. The image is a black-and-white scan of a hand with a ring, echoing the very first X-ray image ever taken, 130 years ago, by the physicist Wilhelm Roentgen, of his wife’s hand. The new X-ray image was taken in microgravity, inside a four-person space capsule flying at orbital speeds of 17,500 miles per hour, about 200 miles above the Earth’s surface.
The in-flight body scan was part of the SpaceXray project, one of 22 science experiments that astronauts conducted during the Fram2 mission. Operated by SpaceX, Fram2 was the first human spaceflight mission to travel in a polar orbit, looping around the planet from pole to pole. Fram2 gets its mission name from the Norwegian ship “Fram,” which was the first to carry explorers to the Arctic and Antarctic regions in the late 19th century.
The body scans are a first demonstration that medical X-ray imaging can be done within the confines and conditions in space. Lonnie Petersen, a co-investigator on the SpaceXray project, is an associate professor in MIT’s Department of Aeronautics and Astronautics who studies space physiology and the effects of spaceflight on the human body. Petersen helped to define and design the protocol around the SpaceXray project, in collaboration with institutional partners such as Stanford University and the Mayo Clinic, and X-ray hardware companies KA and MinXray. Petersen talked with MIT News about how these first in-orbit X-ray images can help enable safe and healthy longer-term missions in space.
Q: What are the challenges in taking an X-ray in space, versus here on Earth?
A: There are several challenges regarding the hardware, methods, and subjects being X-rayed.
To get hardware certified for spaceflight, it should be miniaturized and as lightweight as possible. There are also increased safety requirements because all devices work in a confined space. The increased requirements drive technology development. I always say space is our best technology accelerator — this is also true for medical technology.
For this project we used a portable, specialized X-ray generator and detector developed by MinXray and KA Imaging for the battlefield and made it applicable for spaceflight.
In terms of methods, one of my concerns was that the increased background radiation might reduce the quality of the image so that it would fall below clinical standards. From the first images we have received from space, it seems that the quality is great. I am very excited to further analyze the full set of images.
We want the X-rays to travel straight through the body part of interest. This requires alignment of equipment and patient. As you can imagine, a floating subject will be harder to position. We will be quantifying any potential impact of this and using it for future updates.
We also do not have radiologists or technicians in space. The methods need to be simple and robust enough for a layperson to operate them.
And, finally, regarding subjects: Entry into space has a huge impact on the human body. Blood and fluid are no longer pulled down toward the feet by gravity; they are evenly distributed, and thus there are regional pressures and perfusion changes. The cardiovascular system and the brain are impacted by this over time. Mechanical unloading of the body leads to muscle atrophy and bone decalcification, as well as a reduction in exercise capacity. This mission was only 3.5 days, so the crew will likely not have experienced many negative effects, but with an X-ray, we can now monitor bone health in space. We have never been able to do that before. We can monitor potential fluid buildup in the lungs or check for diseases in the abdomen.
I’ll also take off my physician hat and put on my engineering hat: X-rays are a useful tool in nondestructive hardware tests in aviation (and other areas). This project increases our diagnostic capabilities in space, not just for patients, but also for hardware.
Q: How did the Fram2 crew do it?
A: The crew learned how to take X-rays in one afternoon. It was done as a train-the-trainer model. The protocol was created in advance and the crew took images of each other, checked the quality, and stored the images. We have only seen one image so far, but from that, we are very impressed with the quality, the skills, and the dedication to advancing science by the crew.
Q: What will you learn from these first images?
A: First and foremost, this was a technology demonstration: Can we even do this in space? We are looking forward to analyzing all the images, but from preliminary data it looks like we absolutely can. Now comes a detailed analysis to tease out all the lessons we possibly can learn from this both with regard to current capabilities but also the next steps. The team is, of course, very excited to carry this research forward and break even more ground.
Molecules that fight infection also act on the brain, inducing anxiety or sociability
Immune molecules called cytokines play important roles in the body’s defense against infection, helping to control inflammation and coordinating the responses of other immune cells. A growing body of evidence suggests that some of these molecules also influence the brain, leading to behavioral changes during illness.
Two new studies from MIT and Harvard Medical School, focused on a cytokine called IL-17, now add to that evidence. The researchers found that IL-17 acts on two distinct brain regions — the amygdala and the somatosensory cortex — to exert two divergent effects. In the amygdala, IL-17 can elicit feelings of anxiety, while in the cortex it promotes sociable behavior.
These findings suggest that the immune and nervous systems are tightly interconnected, says Gloria Choi, an associate professor of brain and cognitive sciences, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the studies.
“If you’re sick, there’s so many more things that are happening to your internal states, your mood, and your behavioral states, and that’s not simply you being fatigued physically. It has something to do with the brain,” she says.
Jun Huh, an associate professor of immunology at Harvard Medical School, is also a senior author of both studies, which appear today in Cell. One of the papers was led by Picower Institute Research Scientist Byeongjun Lee and former Picower Institute research scientist Jeong-Tae Kwon, and the other was led by Harvard Medical School postdoc Yunjin Lee and Picower Institute postdoc Tomoe Ishikawa.
Behavioral effects
Choi and Huh became interested in IL-17 several years ago, when they found it was involved in a phenomenon known as the fever effect. Large-scale studies of autistic children have found that for many of them, their behavioral symptoms temporarily diminish when they have a fever.
In a 2019 study in mice, Choi and Huh showed that in some cases of infection, IL-17 is released and suppresses a small region of the brain’s cortex known as S1DZ. Overactivation of neurons in this region can lead to autism-like behavioral symptoms in mice, including repetitive behaviors and reduced sociability.
“This molecule became a link that connects immune system activation, manifested as a fever, to changes in brain function and changes in the animals’ behavior,” Choi says.
IL-17 comes in six different forms, and there are five different receptors that can bind to it. In their two new papers, the researchers set out to map which of these receptors are expressed in different parts of the brain. This mapping revealed that a pair of receptors known as IL-17RA and IL-17RB is found in the cortex, including in the S1DZ region that the researchers had previously identified. The receptors are located in a population of neurons that receive proprioceptive input and are involved in controlling behavior.
When a type of IL-17 known as IL-17E binds to these receptors, the neurons become less excitable, which leads to the behavioral effects seen in the 2019 study.
“IL-17E, which we’ve shown to be necessary for behavioral mitigation, actually does act almost exactly like a neuromodulator in that it will immediately reduce these neurons’ excitability,” Choi says. “So, there is an immune molecule that’s acting as a neuromodulator in the brain, and its main function is to regulate excitability of neurons.”
Choi hypothesizes that IL-17 may have originally evolved as a neuromodulator, and later on was appropriated by the immune system to play a role in promoting inflammation. That idea is consistent with previous work showing that in the worm C. elegans, IL-17 has no role in the immune system but instead acts on neurons. Among its effects in worms, IL-17 promotes aggregation, a form of social behavior. Additionally, in mammals, IL-17E is actually made by neurons in the cortex, including S1DZ.
“There’s a possibility that a couple of forms of IL-17 perhaps evolved first and foremost to act as a neuromodulator in the brain, and maybe later were hijacked by the immune system also to act as immune modulators,” Choi says.
Provoking anxiety
In the other Cell paper, the researchers explored another brain location where they found IL-17 receptors — the amygdala. This almond-shaped structure plays an important role in processing emotions, including fear and anxiety.
That study revealed that in a region known as the basolateral amygdala (BLA), the IL-17RA and IL-17RE receptors, which work as a pair, are expressed in a discrete population of neurons. When these receptors bind to IL-17A and IL-17C, the neurons become more excitable, leading to an increase in anxiety.
The researchers also found that, counterintuitively, if animals are treated with antibodies that block IL-17 receptors, it actually increases the amount of IL-17C circulating in the body. This finding may help to explain unexpected outcomes observed in a clinical trial of a drug targeting the IL-17RA receptor for psoriasis treatment, particularly regarding its potential adverse effects on mental health.
“We hypothesize that there’s a possibility that the IL-17 ligand that is upregulated in this patient cohort might act on the brain to induce suicide ideation, while in animals there is an anxiogenic phenotype,” Choi says.
During infections, this anxiety may be a beneficial response, keeping the sick individual away from others to whom the infection could spread, Choi hypothesizes.
“Other than its main function of fighting pathogens, one of the ways that the immune system works is to control the host behavior, to protect the host itself and also protect the community the host belongs to,” she says. “One of the ways the immune system is doing that is to use cytokines, secreted factors, to go to the brain as communication tools.”
The researchers found that the same BLA neurons that have receptors for IL-17 also have receptors for IL-10, a cytokine that suppresses inflammation. This molecule counteracts the excitability generated by IL-17, giving the body a way to shut off anxiety once it’s no longer useful.
Distinctive behaviors
Together, the two studies suggest that the immune system, and even a single family of cytokines, can exert a variety of effects in the brain.
“We have now different combinations of IL-17 receptors being expressed in different populations of neurons, in two different brain regions, that regulate very distinct behaviors. One is actually somewhat positive and enhances social behaviors, and another is somewhat negative and induces anxiogenic phenotypes,” Choi says.
Her lab is now working on additional mapping of IL-17 receptor locations, as well as the IL-17 molecules that bind to them, focusing on the S1DZ region. Eventually, a better understanding of these neuro-immune interactions may help researchers develop new treatments for neurological conditions such as autism or depression.
“The fact that these molecules are made by the immune system gives us a novel approach to influence brain function as means of therapeutics,” Choi says. “Instead of thinking about directly going for the brain, can we think about doing something to the immune system?”
The research was funded, in part, by Jeongho Kim and the Brain Impact Foundation Neuro-Immune Fund, the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain, the Marcus Foundation, the N of One: Autism Research Foundation, the Burroughs Wellcome Fund, the Picower Institute Innovation Fund, the MIT John W. Jarve Seed Fund for Science Innovation, Young Soo Perry and Karen Ha, and the National Institutes of Health.
Breakerspace image contest showcases creativity, perseverance
The MIT Department of Materials Science and Engineering Breakerspace transformed into an art gallery on March 10, with six easels arranged in an arc to showcase arresting images — black-and-white scanning electron microscope (SEM) images of crumpled biological structures alongside the brilliant hues of digital optical microscopy.
The images were the winning entries from the inaugural Breakerspace Microscope Image Contest, which opened in fall 2024. The contest invited all MIT undergraduates to train on the Breakerspace’s microscopic instruments, explore material samples, and capture images that were artistic, instructive, or technically challenging.
“The goal of the contest is to inspire curiosity and creativity, encouraging students to explore the imaging tools in the Breakerspace,” says Professor Jeffrey Grossman of the Department of Materials Science and Engineering (DMSE). “We want students to see the beauty and complexity of materials at the microscopic level, to think critically about the images they capture, and to communicate what they mean to others.”
Grossman was a driving force behind the Breakerspace, a laboratory and lounge designed to encourage MIT undergraduates to explore the world of materials.
The contest drew about 50 entries across four categories:
- Most Instructive, for images illustrating key concepts with documentation
- Most Challenging, requiring significant sample preparation
- Best Optical Microscope Image of a sample, rendered in color
- Best Electron Microscope Image, magnified hundreds or even thousands of times
Winners in the four categories received $500, and two runners-up received $100.
“By making this a competition with prizes, we hope to motivate more students to explore microscopy and develop a stronger connection to the materials science community at MIT,” Grossman says.
A window onto research
Amelia How, a DMSE sophomore and winner of the Most Instructive category, used an SEM to show how hydrogen atoms seep into titanium — a phenomenon called hydrogen embrittlement, which can weaken metals and lead to material failure in applications such as aerospace, energy, or construction. The image stemmed from How’s research in Associate Professor Cem Tasan’s research lab, through MIT’s Undergraduate Research Opportunities Program (UROP). She trained on the SEM for the contest after seeing an email announcement.
“It helped me realize how to explain what I was actually doing,” How says, “because the work that I’m doing is something that’s going into a paper, but most people won’t end up reading that.”
Mishael Quraishi, a DMSE senior and winner of Best SEM Image, captured the flower Alstroemeria and its pollen-bearing structure, the anther. She entered the contest mainly to explore microscopy — but sharing that experience was just as rewarding.
“I really love how electron images look,” Quraishi says. “But as I was taking the images, I was also able to show people what pollen looked like at a really small scale — it’s kind of unrecognizable. That was the most fun part: sharing the image and then telling people about the technique.”
Quraishi, president of the Society of Undergraduate Materials Scientists, also organized the event, part of Materials Week, a student-run initiative that highlights the department’s people, research, and impact.
Persistence in practice
The winner of the Most Challenging category, DMSE sophomore Nelushi Vithanachchi, gained not just microscopy experience, but also perseverance. The category called for significant effort in sample preparation — and Vithanachchi spent hours troubleshooting.
Her sample — a carving of MIT’s Great Dome in silicon carbide — was made using a focused ion beam, a tool that sculpts materials by bombarding them with ions, or charged atoms. The process requires precision, as even minor shifts can ruin a sample.
In her first attempt, while milling the dome’s façade, the sample shifted and broke. A second try with a different design also failed. She credits her UROP advisor, Aaditya Bhat from Associate Professor James LeBeau’s research group, for pushing her to keep going.
“It was four in the morning, and after failing for the third time, I said, ‘I’m not doing this,’” Vithanachchi recalls. “Then Aaditya said, ‘No, we’ve got to finish what we started.’” After a fourth attempt, using the lessons learned from the previous failures, they were finally able to create a structure that resembled the MIT dome.
Anna Beck, a DMSE sophomore and runner-up for Best Electron Microscope Image, had a much different experience. “It was very relaxed for me. I just sat down and took images,” she says. Her entry was an SEM image of high-density polyethylene (HDPE) fibers from an event wrist band. HDPE is a durable material used in packaging, plumbing, and consumer goods.
Through the process, Beck gained insight into composition and microscopy techniques — and she’s excited to apply what she’s learned in the next competition in fall 2025. “In hindsight, I look at mine now and I wish I turned the brightness up a little more.”
Although 35 percent of the entries came from DMSE students, the majority — 65 percent — came from other majors or from first-year students.
With the first contest showcasing both creativity and technical skill, organizers hope even more students will take on the challenge, bringing fresh perspectives and discoveries to the microscopic world. The contest will run again in fall 2025.
“The inaugural contest brought in an incredible range of submissions. It was exciting to see students engage with microscopy in new ways and share their discoveries,” Grossman says. “The Breakerspace was designed for all undergraduates, regardless of major or experience level — whether they’re conducting research, exploring new materials, or simply curious about what something is made of. We’re excited to expand participation and encourage even more entries in the next competition.”
Lincoln Laboratory honored for technology transfer of hurricane-tracking satellites
The Federal Laboratory Consortium (FLC) has awarded MIT Lincoln Laboratory a 2025 FLC Excellence in Technology Transfer Award. The award recognizes the laboratory's exceptional efforts in commercializing microwave sounders hosted on small satellites called CubeSats. The laboratory first developed the technology for NASA, demonstrating that such satellites could work in tandem to collect hurricane data more frequently than previously possible and significantly improve hurricane forecasts. The technology is now licensed to the company Tomorrow.io, which will launch a large constellation of the sounder-equipped satellites to enhance hurricane prediction and expand global weather coverage.
"This FLC award recognizes a technology with significant impact, one that could enhance hourly weather forecasting for aviation, logistics, agriculture, and emergency management, and highlights the laboratory's important role in bringing federally funded innovation to the commercial sector," says Asha Rajagopal, Lincoln Laboratory's chief technology transfer officer.
A nationwide network of more than 300 government laboratories, agencies, and research centers, the FLC helps facilitate the transfer of technologies out of federal labs and into the marketplace to benefit the U.S. economy, society, and national security.
Lincoln Laboratory originally proposed and demonstrated the technology for NASA's TROPICS (Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of SmallSats) mission. For TROPICS, the laboratory put its microwave sounders on low-cost, commercially available CubeSats for the first time.
Of all the technology used for sensing hurricanes, microwave sounders provide the greatest improvement to forecasting models. From space, these instruments detect a range of microwave frequencies that penetrate clouds, allowing them to measure 3D temperature, humidity, and precipitation in a storm. State-of-the-art instruments are typically large (the size of a washing machine) and hosted aboard $2 billion polar-orbiting satellites, which collectively may revisit a storm every six hours. If sounders could be miniaturized, laboratory researchers imagined, then they could be put on small satellites and launched in large numbers, working together to revisit storms more often.
The TROPICS sounder is the size of a coffee cup. The laboratory team worked for several years to develop and demonstrate the technology that resulted in a miniaturized instrument, while maintaining performance on par with traditional sounders for the frequencies that provide the most useful tropical cyclone observations. By 2023, NASA launched a constellation of four TROPICS satellites, which have since collected rapidly refreshed data of many tropical storms.
Now, Tomorrow.io plans to increase that constellation to a global network of 18 satellites. The resulting high-rate observations — under an hour — are expected to improve weather forecasts, hurricane tracking, and early-warning systems.
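As a rough back-of-the-envelope illustration (a sketch only; it assumes evenly spaced satellites and ignores the orbital-plane geometry that mission planners must account for), spreading more sounder-equipped satellites around an orbit shortens the average gap between storm observations roughly in proportion to the number of satellites:

```python
def approx_revisit_minutes(baseline_revisit_hours, n_satellites):
    """Crude estimate: evenly spaced satellites divide the baseline revisit interval."""
    return baseline_revisit_hours * 60 / n_satellites

# Hypothetical baseline of a six-hour revisit for a single sounder in the same orbit.
for n in (1, 4, 18):
    print(f"{n:2d} satellites -> roughly {approx_revisit_minutes(6, n):.0f} minutes between looks")
```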
"This partnership with Tomorrow.io expands the impact of the TROPICS mission. Tomorrow.io’s increased constellation size, software pipeline, and resilient business model enable it to support a number of commercial and government organizations. This transfer to industry has resulted in a self-sustaining national capability, one that is expected to help the economy and the government for years to come," says Tom Roy, who managed the transfer of the technology to Tomorrow.io.
The technology transfer spanned 18 months. Under a cooperative research and development agreement (CRADA), the laboratory team adapted the TROPICS payload to an updated satellite design and delivered to Tomorrow.io the first three units, two of which were launched in September 2024. The team also provided in-depth training to Tomorrow.io and seven industry partners who will build, test, launch, and operate the future full commercial constellation. The remaining satellites are expected to launch before the end of this year.
"With these microwave sounders, we can set a new standard in atmospheric data collection and prediction. This technology allows us to capture atmospheric data with exceptional accuracy, especially over oceans and remote areas where traditional observations are scarce," said Rei Goffer, co-founder of Tomorrow.io, in a press release announcing the September launches.
Tomorrow.io will use the sounder data as input into their weather forecasts, data products, and decision support tools available to their customers, who range from major airlines to governments. Tomorrow.io's nonprofit partner, TomorrowNow, also plans to use the data as input to its climate model for improving food security in Africa.
This technology is especially relevant as hurricanes and severe weather events continue to cause significant destruction. In 2024, the United States experienced a near-record 27 disaster events that each exceeded $1 billion in damage, resulting in a total cost of approximately $182.7 billion, and that caused the deaths of at least 568 people. Globally, these storm systems cause thousands of deaths and billions of dollars in damage each year.
“It has been great to see the Lincoln Laboratory, Tomorrow.io, and industry partner teams work together so effectively to rapidly incorporate the TROPICS technology and bring the new Tomorrow.io microwave sounder constellation online,” says Bill Blackwell, principal investigator of the NASA TROPICS mission and the CRADA with Tomorrow.io. “I expect that the improved revisit rate provided by the Tomorrow.io constellation will drive further improvements in hurricane forecasting performance over and above what has already been demonstrated by TROPICS.”
The team behind the transfer includes Tom Roy, Bill Blackwell, Steven Gillmer, Rebecca Keenan, Nick Zorn, and Mike DiLiberto of Lincoln Laboratory and Kai Lemay, Scott Williams, Emma Watson, and Jan Wicha of Tomorrow.io. Lincoln Laboratory will be honored among other winners of 2025 FLC Awards at the FLC National Meeting to be held virtually on May 13.
Carsten Rasmussen, LEGO Group COO, discusses the production network that enables the builders of tomorrow
LEGOs are no strangers to many members of the MIT community. Faculty, staff, and students alike have developed a love of building and mechanics while playing with the familiar plastic bricks. In just a few hours, a heap of bricks can become a house, a ship, an airplane, or a cat. The simplicity lends itself to creativity and ingenuity, and it has inspired many MIT faculty members to bring LEGOs into the classroom, including class 2.S00 (Introduction to Manufacturing), where students use LEGO bricks to learn about manufacturing processes and systems.
It was perhaps no surprise, then, that the lecture hall in the MIT Schwarzman College of Computing was packed with students, faculty, staff, and guests to hear Carsten Rasmussen, chief operating officer of the LEGO Group, speak as part of the Manufacturing@MIT Distinguished Speaker Series on March 20.
In his engaging and inspiring talk, Rasmussen asked one of the most important questions in manufacturing: How do you balance innovation with sustainability while keeping a complex global supply chain running smoothly? He emphasized that success in modern manufacturing isn’t just about cutting costs — it’s about creating value across the entire network, and integrating every aspect of the business.
Successful manufacturing is all about balance
The way the toy industry views success is evolving, Rasmussen said. In the past, focusing on “cost, quality, safety, delivery, and service” may have been enough, but today’s landscape is far more demanding. “Now, it’s about availability, customers’ happiness, and innovation,” he said.
Rasmussen, who has been with the LEGO Group since 2001, started as a buyer before moving to various leadership roles within the organization. Today, he oversees the LEGO Group’s operations strategy, including manufacturing and supply chain planning, quality, engineering, and sales and operations planning.
“The way we can inspire the builders of tomorrow is basically, whatever we develop, we are able to produce, and we are able to sell,” he said.
The LEGO Group’s operations are intricate. Focusing on areas such as capacity and infrastructure, network utilization, analysis and design, and sustainability keeps the company true to its mission, “to inspire and develop the builders of tomorrow.” Within the organization, departments operate with a focus on how their decisions will impact the rest of the company. To do this, they need to communicate effectively.
Intuition and experience play a big role in effective decision-making
In a time where data analytics is a huge part of decision-making in manufacturing and supply-chain management, Rasmussen highlighted the importance of blending data with intuition and experience.
“Many of the decisions you have to make are very, very complex,” he explained. “A lot of the data you’re going to provide me is based on history. And what happened in history is not what you’re facing right now. So, you need to really be able to take great data and blend that with your intuition and your experience to make a decision.”
This shift reflects a broader trend in industries where leaders are beginning to see the benefits of looking beyond purely data-driven decision-making. With global supply chains disrupted by unforeseen events like the Covid-19 pandemic, there’s growing acknowledgement that historical data may not be the most effective way to predict the future. Rasmussen said that the audience should practice blending their own intuition and experience with data by asking themselves: “Does it make sense? Does it feel right?”
Prioritizing sustainability
Rasmussen also highlighted the LEGO Group’s ambitious sustainability goals, signaling that innovation cannot come at the expense of environmental responsibility. “There is no excuse for us to not leave a better planet for the next generation, for the next hundred years,” he said.
With an ambition to make its products from more renewable or recycled materials by 2032 and to eliminate single-use packaging, the company aims to lead a shift toward more environmentally friendly manufacturing, including an effort to turn waste into bricks.
Innovation doesn’t exist in a vacuum
Throughout his talk, Rasmussen underscored the importance of innovation. The only way to stay on top is to be constantly thinking of new ideas, he said.
“Are you daring to put new products into the market?” he asked, adding that it’s not enough to come up with a novel product or approach. How its implementation will work within the system is essential, too. “Our challenge that you need to help me with,” he said to the audience, “is how can we bring in innovation, because we can’t stand still either. We also need to be fit for the future … that is actually one of our bigger challenges.”
He reminded the audience that innovation is not a linear path. It involves risk, some failure, and continuous evolution. “Resilience is absolutely key,” he said.
Q&A
After his presentation, Rasmussen sat down with Professor John Hart for a brief Q&A, followed by audience questions. Among the questions that Hart asked Rasmussen was how he would respond to a designer who presented a model of an MIT-themed LEGO set, assuring Rasmussen it would break sales records. “Oh, I’ve heard that so many times,” Rasmussen laughed.
Hart asked what it would take to turn an idea into reality. “How long does it take from bricks to having it on my doorstep?” he asked.
“Typically, a new product takes between 12 to 18 months from idea to when we put it out on the market,” said Rasmussen, explaining that the process requires a good deal of integration and that there is a lot of planning to make sure that new ideas can be implemented across the organization.
Then the microphone was opened up to the crowd. The first audience questions came from Emerson Linville-Engler, the youngest audience member at just 5 years old, who wanted to know what the most difficult LEGO set to make was (the Technic round connector pieces), as well as Rasmussen’s favorite LEGO set (complex builds, like buildings or Technic models).
Other questions showcased how much LEGO inspired the audience. One member asked Rasmussen if it ever got old being told that he worked for a company that inspires the inner child. “No. It motivates me every single day when you meet them,” he said.
Through the Q&A, the audience was also able to ask more about the manufacturing process from ideas to execution, as well as whether Rasmussen was threatened by imitators (he welcomes healthy competition, but not direct copycats), and whether the LEGO Group plans on bringing back some old favorites (they are discussing whether to bring back old sets, but there are no set plans to do so at this time).
For the aspiring manufacturing leaders and innovators in the room, the lesson of Rasmussen’s talk was clear: Success isn’t just about making the right decision, it’s about understanding the entire system, having the courage to innovate, and being resilient enough to navigate unexpected challenges.
The event was hosted by the Manufacturing@MIT Working Group as part of the Manufacturing@MIT Distinguished Speaker Series. Past speakers include the TSMC founder Morris Chang, Office of Science and Technology Policy Director Arati Prabhakar, Under Secretary of Defense for Research and Engineering Heidi Shyu, and Pennsylvania Governor Tom Wolf.
New method assesses and improves the reliability of radiologists’ diagnostic reports
Due to the inherent ambiguity in medical images like X-rays, radiologists often use words like “may” or “likely” when describing the presence of a certain pathology, such as pneumonia.
But do the words radiologists use to express their confidence level accurately reflect how often a particular pathology occurs in patients? A new study shows that when radiologists express confidence about a certain pathology using a phrase like “very likely,” they tend to be overconfident, and vice versa when they express less confidence using a word like “possibly.”
Using clinical data, a multidisciplinary team of MIT researchers in collaboration with researchers and clinicians at hospitals affiliated with Harvard Medical School created a framework to quantify how reliable radiologists are when they express certainty using natural language terms.
They used this approach to provide clear suggestions that help radiologists choose certainty phrases that would improve the reliability of their clinical reporting. They also showed that the same technique can effectively measure and improve the calibration of large language models by better aligning the words models use to express confidence with the accuracy of their predictions.
By helping radiologists more accurately describe the likelihood of certain pathologies in medical images, this new framework could improve the reliability of critical clinical information.
“The words radiologists use are important. They affect how doctors intervene, in terms of their decision making for the patient. If these practitioners can be more reliable in their reporting, patients will be the ultimate beneficiaries,” says Peiqi Wang, an MIT graduate student and lead author of a paper on this research.
He is joined on the paper by senior author Polina Golland, a Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science (EECS), a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and the leader of the Medical Vision Group; as well as Barbara D. Lam, a clinical fellow at the Beth Israel Deaconess Medical Center; Yingcheng Liu, an MIT graduate student; Ameneh Asgari-Targhi, a research fellow at Massachusetts General Brigham (MGB); Rameswar Panda, a research staff member at the MIT-IBM Watson AI Lab; William M. Wells, a professor of radiology at MGB and a research scientist in CSAIL; and Tina Kapur, an assistant professor of radiology at MGB. The research will be presented at the International Conference on Learning Representations.
Decoding uncertainty in words
A radiologist writing a report about a chest X-ray might say the image shows a “possible” pneumonia, which is an infection that inflames the air sacs in the lungs. In that case, a doctor could order a follow-up CT scan to confirm the diagnosis.
However, if the radiologist writes that the X-ray shows a “likely” pneumonia, the doctor might begin treatment immediately, such as by prescribing antibiotics, while still ordering additional tests to assess severity.
Trying to measure the calibration, or reliability, of ambiguous natural language terms like “possibly” and “likely” presents many challenges, Wang says.
Existing calibration methods typically rely on the confidence score provided by an AI model, which represents the model’s estimated likelihood that its prediction is correct.
For instance, a weather app might predict an 83 percent chance of rain tomorrow. That model is well-calibrated if, across all instances where it predicts an 83 percent chance of rain, it rains approximately 83 percent of the time.
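As a rough illustration of this classical notion of calibration (a minimal sketch with made-up numbers, not code or data from the study), one can group predictions by their stated confidence and compare each group's average confidence with the frequency at which the event actually occurred:

```python
import numpy as np

def calibration_table(predicted_probs, outcomes, n_bins=5):
    """Group predictions into confidence bins and compare stated confidence
    with the observed frequency of the event in each bin."""
    predicted_probs = np.asarray(predicted_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)  # 1 if the event happened, else 0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (predicted_probs >= lo) & (predicted_probs < hi)
        if mask.any():
            rows.append((lo, hi, predicted_probs[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return rows

# Hypothetical forecasts: a forecaster who says "83 percent chance of rain" is
# well calibrated only if it rains on roughly 83 percent of those days.
probs = [0.83, 0.83, 0.83, 0.83, 0.83, 0.20, 0.20, 0.60]
rain  = [1,    1,    1,    1,    0,    0,    1,    1]
for lo, hi, conf, freq, n in calibration_table(probs, rain):
    print(f"confidence bin [{lo:.1f}, {hi:.1f}): stated {conf:.2f}, observed {freq:.2f}, n={n}")
```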
“But humans use natural language, and if we map these phrases to a single number, it is not an accurate description of the real world. If a person says an event is ‘likely,’ they aren’t necessarily thinking of the exact probability, such as 75 percent,” Wang says.
Rather than trying to map certainty phrases to a single percentage, the researchers’ approach treats them as probability distributions. A distribution describes the range of possible values and their likelihoods — think of the classic bell curve in statistics.
“This captures more nuances of what each word means,” Wang adds.
Assessing and improving calibration
The researchers leveraged prior work that surveyed radiologists to obtain probability distributions that correspond to each diagnostic certainty phrase, ranging from “very likely” to “consistent with.”
For instance, since more radiologists believe the phrase “consistent with” means a pathology is present in a medical image, its probability distribution climbs sharply to a high peak, with most values clustered around the 90 to 100 percent range.
In contrast, the phrase “may represent” conveys greater uncertainty, leading to a broader, bell-shaped distribution centered around 50 percent.
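One simple way to picture that idea (an illustrative sketch; the Beta parameterization and the specific numbers are assumptions for this example, not the survey-derived distributions used in the study) is to represent each certainty phrase as a probability distribution over how likely the pathology is to be present:

```python
from scipy.stats import beta

# Hypothetical Beta distributions shaped to mimic the descriptions above:
# "consistent with" peaks near the 90-100 percent range, while "may represent"
# is broad and centered around 50 percent.
phrase_distributions = {
    "consistent with": beta(18, 2),   # mean 0.90, sharply peaked
    "likely":          beta(8, 3),    # mean about 0.73
    "may represent":   beta(2, 2),    # mean 0.50, broad
}

for phrase, dist in phrase_distributions.items():
    lo, hi = dist.interval(0.8)  # central 80 percent of the distribution
    print(f"{phrase:>15}: mean {dist.mean():.2f}, 80% interval ({lo:.2f}, {hi:.2f})")
```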
Typical methods evaluate calibration by comparing how well a model’s predicted probability scores align with the actual number of positive results.
The researchers’ approach follows the same general framework but extends it to account for the fact that certainty phrases represent probability distributions rather than probabilities.
To improve calibration, the researchers formulated and solved an optimization problem that adjusts how often certain phrases are used, to better align confidence with reality.
They derived a calibration map that suggests certainty terms a radiologist should use to make the reports more accurate for a specific pathology.
“Perhaps, for this dataset, if every time the radiologist said pneumonia was ‘present,’ they changed the phrase to ‘likely present’ instead, then they would become better calibrated,” Wang explains.
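A heavily simplified version of that idea (a sketch under assumed numbers, not the optimization the researchers actually solved) is to compare how often a finding is confirmed when a phrase is used against each phrase's nominal meaning, and to suggest the closest-matching term:

```python
# Hypothetical nominal probabilities for three certainty phrases, and hypothetical
# rates at which the finding was actually confirmed when each phrase was used.
nominal  = {"may represent": 0.50, "likely present": 0.75, "present": 0.95}
observed = {"may represent": 0.45, "likely present": 0.70, "present": 0.78}

def suggest_phrase(observed_rate, nominal_meanings):
    """Return the phrase whose nominal meaning best matches the observed rate."""
    return min(nominal_meanings, key=lambda p: abs(nominal_meanings[p] - observed_rate))

for phrase, rate in observed.items():
    suggestion = suggest_phrase(rate, nominal)
    advice = "keep" if suggestion == phrase else f"consider '{suggestion}' instead"
    print(f"'{phrase}': finding confirmed {rate:.0%} of the time -> {advice}")
```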
When the researchers used their framework to evaluate clinical reports, they found that radiologists were generally underconfident when diagnosing common conditions like atelectasis, but overconfident with more ambiguous conditions like infection.
In addition, the researchers evaluated the reliability of language models using their method, providing a more nuanced representation of confidence than classical methods that rely on confidence scores.
“A lot of times, these models use phrases like ‘certainly.’ But because they are so confident in their answers, it does not encourage people to verify the correctness of the statements themselves,” Wang adds.
In the future, the researchers plan to continue collaborating with clinicians in the hopes of improving diagnoses and treatment. They are working to expand their study to include data from abdominal CT scans.
In addition, they are interested in studying how receptive radiologists are to calibration-improving suggestions and whether they can mentally adjust their use of certainty phrases effectively.
“Expression of diagnostic certainty is a crucial aspect of the radiology report, as it influences significant management decisions. This study takes a novel approach to analyzing and calibrating how radiologists express diagnostic certainty in chest X-ray reports, offering feedback on term usage and associated outcomes,” says Atul B. Shinagare, associate professor of radiology at Harvard Medical School, who was not involved with this work. “This approach has the potential to improve radiologists’ accuracy and communication, which will help improve patient care.”
The work was funded, in part, by a Takeda Fellowship, the MIT-IBM Watson AI Lab, the MIT CSAIL Wistrom Program, and the MIT Jameel Clinic.
Tabletop factory-in-a-box makes hands-on manufacturing education more accessible
For over a decade, through a collaboration managed by MIT.nano, MIT and Tecnológico de Monterrey (Tec), one of the largest universities in Latin America, have worked together to develop innovative academic and research initiatives with a particular focus in nanoscience and nanotechnology and, more recently, an emphasis on design and smart manufacturing. Now, the collaboration has also expanded to include undergraduate education. Seven Tec undergrads are developing methods to manufacture low-cost, desktop fiber-extrusion devices, or FrEDs, alongside peers at MIT in an “in-the-lab” teaching and learning factory, the FrED Factory.
“The FrED Factory serves as a factory-like education platform for manufacturing scale-up, enabling students and researchers to engage firsthand in the transition from prototype development to small-scale production,” says Brian Anthony, MIT.nano associate director and principal research scientist in the MIT Department of Mechanical Engineering (MechE).
Through on-campus learning, participants observe, analyze, and actively contribute to this process, gaining critical insights into the complexities of scaling manufacturing operations. The products of the FrED Factory are FrED kits — tabletop manufacturing kits that themselves produce fiber and that are used to teach smart manufacturing principles. “We’re thrilled to have students from Monterrey Tec here at MIT, bringing new ideas and perspectives, and helping to develop these new ways to teach manufacturing at both MIT and Tec,” says Anthony.
The FrED factory was originally built by a group of MIT graduate students in 2022 as their thesis project in the Master of Engineering in Advanced Manufacturing and Design program. They adapted and scaled the original design of the device, built by Anthony’s student David Kim, into something that could be manufactured into multiple units at a substantially lower cost. The resulting computer-aided design files were shared with Tec de Monterrey for use by faculty and students. Since launching the FrED curriculum at Tec in 2022, MIT has co-hosted two courses led by Tec faculty: “Mechatronics Design: (Re) Design of FrED,” and “Automation of Manufacturing Systems: FrED Factory Challenge.”
New this academic year, undergraduate Tec students are participating in FrED Factory research immersions. The students engage in collaborative FrED projects at MIT and then return to Tec to implement their knowledge — particularly to help replicate and implement what they have learned, with the launch of a new FrED Factory at Tec de Monterrey this spring. The end goal is to fully integrate this project into Tec’s mechatronics engineering curriculum, in which students learn about automation and robotics firsthand through the devices.
Russel Bradley, a PhD student in MechE supervised by Anthony, is the project lead of FrED Factory and has been working closely with the undergraduate Tec students.
“The process of designing and manufacturing FrEDs is an educational experience in itself,” says Bradley. “Unlike a real factory, which likely wouldn’t welcome students to experiment with the machines, the FrED factory provides an environment where you can fail and learn.”
The Tec undergrads are divided into groups working on specific projects, including Development of an Education 4.0 Framework for FrED, Immersive Technology (AR) for Manufacturing Operations, Gamifying Advanced Manufacturing Education in FrED Factory, and Immersive Cognitive Factory Twins.
Sergio Siller Lobo is a Tec student working on the development of the education framework for FrED. He and other students are revising the code to make the interface more student-friendly and to help students learn while working with the devices. They are focused particularly on helping students engage with the topics of control systems, computer vision, and internet of things (IoT), both in a digital course they are developing and in direct work with the devices. The digital course can be presented by an instructor or completed independently by students.
“Students can be learning the theory with the digital courses, as well as having access to hands-on, practical experience with the device,” says Siller Lobo. “You can have the best of both ways of learning, both the practical and the theoretical.”
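To give a sense of the kind of control-systems exercise such a course might include (a hypothetical sketch; the control variable, setpoint, and gains here are invented for illustration and are not taken from the actual FrED curriculum), students might close a simple feedback loop around a measured fiber diameter:

```python
# A minimal proportional-integral controller of the sort students might implement
# on a FrED-style device. On real hardware, the diameter would come from a camera
# and the controller output would drive an actuator; here a toy plant stands in.
class PIController:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

controller = PIController(kp=0.8, ki=0.2, dt=0.1)
target_mm, measured_mm = 0.50, 0.65  # hypothetical target and initial fiber diameter

for step in range(5):
    command = controller.update(target_mm, measured_mm)
    measured_mm += 0.3 * command  # toy plant response to the actuator command
    print(f"step {step}: diameter {measured_mm:.3f} mm, command {command:+.3f}")
```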
Arik Gómez Horita, an undergrad from Tec who has also been working on the education framework, says that the technology currently available for teaching students about control systems, computer vision, and IoT is often limited in either its capability or its quantity.
“A key aspect of the value of the FrEDs is that we are integrating all these concepts and a module for education into a single device,” says Gómez Horita. “Bringing FrED into a classroom is a game-changer. Our main goal is trying to put FrED into the hands of the teacher, to use it for all its teaching capabilities.”
Once the students return to Tec de Monterrey with the educational modules they’ve developed, there will be workshops with the FrEDs and opportunities for Tec students to use their own creativity and iterate on the devices.
“The FrED is really a lab in a box, and one of the best things that FrEDs do is create data,” says Siller Lobo. “Finding new ways to get data from FrED gives it more value.”
Tec students Ángel Alarcón and André Mendoza are preparing to have MIT students test the FrED factory, running a simulation with the two main roles of engineer and operator. The operator role assembles the FrEDs within the workstations that simulate a factory. The engineer role analyzes the data created on the factory side by the operator and tries to find ways to improve production.
“This is a very immersive way to teach manufacturing systems,” says Alarcón. “Many students studying manufacturing, undergraduate and even graduate, finish their education never having even gone to an actual factory. The FrED Factory gives students the valuable opportunity to get to know what a factory is like and experience an industry environment without having to go off campus.”
The data gained from the workstations — including cycle time and defects in an operation — will be used to teach different topics about manufacturing. Ultimately, the FrED factory at Tec will be used to compare the benefits and drawbacks of automation versus manual labor.
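A small sketch of the kind of analysis the engineer role might perform (hypothetical station logs and metric definitions, not part of the actual FrED Factory software) could compute defect rates and throughput from workstation data:

```python
# Hypothetical per-station logs: units assembled, defective units, and the
# average cycle time in seconds for one assembly.
stations = {
    "station_1": {"units": 8, "defects": 1, "cycle_time_s": 1500},
    "station_2": {"units": 6, "defects": 2, "cycle_time_s": 1800},
    "station_3": {"units": 9, "defects": 0, "cycle_time_s": 1350},
}

for name, log in stations.items():
    defect_rate = log["defects"] / log["units"]
    throughput_per_hour = 3600 / log["cycle_time_s"]
    print(f"{name}: defect rate {defect_rate:.1%}, "
          f"throughput {throughput_per_hour:.1f} units/hour (ideal, ignoring downtime)")
```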
Bradley says that the Tec students bring a strong mechatronics background that adds a lot of important insights to the project, and beyond the lab, it’s also a valuable multicultural exchange.
“It’s not just about what the students are learning from us,” says Bradley, “but it’s really a collaborative process in which we’re all complementing each other.”
Taking the “training wheels” off clean energy
Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.
“What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.
The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.
Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.
“We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”
A shifting clean energy landscape
During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.
Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.
Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.
During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”
David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”
“Everything on the table”
The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.
Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.
By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.
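Taken at face value, those two figures imply a striking energy density for fusion fuel (a back-of-the-envelope calculation that assumes the plant runs continuously at 400 megawatts for a full year; actual duty cycles and fuel handling will differ):

```python
# Rough arithmetic from the figures quoted above: 400 MW for one year on 50 kg of fuel.
power_watts = 400e6
seconds_per_year = 365.25 * 24 * 3600
fuel_kg = 50

energy_joules = power_watts * seconds_per_year   # about 1.3e16 J per year
energy_per_kg = energy_joules / fuel_kg          # about 2.5e14 J per kg of fuel

print(f"Annual energy output: {energy_joules:.2e} J")
print(f"Energy per kilogram of fuel: {energy_per_kg:.2e} J/kg")
# For scale, burning coal releases roughly 2.4e7 J per kg, about ten million times less.
```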
Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs.
“We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.
One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.
“I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”
The road ahead
Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy.
“The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”
During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”
Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.
World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”
Surprise discovery could lead to improved catalysts for industrial reactions
The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.
A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.
Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.
There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.
“What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”
He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”
The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.
While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.
The reaction that produces vinyl acetate requires something to activate oxygen, one of the reaction’s constituents, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it is the surface that activates the oxygen.
They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.
The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”
By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.
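To make the idea of a rate-limiting corrosion step concrete, here is a minimal toy model, with invented rate constants and no connection to the team’s actual measurements, that treats the catalyst as cycling between a surface form and a dissolved molecular form:

```python
# Toy two-step catalytic cycle: a surface site corrodes into a soluble
# molecular species (rate constant k_corr), which forms product and
# redeposits as surface metal (rate constant k_cat). All numbers here are
# invented for illustration; this is not the paper's kinetic model.

def cycle_turnover_frequency(k_corr: float, k_cat: float) -> float:
    """Steady-state turnovers per catalyst site per second.

    For a two-step cycle the average time per turnover is the sum of the
    times spent waiting for each step, so TOF = 1 / (1/k_corr + 1/k_cat).
    """
    return 1.0 / (1.0 / k_corr + 1.0 / k_cat)

# Example: slow corrosion (0.01 per second) paired with fast molecular
# chemistry (10 per second).
k_corr, k_cat = 0.01, 10.0
tof = cycle_turnover_frequency(k_corr, k_cat)
print(f"Turnover frequency: {tof:.4f} per site per second")
# The result is ~0.0100, essentially equal to k_corr: when corrosion is much
# slower than the molecular chemistry, it sets the overall rate, which is the
# qualitative picture described above.
```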
The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.
“Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Because the vinyl acetate process has been refined for so long, the findings may not necessarily lead to improvements in that specific process, but they do provide a better understanding of why the materials work as they do and could lead to improvements in other catalytic processes.
Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.
Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”
This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. ... [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”
The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.
Engineers develop a way to mass manufacture nanoparticles that deliver cancer drugs directly to tumors
Polymer-coated nanoparticles loaded with therapeutic drugs show significant promise for cancer treatment, including ovarian cancer. These particles can be targeted directly to tumors, where they release their payload while avoiding many of the side effects of traditional chemotherapy.
Over the past decade, MIT Institute Professor Paula Hammond and her students have created a variety of these particles using a technique known as layer-by-layer assembly. They’ve shown that the particles can effectively combat cancer in mouse studies.
To help move these nanoparticles closer to human use, the researchers have now come up with a manufacturing technique that allows them to generate larger quantities of the particles, in a fraction of the time.
“There’s a lot of promise with the nanoparticle systems we’ve been developing, and we’ve been really excited more recently with the successes that we’ve been seeing in animal models for our treatments for ovarian cancer in particular,” says Hammond, who is also MIT’s vice provost for faculty and a member of the Koch Institute for Integrative Cancer Research. “Ultimately, we need to be able to bring this to a scale where a company is able to manufacture these on a large level.”
Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Advanced Functional Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital and a visiting scientist at the Koch Institute, and Ezra Gordon ’24 are the lead authors of the paper. Heikyung Suh, an MIT research technician, is also an author.
A streamlined process
More than a decade ago, Hammond’s lab developed a novel technique for building nanoparticles with highly controlled architectures. This approach allows layers with different properties to be laid down on the surface of a nanoparticle by alternately exposing the surface to positively and negatively charged polymers.
Each layer can be embedded with drug molecules or other therapeutics. The layers can also carry targeting molecules that help the particles find and enter cancer cells.
Using the strategy that Hammond’s lab originally developed, one layer is applied at a time, and after each application the particles go through a centrifugation step to remove any excess polymer. This is time-intensive and would be difficult to perform at large scale, the researchers say.
More recently, a graduate student in Hammond’s lab developed an alternative approach to purifying the particles, known as tangential flow filtration. While this streamlined the process, it was still limited by its manufacturing complexity and maximum production scale.
“Although the use of tangential flow filtration is helpful, it’s still a very small-batch process, and a clinical investigation requires that we would have many doses available for a significant number of patients,” Hammond says.
To create a larger-scale manufacturing method, the researchers used a microfluidic mixing device that allows them to sequentially add new polymer layers as the particles flow through a microchannel within the device. For each layer, the researchers can calculate exactly how much polymer is needed, which eliminates the need to purify the particles after each addition.
“That is really important because separations are the most costly and time-consuming steps in these kinds of systems,” Hammond says.
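As a purely illustrative sketch of that kind of dosing calculation, with hypothetical particle sizes, batch counts, and surface coverages rather than the team’s actual formulation, the polymer needed for one layer can be estimated from the total particle surface area:

```python
import math

# Hypothetical back-of-envelope estimate of how much polymer one layer needs.
# Particle size, batch size, and surface coverage are invented numbers; the
# actual formulation used by the researchers is not given here.

particle_radius_m = 50e-9            # assumed 100 nm diameter core
particles_per_batch = 1e12           # assumed number of particles in the batch
coverage_mg_per_m2 = 1.0             # assumed polymer adsorbed per unit area
overdose_factor = 1.0                # set >1 only if excess must be removed later

area_per_particle_m2 = 4 * math.pi * particle_radius_m ** 2
total_area_m2 = area_per_particle_m2 * particles_per_batch
polymer_needed_mg = total_area_m2 * coverage_mg_per_m2 * overdose_factor

print(f"Total particle surface area: {total_area_m2:.3e} m^2")
print(f"Polymer to inject for this layer: {polymer_needed_mg:.3f} mg")
# Dosing close to the amount the particle surfaces can actually bind is what
# removes the purification step between layers in the flow-based approach.
```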
This strategy eliminates the need for manual polymer mixing, streamlines production, and integrates good manufacturing practice (GMP)-compliant processes. The FDA’s GMP requirements ensure that products meet safety standards and can be manufactured in a consistent fashion; meeting those requirements would have been highly challenging and costly with the previous step-wise batch process. The microfluidic device that the researchers used in this study is already employed for GMP manufacturing of other types of nanoparticles, including mRNA vaccines.
“With the new approach, there’s much less chance of any sort of operator mistake or mishaps,” Pires says. “This is a process that can be readily implemented in GMP, and that’s really the key step here. We can create an innovation within the layer-by-layer nanoparticles and quickly produce it in a manner that we could go into clinical trials with.”
Scaled-up production
Using this approach, the researchers can generate 15 milligrams of nanoparticles (enough for about 50 doses) in just a few minutes, while the original technique would take close to an hour to create the same amount. This could enable the production of more than enough particles for clinical trials and patient use, the researchers say.
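Taking the round numbers above at face value, and assuming “a few minutes” means about five, a quick calculation illustrates the throughput gain:

```python
# Rough throughput comparison based only on the figures quoted above.
# "A few minutes" is taken to be 5 minutes here, which is an assumption.

batch_mg = 15.0            # nanoparticle mass per run, from the article
doses_per_batch = 50       # approximate doses per 15 mg, from the article

new_minutes_per_batch = 5.0      # assumed value for "a few minutes"
old_minutes_per_batch = 60.0     # "close to an hour" for the same amount

new_mg_per_hour = batch_mg * 60.0 / new_minutes_per_batch
old_mg_per_hour = batch_mg * 60.0 / old_minutes_per_batch

print(f"New method: ~{new_mg_per_hour:.0f} mg/hour "
      f"(~{new_mg_per_hour / batch_mg * doses_per_batch:.0f} doses)")
print(f"Old method: ~{old_mg_per_hour:.0f} mg/hour "
      f"(~{old_mg_per_hour / batch_mg * doses_per_batch:.0f} doses)")
# Under these assumptions the flow approach yields roughly an order of
# magnitude more material per hour than the step-wise batch process.
```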
“To scale up with this system, you just keep running the chip, and it is much easier to produce more of your material,” Pires says.
To demonstrate their new production technique, the researchers created nanoparticles coated with a cytokine called interleukin-12 (IL-12). Hammond’s lab has previously shown that IL-12 delivered by layer-by-layer nanoparticles can activate key immune cells and slow ovarian tumor growth in mice.
In this study, the researchers found that IL-12-loaded particles manufactured using the new technique performed similarly to the original layer-by-layer nanoparticles. And not only do these nanoparticles bind to cancer tissue, they also show a unique ability to avoid entering the cancer cells. This allows the nanoparticles to serve as markers on the cancer cells that activate the immune system locally in the tumor. In mouse models of ovarian cancer, this treatment can delay tumor growth and even lead to cures.
The researchers have filed for a patent on the technology and are now working with MIT’s Deshpande Center for Technological Innovation in hopes of potentially forming a company to commercialize the technology. While they are initially focusing on cancers of the abdominal cavity, such as ovarian cancer, the work could also be applied to other types of cancer, including glioblastoma, the researchers say.
The research was funded by the U.S. National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, and the Koch Institute Support (core) Grant from the National Cancer Institute.