MIT Latest News
Study: More eyes on the skies will help planes reduce climate-warming contrails
Aviation’s climate impact is partly due to contrails — condensation that a plane streaks across the sky when it flies through icy and humid layers of the atmosphere. Contrails trap heat that radiates from the planet’s surface, and while the magnitude of this impact is uncertain, several studies suggest contrails may be responsible for about half of aviation’s climate impact.
Pilots could conceivably reduce their planes’ climate impact by avoiding contrail-prone regions, similarly to making altitude adjustments to avoid turbulence. But to do so requires knowing where in the sky contrails are likely to form.
To make these predictions, scientists are studying images of contrails that have formed in the past. Images taken by geostationary satellites are one of the main tools scientists use to develop contrail identification and avoidance systems.
But a new study shows there are limits to what geostationary satellites can see. MIT engineers analyzed contrail images taken with geostationary satellites, and compared them with images of the same areas taken by low-Earth-orbiting (LEO) satellites. LEO satellites orbit the Earth at lower altitudes and therefore can capture more detail. However, since LEO satellites only snap an image as they fly by, they capture images of the same area far less frequently than geostationary (GEO) satellites, which continuously image the same region of the Earth every few minutes.
The researchers found that geostationary satellites miss about 80 percent of the contrails that appear in LEO imagery. Geostationary satellites mainly see larger contrails that have had time to grow and spread across the atmosphere. The many more contrails that LEO satellites can pick up are often shorter and thinner. These finer threads likely formed immediately from a plane’s engines and are still too small or otherwise not distinct enough for geostationary satellites to discern.
The study highlights the need for a multiobservational approach in developing contrail identification and avoidance systems. The researchers emphasize that both GEO and LEO satellite images have their strengths and limitations. Observations from both sources, as well as images taken from the ground, could provide a more complete picture of contrails and how they evolve.
“With more ‘eyes’ on the sky, we could start to see what a contrail’s life looks like,” says Prakash Prashanth, a research scientist in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “Then you can understand what are its radiative properties over its entire life, and when and why a contrail is climatically important.”
The new study appears today in the journal Geophysical Research Letters. The study’s MIT co-authors include first author Marlene Euchenhofer, a graduate student in AeroAstro; Sydney Parke, an undergraduate student; Ian Waitz, the Jerome C. Hunsaker Professor of Aeronautics and Astronautics and MIT’s vice president of research; and Sebastian Eastham of Imperial College London.
Imaging backbone
Contrails form when the exhaust from planes meets icy, humid air, and the particles from the exhaust act as seeds on which water vapor collects and freezes into ice crystals. As a plane moves forward, it leaves a trail of condensation in its wake that starts as a thin thread that can grow and spread over large distances, lasting for several hours before dissipating.
When it persists, a contrail acts much like a natural ice cloud and, as such, can have two competing effects. On one hand, a contrail is a sort of heat shield, reflecting some incoming radiation from the sun. On the other hand, it can act as a blanket, absorbing some of the heat radiating from the surface and sending it back down. During the daytime, when the sun is shining, contrails can have both heat-shielding and heat-trapping effects. At night, the cloud-like threads have only a trapping, warming effect. On balance, studies have shown that contrails as a whole contribute to warming the planet.
There are multiple efforts underway to develop and test aircraft contrail-avoidance systems to reduce aviation’s climate-warming impact. And scientists are using images of contrails from space to help inform those systems.
“Geostationary satellite images are the workhorse of observations for detecting contrails,” says Euchenhofer. “Because they are at 36,000 kilometers above the surface, they can cover a wide area, and they look at the same point day and night so you can get new images of the same location every five minutes.”
But what geostationary satellites offer in frequency and coverage, they lack in clarity. The images they take are about one-fifth the resolution of those taken by LEO satellites. This wouldn’t be a surprise to most scientists. But Euchenhofer wondered how different the geostationary and LEO contrail pictures would look, and what opportunities there might be to improve the picture if both sources could be combined.
“We still think geostationary satellites are the backbone of observation-based avoidance because of the spatial coverage and the high frequency at which we get an image,” she says. “We think that the data could be enhanced if we include observations from LEO and other data sources like ground-based cameras.”
Catching the trail
In their new study, the researchers analyzed contrail images from two satellite imagers: the Advanced Baseline Imager (ABI), an instrument aboard a geostationary satellite that is typically used to observe contrails, and the higher-resolution Visible Infrared Imaging Radiometer Suite (VIIRS), an instrument onboard several LEO satellites.
For each month from December 2023 to November 2024, the team picked out an image of the contiguous United States taken by VIIRS during its flyby. They found corresponding images of the same location, taken at about the same time of day by the geostationary ABI. The images were taken in the infrared spectrum and represented in false color, which enabled the researchers to more easily identify contrails that formed during both the day and night. The researchers then worked by eye, zooming in on each image to identify, outline, and label each contrail they could see.
When they compared the images, they found that GEO images missed about 80 percent of the contrails observed in the LEO images. They also assessed the length and width of contrails in each image and found that GEO images mostly captured larger and longer contrails, while LEO images could also discern shorter, smaller contrails.
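The 80 percent figure comes from matching hand-labeled contrails across the two image sets. As a minimal sketch of that kind of bookkeeping (not the study's actual pipeline; the bounding boxes and overlap rule here are invented for illustration), one can count the LEO-detected contrails that have no counterpart among the GEO detections:

```python
# Toy comparison of contrail detections from a high-resolution LEO image
# and a coarser GEO image of the same scene. Contrails are represented as
# axis-aligned bounding boxes (x0, y0, x1, y1); a LEO contrail counts as
# "seen" by GEO if any GEO box overlaps it.

def overlaps(a, b):
    """True if two (x0, y0, x1, y1) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def geo_miss_rate(leo_boxes, geo_boxes):
    """Fraction of LEO-detected contrails with no overlapping GEO detection."""
    if not leo_boxes:
        return 0.0
    missed = sum(
        1 for leo in leo_boxes
        if not any(overlaps(leo, geo) for geo in geo_boxes)
    )
    return missed / len(leo_boxes)

# Five contrails visible in LEO imagery; GEO picks up only the largest one.
leo = [(0, 0, 50, 3), (10, 20, 14, 21), (30, 5, 33, 6),
       (60, 60, 62, 61), (70, 10, 72, 11)]
geo = [(0, 0, 50, 3)]
print(geo_miss_rate(leo, geo))  # → 0.8
```

A real analysis would work with contrail outlines rather than boxes and would have to account for timing offsets between the two images, but the core tallying looks like this.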
“We found 80 percent of the contrails we could see with LEO satellites, we couldn’t see with GEO imagers,” says Prashanth, who is the executive officer of MIT’s Laboratory for Aviation and the Environment. “That does not mean that 80 percent of the climate impact wasn’t captured. Because the contrails we see with GEO imagers are the bigger ones that likely have a bigger climate effect.”
Still, the study highlights an opportunity.
“We want to make sure this message gets across: Geostationary imagers are extremely powerful in terms of the spatial extent they cover and the number of images we can get,” Euchenhofer says. “But solely relying on one instrument, especially when policymaking comes into play, is probably too incomplete a picture to inform science and also airlines regarding contrail avoidance. We really need to fill this gap with other sensors.”
The team says other sensors could include networks of cameras on the ground that under ideal conditions can spot contrails as planes form them in real time. These smaller, “younger” contrails are typically missed by geostationary satellites. Once scientists have these ground-based data, they can match the contrail to the plane and use the plane’s flight data to identify the exact altitude at which the contrail appears. They could then track the contrail as it grows and spreads through the atmosphere, using geostationary images. Eventually, with enough data, scientists could develop an accurate forecasting model, in real time, to predict whether a plane is heading toward a region where contrails might form and persist, and how it could change its altitude to avoid the region.
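One step in that pipeline, attributing a ground-camera sighting to a specific aircraft, can be sketched as a nearest-neighbor search over flight-track data. This is a hypothetical illustration; the record fields, thresholds, and flight data below are invented:

```python
# Match a ground-camera contrail sighting to the nearest aircraft in
# ADS-B-style flight records by time and ground position.
import math

def nearest_flight(sighting, flights, max_km=20.0, max_s=120.0):
    """Return the flight record closest to a (time_s, lat, lon) sighting,
    or None if nothing is within the time and distance windows."""
    def ground_km(lat1, lon1, lat2, lon2):
        # Equirectangular approximation; fine for a ~20 km search radius.
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
        return 6371.0 * math.hypot(dlat, dlon)

    t, lat, lon = sighting
    candidates = [
        f for f in flights
        if abs(f["time_s"] - t) <= max_s
        and ground_km(lat, lon, f["lat"], f["lon"]) <= max_km
    ]
    return min(candidates,
               key=lambda f: ground_km(lat, lon, f["lat"], f["lon"]),
               default=None)

flights = [
    {"id": "DAL123", "time_s": 1000, "lat": 42.40, "lon": -71.00, "alt_ft": 36000},
    {"id": "UAL456", "time_s": 1005, "lat": 43.90, "lon": -70.50, "alt_ft": 34000},
]
match = nearest_flight((1010, 42.41, -71.02), flights)
print(match["id"], match["alt_ft"])  # → DAL123 36000
```

Once a sighting is tied to a flight record, the record's altitude gives the height at which the contrail formed, which is the quantity the avoidance models need.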
“People see contrail avoidance as a near-term and cheap opportunity to attack one of the hardest-to-abate sectors in transportation,” Prashanth says. “We don’t have a lot of easy solutions in aviation to reduce our climate impact. But it is premature to do so until we have better tools to determine where in the atmosphere contrails will form, to understand their relative impacts and to verify avoidance outcomes. We have to do this in a careful and rigorous manner, and this is where a lot of these pieces come in.”
This work was supported, in part, by the U.S. Federal Aviation Administration Office of Environment and Energy.
Anything-goes “anyons” may be at the root of surprising quantum experiments
In the past year, two separate experiments in two different materials captured the same confounding scenario: the coexistence of superconductivity and magnetism. Scientists had assumed that these two quantum states are mutually exclusive; the presence of one should inherently destroy the other.
Now, theoretical physicists at MIT have an explanation for how this Jekyll-and-Hyde duality could emerge. In a paper appearing today in the Proceedings of the National Academy of Sciences, the team proposes that under certain conditions, a magnetic material’s electrons could splinter into fractions of themselves to form quasiparticles known as “anyons.” At certain fractional charges, the quasiparticles should flow together without friction, similar to how regular electrons can pair up to flow in conventional superconductors.
If the team’s scenario is correct, it would introduce an entirely new form of superconductivity — one that persists in the presence of magnetism and involves a supercurrent of exotic anyons rather than everyday electrons.
“Many more experiments are needed before one can declare victory,” says study lead author Senthil Todadri, the William and Emma Rogers Professor of Physics at MIT. “But this theory is very promising and shows that there can be new ways in which the phenomenon of superconductivity can arise.”
What’s more, if the idea of superconducting anyons can be confirmed and controlled in other materials, it could provide a new way to design stable qubits — atomic-scale “bits” that interact quantum mechanically to process information and carry out complex computations far more efficiently than conventional computer bits.
“These theoretical ideas, if they pan out, could make this dream one tiny step within reach,” Todadri says.
The study’s co-author is MIT physics graduate student Zhengyan Darius Shi.
“Anything goes”
Superconductivity and magnetism are macroscopic states that arise from the behavior of electrons. A material is a magnet when the spins of the electrons in its atomic structure are largely aligned, creating a collective pull in the form of a magnetic field within the material as a whole. A material is a superconductor when electrons flowing through it as an electric current can couple up in “Cooper pairs.” In this teamed-up state, electrons can glide through a material without friction, rather than randomly knocking against its atomic latticework.
For decades, it was thought that superconductivity and magnetism should not co-exist; superconductivity is a delicate state, and any magnetic field can easily sever the bonds between Cooper pairs. But earlier this year, two separate experiments proved otherwise. In the first experiment, MIT’s Long Ju and his colleagues discovered superconductivity and magnetism in rhombohedral graphene — a synthesized material made from four or five graphene layers.
“It was electrifying,” says Todadri, who recalls hearing Ju present the results at a conference. “It set the place alive. And it introduced more questions as to how this could be possible.”
Shortly after, a second team reported similar dual states in the semiconducting crystal molybdenum ditelluride (MoTe2). Interestingly, the conditions under which MoTe2 becomes superconducting happen to be the same conditions in which the material exhibits an exotic “fractional quantum anomalous Hall effect,” or FQAH — a phenomenon in which any electron passing through the material should split into fractions of itself. These fractional quasiparticles are known as “anyons.”
Anyons are entirely different from the two main types of particles that make up the universe: bosons and fermions. Bosons are the extroverted particle type, as they prefer to be together and travel in packs. The photon is the classic example of a boson. In contrast, fermions prefer to keep to themselves, and repel each other if they are too near. Electrons, protons, and neutrons are examples of fermions. Together, bosons and fermions are the two major kingdoms of particles that make up matter in the three-dimensional universe.
Anyons, in contrast, exist only in two-dimensional space. This third type of particle was first predicted in the 1980s, and its name was coined by MIT’s Frank Wilczek, who meant it as a tongue-in-cheek reference to the idea that, in terms of the particle’s behavior, “anything goes.”
A few years after anyons were first predicted, physicists such as Robert Laughlin PhD ’79, Wilczek, and others also theorized that, in the presence of magnetism, the quasiparticles should be able to superconduct.
“People knew that magnetism was usually needed to get anyons to superconduct, and they looked for magnetism in many superconducting materials,” Todadri says. “But superconductivity and magnetism typically do not occur together. So then they discarded the idea.”
But with the recent discovery that the two states can, in fact, peacefully coexist in certain materials, and in MoTe2 in particular, Todadri wondered: Could the old theory, and superconducting anyons, be at play?
Moving past frustration
Todadri and Shi set out to answer that question theoretically, building on their own recent work. In their new study, the team worked out the conditions under which superconducting anyons could emerge in a two-dimensional material. To do so, they applied equations of quantum field theory, which describes how interactions at the quantum scale, such as the level of individual anyons, can give rise to macroscopic quantum states, such as superconductivity. The exercise was not an intuitive one, since anyons are known to stubbornly resist moving, let alone superconducting, together.
“When you have anyons in the system, what happens is each anyon may try to move, but it’s frustrated by the presence of other anyons,” Todadri explains. “This frustration happens even if the anyons are extremely far away from each other. And that’s a purely quantum mechanical effect.”
Even so, the team looked for conditions in which anyons might break out of this frustration and move as one macroscopic fluid. Anyons are formed when electrons splinter into fractions of themselves under certain conditions in two-dimensional, single-atom-thin materials, such as MoTe2. Scientists had previously observed that MoTe2 exhibits the FQAH, in which electrons fractionalize, without the help of an external magnetic field.
Todadri and Shi took MoTe2 as a starting point for their theoretical work. They modeled the conditions in which the FQAH phenomenon emerged in MoTe2, and then looked to see how electrons would splinter, and what types of anyons would be produced, as they theoretically increased the number of electrons in the material.
They noted that, depending on the material’s electron density, two types of anyons can form: anyons with either 1/3 or 2/3 the charge of an electron. They then applied equations of quantum field theory to work out how either of the two anyon types would interact, and found that when the anyons are mostly of the 1/3 flavor, they are predictably frustrated, and their movement leads to ordinary metallic conduction. But when anyons are mostly of the 2/3 flavor, this particular fraction encourages the normally stodgy anyons to instead move collectively to form a superconductor, similar to how electrons can pair up and flow in conventional superconductors.
“These anyons break out of their frustration and can move without friction,” Todadri says. “The amazing thing is, this is an entirely different mechanism by which a superconductor can form, but in a way that can be described as Cooper pairs in any other system.”
Their work revealed that superconducting anyons can emerge at certain electron densities. What’s more, they found that when superconducting anyons first emerge, they do so in a totally new pattern of swirling supercurrents that spontaneously appear in random locations throughout the material. This behavior is distinct from conventional superconductors and is an exotic state that experimentalists can look for as a way to confirm the team’s theory. If their theory is correct, it would introduce a new form of superconductivity, through the quantum interactions of anyons.
“If our anyon-based explanation is what is happening in MoTe2, it opens the door to the study of a new kind of quantum matter which may be called ‘anyonic quantum matter,’” Todadri says. “This will be a new chapter in quantum physics.”
This research was supported, in part, by the National Science Foundation.
Statement on Professor Nuno Loureiro
As the authorities work to answer remaining questions, our continuing position is to refer to the law enforcement agencies and the U.S. Attorney of Massachusetts for information.
For now, our focus is on our community, on Nuno’s family, and all those who knew him.
Remembering Nuno
- Letter to the MIT community from President Kornbluth (Dec. 19)
- MIT News Obituary (Dec. 16)
- Letter to the MIT community (Dec. 16)
“Wait, we have the tech skills to build that”
Students can take many possible routes through MIT’s curriculum, which can zigzag through different departments, linking classes and disciplines in unexpected ways. With so many options, charting an academic path can be overwhelming, but a new tool called NerdXing is here to help.
The brainchild of senior Julianna Schneider and other students in the MIT Schwarzman College of Computing Undergraduate Advisory Group (UAG), NerdXing lets students search for a class and see all the other classes students have gone on to take in the past, including options that are off the beaten track.
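A lookup like that can be built from anonymized per-student course histories. The following is a hypothetical sketch of the idea, not NerdXing's actual implementation; the course numbers and data structures are invented:

```python
# Given anonymized course histories (each a list of classes in the order
# a student took them), count which classes students went on to take
# after a given starting class.
from collections import Counter

def classes_taken_after(histories, course):
    """Count courses that appear after `course` in each student's history."""
    followers = Counter()
    for history in histories:
        if course in history:
            idx = history.index(course)
            followers.update(history[idx + 1:])
    return followers

histories = [
    ["6.100A", "6.1010", "6.3900"],
    ["6.100A", "18.06", "6.3900"],
    ["8.01", "8.02"],
]
# Two of the three students took 6.3900 after 6.100A.
print(classes_taken_after(histories, "6.100A")["6.3900"])  # → 2
```

Ranking the resulting counts surfaces both the common next steps and the rarer, off-the-beaten-track options the tool is meant to reveal.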
“I hope that NerdXing will democratize course knowledge for everyone,” Schneider says. “I hope that for anyone who's a freshman and maybe hasn't picked their major yet, that they can go to NerdXing and start with a class that they would maybe never consider — and then discover that, ‘Oh wait, this is perfect for this really particular thing I want to study.’”
As a student double-majoring in artificial intelligence and decision-making and in mathematics, and doing research in the Biomimetic Robotics Laboratory in the Department of Mechanical Engineering, Schneider knows the benefits of interdisciplinary studies. It’s a part of the reason why she joined the UAG, which advises the MIT Schwarzman College of Computing’s leadership as it advances education and research at the intersections between computing, engineering, the arts, and more.
Through all of her activities, Schneider seeks to make people’s lives better through technology.
“This process of finding a problem in my community and then finding the right technology to solve that — that sort of approach and that framework is what guides all the things I do,” Schneider says. “And even in robotics, the things that I care about are guided by the sort of skills that I think we need to develop to be able to have meaningful applications.”
From Albania to MIT
Before she ever touched a robot or wrote code, Schneider was an accomplished young classical pianist in Albania. When she discovered her passion for robotics at age 13, she applied some of the skills she had learned while playing piano.
“I think on some fundamental level, when I was a pianist, I thought constantly about my motor dynamics as a human being, and how I execute really complex skills but do it over and over again at the top of my ability,” Schneider says. “When it came to robotics, I was building these robotic arms that also had to operate at the top of their ability every time and do really complex tasks. It felt kind of similar to me, like a fun crossover.”
Schneider joined her high school’s robotics team as a middle schooler, and she was so immediately enamored that she ended up taking over most of the coding and building of the team’s robot. She went on to win 14 regional and national awards across the three teams she led throughout middle and high school. It was clear to her that she’d found her calling.
NerdXing wasn’t Schneider’s first experience building new technology. At just 16, she built an app meant to connect English-speaking volunteers from her international school in Tirana, Albania, to local charities that only posted jobs in Albanian. By last year, the platform, called VoluntYOU, had 18 ambassadors across four continents. It has enabled volunteers to give out more than 2,000 burritos in Reno, Nevada; register hundreds of signatures to support women’s rights legislation in Albania; and help with administering Covid-19 vaccines to more than 1,200 individuals a day in Italy.
Schneider says her experience at an international school encouraged her to recognize problems and solutions all around her.
“When I enter a new community and I can immediately be like, ‘Oh wait, if we had this tool, that would be so cool and that would help all these people,’ I think that’s just a derivative of having grown up in a place where you hear about everyone’s super different life experiences,” she says.
Schneider describes NerdXing as a continuation of many of the skills she picked up while building VoluntYOU.
“They were both motivated by seeing a challenge where I thought, ‘Wait, we have the tech skills to build that. This is something that I can envision the solution to.’ And then I wanted to actually go and make that a reality,” Schneider says.
Robotics with a positive impact
At MIT, Schneider started working in the Biomimetic Robotics Laboratory of Professor Sangbae Kim, where she has now participated in three research projects, one of which she’s co-authoring a paper on. She’s part of a team that tests how robots, including the famous back-flipping mini cheetah, move, in order to see how they could complement humans in high-stakes scenarios.
Most of her work has revolved around crafting controllers, including one hybrid-learning and model-based controller that is well-suited to robots with limited onboard computing capacity. It would allow the robot to be used in regions with less access to technology.
“It’s not just doing technology for technology's sake, but because it will bridge out into the world and make a positive difference. I think legged robotics have some of the best potential to actually be a robotic partner to human beings in the scenarios that are most high-stakes,” Schneider says.
Schneider hopes to further robotic capabilities so she can find applications that will service communities around the world. One of her goals is to help create tools that allow a surgeon to operate on a patient a long distance away.
To take a break from academics, Schneider has channeled her love of the arts into MIT’s vibrant social dancing scene. This year, she’s especially excited about country line dancing events where the music comes on and students have to guess the choreography.
“I think it's a really fun way to make friends and to connect with the community,” she says.
Q&A: The secret sauce behind successful collegiate dining
MIT Director of Dining Andrew Mankus has been serving the Institute community since his arrival on campus in June. He brings a wealth of energy and experience — and a problem-solver’s sensibilities — to food service at MIT. Most recently, he led dining at the University of Massachusetts at Amherst, which has won the top prize in student dining nine years in a row. Prior to that, Mankus worked in civic centers and large commissaries, among other dining environments. In this Q&A, Mankus speaks about what makes a standout dining environment on a college campus, his tenure so far at MIT, and some dining plans for the near future.
Q: What’s the secret sauce to success in academic dining?
A: You start with the obvious thing: The food’s got to be good. You can’t just serve pizza and chicken tenders, but you also can’t leave them out — students want and need their comfort food.
Students also want food that’s authentic. The dining hall is like their home away from home on campus. So if you’re calling something “Northern Indian,” it can’t taste like Southern Indian, because the student from Northern India knows exactly how it’s supposed to taste. And if someone tells me it doesn’t taste like what they had at home, I need to ask, “How should it taste? Let’s talk to our chefs.” Students should see that we’re willing to do that, willing to go there for them.
Collegiate dining is not like anything else in the food industry, because we are an integral part of the whole campus-life experience. We look at how dining can help build community around food and how to elevate cultural aspects and authenticity through food.
Q: How do you manage authenticity at a large scale?
A: It sounds silly, but it’s really one meal at a time. But as somebody who comes from an operations background, it’s also about standard operating procedures. We follow recipes, we know how many people are coming through the door, we know how much to prep — a whole bunch of things. You need to know how many students like certain things and how to be ready for them — so when you’re cooking things like stir fries, students can customize their own ingredients. It turns out you can cook something hot and fresh and make it authentic at the same time.
I like to tell people I didn’t go to school for culinary. I went to school for management. So basically, I’m a professional problem-solver. I just found my passion in food service. I like to solve problems, and MIT likes to solve problems. What better place to have this skill set?
Q: What were your first impressions of dining at MIT?
A: The thing about MIT is: The product is here. We just need to do the things we should be doing here — like integrating technology, providing service, updating meal plans, and the like — and do them better. There’s nothing Earth-shattering about it. We just need to elevate our program to new levels.
I will say, the geography of MIT’s campus is a real challenge. Many colleges have dining programs that are built around concentrated residential housing. This lets them serve a lot of meals in fewer locations. MIT has 11 dorms spread across campus. There are six dining halls and a dozen retail locations. Students who live on the west side of campus are often on east campus, away from their dining halls and meal plans, for most of the day. It’s a complicated landscape, and none of it is easy to change.
Q: What are your biggest lessons so far?
A: To start with: Every college student has limited time, and MIT students are certainly busy. In addition to course work, pretty much everybody is involved in an extracurricular activity or athletics for a couple of hours.
This is where campus dining can help. When students only have a 30-minute window between classes, we need to figure out how to feed them. If we can figure this out, it’s a win — and if we can do that with their meal plan, they’ll be more likely to eat on campus.
I’m also starting to understand MIT students’ value equation. That’s always the No. 1 thing — and I’m not just talking about the price of a meal plan. Value could mean a lot of different things. It definitely could be the cash, but it could also be quality, access, nutrition, convenience, operating hours, using swipes — whatever. I want to know how to make their meal plan as valuable to them as possible.
I don’t have the data for MIT just because I haven’t been here long enough, but it’s broadly true that college students eat a little more often than four times a day. They snack. They graze. Here, students don’t have the same options because of their schedules, the meal plans, and geography. We need to figure out where MIT students are and try to meet them there.
Basically, I want campus dining to lead with a students-first mentality. Does this or that idea bring value? Does it contribute to campus life and the student experience? If the answer is yes, then we move on to the next step. Let’s put all the ideas on the table, and let’s be transparent and tell the students: There are going to be things we try that work, and some things we try that might not.
Q: What’s an example of something you’ve tried since you got here?
A: I’ll give you three. First, we started a new grab-and-go lunch program in Baker. That’s very popular.
Second, we ran a promotion to give away MIT Dining Dollars to students on the meal plan and to students in cook-for-yourself locations. It was basically to provide more value in the meal plan and raise awareness about Dining Dollars, which students can use at any dining hall or retail location on campus, as well as at the Concord Market. When I met with students about it, they asked me: “What's the catch?” I told them, “It’s pretty simple. I want you to eat with us. I don’t want you to go across the street.” Also, it helps build morale with dining staff. People get into dining to make food for people to eat. They don’t make food so people can throw it away.
Third, we’re starting a student ambassador program. They will be an extension of our management team and will help us tell the story of campus dining through the lenses of students — how things are going on campus or in their houses.
Q: Do you have plans for working with graduate students at MIT?
A: This is a huge area of opportunity for dining at MIT. Graduate students are not on meal plans, because the plans don’t fit their needs, but many of them live on or near campus. What if there was some kind of pilot program that was more Dining Dollar-based, where it suits a graduate student and their family, or it doesn’t expire and can be very portable? I’m pretty sure we can come up with something that fits their needs better than grocery shopping and cooking for yourself in Cambridge.
Q: What’s your favorite thing to cook?
A: Lately, it’s been a chicken fricassee. It’s my wife’s father’s recipe. It’s Hungarian, like a paprikash chicken. You boil onions and water for a really long time and load it up with paprika. It takes hours to make. But when you do it right, it’s really, really good.
This is an edited version of an article first published by the MIT Division of Student Life.
Building reuse into the materials around us
In a field defined by discovering, designing, and processing the materials that underpin modern technology, Diran Apelian ScD ’73 has a resounding message: Reuse can’t remain just the focus of a PhD thesis or a startup. It needs to be engineered from the beginning.
Apelian, a metallurgist and MIT alumnus known for his pioneering work in molten metal processing, framed his plea with a look at society’s growing needs for materials like copper, nickel, iron, and manganese — and how demand for them has surged alongside population growth over the past 150 years.
“We’re using more and more stuff — that’s the takeaway,” said Apelian, the speaker for the MIT Department of Materials Science and Engineering (DMSE)’s Wulff Lecture on Nov. 19. “Now, where’s all this stuff coming from? It doesn’t come from Home Depot. It comes from the Earth — planet Earth — where we take the ores out of the Earth, and we have to extract them out.”
And more and more everyday goods depend on those ores, depleting the planet’s supplies while expending massive amounts of energy to do it, Apelian said. As one example, Apelian pointed out that computer chips, which incorporated 11 elements in 1980, now contain 52.
Instead of simply taking, processing, and eventually discarding materials — often after passing them through inefficient recycling systems — Apelian proposes another approach: designing materials and products so that the value inside them can be recovered.
Examples include aerospace-grade materials made from scrap aluminum alloys, optimized using AI-driven alloy blending, and shredding lithium-ion batteries to produce “black mass,” a mixture rich in cobalt, nickel, and lithium that can be refined into new cathode materials for the next generation of batteries.
“Sustainable growth, sustainability, the development of the planet Earth is a challenge,” said Apelian — one that materials scientists and engineers are in a prime position to tackle. “It’s a profound change, but it requires material issues and challenges that are also an opportunity for us.”
Reshaping materials design
The Wulff Lecture is no stranger to sustainability and climate issues — past speakers have discussed green iron and steel production and hydrogen-powered fuel cells. But what marked Apelian’s talk was a call for an overhaul of how materials are produced, used, and — crucially — used again. The key, he said, is “materials circularity,” which keeps Earth-derived minerals moving through the economy as long as possible, instead of being extracted, processed, used, and thrown away.
Apelian referenced the “materials tetrahedron,” the classic framework connecting processing, structure, properties, and performance — the foundation underlying the development of most materials around us. Highlighting what’s missing, he asked DMSE students about materials at the end of their life cycle: “You don’t really spend too much time on it, right?”
He proposed a new framework of concentric circles that reimagines the materials life cycle — from mining, extraction, processing, and design, to new phases focused on repair, reuse, remanufacturing, and recycling — “all the R’s,” he said.
One pathway to more sustainable materials use, Apelian said, is tackling post-consumer waste — the everyday products people throw away once they’re done using them.
“How can we take the waste and recover it and reuse it?” Apelian asked.
One example is aluminum scrap processing, which has seen several advances in recent years. Traditionally, end-of-life vehicles were stripped of valuable parts and fed through giant shredders; the resulting mix of metals was melted together, forfeiting much of its engineered value, and “downcycled” into cast alloys used for products like engine blocks or patio furniture.
Today, advancements in automated sensor-based sorting, machine learning and robotics, and improved melting practices mean aluminum scrap can now be directed into higher-value applications, including aerospace components and structural automotive parts — beams and supports that form a vehicle’s frame.
“So that’s the aim, that’s the motivation: creating value out of waste,” Apelian said.
He highlighted ongoing efforts to modernize scrap processing. He is a co-founder of Solvus Global Inc., which develops systems to convert metal scrap into high-value products, and Valis Insights, a Solvus spinout that uses sensor-based systems to identify and sort metal scraps with high precision.
At the University of California at Irvine — where Apelian serves as distinguished professor of materials science and engineering — his group is “studying the DNA” of mixed scrap, analyzing and testing blends to prepare them for high-value applications. He has also done significant work in lithium-ion battery recycling, including co-inventing the process, commercialized by Ascend Elements, that shreds batteries and produces as a byproduct the black mass used as feedstock for new cathode materials.
Believing in circularity
Apelian also pointed to ways of extracting value from industrial waste: recovering metals from red mud — the highly alkaline byproduct of aluminum production — and reclaiming rare-earth elements from mine tailings. And he spotlighted the work of Shaolou Wei ScD ’22, a DMSE alum joining the faculty in 2026, who has developed ways to bypass the long, energy-intensive sequences traditionally used to make many alloys — reducing energy consumption and eliminating processing steps.
Stressing that business models and policy play a critical role in enabling a circular economy, Apelian offered a scenario: “Right now, in America, when you buy a car, it’s yours. At the end of life, it’s your problem.” Owners can trade it in or sell it, but ultimately, they need to dispose of it, he said. He then mused about reversing this responsibility — requiring manufacturers to take cars back at end of life. “I’ve got to tell you, when that happens, things are going to be designed very differently.”
Audience member Evia Rodriguez, a senior in materials science and engineering, was struck by Apelian’s emphasis on circularity. She pointed to Patagonia — one of Apelian’s examples — as a company weaving circularity into its business model by encouraging customers to repair clothing instead of replacing it.
“That definitely represents an optimistic idea of what could happen,” Rodriguez said. “I tend to be more skeptical, but I like to think that we could get there someday, and that we could have all companies operating on a more sustainable front.”
First-year undergraduate Brandon Mata shared a similar outlook — balancing doubt with hope. “I think it’s easy to be pessimistic about how companies are going to act. You’re going to say people are always going to be greedy. They’re going to be selfish,” Mata said. “But regardless, I think it’s still important to have somebody like that saying, even just stating, ‘It’s important that we do this, and doing this would clearly benefit the world.’”
Yanna Tenorio, a first-year undergraduate who’s interested in the energy side of materials science, zoomed out to the overarching questions raised in the talk. “Thinking about what happens at the end of these materials’ life, how can they be reused? How can we take accountability for them?” Tenorio asked. “What I find very exciting about material science in general is how much there is to be discovered.”
Guided learning lets “untrainable” neural networks realize their potential
Even networks long considered “untrainable” can learn effectively with a bit of a helping hand. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown that a brief period of alignment between neural networks, a method they call guidance, can dramatically improve the performance of architectures previously thought unsuitable for modern tasks.
Their findings suggest that many so-called “ineffective” networks may simply begin from poor starting points, and that a short period of guidance can move them to a spot in parameter space that makes learning easier.
The team’s guidance method works by encouraging a target network to match the internal representations of a guide network during training. Unlike traditional methods like knowledge distillation, which focus on mimicking a teacher’s outputs, guidance transfers structural knowledge directly from one network to another. This means the target learns how the guide organizes information within each layer, rather than simply copying its behavior. Remarkably, even untrained networks contain architectural biases that can be transferred, while trained guides additionally convey learned patterns.
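The core idea, matching internal representations rather than outputs, can be sketched in a toy example. This is an illustration of the concept, not the researchers’ implementation: the “networks” here are single tanh layers, and the guidance phase is plain gradient descent on a representation-alignment loss.

```python
import math
import random

def hidden(weights, x):
    # A toy one-layer "network": each unit computes tanh(w_i * x).
    return [math.tanh(w * x) for w in weights]

def alignment_loss(h_target, h_guide):
    # Mean squared distance between the two networks' internal representations.
    return sum((a - b) ** 2 for a, b in zip(h_target, h_guide)) / len(h_target)

random.seed(0)
guide = [random.gauss(0, 1) for _ in range(4)]  # frozen guide (could be untrained)
target = [0.0, 0.0, 0.0, 0.0]                   # target starts at a poor point

before = alignment_loss(hidden(target, 0.5), hidden(guide, 0.5))

# Brief "guidance" phase: nudge the target's representation toward the guide's
# by descending the alignment loss, with no task labels involved at all.
lr = 0.5
for _ in range(300):
    x = random.uniform(-1.0, 1.0)
    hg, ht = hidden(guide, x), hidden(target, x)
    for i in range(len(target)):
        # d/dw_i of (tanh(w_i * x) - hg_i)^2 / n
        grad = 2.0 * (ht[i] - hg[i]) * (1.0 - ht[i] ** 2) * x / len(target)
        target[i] -= lr * grad

after = alignment_loss(hidden(target, 0.5), hidden(guide, 0.5))
print(after < before)  # the target's internal representation now tracks the guide's
```

The point of the sketch is that the alignment signal exists even when the guide has never been trained, because it comes from how the guide structures its activations, not from the correctness of its predictions.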
“We found these results pretty surprising,” says Vighnesh Subramaniam ’23, MEng ’24, MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher, who is a lead author on a paper presenting these findings. “It’s impressive that we could use representational similarity to make these traditionally ‘crappy’ networks actually work.”
Guide-ian angel
A central question was whether guidance must continue throughout training, or if its primary effect is to provide a better initialization. To explore this, the researchers performed an experiment with deep fully connected networks (FCNs). Before training on the real problem, the network spent a few steps practicing with another network using random noise, like stretching before exercise. The results were striking: Networks that typically overfit immediately remained stable, achieved lower training loss, and avoided the classic performance degradation seen in standard FCNs. This alignment acted like a helpful warmup for the network, showing that even a short practice session can have lasting benefits without needing constant guidance.
The study also compared guidance to knowledge distillation, a popular approach in which a student network attempts to mimic a teacher’s outputs. When the teacher network was untrained, distillation failed completely, since the outputs contained no meaningful signal. Guidance, by contrast, still produced strong improvements because it leverages internal representations rather than final predictions. This result underscores a key insight: Untrained networks already encode valuable architectural biases that can steer other networks toward effective learning.
Beyond the experimental results, the findings have broad implications for understanding neural network architecture. The researchers suggest that success — or failure — often depends less on task-specific data, and more on the network’s position in parameter space. By aligning with a guide network, it’s possible to separate the contributions of architectural biases from those of learned knowledge. This allows scientists to identify which features of a network’s design support effective learning, and which challenges stem simply from poor initialization.
Guidance also opens new avenues for studying relationships between architectures. By measuring how easily one network can guide another, researchers can probe distances between functional designs and reexamine theories of neural network optimization. Since the method relies on representational similarity, it may reveal previously hidden structures in network design, helping to identify which components contribute most to learning and which do not.
Salvaging the hopeless
Ultimately, the work shows that so-called “untrainable” networks are not inherently doomed. With guidance, failure modes can be eliminated, overfitting avoided, and previously ineffective architectures brought into line with modern performance standards. The CSAIL team plans to explore which architectural elements are most responsible for these improvements and how these insights can influence future network design. By revealing the hidden potential of even the most stubborn networks, guidance provides a powerful new tool for understanding — and hopefully shaping — the foundations of machine learning.
“It’s generally assumed that different neural network architectures have particular strengths and weaknesses,” says Leyla Isik, Johns Hopkins University assistant professor of cognitive science, who wasn’t involved in the research. “This exciting research shows that one type of network can inherit the advantages of another architecture, without losing its original capabilities. Remarkably, the authors show this can be done using small, untrained ‘guide’ networks. This paper introduces a novel and concrete way to add different inductive biases into neural networks, which is critical for developing more efficient and human-aligned AI.”
Subramaniam wrote the paper with CSAIL colleagues: Research Scientist Brian Cheung; PhD student David Mayo ’18, MEng ’19; Research Associate Colin Conwell; principal investigators Boris Katz, a CSAIL principal research scientist, and Tomaso Poggio, an MIT professor in brain and cognitive sciences; and former CSAIL research scientist Andrei Barbu. Their work was supported, in part, by the Center for Brains, Minds, and Machines, the National Science Foundation, the MIT CSAIL Machine Learning Applications Initiative, the MIT-IBM Watson AI Lab, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. Department of the Air Force Artificial Intelligence Accelerator, and the U.S. Air Force Office of Scientific Research.
Their work was recently presented at the Conference and Workshop on Neural Information Processing Systems (NeurIPS).
A new way to increase the capabilities of large language models
Most languages rely on word position and sentence structure to convey meaning. For example, “The cat sat on the box,” is not the same as “The box was on the cat.” Over a long text, like a financial document or a novel, the relationships among these words likely evolve.
Similarly, a person might be tracking variables in a piece of code or following instructions that have conditional actions. These are examples of state changes and sequential reasoning that we expect state-of-the-art artificial intelligence systems to excel at; however, the existing, cutting-edge attention mechanism within transformers — the primary architecture used in large language models (LLMs) for determining the importance of words — has theoretical and empirical limitations when it comes to such capabilities.
An attention mechanism allows an LLM to look back at earlier parts of a query or document and, based on its training, determine which details and words matter most; however, this mechanism alone does not understand word order. It “sees” all of the input words, a.k.a. tokens, at the same time and handles them in the order that they’re presented, so researchers have developed techniques to encode position information. This is key for domains that are highly structured, like language. But the predominant position-encoding method, called rotary position encoding (RoPE), only takes into account the relative distance between tokens in a sequence and is independent of the input data. This means that, for example, words that are four positions apart, like “cat” and “box” in the example above, will all receive the same fixed mathematical rotation specific to that relative distance.
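RoPE’s distance-only behavior can be verified in a few lines. In this minimal 2-D sketch (real models rotate many feature pairs at different frequencies), the attention score between two rotated vectors depends only on the gap between their positions, not on where in the sequence they sit or what the intervening tokens are:

```python
import math

def rope_rotate(vec, pos, theta=0.1):
    # RoPE applies a position-dependent rotation to a 2-D feature pair.
    c, s = math.cos(pos * theta), math.sin(pos * theta)
    x, y = vec
    return (c * x - s * y, s * x + c * y)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

q, k = (1.0, 0.0), (0.7, 0.7)

# Score between positions 2 and 6 (gap of 4) ...
s1 = dot(rope_rotate(q, 2), rope_rotate(k, 6))
# ... equals the score between positions 10 and 14 (same gap of 4).
s2 = dot(rope_rotate(q, 10), rope_rotate(k, 14))
print(abs(s1 - s2) < 1e-9)
```

This is exactly the property the article describes: words four positions apart always receive the same fixed rotation relative to each other, regardless of the content between them.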
Now research led by MIT and the MIT-IBM Watson AI Lab has produced an encoding technique known as “PaTH Attention” that makes positional information adaptive and context-aware rather than static, as with RoPE.
“Transformers enable accurate and scalable modeling of many domains, but they have these limitations vis-a-vis state tracking, a class of phenomena that is thought to underlie important capabilities that we want in our AI systems. So, the important question is: How can we maintain the scalability and efficiency of transformers, while enabling state tracking?” says the paper’s senior author Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab.
A new paper on this work was presented earlier this month at the Conference on Neural Information Processing Systems (NeurIPS). Kim’s co-authors include lead author Songlin Yang, an EECS graduate student and former MIT-IBM Watson AI Lab Summer Program intern; Kaiyue Wen of Stanford University; Liliang Ren of Microsoft; and Yikang Shen, Shawn Tan, Mayank Mishra, and Rameswar Panda of IBM Research and the MIT-IBM Watson AI Lab.
Path to understanding
Instead of assigning every word a fixed rotation based on relative distance between tokens, as RoPE does, PaTH Attention is flexible, treating the in-between words as a path made up of small, data-dependent transformations. Each transformation, based on a mathematical operation called a Householder reflection, acts like a tiny mirror that adjusts depending on the content of each token it passes. Each step in a sequence can influence how the model interprets information later on. The cumulative effect lets the system model how the meaning changes along the path between words, not just how far apart they are. This approach allows transformers to keep track of how entities and relationships change over time, giving them a sense of “positional memory.” Think of this as walking a path while experiencing your environment and how it affects you. Further, the team developed a hardware-efficient algorithm that compresses the cumulative transformation from PaTH Attention and breaks it down into smaller computations, so that attention scores between every pair of tokens remain compatible with fast processing on GPUs.
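A toy 2-D version makes the contrast with RoPE concrete. This sketch only illustrates the composition of content-dependent Householder reflections; the actual method operates on high-dimensional learned vectors with an efficient GPU algorithm, none of which is shown here:

```python
def householder(v):
    # 2x2 reflection I - 2*v*v^T/(v.v), built from a token's content vector v.
    n = v[0] * v[0] + v[1] * v[1]
    return [[1 - 2 * v[0] * v[0] / n, -2 * v[0] * v[1] / n],
            [-2 * v[1] * v[0] / n, 1 - 2 * v[1] * v[1] / n]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def path_transform(token_vectors):
    # Compose one reflection per token along the path between two positions.
    t = [[1.0, 0.0], [0.0, 1.0]]  # identity
    for v in token_vectors:
        t = matmul(householder(v), t)
    return t

# Two token spans of the SAME length but different content:
span_a = [(1.0, 0.2), (0.3, 1.0), (0.8, 0.5)]
span_b = [(0.1, 1.0), (1.0, 0.4), (0.2, 0.9)]

ta, tb = path_transform(span_a), path_transform(span_b)
# Unlike RoPE's fixed distance-based rotation, the cumulative transform
# depends on what the tokens are, not just how far apart they sit.
print(ta != tb)
```

Because each reflection is shaped by a token’s content, two equally long paths through different words produce different transforms, which is what makes the positional signal context-aware.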
The MIT-IBM researchers then explored PaTH Attention’s performance on synthetic and real-world tasks, including reasoning, long-context benchmarks, and full LLM training, to see whether it improved a model’s ability to track information over time. The team tested its ability to follow the most recent “write” command despite many distracting steps, as well as on multi-step recall tests, tasks that are difficult for standard positional encoding methods like RoPE. The researchers also trained mid-size LLMs and compared them against other methods. PaTH Attention improved perplexity and outcompeted other methods on reasoning benchmarks it wasn’t trained on. They also evaluated retrieval, reasoning, and stability with inputs of tens of thousands of tokens. Across these evaluations, PaTH Attention consistently demonstrated content-aware positional reasoning.
“We found that both on diagnostic tasks that are designed to test the limitations of transformers and on real-world language modeling tasks, our new approach was able to outperform existing attention mechanisms, while maintaining their efficiency,” says Kim. Further, “I’d be excited to see whether these types of data-dependent position encodings, like PATH, improve the performance of transformers on structured domains like biology, in [analyzing] proteins or DNA.”
Thinking bigger and more efficiently
The researchers then investigated how the PaTH Attention mechanism would perform if it more similarly mimicked human cognition, where we ignore old or less-relevant information when making decisions. To do this, they combined PaTH Attention with another position encoding scheme known as the Forgetting Transformer (FoX), which allows models to selectively “forget.” The resulting PaTH-FoX system adds a way to down-weight information in a data-dependent way, achieving strong results across reasoning, long-context understanding, and language modeling benchmarks. In this way, PaTH Attention extends the expressive power of transformer architectures.
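The “selective forgetting” idea can be rendered in a toy form. This is a hypothetical illustration of data-dependent down-weighting, not the Forgetting Transformer’s exact formulation: each past token emits a retention gate in (0, 1), and a token’s attention logit is damped by the log of the product of the gates between it and the current position.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 1.0, 2.5]   # raw attention logits to 4 past tokens
gates = [0.9, 0.5, 0.95, 0.99]  # token-dependent retention factors

damped = []
for j in range(len(logits)):
    # Product of the gates strictly after position j, up to the current position.
    decay = 1.0
    for g in gates[j + 1:]:
        decay *= g
    damped.append(logits[j] + math.log(decay))

weights = softmax(damped)
plain = softmax(logits)
# Tokens sitting behind a low gate (position 1's gate of 0.5 here) lose
# influence relative to plain softmax attention.
print(weights[0] < plain[0])
```

The design intuition is that the decay is multiplicative over the intervening tokens, so information behind a strong “forget” signal fades for everything that comes after it, which mirrors how we stop attending to stale context.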
Kim says research like this is part of a broader effort to develop the “next big thing” in AI. He explains that a major driver of both the deep learning and generative AI revolutions has been the creation of “general-purpose building blocks that can be applied to wide domains,” such as “convolution layers, RNN [recurrent neural network] layers,” and, most recently, transformers. Looking ahead, Kim notes that considerations like accuracy, expressivity, flexibility, and hardware scalability have been and will be essential. As he puts it, “the core enterprise of modern architecture research is trying to come up with these new primitives that maintain or improve the expressivity, while also being scalable.”
This work was supported, in part, by the MIT-IBM Watson AI Lab and the AI2050 program at Schmidt Sciences.
Digital innovations and cultural heritage in rural towns
Population decline often goes hand-in-hand with economic stagnation in rural areas — and the two reinforce each other in a cycle. Can digital technologies advance equitable innovation and, at the same time, preserve cultural heritage in shrinking regions?
A new open-access book, edited by MIT Vice Provost and Department of Urban Studies and Planning (DUSP) Professor Brent D. Ryan PhD ’02, Carmelo Ignaccolo PhD ’24 of Rutgers University, and Giovanna Fossa of the Politecnico di Milano, explores the transformative power of community-centered technologies in the rural areas of Italy.
“Small Town Renaissance: Bridging Technology, Heritage and Planning in Shrinking Italy” (Springer Nature, 2025) investigates the future of small towns through empirical analyses of cellphone data, bold urban design visions, collaborative digital platforms for small businesses, and territorial strategies for remote work. The work examines how technology may open up these regions to new economic opportunities. The book shares data-driven scholarly work on shrinking towns, economic development, and digital innovation from multiple planning scholars and practitioners, several of whom traveled to Italy in fall 2022 as part of a DUSP practicum taught by Ryan and Ignaccolo, and sponsored by MISTI Italy and Fondazione Rocca, in collaboration with Liminal.
“What began as a hands-on MIT practicum grew into a transatlantic book collaboration uniting scholars in design, planning, heritage, law, and telecommunications to explore how technology can sustain local economies and culture,” says Ignaccolo.
Now an assistant professor of city planning at Rutgers University’s E.J. Bloustein School of Planning and Public Policy, Ignaccolo says the book provides concrete and actionable strategies to support shrinking regions in leveraging cultural heritage and smart technologies to strengthen opportunities and local economies.
“Depopulation linked to demographic change is reshaping communities worldwide,” says Ryan. “Italy is among the hardest hit, and the United States is heading in the same direction. This project offered students a chance to harness technology and innovation to imagine bold responses to this growing challenge.”
The researchers note that similar struggles also exist in rural communities across Germany, Spain, Japan, and Korea. The book provides policymakers, urban planners, designers, tech innovators, and heritage advocates with fresh insights and actionable strategies to shape the future of rural development in the digital age. The book and chapters can be downloaded for free through most university libraries via open access.
Post-COP30, more aggressive policies needed to cap global warming at 1.5 C
The latest United Nations Climate Change Conference (COP30) concluded in November without a roadmap to phase out fossil fuels and without significant progress in strengthening national pledges to reduce climate-altering greenhouse gas emissions. In aggregate, today’s climate policies remain far too unambitious to meet the Paris Agreement’s goal of capping global warming at 1.5 degrees Celsius, setting the world on course to experience more frequent and intense storms, flooding, droughts, wildfires, and other climate impacts. A global policy regime aligned with the 1.5 C target would almost certainly reduce the severity of those impacts.
In the “2025 Global Change Outlook,” researchers at the MIT Center for Sustainability Science and Strategy (CS3) compare the consequences of these two approaches to climate policy through modeled projections of critical natural and societal systems under two scenarios. The Current Trends scenario represents the researchers’ assessment of current measures for reducing greenhouse gas (GHG) emissions; the Accelerated Actions scenario is a credible pathway to stabilizing the climate at a global mean surface temperature of 1.5 C above preindustrial levels, in which countries impose more aggressive GHG emissions-reduction targets.
By quantifying the risks posed by today’s climate policies — and the extent to which accelerated climate action aligned with the 1.5 C goal could reduce them — the “Global Change Outlook” aims to clarify what’s at stake for environments and economies around the world. Here, we summarize the report’s key findings at the global level; regional details can also be accessed in several sections and through MIT CS3’s interactive global visualization tool.
Emerging headwinds for global climate action
Projections under Current Trends show higher GHG emissions than in our previous 2023 outlook, indicating reduced action on GHG emissions mitigation in the upcoming decade. The difference, roughly equivalent to the annual emissions from Brazil or Japan, is driven by current geopolitical events.
Additional analysis in this report indicates that global GHG emissions in 2050 could be 10 percent higher than they would be under Current Trends if regional rivalries triggered by U.S. tariff policy prompt other regions to weaken their climate regulations. In that case, the world would see virtually no emissions reduction in the next 25 years.
Energy and electricity projections
Between 2025 and 2050, global energy consumption rises by 17 percent under Current Trends, with a nearly nine-fold increase in wind and solar. Under Accelerated Actions, global energy consumption declines by 16 percent, with a nearly 13-fold increase in wind and solar, driven by improvements in energy efficiency, wider use of electricity, and demand response. In both Current Trends and Accelerated Actions, global electricity consumption increases substantially (by 90 percent and 100 percent, respectively), with generation from low-carbon sources becoming a dominant source of power, though Accelerated Actions has a much larger share of renewables.
“Achieving long-term climate stabilization goals will require more ambitious policy measures that reduce fossil-fuel dependence and accelerate the energy transition toward low-carbon sources in all regions of the world. Our Accelerated Actions scenario provides a pathway for scaling up global climate ambition,” says MIT CS3 Deputy Director Sergey Paltsev, co-lead author of the report.
Greenhouse gas emissions and climate projections
Under Current Trends, global anthropogenic (human-caused) GHG emissions decline by 10 percent between 2025 and 2050, but start to rise again later in the century; under Accelerated Actions, however, they fall by 60 percent by 2050. Of the two scenarios, only the latter could put the world on track to achieve long-term climate stabilization.
Under Current Trends, median projections for global warming reach 1.79, 2.74, and 3.72 degrees C by 2050, 2100, and 2150, respectively, relative to the 1850-1900 average global mean surface temperature (GMST); under Accelerated Actions, they reach 1.62, 1.56, and 1.50 C. Median projections for global precipitation show increases from 2025 levels of 0.04, 0.11, and 0.18 millimeters per day in 2050, 2100, and 2150 under Current Trends, and 0.03, 0.04, and 0.03 mm/day for those years under Accelerated Actions.
“Our projections demonstrate that aggressive cuts in GHG emissions can lead to substantial reductions in the upward trends of GMST, as well as global precipitation,” says CS3 deputy director C. Adam Schlosser, co-lead author of the outlook. “These reductions to both climate warming and acceleration of the global hydrologic cycle lower the risks of damaging impacts, particularly toward the latter half of this century.”
Implications for sustainability
The report’s modeled projections imply significantly different risk levels under the two scenarios for water availability, biodiversity, air quality, human health, economic well-being, and other sustainability indicators.
Among the key findings: Policies that align with Accelerated Actions could yield substantial co-benefits for water availability, biodiversity, air quality, and health. For example, combining Accelerated Actions-aligned climate policies with biodiversity targets, or with air-quality targets, could achieve biodiversity and air quality/health goals more efficiently and cost-effectively than a more siloed approach. The outlook’s analysis of the global economy under Current Trends suggests that decision-makers need to account for climate impacts outside their home region and the resilience of global supply chains.
Finally, CS3’s new data-visualization platform provides efficient, screening-level mapping of current and future climate, socioeconomic, and demographic-related conditions and changes — including global mapping for many of the model outputs featured in this report.
“Our comparison of outcomes under Current Trends and Accelerated Actions scenarios highlights the risks of remaining on the world’s current emissions trajectory and the benefits of pursuing a much more aggressive strategy,” says CS3 Director Noelle Selin, a co-author of the report and a professor in the Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences at MIT. “We hope that our risk-benefit analysis will help inform decision-makers in government, industry, academia, and civil society as they confront sustainability-relevant challenges.”
Student Spotlight: Diego Temkin
This interview is part of a series of short interviews from the Department of Electrical Engineering and Computer Science (EECS). Each spotlight features a student answering their choice of questions about themselves and life at MIT. Today’s interviewee, senior Diego Temkin, is double majoring in courses 6-3 (Computer Science and Engineering) and 11 (Urban Planning). The McAllen, Texas, native is involved with MIT’s Dormitory Council (DormCon), helps to maintain Hydrant (formerly Firehose)/CourseRoad, and is both a member of the Student Information Processing Board (MIT’s oldest computing club) and an Advanced Undergraduate Research Opportunities Program (SuperUROP) scholar.
Q: What’s your favorite key on a standard computer keyboard, and why?
A: The “1” key! During Covid, I ended up starting a typewriter collection and trying to fix them up, and I always thought it was interesting how they didn’t have a 1 key. People were just expected to use the lowercase “l,” which presumably makes anyone who cares about ASCII very upset.
Q: Tell us about a teacher from your past who had an influence on the person you’ve become.
A: Back in middle school, everyone had to take a technology class that taught things like typing skills, Microsoft Word and Excel, and some other things. I was a bit of a nerd and didn’t have too many friends interested in the sort of things I was, but the teacher of that technology class, Mrs. Camarena, would let me stay for a bit after school and encouraged me to explore more of my interests. She helped me become more confident in wanting to go into computer science, and now here I am.
Q: What’s your favorite trivia factoid?
A: Every floor in Building 13 is painted as a different MBTA line. I don’t know why and can’t really find anything about it online, but once you notice it you can’t unsee it!
Q: Do you have any pets?
A: I do! His name is Skateboard, and he is the most quintessentially orange cat. I got him off reuse@mit.edu during my first year here at MIT (shout out to Patty K), and he’s been with me ever since. He’s currently five years old, and he’s a big fan of goldfish and stepping on my face at 7 a.m. Best decision I’ve ever made.
Q: Are you a re-reader or a re-watcher? If so, what are your comfort books, shows, or movies?
A: Definitely a re-watcher, and definitely “Doctor Who.” I’ve watched far too much of that show and there are episodes I can recite from memory (looking at you, “The Eleventh Hour”). Anyone I know will tell you that I can go on about that show for hours, and before anyone asks, my favorite doctor is Matt Smith (sorry to the David Tennant fans; I like him too, though!)
Q: Do you have a bucket list? If so, share one or two of the items on it.
A: I’ve been wanting to take a cross-country Amtrak trip for a while … I think I might try going to the West Coast and some national parks during IAP [Independent Activities Period], if I have the time. Now that it’s on here, I definitely have to do it!
A “scientific sandbox” lets researchers explore the evolution of vision systems
Why did humans evolve the eyes we have today?
While scientists can’t go back in time to study the environmental pressures that shaped the evolution of the diverse vision systems that exist in nature, a new computational framework developed by MIT researchers allows them to explore this evolution in artificial intelligence agents.
The framework they developed, in which embodied AI agents evolve eyes and learn to see over many generations, is like a “scientific sandbox” that allows researchers to recreate different evolutionary trees. The user does this by changing the structure of the world and the tasks AI agents complete, such as finding food or telling objects apart.
This allows them to study why one animal may have evolved simple, light-sensitive patches as eyes, while another has complex, camera-type eyes.
The researchers’ experiments with this framework showcase how tasks drove eye evolution in the agents. For instance, they found that navigation tasks often led to the evolution of compound eyes with many individual units, like the eyes of insects and crustaceans.
On the other hand, if agents focused on object discrimination, they were more likely to evolve camera-type eyes with irises and retinas.
This framework could enable scientists to probe “what-if” questions about vision systems that are difficult to study experimentally. It could also guide the design of novel sensors and cameras for robots, drones, and wearable devices that balance performance with real-world constraints like energy efficiency and manufacturability.
“While we can never go back and figure out every detail of how evolution took place, in this work we’ve created an environment where we can, in a sense, recreate evolution and probe the environment in all these different ways. This method of doing science opens the door to a lot of possibilities,” says Kushagra Tiwary, a graduate student at the MIT Media Lab and co-lead author of a paper on this research.
He is joined on the paper by co-lead author and fellow graduate student Aaron Young; graduate student Tzofi Klinghoffer; former postdoc Akshat Dave, who is now an assistant professor at Stony Brook University; Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute, and co-director of the Center for Brains, Minds, and Machines; co-senior authors Brian Cheung, a postdoc in the Center for Brains, Minds, and Machines and an incoming assistant professor at the University of California San Francisco; and Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; as well as others at Rice University and Lund University. The research appears today in Science Advances.
Building a scientific sandbox
The paper began as a conversation among the researchers about discovering new vision systems that could be useful in different fields, like robotics. To test their “what-if” questions, the researchers decided to use AI to explore the many evolutionary possibilities.
“What-if questions inspired me when I was growing up to study science. With AI, we have a unique opportunity to create these embodied agents that allow us to ask the kinds of questions that would usually be impossible to answer,” Tiwary says.
To build this evolutionary sandbox, the researchers took all the elements of a camera, like the sensors, lenses, apertures, and processors, and converted them into parameters that an embodied AI agent could learn.
They used those building blocks as the starting point for an algorithmic learning mechanism an agent would use as it evolved eyes over time.
“We couldn’t simulate the entire universe atom-by-atom. It was challenging to determine which ingredients we needed, which ingredients we didn’t need, and how to allocate resources over those different elements,” Cheung says.
In their framework, this evolutionary algorithm can choose which elements to evolve based on the constraints of the environment and the task of the agent.
Each environment has a single task, such as navigation, food identification, or prey tracking, designed to mimic real visual tasks animals must overcome to survive. The agents start with a single photoreceptor that looks out at the world and an associated neural network model that processes visual information.
Then, over each agent’s lifetime, it is trained using reinforcement learning, a trial-and-error technique where the agent is rewarded for accomplishing the goal of its task. The environment also incorporates constraints, like a certain number of pixels for an agent’s visual sensors.
“These constraints drive the design process, the same way we have physical constraints in our world, like the physics of light, that have driven the design of our own eyes,” Tiwary says.
Over many generations, agents evolve different elements of vision systems that maximize rewards.
Their framework uses a genetic encoding mechanism to computationally mimic evolution, where individual genes mutate to control an agent’s development.
For instance, morphological genes capture how the agent views the environment and control eye placement; optical genes determine how the eye interacts with light and dictate the number of photoreceptors; and neural genes control the learning capacity of the agents.
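The genetic-encoding loop described above can be pictured with a toy sketch. To be clear, this is not the authors’ framework: the gene names, the mutation ranges, the “pixel budget” constraint, and the one-line stand-in for a lifetime of reinforcement learning are all invented for illustration. It only shows the shape of the idea: genomes with morphological, optical, and neural genes mutate across generations, and task reward determines which agents survive.

```python
import random

random.seed(0)  # reproducible toy run

# Invented genes mirroring the article's three categories (illustrative only).
GENOME_TEMPLATE = {
    "eye_placement": 0.5,      # morphological gene: lateral (0.0) to frontal (1.0)
    "num_photoreceptors": 1,   # optical gene: sensor resolution
    "hidden_units": 8,         # neural gene: learning capacity of the agent's network
}
PIXEL_BUDGET = 64  # physical constraint, like the pixel limits mentioned in the article

def mutate(genome, rate=0.3):
    """Randomly perturb genes while respecting the physical constraints."""
    child = dict(genome)
    if random.random() < rate:
        child["eye_placement"] = min(1.0, max(0.0, child["eye_placement"] + random.uniform(-0.2, 0.2)))
    if random.random() < rate:
        child["num_photoreceptors"] = min(PIXEL_BUDGET, max(1, child["num_photoreceptors"] + random.choice([-4, 4])))
    if random.random() < rate:
        child["hidden_units"] = max(1, child["hidden_units"] + random.choice([-2, 2]))
    return child

def lifetime_reward(genome, task):
    """Stand-in for a lifetime of reinforcement learning: navigation rewards wide
    peripheral sensing, while object discrimination rewards frontal acuity."""
    if task == "navigation":
        return genome["num_photoreceptors"] * (1.0 - genome["eye_placement"])
    return genome["num_photoreceptors"] * genome["eye_placement"]

def evolve(task, generations=60, pop_size=20):
    """Selection over generations: the top half survives, offspring are mutated copies."""
    population = [mutate(GENOME_TEMPLATE) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: lifetime_reward(g, task), reverse=True)
        survivors = population[: pop_size // 2]
        offspring = [mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return population[0]  # best survivor of the final generation

navigator = evolve("navigation")
discriminator = evolve("discrimination")
```

Even in this crude sketch, the two tasks pull the evolved genomes apart: navigation agents drift toward laterally placed sensing, while discrimination agents drift toward frontal placement, echoing the compound-eye versus camera-eye split the researchers observed.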
Testing hypotheses
When the researchers set up experiments in this framework, they found that tasks had a major influence on the vision systems the agents evolved.
For instance, agents that were focused on navigation tasks developed eyes designed to maximize spatial awareness through low-resolution sensing, while agents tasked with detecting objects developed eyes focused more on frontal acuity, rather than peripheral vision.
Another experiment indicated that a bigger brain isn’t always better when it comes to processing visual information. Only so much visual information can go into the system at a time, based on physical constraints like the number of photoreceptors in the eyes.
“At some point a bigger brain doesn’t help the agents at all, and in nature that would be a waste of resources,” Cheung says.
In the future, the researchers want to use this simulator to explore the best vision systems for specific applications, which could help scientists develop task-specific sensors and cameras. They also want to integrate large language models (LLMs) into their framework to make it easier for users to ask “what-if” questions and study additional possibilities.
“There’s a real benefit that comes from asking questions in a more imaginative way. I hope this inspires others to create larger frameworks, where instead of focusing on narrow questions that cover a specific area, they are looking to answer questions with a much wider scope,” Cheung says.
This work was supported, in part, by the Center for Brains, Minds, and Machines and the Defense Advanced Research Projects Agency (DARPA) Mathematics for the Discovery of Algorithms and Architectures (DIAL) program.
Teen builds an award-winning virtual reality prototype thanks to free MIT courses
When Freesia Gaul discovered MIT Open Learning’s OpenCourseWare at just 14 years old, it opened up a world of learning far beyond what her classrooms could offer. Her parents had started a skiing company, and the seasonal work meant that Gaul had to change schools every six months. Growing up in small towns in Australia and Canada, she relied on the internet to fuel her curiosity.
“I went to 13 different schools, which was hard because you’re in a different educational system every single time,” says Gaul. “That’s one of the reasons I gravitated toward online learning and teaching myself. Knowledge is something that exists beyond a curriculum.”
The small towns she lived in often didn’t have a lot of resources, she says, so a computer served as a main tool for learning. She enjoyed engaging with Wikipedia, ultimately researching topics and writing and editing content for pages. In 2018, she discovered MIT OpenCourseWare, part of MIT Open Learning, and took her first course. OpenCourseWare offers free, online, open educational resources from more than 2,500 MIT undergraduate and graduate courses.
“I really got started with the OpenCourseWare introductory electrical engineering classes, because I couldn’t find anything else quite like it online,” says Gaul, who was initially drawn to courses on circuits and electronics, such as 6.002 (Circuits and Electronics) and 6.01SC (Introduction to Electrical Engineering and Computer Science). “It really helped me in terms of understanding how electrical engineering worked in a practical sense, and I just started modding things.”
In true MIT “mens et manus” (“mind and hand”) fashion, Gaul spent much of her childhood building and inventing, especially when she was able to access a 3D printer. She says that a highlight was when she built a life-sized, working version of a Mario Kart, constructed out of materials she had printed.
Gaul calls herself a “serial learner,” and has taken many OpenCourseWare courses. In addition to classes on circuits and electronics, she also took courses in linear algebra, calculus, and quantum physics — in which she took a particular interest.
When she was 15, she participated in Qubit by Qubit. Hosted by The Coding School, in collaboration with universities (including MIT) and tech companies, this two-semester course introduces high schoolers to quantum computing and quantum physics.
During that time she started a blog called On Zero, representing the “zero state” of a qubit. “The ‘zero state’ in a quantum computer is the representation of creativity from nothing, infinite possibilities,” says Gaul. For the blog, she found different topics and researched them in depth. She would think of a topic or question, such as “What is color?” and then explore it in great detail. What she learned eventually led her to start asking questions such as “What is a Hamiltonian?” and teaching quantum physics alongside PhDs.
Building on these interests, Gaul chose to study quantum engineering at the University of New South Wales. She notes that on her first day of university, she participated in iQuHack, the MIT Quantum Hackathon. Her team worked to find a new way to approximate the value of a hyperbolic function using quantum logic, and received an honorable mention for “exceptional creativity.”
Gaul’s passion for making things continued during her college days, especially in terms of innovating to solve a problem. When she found herself on a train, wanting to code a personal website on a computer with a dying battery, she wondered if there might be a way to make a glove that can act as a type of Bluetooth keyboard — essentially creating a way to type in the air. In her spare time, she started working on such a device, ultimately finding a less expensive way to build a lightweight, haptic, gesture-tracking glove with applications for virtual reality (VR) and robotics.
Gaul says she has always had an interest in VR, using it to create her own worlds, reconstruct an old childhood house, and play Dungeons and Dragons with friends. She discovered a way to put into a glove some small linear resonant actuators, which can be found in a smartphone or gaming controller, and map to any object in VR so that the user can feel it.
An early prototype that Gaul put together in her dorm room received a lot of attention on YouTube. She went on to win the People’s Choice award for it at the SXSW Sydney 2025 Tech and Innovation Festival. This design also sparked her co-founding of the tech startup On Zero, named after her childhood blog dedicated to the love of creation from nothing.
Gaul sees the device, in general, as a way of “paying it forward,” making improved human-computer interaction available to many — from young students to professional technologists. She hopes to enable creative freedom in as many as she can. “The mind is just such a fun thing. I want to empower others to have the freedom to follow their curiosity, even if it’s pointless on paper.
“I’ve benefited from people going far beyond what they needed to do to help me,” says Gaul. “I see OpenCourseWare as a part of that. The free courses gave me a solid foundation of knowledge and problem-solving abilities. Without these, it wouldn’t be possible to do what I’m doing now.”
MIT-Hood Pediatric Innovation Hub convenes leaders to advance pediatric health
Facing hospital closures, underfunded pediatric trials, and a persistent reliance on adult-oriented tools for children, the Hood Pediatric Innovation Hub welcomed nearly 200 leaders at Boston’s Museum of Science for MIT-Hood Pediatric Innovation 2025, an event focused on transforming the future of pediatric care through engineering and collaboration.
Hosted by the Hood Pediatric Innovation Hub — established at MIT through a gift by the Hood Foundation — the event brought together attendees from academia, health care, and industry to rethink how medical and technological breakthroughs can reach children faster. The gathering marked a new phase in the hub’s mission to connect scientific discovery with real-world impact.
“We have extraordinary science emerging every day, but the translation gap is widening,” said Joseph Frassica, professor of the practice in MIT’s Institute for Medical Engineering and Science and executive director of the Hood Pediatric Innovation Hub. “We can’t rely on the old model of innovation — we need new connective tissue between ideas, institutions, and implementation.”
Building collaboration across sectors
Speakers emphasized that pediatric medicine has long faced structural disadvantages compared with other fields — from smaller patient populations to limited commercial incentives. Yet they also described a powerful opportunity: to make pediatric innovation a proving ground for smarter, more human-centered health systems.
“The Hood Foundation has always believed that if you can improve care for children, you improve care for everyone,” said Neil Smiley, president of the Charles H. Hood Foundation. “Pediatrics pushes medicine to be smarter, more precise, and more humane — and that’s why this collaboration with MIT feels so right.”
Participants discussed how aligning efforts across universities, hospitals, and industry partners could help overcome the fragmentation that slows innovation, and ultimately translation. Speakers at the event highlighted case studies where cross-sector collaboration is already yielding results — from novel medical devices to data-driven clinical insights.
Connecting discovery to delivery
In his remarks, Elazer R. Edelman, the Edward J. Poitras Professor in Medical Engineering and Science at MIT and faculty lead for the Hood Pediatric Innovation Hub, reflected on how MIT’s engineering and medical communities can help close the loop between research and clinical application.
“This isn’t about creating something new for the sake of it — it’s about finally connecting the extraordinary expertise that already exists, from the lab to the clinic to the child’s bedside,” Edelman said. “That’s what MIT does best — we connect the dots.”
Throughout the day, attendees shared experiences from both the engineering and clinical viewpoints — acknowledging the complexities of regulation, funding, and adoption, while highlighting the shared responsibility to move faster on behalf of children.
A moment of convergence
The conversation also turned to the economics of innovation and the broader societal benefits of investing in pediatric health.
“The economic and social stakes couldn’t be higher,” said Jonathan Gruber, Ford Professor of Economics at MIT. “When we invest in children’s health, we invest in longer lives, stronger communities, and greater prosperity. The energy in this room shows what’s possible when we stop working in silos.”
By the end of the event, discussions had shifted from identifying barriers to designing solutions. Participants explored ideas ranging from translational fellowships and shared data platforms to new models for academic–industry partnership — each aimed at accelerating impact where it is needed most.
Looking ahead
“There’s a feeling that this is the moment,” Frassica said. “We have the tools, the data, and the will to transform how we care for children. The key now is keeping that spirit of collaboration alive — because when we do, we move the whole field forward.”
Building on the momentum from MIT-Hood Pediatric Innovation 2025, the Hood Pediatric Innovation Hub will continue to serve as a connector across disciplines and institutions, advancing projects that translate cutting-edge research into improved outcomes for children everywhere. In January, a new cohort of MIT Catalyst Fellows — early-career researchers embedded with frontline clinicians to identify unmet needs — will begin exploring solutions to challenges in pediatric and neonatal health care in partnership with the hub.
This work is also part of a wider Institute effort. The Hood Pediatric Innovation Hub contributes to the broader mission of the MIT Health and Life Sciences Collaborative (HEALS), which brings together faculty, clinicians, and industry partners to accelerate breakthroughs across all areas of human health. As the hub deepens its own collaborations, its connection to HEALS helps ensure that advances in pediatric medicine are integrated into MIT’s larger push to improve health outcomes at scale.
The hub will also release a request for proposals in the coming months for the development of its first mentored projects — designed to bring together teams from engineering, medicine, and industry to accelerate progress in children’s health. Updates and details will be available at hoodhub.mit.edu.
As Smiley noted, progress in pediatric health often drives progress across all of medicine — and this gathering underscored that shared belief: when we work together for children, we build a healthier future for everyone.
New study suggests a way to rejuvenate the immune system
As people age, their immune system function declines. T cell populations become smaller and can’t react to pathogens as quickly, making people more susceptible to a variety of infections.
To try to overcome that decline, researchers at MIT and the Broad Institute have found a way to temporarily program cells in the liver to improve T-cell function. This reprogramming can compensate for the age-related decline of the thymus, where T cell maturation normally occurs.
Using mRNA to deliver three key factors that usually promote T-cell survival, the researchers were able to rejuvenate the immune systems of mice. Aged mice that received the treatment showed much larger and more diverse T cell populations in response to vaccination, and they also responded better to cancer immunotherapy treatments.
If developed for use in patients, this type of treatment could help people lead healthier lives as they age, the researchers say.
“If we can restore something essential like the immune system, hopefully we can help people stay free of disease for a longer span of their life,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who has joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering.
Zhang, who is also an investigator at the McGovern Institute for Brain Research at MIT, a core institute member at the Broad Institute of MIT and Harvard, an investigator in the Howard Hughes Medical Institute, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, is the senior author of the new study. Former MIT postdoc Mirco Friedrich is the lead author of the paper, which appears today in Nature.
A temporary factory
The thymus, a small organ located in front of the heart, plays a critical role in T-cell development. Within the thymus, immature T cells go through a checkpoint process that ensures a diverse repertoire of T cells. The thymus also secretes cytokines and growth factors that help T cells to survive.
However, starting in early adulthood, the thymus begins to shrink. This process, known as thymic involution, leads to a decline in the production of new T cells. By the age of approximately 75, the thymus is greatly reduced.
“As we get older, the immune system begins to decline. We wanted to think about how can we maintain this kind of immune protection for a longer period of time, and that’s what led us to think about what we can do to boost immunity,” Friedrich says.
Previous work on rejuvenating the immune system has focused on delivering T cell growth factors into the bloodstream, but that can have harmful side effects. Researchers are also exploring the possibility of using transplanted stem cells to help regrow functional tissue in the thymus.
The MIT team took a different approach: They wanted to see if they could create a temporary “factory” in the body that would generate the T-cell-stimulating signals that are normally produced by the thymus.
“Our approach is more of a synthetic approach,” Zhang says. “We're engineering the body to mimic thymic factor secretion.”
For their factory location, they settled on the liver, for several reasons. First, the liver has a high capacity for producing proteins, even in old age. Also, it’s easier to deliver mRNA to the liver than to most other organs of the body. The liver was also an appealing target because all of the body’s circulating blood has to flow through it, including T cells.
To create their factory, the researchers identified three immune cues that are important for T-cell maturation. They encoded these three factors into mRNA sequences that could be delivered by lipid nanoparticles. When injected into the bloodstream, these particles accumulate in the liver and the mRNA is taken up by hepatocytes, which begin to manufacture the proteins encoded by the mRNA.
The factors that the researchers delivered are DLL1, FLT-3, and IL-7, which help immature progenitor T cells mature into fully differentiated T cells.
Immune rejuvenation
Tests in mice revealed a variety of beneficial effects. First, the researchers injected the mRNA particles into 18-month-old mice, equivalent to humans in their 50s. Because mRNA is short-lived, the researchers gave the mice multiple injections over four weeks to maintain steady production of the factors by the liver.
After this treatment, T cell populations showed significant increases in size and function.
The researchers then tested whether the treatment could enhance the animals’ response to vaccination. They vaccinated the mice with ovalbumin, a protein found in egg whites that is commonly used to study how the immune system responds to a specific antigen. In 18-month-old mice that received the mRNA treatment before vaccination, the researchers found that the population of cytotoxic T cells specific to ovalbumin doubled, compared to mice of the same age that did not receive the mRNA treatment.
The mRNA treatment can also boost the immune system’s response to cancer immunotherapy, the researchers found. They delivered the mRNA treatment to 18-month-old mice, which were then implanted with tumors and treated with a checkpoint inhibitor drug. This drug, which targets the protein PD-L1, is designed to help take the brakes off the immune system and stimulate T cells to attack tumor cells.
Mice that received the treatment showed much higher survival rates and longer lifespans than those that received the checkpoint inhibitor drug but not the mRNA treatment.
The researchers found that all three factors were necessary to induce this immune enhancement; no single factor could achieve the full effect on its own. They now plan to study the treatment in other animal models and to identify additional signaling factors that may further enhance immune system function. They also hope to study how the treatment affects other immune cells, including B cells.
Other authors of the paper include Julie Pham, Jiakun Tian, Hongyu Chen, Jiahao Huang, Niklas Kehl, Sophia Liu, Blake Lash, Fei Chen, Xiao Wang, and Rhiannon Macrae.
The research was funded, in part, by the Howard Hughes Medical Institute, the K. Lisa Yang Brain-Body Center, part of the Yang Tan Collective at MIT, Broad Institute Programmable Therapeutics Gift Donors, the Pershing Square Foundation, J. and P. Poitras, and an EMBO Postdoctoral Fellowship.
Nuno Loureiro, professor and director of MIT’s Plasma Science and Fusion Center, dies at 47
Nuno Loureiro, a professor of nuclear science and engineering and of physics at MIT, has died. He was 47.
A lauded theoretical physicist and fusion scientist, and director of the MIT Plasma Science and Fusion Center, Loureiro joined MIT’s faculty in 2016. His research addressed complex problems lurking at the center of fusion vacuum chambers and at the edges of the universe.
Loureiro’s research at MIT advanced scientists’ understanding of plasma behavior, including turbulence, and uncovered the physics behind astronomical phenomena like solar flares. He was the Herman Feshbach (1942) Professor of Physics at MIT and was named director of the Plasma Science and Fusion Center in 2024, though his contributions to fusion science and engineering began far before that.
His research on magnetized plasma dynamics, magnetic field amplification, and confinement and transport in fusion plasmas helped inform the design of fusion devices that could harness the energy of fusing plasmas, bringing the dream of clean, near-limitless fusion power closer to reality.
“Nuno was not only a brilliant scientist, he was a brilliant person,” says Dennis Whyte, the Hitachi America Professor of Engineering, who previously served as the head of the Department of Nuclear Science and Engineering and director of the Plasma Science and Fusion Center. “He shone a bright light as a mentor, friend, teacher, colleague and leader, and was universally admired for his articulate, compassionate manner. His loss is immeasurable to our community at the PSFC, NSE and MIT, and around the entire fusion and plasma research world.”
“Nuno was a champion for plasma physics within the Physics Department, a wonderful and engaging colleague, and an inspiring and caring mentor for graduate students working in plasma science. His recent work on quantum computing algorithms for plasma physics simulations was a particularly exciting new scientific direction,” says Deepto Chakrabarty, the William A. M. Burden Professor in Astrophysics and head of the Department of Physics.
Whether working on fusion or astrophysics research, Loureiro merged fundamental physics with technology and engineering, to maximize impact.
“There are people who are driven by technology and engineering, and others who are driven by fundamental mathematics and physics. We need both,” Loureiro said in 2019. “When we stimulate theoretically inclined minds by framing plasma physics and fusion challenges as beautiful theoretical physics problems, we bring into the game incredibly brilliant students — people who we want to attract to fusion development.”
Loureiro majored in physics at Instituto Superior Técnico (IST) in Portugal and obtained a PhD in physics at Imperial College London in 2005. He conducted postdoctoral work at the Princeton Plasma Physics Laboratory for the next two years before moving to the UKAEA Culham Center for Fusion Energy in 2007. Loureiro returned to IST in 2009, where he was a researcher at the Institute for Plasmas and Nuclear Fusion until coming to MIT in 2016.
He wasted no time contributing to the intellectual environment at MIT, spending part of his first two years at the Institute working on the vexing problem of plasma turbulence. Plasma is the super-hot state of matter that serves as the fuel for fusion reactors. Loureiro’s lab at PSFC illuminated how plasma behaves inside fusion reactors, which could help prevent material failures and better contain the plasma to harvest electricity.
“Nuno was not only an extraordinary scientist and educator, but also a tremendous colleague, mentor, and friend who cared deeply about his students and his community. His absence will be felt profoundly across NSE and far beyond,” Benoit Forget, the KEPCO Professor and head of the Department of Nuclear Science and Engineering, wrote in an email to the department today.
On other fronts, Loureiro’s work in astrophysics helped reveal fundamental mechanisms of the universe. He put forward the first theory of turbulence in pair plasmas, which differ from regular plasmas and may be abundant in space. The work was driven, in part, by unprecedented observations of a binary neutron star merger in 2018.
As an assistant professor and then a full professor at MIT, Loureiro taught course 22.612 (Intro to Plasma Physics) and course 22.615 (MHD Theory of Fusion Systems), for which he was twice recognized with the Department of Nuclear Science and Engineering’s PAI Outstanding Professor Award.
Loureiro’s research earned him many prominent awards throughout his prolific career, including the National Science Foundation CAREER Award and the American Physical Society Thomas H. Stix Award for Outstanding Early Career Contributions to Plasma Physics Research. He was also an APS fellow. Earlier this year, he earned the Presidential Early Career Award for Scientists and Engineers.
How cement “breathes in” and stores millions of tons of CO₂ a year
The world’s most common construction material has a secret. Cement, the “glue” that holds concrete together, gradually “breathes in” and stores millions of tons of carbon dioxide (CO2) from the air over the lifetimes of buildings and infrastructure.
A new study from the MIT Concrete Sustainability Hub quantifies this process, carbon uptake, at a national scale for the first time. Using a novel approach, the research team found that the cement in U.S. buildings and infrastructure sequesters over 6.5 million metric tons of CO2 annually. This corresponds to roughly 13 percent of the process emissions — the CO2 released by the underlying chemical reaction — in U.S. cement manufacturing. In Mexico, the country’s building stock sequesters about 5 million tons a year.
But how did the team come up with those numbers?
Scientists have known how carbon uptake works for decades. CO2 enters concrete or mortar — the mixture that glues together blocks, bricks, and stones — through tiny pores, reacts with the calcium-rich products in cement, and becomes locked into a stable mineral called calcium carbonate, or limestone.
The chemistry is well-known, but calculating the magnitude of this at scale is not. A concrete highway in Dallas sequesters CO2 differently than Mexico City apartments made from concrete masonry units (CMUs), also called concrete blocks or, colloquially, cinder blocks. And a foundation slab buried under the snow in Fairbanks, Alaska, “breathes in” CO2 at a different pace entirely.
As Hessam AzariJafari, lead author and research scientist in the MIT Department of Civil and Environmental Engineering, explains, “Carbon uptake is very sensitive to context. Four major factors drive it: the type of cement used, the product we make with it — concrete, CMUs, or mortar — the geometry of the structure, and the climate and conditions it’s exposed to. Even within the same structure, uptake can vary five-fold between different elements.”
As no two structures sequester CO2 in the same way, estimating uptake nationwide would normally require simulating an array of cement-based elements: slabs, walls, beams, columns, pavements, and more. On top of that, each of those has its own age, geometry, mixture, and exposure condition to account for.
Seeing that this approach would be like trying to count every grain of sand on a beach, the team took a different route. They developed hundreds of archetypes, typical designs that could stand in for different buildings and pieces of infrastructure. It’s a bit like measuring the beach instead by mapping out its shape, depth, and shoreline to estimate how much sand usually sits in a given spot.
With these archetypes in hand, the team modeled how each one sequesters CO2 in different environments and how common each is across every state in the United States and Mexico. In this way, they could estimate not just how much CO2 structures sequester, but why those numbers differ.
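The archetype approach amounts to a simple roll-up: estimate per-unit uptake for each typical element in each environment, weigh it by how common that element is in each state, and sum. The sketch below is purely illustrative; every archetype name, per-unit uptake rate, and stock count is invented, and the study’s actual archetypes, units, and values differ.

```python
# Annual CO2 uptake per unit of each archetype, in kilograms (invented numbers).
ARCHETYPE_UPTAKE_KG = {
    "highway_pavement_km": 1800.0,
    "cmu_wall_m2": 2.4,
    "mortar_joint_m2": 6.0,   # porous mortar carbonates far faster than dense concrete
    "foundation_slab_m2": 0.9,
}

# Hypothetical stock of each archetype in two regions.
STOCK = {
    "state_A": {"highway_pavement_km": 1200, "cmu_wall_m2": 5.0e6,
                "mortar_joint_m2": 2.0e6, "foundation_slab_m2": 8.0e6},
    "state_B": {"highway_pavement_km": 300, "cmu_wall_m2": 9.0e6,
                "mortar_joint_m2": 7.0e6, "foundation_slab_m2": 3.0e6},
}

def regional_uptake_tonnes(region):
    """Sum uptake over every archetype in a region; convert kg to metric tons."""
    return sum(count * ARCHETYPE_UPTAKE_KG[name]
               for name, count in STOCK[region].items()) / 1000.0

# National total is just the sum over regions.
national_total = sum(regional_uptake_tonnes(r) for r in STOCK)
```

Because each region’s total is broken out by archetype, a roll-up like this can also show why regions differ, for example how heavily mortar-dominated stock lifts a region’s uptake relative to its cement use.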
Two factors stood out. The first was the “construction trend,” or how the amount of new construction had changed over the previous five years. Because it reflects how quickly cement products are being added to the building stock, it shapes how much cement each state consumes and, therefore, how much of that cement is actively carbonating. The second was the ratio of mortar to concrete, since porous mortars sequester CO2 an order of magnitude faster than denser concrete.
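The archetype idea reduces to a weighted sum: total uptake is the cement stock in each archetype, times an emissions factor, times a climate- and product-dependent carbonation rate. A minimal sketch in Python of that bookkeeping — every archetype name, stock figure, and rate below is a hypothetical placeholder, not data from the study:

```python
# Illustrative sketch of an archetype-based carbon uptake estimate.
# All archetypes, stocks, and rates are hypothetical placeholders,
# not values from the MIT study.
from dataclasses import dataclass

@dataclass
class Archetype:
    name: str
    stock_tonnes_cement: float  # cement contained in this archetype region-wide
    uptake_rate: float          # fraction of process CO2 re-absorbed per year

# Porous mortars carbonate roughly an order of magnitude faster than dense concrete.
archetypes = [
    Archetype("mortar_wall",   stock_tonnes_cement=1e6, uptake_rate=0.020),
    Archetype("concrete_slab", stock_tonnes_cement=5e6, uptake_rate=0.002),
    Archetype("cmu_block",     stock_tonnes_cement=2e6, uptake_rate=0.008),
]

# Assumed process-CO2 emissions per tonne of cement (rough illustrative figure).
PROCESS_CO2_PER_TONNE = 0.5

def annual_uptake(archs: list[Archetype]) -> float:
    """Sum uptake over archetypes: stock x emissions factor x carbonation rate."""
    return sum(a.stock_tonnes_cement * PROCESS_CO2_PER_TONNE * a.uptake_rate
               for a in archs)

print(f"Estimated annual uptake: {annual_uptake(archetypes):,.0f} t CO2")
```

Scaling this up means estimating, for each state, how prevalent each archetype is and how its local climate shifts the rate — which is where the construction-trend and mortar-ratio factors enter.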
In states where mortar use was higher, the fraction of CO2 uptake relative to process emissions was noticeably greater. “We observed something unique about Mexico: Despite using half the cement that the U.S. does, the country has three-quarters of the uptake,” notes AzariJafari. “This is because Mexico makes more use of mortars, lower-strength concrete, and bagged cement mixed on-site. These practices are why uptake offsets about a quarter of Mexico’s cement manufacturing emissions.”
While care must be taken for structural elements that use steel reinforcement, as uptake can accelerate corrosion, it’s possible to enhance the uptake of many elements without negative impacts.
Randolph Kirchain, director of the MIT Concrete Sustainability Hub, principal research scientist in the MIT Materials Research Laboratory, and the senior author of this study, explains: “For instance, increasing the amount of surface area exposed to air accelerates uptake and can be achieved by foregoing painting or tiling, or choosing designs like waffle slabs with a higher surface area-to-volume ratio. Additionally, avoiding concrete mixtures that are stronger and less porous than required would speed up uptake while using less cement.”
“There is a real opportunity to refine how carbon uptake from cement is represented in national inventories,” AzariJafari comments. “The buildings around us and the concrete beneath our feet are constantly ‘breathing in’ millions of tons of CO2. Nevertheless, some of the simplified values in widely used reporting frameworks can lead to higher estimates than what we observe empirically. Integrating updated science into international inventories and guidelines such as the Intergovernmental Panel on Climate Change (IPCC) would help ensure that reported numbers reflect the material and temporal realities of the sector.”
By offering the first rigorous, bottom-up estimation of carbon uptake at a national scale, the team’s work provides a more representative picture of cement’s environmental impact. As we work to decarbonize the built environment, understanding what our structures are already doing in the background may be just as important as the innovations we pursue moving forward. The approach developed by MIT researchers could be extended to other countries by combining global building-stock databases with national cement-production statistics. It could also inform the design of structures that safely maximize uptake.
The findings were published Dec. 15 in the Proceedings of the National Academy of Sciences. Joining AzariJafari and Kirchain on the paper are MIT researchers Elizabeth Moore of the Department of Materials Science and Engineering and the MIT Climate Project and former postdocs Ipek Bensu Manav SM ’21, PhD ’24 and Motahareh Rahimi, along with Bruno Huet and Christophe Levy from the Holcim Innovation Center in France.
A new immunotherapy approach could work for many types of cancer
Researchers at MIT and Stanford University have developed a new way to stimulate the immune system to attack tumor cells, using a strategy that could make cancer immunotherapy work for many more patients.
The key to their approach is reversing a “brake” that cancer cells engage to prevent immune cells from launching an attack. This brake is controlled by sugar molecules known as glycans that are found on the surface of cancer cells.
By blocking those glycans with molecules called lectins, the researchers showed they could dramatically boost the immune system’s response to cancer cells. To achieve this, they created multifunctional molecules known as AbLecs, which combine a lectin with a tumor-targeting antibody.
“We created a new kind of protein therapeutic that can block glycan-based immune checkpoints and boost anti-cancer immune responses,” says Jessica Stark, the Underwood-Prescott Career Development Professor in the MIT departments of Biological Engineering and Chemical Engineering. “Because glycans are known to restrain the immune response to cancer in multiple tumor types, we suspect our molecules could offer new and potentially more effective treatment options for many cancer patients.”
Stark, who is also a member of MIT’s Koch Institute for Integrative Cancer Research, is the lead author of the paper. Carolyn Bertozzi, a professor of chemistry at Stanford and director of Sarafan ChEM-H, is the senior author of the study, which appears today in Nature Biotechnology.
Releasing the brakes
Training the immune system to recognize and destroy tumor cells is a promising approach to treating many types of cancer. One class of immunotherapy drugs, known as checkpoint inhibitors, stimulates immune cells by blocking an interaction between the proteins PD-1 and PD-L1. This removes a brake that tumor cells use to prevent immune cells like T cells from killing cancer cells.
Drugs targeting the PD-1/PD-L1 checkpoint have been approved to treat several kinds of cancer. For some patients, checkpoint inhibitors can lead to long-lasting remission, but for many others, they don’t work at all.
In hopes of generating immune responses in a greater number of patients, researchers are now working on ways to target other immunosuppressive interactions between cancer cells and immune cells. One such interaction occurs between glycans on tumor cells and receptors found on immune cells.
Glycans are found on nearly all living cells, but tumor cells often express glycans that are not found on healthy cells, including glycans that contain a monosaccharide called sialic acid. When sialic acids bind to lectin receptors located on immune cells, they turn on an immunosuppressive pathway in those cells. The lectins that bind to sialic acid are known as Siglecs.
“When Siglecs on immune cells bind to sialic acids on cancer cells, it puts the brakes on the immune response. It prevents that immune cell from becoming activated to attack and destroy the cancer cell, just like what happens when PD-1 binds to PD-L1,” Stark says.
Currently, there aren’t any approved therapies that target this Siglec-sialic acid interaction, though a number of drug development approaches have been tried. For example, researchers have tried to develop lectins that could bind to sialic acids and prevent them from interacting with immune cells, but so far, this approach hasn’t worked well because lectins don’t bind strongly enough to accumulate on the cancer cell surface in large numbers.
To overcome that, Stark and her colleagues developed a way to deliver larger quantities of lectins by attaching them to antibodies that target cancer cells. Once there, the lectins can bind to sialic acid, preventing sialic acid from interacting with Siglec receptors on immune cells. This lifts the brakes off the immune response, allowing immune cells such as macrophages and natural killer (NK) cells to launch an attack on the tumor.
“This lectin binding domain typically has relatively low affinity, so you can’t use it by itself as a therapeutic. But, when the lectin domain is linked to a high-affinity antibody, you can get it to the cancer cell surface where it can bind and block sialic acids,” Stark says.
A modular system
In this study, the researchers designed an AbLec based on the antibody trastuzumab, which binds to HER2 and is approved as a cancer therapy to treat breast, stomach, and colorectal cancers. To form the AbLec, they replaced one arm of the antibody with a lectin, either Siglec-7 or Siglec-9.
Tests using cells grown in the lab showed that this AbLec rewired immune cells to attack and destroy cancer cells.
The researchers then tested their AbLecs in a mouse model that was engineered to express human Siglec receptors and antibody receptors. These mice were then injected with cancer cells that formed metastases in the lungs. When treated with the AbLec, these mice showed fewer lung metastases than mice treated with trastuzumab alone.
The researchers also showed that they could swap in other tumor-specific antibodies, such as rituximab, which targets CD20, or cetuximab, which targets EGFR. They could also swap in lectins that target other glycans involved in immunosuppression, or antibodies that target checkpoint proteins such as PD-1.
“AbLecs are really plug-and-play. They’re modular,” Stark says. “You can imagine swapping out different decoy receptor domains to target different members of the lectin receptor family, and you can also swap out the antibody arm. This is important because different cancer types express different antigens, which you can address by changing the antibody target.”
Stark, Bertozzi, and others have started a company called Valora Therapeutics, which is now working on developing lead AbLec candidates. They hope to begin clinical trials in the next two to three years.
The research was funded, in part, by a Burroughs Wellcome Fund Career Award at the Scientific Interface, a Society for Immunotherapy of Cancer Steven A. Rosenberg Scholar Award, a V Foundation V Scholar Grant, the National Cancer Institute, the National Institute of General Medical Sciences, a Merck Discovery Biologics SEEDS grant, an American Cancer Society Postdoctoral Fellowship, and a Sarafan ChEM-H Postdocs at the Interface seed grant.
“Robot, make me a chair”
Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail they don’t lend themselves to brainstorming or rapid prototyping.
In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words.
Their system uses a generative AI model to build a 3D representation of an object’s geometry based on the user’s prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object’s function and geometry.
The system can automatically build the object from a set of prefabricated parts using robotic assembly. It can also iterate on the design based on feedback from the user.
The researchers used this end-to-end system to fabricate furniture, including chairs and shelves, from two types of premade components. The components can be disassembled and reassembled at will, reducing the amount of waste generated through the fabrication process.
They evaluated these designs through a user study and found that more than 90 percent of participants preferred the objects made by their AI-driven system over those produced by baseline approaches.
While this work is an initial demonstration, the framework could be especially useful for rapidly prototyping complex objects like aerospace components and architectural elements. In the longer term, it could be used in homes to fabricate furniture or other objects locally, without the need to have bulky products shipped from a central facility.
“Sooner or later, we want to be able to communicate and talk to a robot and AI system the same way we talk to each other to make things together. Our system is a first step toward enabling that future,” says lead author Alex Kyaw, a graduate student in the MIT departments of Electrical Engineering and Computer Science (EECS) and Architecture.
Kyaw is joined on the paper by Richa Gupta, an MIT architecture graduate student; Faez Ahmed, associate professor of mechanical engineering; Lawrence Sass, professor and chair of the Computation Group in the Department of Architecture; senior author Randall Davis, an EECS professor and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as others at Google DeepMind and Autodesk Research. The paper was recently presented at the Conference on Neural Information Processing Systems.
Generating a multicomponent design
While generative AI models are good at generating 3D representations, known as meshes, from text prompts, most do not produce uniform representations of an object’s geometry that have the component-level details needed for robotic assembly.
Separating these meshes into components is challenging for a model because assigning components depends on the geometry and functionality of the object and its parts.
The researchers tackled these challenges using a vision-language model (VLM), a powerful generative AI model that has been pre-trained to understand images and text. They task the VLM with figuring out how two types of prefabricated parts, structural components and panel components, should fit together to form an object.
“There are many ways we can put panels on a physical object, but the robot needs to see the geometry and reason over that geometry to make a decision about it. By serving as both the eyes and brain of the robot, the VLM enables the robot to do this,” Kyaw says.
A user prompts the system with text, perhaps by typing “make me a chair,” and gives it an AI-generated image of a chair to start.
Then, the VLM reasons about the chair and determines where panel components go on top of structural components, based on the functionality of many example objects it has seen before. For instance, the model can determine that the seat and backrest should have panels, providing surfaces for sitting and leaning on the chair.
It outputs this information as text, such as “seat” or “backrest.” Each surface of the chair is then labeled with numbers, and the information is fed back to the VLM.
Then the VLM chooses the labels that correspond to the geometric parts of the chair that should receive panels on the 3D mesh to complete the design.
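The loop described above — label the mesh’s surfaces with numbers, ask the VLM which labels should receive panels, then apply its choices — can be sketched in a few lines of Python. Here `query_vlm`, the surface labels, and its “reasoning” are stand-ins for illustration only; the actual system’s interfaces are not described in this article:

```python
# Illustrative sketch of the label-and-query loop described above.
# `query_vlm` is a placeholder for a real vision-language-model call;
# the surface names and the canned decision rule are hypothetical.

def query_vlm(prompt: str, labeled_surfaces: dict[int, str]) -> list[int]:
    """Stand-in for a VLM call returning the label IDs that should get panels.
    A real system would send a rendered, numbered image of the mesh plus the
    user's prompt, and parse the model's textual answer."""
    # Hypothetical reasoning: panels go where a person sits or leans.
    functional = {"seat", "backrest"}
    return [i for i, name in labeled_surfaces.items() if name in functional]

def assign_panels(user_prompt: str, surfaces: dict[int, str]) -> dict[int, bool]:
    """Map each numbered surface of the mesh to a panel / no-panel decision."""
    chosen = set(query_vlm(user_prompt, surfaces))
    return {i: (i in chosen) for i in surfaces}

# Numbered surfaces of a generated chair mesh (labels from a prior VLM pass).
chair_surfaces = {0: "seat", 1: "backrest", 2: "left_leg", 3: "right_leg"}
decisions = assign_panels("make me a chair", chair_surfaces)
print(decisions)
```

Keeping the user in the loop simply means re-running this query with a refined prompt and letting the new answer overwrite the previous panel assignments.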
Human-AI co-design
The user remains in the loop throughout this process and can refine the design by giving the model a new prompt, such as “only use panels on the backrest, not the seat.”
“The design space is very big, so we narrow it down through user feedback. We believe this is the best way to do it because people have different preferences, and building an idealized model for everyone would be impossible,” Kyaw says.
“The human‑in‑the‑loop process allows the users to steer the AI‑generated designs and have a sense of ownership in the final result,” adds Gupta.
Once the 3D mesh is finalized, a robotic assembly system builds the object using prefabricated parts. These reusable parts can be disassembled and reassembled into different configurations.
The researchers compared the results of their method with an algorithm that places panels on all horizontal surfaces that are facing up, and an algorithm that places panels randomly. In a user study, more than 90 percent of individuals preferred the designs made by their system.
They also asked the VLM to explain why it chose to put panels in those areas.
“We learned that the vision language model is able to understand some degree of the functional aspects of a chair, like leaning and sitting, to understand why it is placing panels on the seat and backrest. It isn’t just randomly spitting out these assignments,” Kyaw says.
In the future, the researchers want to enhance their system to handle more complex and nuanced user prompts, such as a table made out of glass and metal. In addition, they want to incorporate additional prefabricated components, such as gears, hinges, or other moving parts, so objects could have more functionality.
“Our hope is to drastically lower the barrier of access to design tools. We have shown that we can use generative AI and robotics to turn ideas into physical objects in a fast, accessible, and sustainable manner,” says Davis.
3 Questions: Using computation to study the world’s best single-celled chemists
Today, out of an estimated 1 trillion species on Earth, 99.999 percent are considered microbial — bacteria, archaea, viruses, and single-celled eukaryotes. For much of our planet’s history, microbes ruled the Earth, able to live and thrive in the most extreme of environments. Researchers have only just begun in the last few decades to contend with the diversity of microbes — it’s estimated that less than 1 percent of known genes have laboratory-validated functions. Computational approaches offer researchers the opportunity to strategically parse this truly astounding amount of information.
An environmental microbiologist and computer scientist by training, new MIT faculty member Yunha Hwang is interested in the novel biology revealed by the most diverse and prolific life form on Earth. In a shared faculty position as the Samuel A. Goldblith Career Development Professor in the Department of Biology, as well as an assistant professor in the Department of Electrical Engineering and Computer Science and the MIT Schwarzman College of Computing, Hwang is exploring the intersection of computation and biology.
Q: What drew you to research microbes in extreme environments, and what are the challenges in studying them?
A: Extreme environments are great places to look for interesting biology. I wanted to be an astronaut growing up, and the closest thing to astrobiology is examining extreme environments on Earth. And the only things that live in those extreme environments are microbes. During a sampling expedition that I took part in off the coast of Mexico, we discovered a colorful microbial mat about 2 kilometers underwater that flourished because the bacteria breathed sulfur instead of oxygen — but none of the microbes I was hoping to study would grow in the lab.
The biggest challenge in studying microbes is that a majority of them cannot be cultivated, which means that the only way to study their biology is through a method called metagenomics. My latest work is genomic language modeling. We’re hoping to develop a computational system so we can probe the organism as much as possible “in silico,” just using sequence data. A genomic language model is technically a large language model, except the language is DNA as opposed to human language. It’s trained in a similar way, just in biological language as opposed to English or French. If our objective is to learn the language of biology, we should leverage the diversity of microbial genomes. Even though we have a lot of data, and even as more samples become available, we’ve just scratched the surface of microbial diversity.
Q: Given how diverse microbes are and how little we understand about them, how can studying microbes in silico, using genomic language modeling, advance our understanding of the microbial genome?
A: A genome is many millions of letters. A human cannot possibly look at that and make sense of it. We can program a machine, though, to segment data into pieces that are useful. That’s sort of how bioinformatics works with a single genome. But if you’re looking at a gram of soil, which can contain thousands of unique genomes, that’s just too much data to work with — a human and a computer together are necessary in order to grapple with that data.
During my PhD and master’s degree, we were only just discovering new genomes and new lineages that were so different from anything that had been characterized or grown in the lab. These were things that we just called “microbial dark matter.” When there are a lot of uncharacterized things, that’s where machine learning can be really useful, because we’re just looking for patterns — but that’s not the end goal. What we hope to do is to map these patterns to evolutionary relationships between each genome, each microbe, and each instance of life.
Previously, we’ve been thinking about proteins as a standalone entity — that gets us to a decent degree of information because proteins are related by homology, and therefore things that are evolutionarily related might have a similar function.
What is known about microbiology is that proteins are encoded into genomes, and the context in which a protein is embedded — what regions come before and after — is evolutionarily conserved, especially if there is a functional coupling. This makes sense: when three proteins form a unit and need to be expressed together, you might want them located right next to each other.
What I want to do is incorporate more of that genomic context in the way that we search for and annotate proteins and understand protein function, so that we can go beyond sequence or structural similarity to add contextual information to how we understand proteins and hypothesize about their functions.
Q: How can your research be applied to harnessing the functional potential of microbes?
A: Microbes are possibly the world’s best chemists. Leveraging microbial metabolism and biochemistry will lead to more sustainable and more efficient methods for producing new materials, new therapeutics, and new types of polymers.
But it’s not just about efficiency — microbes are doing chemistry we don’t even know how to think about. Understanding how microbes work, and being able to understand their genomic makeup and their functional capacity, will also be really important as we think about how our world and climate are changing. A majority of carbon sequestration and nutrient cycling is undertaken by microbes; if we don’t understand how a given microbe is able to fix nitrogen or carbon, then we will face difficulties in modeling the nutrient fluxes of the Earth.
On the more therapeutic side, infectious diseases are a real and growing threat. Understanding how microbes behave in diverse environments relative to the rest of our microbiome is really important as we think about the future and combating microbial pathogens.
