MIT Latest News

Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event
Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.
The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.
Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, MIT's chief innovation and strategy officer, dean of the School of Engineering, and head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.
“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” says Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”
Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.
Presentation highlights include:
“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed refining AI tutors for pK-7 students to potentially decrease literacy disparities.
“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real time for live concert improvisation.
“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.
Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”
“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”
Introducing the L. Rafael Reif Innovation Corridor
The open space connecting Hockfield Court with Massachusetts Avenue, in the heart of MIT’s campus, is now the L. Rafael Reif Innovation Corridor, in honor of the Institute’s 17th president. At a dedication ceremony Monday, Reif’s colleagues, friends, and family gathered to honor his legacy and unveil a marker for the walkway that was previously known as North Corridor or “the Outfinite.”
“It’s no accident that the space we dedicate today is not a courtyard, but a corridor — a channel for people and ideas to flow freely through the heart of MIT, and to carry us outward, to the limits of our aspirations,” said Sally Kornbluth, who succeeded Reif as MIT president in 2023.
“With his signature combination of new-world thinking and old-world charm, and as a grand thinker and doer, Rafael left an indelible mark on MIT,” Kornbluth said. “As a permanent testament to his service and his achievements in service to MIT, the nation, and the world, we now dedicate this space as the L. Rafael Reif Innovation Corridor.”
Reif served as president for more than 10 years, following seven years as provost. He has been at MIT since 1980, when he joined the faculty as an assistant professor of electrical engineering.
“Through all those roles, what stood out most was his humility, his curiosity, and his remarkable ability to speak with clarity and conviction,” said Corporation Chair Mark Gorenberg, who opened the ceremony. “Under his leadership, MIT not only stayed true to its mission, it thrived, expanding its impact and strengthening its global voice.”
Gorenberg introduced Abraham J. Siegel Professor of Management and professor of operations research Cindy Barnhart, who served as chancellor, then provost, during Reif’s term as president. Barnhart, who will be stepping down as provost on July 1, summarized many highlights of Reif’s presidency, including the establishment of the MIT Schwarzman College of Computing, the revitalization of Kendall Square, and the launch of The Engine, as well as the construction or modernization of many buildings, from the Wright Brothers Wind Tunnel to the new Edward and Joyce Linde Music Building.
“Beyond space, Rafael’s bold thinking and passion extends to MIT’s approach to education,” Barnhart continued, describing how Reif championed the building of OpenCourseWare, MITx, and edX. She also noted his support for the health and well-being of the MIT community, through efforts such as addressing student sexual misconduct and forming the MindHandHeart initiative. He also hosted dance parties and socials, joined students in the dining halls for dinner, chatted with faculty and staff over breakfasts and at forums, and more.
“At gatherings over the years, Rafael’s wife, Chris, was there by his side,” Barnhart noted, adding, “I’d like to take this opportunity to acknowledge her and thank her for her welcoming and gracious spirit.”
In summary, “I am grateful to Rafael for his visionary leadership and for his love of MIT and its people,” Barnhart said as she presented Reif with a 3D-printed replica of the Maclaurin buildings (MIT Buildings 3, 4, and 10), which was created through a collaboration between the Glass Lab, Edgerton Center, and Project Manus.
Next, Institute Professor Emeritus John Harbison played an interlude on the piano, and a musical ensemble reprised the “Rhumba for Rafael,” which Harbison composed for Reif’s inauguration in 2012.
When Reif took the podium, he reflected on the location of the corridor and its significance to early chapters in his own career; his first office and lab were in Building 13, overlooking what is now the eponymous walkway.
He also considered the years ahead: “The people who pass through this corridor in the future will surely experience the unparalleled excitement of being young at MIT, with the full expectation of upending the world to improve it,” he said.
Faculty and staff walking through the corridor may experience the “undimmed excitement” of working and studying alongside extraordinary students and colleagues, and feel the “deep satisfaction of having created infinite memories here throughout a long career.”
“Even if none of them gives me a thought,” Reif continued, “I would like to believe that my spirit will be here, watching them with pride as they continue the never-ending mission of creating a better world.”
Island rivers carve passageways through coral reefs
Volcanic islands, such as the islands of Hawaii and the Caribbean, are surrounded by coral reefs that encircle them in labyrinthine, living rings. A coral reef is punctured at points by reef passes — wide channels that cut through the coral and serve as conduits for ocean water and nutrients to filter in and out. These watery passageways provide circulation throughout a reef, helping to maintain the health of corals by flushing out freshwater and transporting key nutrients.
Now, MIT scientists have found that reef passes are shaped by island rivers. In a study appearing today in the journal Geophysical Research Letters, the team shows that the locations of reef passes along coral reefs line up with where rivers funnel out from an island’s coast.
Their findings provide the first quantitative evidence of rivers forming reef passes. Scientists and explorers had speculated that this might be the case: Where a river on a volcanic island meets the coast, the freshwater and sediment it carries flow toward the reef, where a strong enough flow can tunnel into the surrounding coral. This idea has been proposed from time to time but never quantitatively tested, until now.
“The results of this study help us to understand how the health of coral reefs depends on the islands they surround,” says study author Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT.
“A lot of discussion around rivers and their impact on reefs today has been negative because of human impact and the effects of agricultural practices,” adds lead author Megan Gillen, a graduate student in the MIT-WHOI Joint Program in Oceanography. “This study shows the potential long-term benefits rivers can have on reefs, which I hope reshapes the paradigm and highlights the natural state of rivers interacting with reefs.”
The study’s other co-author is Andrew Ashton of the Woods Hole Oceanographic Institution.
Drawing the lines
The new study is based on the team’s analysis of the Society Islands, a chain of islands in the South Pacific Ocean that includes Tahiti and Bora Bora. Gillen, who joined the MIT-WHOI program in 2020, was interested in exploring connections between coral reefs and the islands they surround. With limited options for on-site work during the Covid-19 pandemic, she and Perron looked to see what they could learn through satellite images and maps of island topography. They did a quick search using Google Earth and zeroed in on the Society Islands for their uniquely visible reef and island features.
“The islands in this chain have these iconic, beautiful reefs, and we kept noticing these reef passes that seemed to align with deeply embayed portions of the coastline,” Gillen says. “We started asking ourselves, is there a correlation here?”
Viewed from above, the coral reefs that circle some islands bear what look to be notches, like cracks that run straight through a ring. These breaks in the coral are reef passes — large channels that run tens of meters deep and can be wide enough for some boats to pass through. On first look, Gillen noticed that the most obvious reef passes seemed to line up with flooded river valleys — depressions in the coastline that have been eroded over time by island rivers that flow toward the ocean. She wondered whether and to what extent island rivers might shape reef passes.
“People have examined the flow through reef passes to understand how ocean waves and seawater circulate in and out of lagoons, but there have been no claims of how these passes are formed,” Gillen says. “Reef pass formation has been mentioned infrequently in the literature, and people haven’t explored it in depth.”
Reefs unraveled
To get a detailed view of the topography in and around the Society Islands, the team used data from the NASA Shuttle Radar Topography Mission — two radar antennae that flew aboard the space shuttle in 2000 and measured the topography across 80 percent of the Earth’s surface.
The researchers used the mission’s topographic data in the Society Islands to create a map of every drainage basin along the coast of each island, to get an idea of where major rivers flow or once flowed. They also marked the locations of every reef pass in the surrounding coral reefs. They then essentially “unraveled” each island’s coastline and reef into a straight line, and compared the locations of basins versus reef passes.
“Looking at the unwrapped shorelines, we find a significant correlation in the spatial relationship between these big river basins and where the passes line up,” Gillen says. “So we can say that statistically, the alignment of reef passes and large rivers does not seem random. The big rivers have a role in forming passes.”
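To illustrate the kind of alignment test described above, here is a minimal sketch with made-up positions and a simple random-placement null model; it is not the study's actual data or statistical method. The program compares how close each river-basin outlet sits to its nearest reef pass along an unwrapped shoreline against layouts in which the passes are scattered at random. A small resulting fraction means the observed alignment is unlikely to be coincidental.

```rust
// Illustrative only: hypothetical positions and a uniform-random null model.

fn nearest_pass_distance(basin: f64, passes: &[f64], shoreline_len: f64) -> f64 {
    passes
        .iter()
        .map(|&p| {
            let d = (basin - p).abs();
            d.min(shoreline_len - d) // distances wrap around the island's perimeter
        })
        .fold(f64::INFINITY, f64::min)
}

fn mean_nearest_distance(basins: &[f64], passes: &[f64], shoreline_len: f64) -> f64 {
    basins
        .iter()
        .map(|&b| nearest_pass_distance(b, passes, shoreline_len))
        .sum::<f64>()
        / basins.len() as f64
}

fn main() {
    let shoreline_len = 60.0; // kilometers of unwrapped coastline (made-up)
    let basins = [3.0, 14.5, 27.0, 41.2, 55.8]; // river-basin outlet positions (made-up)
    let passes = [2.1, 15.0, 26.2, 42.0, 54.9]; // reef-pass positions (made-up)

    let observed = mean_nearest_distance(&basins, &passes, shoreline_len);

    // Null model: the same number of passes scattered uniformly at random.
    // A tiny linear congruential generator keeps the sketch dependency-free.
    let mut seed: u64 = 42;
    let mut uniform = move || {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (seed >> 11) as f64 / (1u64 << 53) as f64
    };

    let trials = 10_000;
    let mut as_close_or_closer = 0;
    for _ in 0..trials {
        let random_passes: Vec<f64> = (0..passes.len())
            .map(|_| uniform() * shoreline_len)
            .collect();
        if mean_nearest_distance(&basins, &random_passes, shoreline_len) <= observed {
            as_close_or_closer += 1;
        }
    }

    // A small fraction here means the observed basin-to-pass alignment is
    // unlikely to arise from randomly placed passes.
    println!("observed mean basin-to-pass distance: {:.2} km", observed);
    println!(
        "fraction of random layouts at least as close: {:.4}",
        as_close_or_closer as f64 / trials as f64
    );
}
```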
As for how rivers shape the coral conduits, the team has two ideas, which they call, respectively, reef incision and reef encroachment. In reef incision, they propose that reef passes can form in times when the sea level is relatively low, such that the reef is exposed above the sea surface and a river can flow directly over the reef. The water and sediment carried by the river can then erode the coral, progressively carving a path through the reef.
When sea level is relatively higher, the team suspects a reef pass can still form, through reef encroachment. Coral reefs naturally live close to the water surface, where there is light and opportunity for photosynthesis. When sea levels rise, corals naturally grow upward and inward toward an island, to try to “catch up” to the water line.
“Reefs migrate toward the islands as sea levels rise, trying to keep pace with changing average sea level,” Gillen says.
However, part of the encroaching reef can end up in old river channels that were previously carved out by large rivers and that are lower than the rest of the island coastline. The corals in these river beds end up deeper than light can extend into the water column, and inevitably drown, leaving a gap in the form of a reef pass.
“We don’t think it’s an either/or situation,” Gillen says. “Reef incision occurs when sea levels fall, and reef encroachment happens when sea levels rise. Both mechanisms, occurring over dozens of cycles of sea-level rise and island evolution, are likely responsible for the formation and maintenance of reef passes over time.”
The team also looked to see whether there were differences in reef passes in older versus younger islands. They observed that younger islands were surrounded by more reef passes that were spaced closer together, versus older islands that had fewer reef passes that were farther apart.
As islands age, they subside, or sink, into the ocean, which reduces the amount of land that funnels rainwater into rivers. Eventually, rivers are too weak to keep the reef passes open, at which point, the ocean likely takes over, and incoming waves could act to close up some passes.
Gillen is exploring ideas for how rivers, or river-like flow, can be engineered to create paths through coral reefs in ways that would promote circulation and benefit reef health.
“Part of me wonders: If you had a more persistent flow, in places where you don’t naturally have rivers interacting with the reef, could that potentially be a way to increase health, by incorporating that river component back into the reef system?” Gillen says. “That’s something we’re thinking about.”
This research was supported, in part, by the WHOI Watson and Von Damm fellowships.
MIT engineers uncover a surprising reason why tissues are flexible or rigid
Water makes up around 60 percent of the human body. More than half of this water sloshes around inside the cells that make up organs and tissues. Much of the remaining water flows in the nooks and crannies between cells, much like seawater between grains of sand.
Now, MIT engineers have found that this “intercellular” fluid plays a major role in how tissues respond when squeezed, pressed, or physically deformed. Their findings could help scientists understand how cells, tissues, and organs physically adapt to conditions such as aging, cancer, diabetes, and certain neuromuscular diseases.
In a paper appearing today in Nature Physics, the researchers show that when a tissue is pressed or squeezed, it is more compliant and relaxes more quickly when the fluid between its cells flows easily. When the cells are packed together and there is less room for intercellular flow, the tissue as a whole is stiffer and resists being pressed or squeezed.
The findings challenge conventional wisdom, which has assumed that a tissue’s compliance depends mainly on what’s inside, rather than around, a cell. Now that the researchers have shown that intercellular flow determines how tissues will adapt to physical forces, the results can be applied to understand a wide range of physiological conditions, including how muscles withstand exercise and recover from injury, and how a tissue’s physical adaptability may affect the progression of aging, cancer, and other medical conditions.
The team envisions the results could also inform the design of artificial tissues and organs. For instance, in engineering artificial tissue, scientists might optimize intercellular flow within the tissue to improve its function or resilience. The researchers suspect that intercellular flow could also be a route for delivering nutrients or therapies, either to heal a tissue or eradicate a tumor.
“People know there is a lot of fluid between cells in tissues, but how important that is, in particular in tissue deformation, is completely ignored,” says Ming Guo, associate professor of mechanical engineering at MIT. “Now we really show we can observe this flow. And as the tissue deforms, flow between cells dominates the behavior. So, let’s pay attention to this when we study diseases and engineer tissues.”
Guo is a co-author of the new study, which includes lead author and MIT postdoc Fan Liu PhD ’24, along with Bo Gao and Hui Li of Beijing Normal University, and Liran Lei and Shuainan Liu of Peking Union Medical College.
Pressed and squeezed
The tissues and organs in our body are constantly undergoing physical deformations, from the large stretch and strain of muscles during motion to the small and steady contractions of the heart. In some cases, how easily tissues adapt to deformation can relate to how quickly a person can recover from, for instance, an allergic reaction, a sports injury, or a stroke. However, exactly what sets a tissue’s response to deformation is largely unknown.
Guo and his group at MIT looked into the mechanics of tissue deformation, and the role of intercellular flow in particular, following a study they published in 2020. In that study, they focused on tumors and observed the way in which fluid can flow from the center of a tumor out to its edges, through the cracks and crevices between individual tumor cells. They found that when a tumor was squeezed or pressed, the intercellular flow increased, acting as a conveyor belt to transport fluid from the center to the edges. Intercellular flow, they found, could fuel tumor invasion into surrounding regions.
In their new study, the team looked to see what role this intercellular flow might play in other, noncancerous tissues.
“Whether you allow the fluid to flow between cells or not seems to have a major impact,” Guo says. “So we decided to look beyond tumors to see how this flow influences how other tissues respond to deformation.”
A fluid pancake
Guo, Liu, and their colleagues studied the intercellular flow in a variety of biological tissues, including cells derived from pancreatic tissue. They carried out experiments in which they first cultured small clusters of tissue, each measuring less than a quarter of a millimeter wide and numbering tens of thousands of individual cells. They placed each tissue cluster in a custom-designed testing platform that the team built specifically for the study.
“These microtissue samples are in this sweet zone where they are too large to see with atomic force microscopy techniques and too small for bulkier devices,” Guo says. “So, we decided to build a device.”
The researchers adapted a high-precision microbalance that measures minute changes in weight. They combined this with a step motor that is designed to press down on a sample with nanometer precision. The team placed tissue clusters one at a time on the balance and recorded each cluster’s changing weight as it relaxed from a sphere into the shape of a pancake in response to the compression. The team also took videos of the clusters as they were squeezed.
For each type of tissue, the team made clusters of varying sizes. They reasoned that if the tissue’s response is ruled by the flow between cells, then the bigger a tissue, the longer it should take for water to seep through, and therefore, the longer it should take the tissue to relax. It should take the same amount of time, regardless of size, if a tissue’s response is determined by the structure of the tissue rather than fluid.
Over multiple experiments with a variety of tissue types and sizes, the team observed a similar trend: The bigger the cluster, the longer it took to relax, indicating that intercellular flow dominates a tissue’s response to deformation.
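In poroelastic terms (a standard scaling argument, not a formula quoted from the paper), a flow-governed relaxation time grows with the square of the cluster size, whereas a response set by the tissue's internal structure alone would be size-independent:

$$\tau_{\mathrm{flow}} \sim \frac{L^2}{D_p}, \qquad \tau_{\mathrm{structure}} \sim \mathrm{const.},$$

where $L$ is the cluster size and $D_p$ is an effective diffusivity reflecting how easily intercellular fluid moves through the tissue. The observed growth of relaxation time with cluster size matches the first regime.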
“We show that this intercellular flow is a crucial component to be considered in the fundamental understanding of tissue mechanics and also applications in engineering living systems,” Liu says.
Going forward, the team plans to look into how intercellular flow influences brain function, particularly in disorders such as Alzheimer’s disease.
“Intercellular or interstitial flow can help you remove waste and deliver nutrients to the brain,” Liu adds. “Enhancing this flow in some cases might be a good thing.”
“As this work shows, as we apply pressure to a tissue, fluid will flow,” Guo says. “In the future, we can think of designing ways to massage a tissue to allow fluid to transport nutrients between cells.”
“Cold spray” 3D printing technique proves effective for on-site bridge repair
More than half of the nation’s 623,218 bridges are experiencing significant deterioration. Through an in-field case study conducted in western Massachusetts, a team led by the University of Massachusetts at Amherst in collaboration with researchers from the MIT Department of Mechanical Engineering (MechE) has just successfully demonstrated that 3D printing may provide a cost-effective, minimally disruptive solution.
“Anytime you drive, you go under or over a corroded bridge,” says Simos Gerasimidis, associate professor of civil and environmental engineering at UMass Amherst and former visiting professor in the Department of Civil and Environmental Engineering at MIT, in a press release. “They are everywhere. It’s impossible to avoid, and their condition often shows significant deterioration. We know the numbers.”
The numbers, according to the American Society of Civil Engineers’ 2025 Report Card for America’s Infrastructure, are staggering: Across the United States, 49.1 percent of the nation’s 623,218 bridges are in “fair” condition and 6.8 percent are in “poor” condition. The projected cost to restore all of these failing bridges exceeds $191 billion.
A proof-of-concept repair took place last month on a small, corroded section of a bridge in Great Barrington, Massachusetts. The technique, called cold spray, can extend the life of beams, reinforcing them with newly deposited steel. The process accelerates particles of powdered steel in heated, compressed gas, and then a technician uses an applicator to spray the steel onto the beam. Repeated sprays create multiple layers, restoring thickness and other structural properties.
This method has proven to be an effective solution for other large structures like submarines, airplanes, and ships, but bridges present a problem on a greater scale. Unlike movable vessels, stationary bridges cannot be brought to the 3D printer — the printer must be brought on-site — and, to lessen systemic impacts, repairs must also be made with minimal disruptions to traffic, which the new approach allows.
“Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” says Gerasimidis. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that.”
“This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” says John Hart, Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT. Hart and Haden Quinlan, senior program manager in the Center for Advanced Production Technologies at MIT, are leading MIT’s efforts in the project. Hart is also faculty co-lead of the recently announced MIT Initiative for New Manufacturing.
“Integrating digital systems with advanced physical processing is the future of infrastructure,” says Quinlan. “We’re excited to have moved this technology beyond the lab and into the field, and grateful to our collaborators in making this work possible.”
UMass says the Massachusetts Department of Transportation (MassDOT) has been a valued research partner, helping to identify the problem and providing essential support for the development and demonstration of the technology. Technical guidance and funding support were provided by the MassDOT Highway Division and the Research and Technology Transfer Program.
Equipment for this project was supported through the Massachusetts Manufacturing Innovation Initiative, a statewide program led by the Massachusetts Technology Collaborative (MassTech)’s Center for Advanced Manufacturing that helps bridge the gap between innovation and commercialization in hard tech manufacturing.
“It’s a very Massachusetts success story,” Gerasimidis says. “It involves MassDOT being open-minded to new ideas. It involves UMass and MIT putting [together] the brains to do it. It involves MassTech to bring manufacturing back to Massachusetts. So, I think it’s a win-win for everyone involved here.”
The bridge in Great Barrington is scheduled for demolition in a few years. After demolition occurs, the recently sprayed beams will be taken back to UMass for testing and measurement to study how well the deposited steel powder adhered to the structure in the field compared with a controlled lab setting, whether it corroded further after it was sprayed, and what its mechanical properties are.
This demonstration builds on several years of research by the UMass and MIT teams, including development of a “digital thread” approach to scan corroded beam surfaces and determine material deposition profiles, alongside laboratory studies of cold spray and other additive manufacturing approaches that are suited to field deployment.
Altogether, this work is a collaborative effort among UMass Amherst, MIT MechE, MassDOT, the Massachusetts Technology Collaborative (MassTech), the U.S. Department of Transportation, and the Federal Highway Administration. Research reports are available on the MassDOT website.
When Earth iced over, early life may have sheltered in meltwater ponds
When the Earth froze over, where did life shelter? MIT scientists say one refuge may have been pools of melted ice that dotted the planet’s icy surface.
In a study appearing today in Nature Communications, the researchers report that 635 million to 720 million years ago, during periods known as “Snowball Earth,” when much of the planet was covered in ice, some of our ancient cellular ancestors could have waited things out in meltwater ponds.
The scientists found that eukaryotes — complex cellular lifeforms that eventually evolved into the diverse multicellular life we see today — could have survived the global freeze by living in shallow pools of water. These small, watery oases may have persisted atop relatively shallow ice sheets present in equatorial regions. There, the ice surface could accumulate dark-colored dust and debris from below, which enhanced its ability to melt into pools. At temperatures hovering around 0 degrees Celsius, the resulting meltwater ponds could have served as habitable environments for certain forms of early complex life.
The team drew its conclusions based on an analysis of modern-day meltwater ponds. Today in Antarctica, small pools of melted ice can be found along the margins of ice sheets. The conditions along these polar ice sheets are similar to what likely existed along ice sheets near the equator during Snowball Earth.
The researchers analyzed samples from a variety of meltwater ponds located on the McMurdo Ice Shelf in an area that was first described by members of Robert Falcon Scott's 1903 expedition as “dirty ice.” The MIT researchers discovered clear signatures of eukaryotic life in every pond. The communities of eukaryotes varied from pond to pond, revealing a surprising diversity of life across the setting. The team also found that salinity plays a key role in the kind of life a pond can host: Ponds that were more brackish or salty had more similar eukaryotic communities, which differed from those in ponds with fresher waters.
“We’ve shown that meltwater ponds are valid candidates for where early eukaryotes could have sheltered during these planet-wide glaciation events,” says lead author Fatima Husain, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This shows us that diversity is present and possible in these sorts of settings. It’s really a story of life’s resilience.”
The study’s MIT co-authors include Schlumberger Professor of Geobiology Roger Summons and former postdoc Thomas Evans, along with Jasmin Millar of Cardiff University, Anne Jungblut at the Natural History Museum in London, and Ian Hawes of the University of Waikato in New Zealand.
Polar plunge
“Snowball Earth” is the colloquial term for periods of time in Earth’s history during which the planet iced over. It is often used to refer to the two consecutive, multi-million-year glaciation events that took place during the Cryogenian Period, which geologists define as the time between 635 and 720 million years ago. Whether the Earth was more of a hardened snowball or a softer “slushball” is still up for debate. But scientists are certain of one thing: Most of the planet was plunged into a deep freeze, with average global temperatures of minus 50 degrees Celsius. The question has been: How and where did life survive?
“We’re interested in understanding the foundations of complex life on Earth. We see evidence for eukaryotes before and after the Cryogenian in the fossil record, but we largely lack direct evidence of where they may have lived during,” Husain says. “The great part of this mystery is, we know life survived. We’re just trying to understand how and where.”
There are a number of ideas for where organisms could have sheltered during Snowball Earth, including in certain patches of the open ocean (if such environments existed), in and around deep-sea hydrothermal vents, and under ice sheets. In considering meltwater ponds, Husain and her colleagues pursued the hypothesis that surface ice meltwaters may also have been capable of supporting early eukaryotic life at the time.
“There are many hypotheses for where life could have survived and sheltered during the Cryogenian, but we don’t have excellent analogs for all of them,” Husain notes. “Above-ice meltwater ponds occur on Earth today and are accessible, giving us the opportunity to really focus in on the eukaryotes which live in these environments.”
Small pond, big life
For their new study, the researchers analyzed samples taken from meltwater ponds in Antarctica. In 2018, Summons and colleagues from New Zealand traveled to a region of the McMurdo Ice Shelf in East Antarctica known to host small ponds of melted ice, each just a few feet deep and a few meters wide. There, water freezes all the way to the seafloor, trapping dark-colored sediments and marine organisms in the process. Wind-driven loss of ice from the surface creates a sort of conveyor belt that brings this trapped debris to the surface over time. The dark debris absorbs the sun’s warmth and melts the ice around it, while the surrounding debris-free ice reflects incoming sunlight, resulting in the formation of shallow meltwater ponds.
The bottom of each pond is lined with mats of microbes that have built up over years to form layers of sticky cellular communities.
“These mats can be a few centimeters thick, colorful, and they can be very clearly layered,” Husain says.
These microbial mats are made up of cyanobacteria: prokaryotic, single-celled photosynthetic organisms that lack a cell nucleus or other organelles. While these ancient microbes are known to survive within some of the harshest environments on Earth, including meltwater ponds, the researchers wanted to know whether eukaryotes — complex organisms that evolved a cell nucleus and other membrane-bound organelles — could also weather similarly challenging circumstances. Answering this question would take more than a microscope, as the defining characteristics of the microscopic eukaryotes present among the microbial mats are too subtle to distinguish by eye.
To characterize the eukaryotes, the team analyzed the mats for specific lipids they make, called sterols, as well as genetic components called ribosomal ribonucleic acid (rRNA), both of which can be used to identify organisms with varying degrees of specificity. These two independent sets of analyses provided complementary fingerprints for certain eukaryotic groups. In these analyses, the team found many sterols and rRNA genes closely associated with specific types of algae, protists, and microscopic animals among the microbial mats. The researchers were able to assess the types and relative abundance of lipids and rRNA genes from pond to pond, and found the ponds hosted a surprising diversity of eukaryotic life.
“No two ponds were alike,” Husain says. “There are repeating casts of characters, but they’re present in different abundances. And we found diverse assemblages of eukaryotes from all the major groups in all the ponds studied. These eukaryotes are the descendants of the eukaryotes that survived the Snowball Earth. This really highlights that meltwater ponds during Snowball Earth could have served as above-ice oases that nurtured the eukaryotic life that enabled the diversification and proliferation of complex life — including us — later on.”
This research was supported, in part, by the NASA Exobiology Program, the Simons Collaboration on the Origins of Life, and a MISTI grant from MIT-New Zealand.
QS ranks MIT the world’s No. 1 university for 2025-26
MIT has again been named the world’s top university by the QS World University Rankings, which were announced today. This is the 14th year in a row MIT has received this distinction.
The full 2026 edition of the rankings — published by Quacquarelli Symonds, an organization specializing in education and study abroad — can be found at TopUniversities.com. The QS rankings are based on factors including academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students.
MIT was also ranked the world’s top university in 11 of the subject areas ranked by QS, as announced in March of this year.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.
Memory safety is at a tipping point
Social Security numbers stolen. Public transport halted. Hospital systems frozen until ransoms are paid. These are some of the damaging consequences of insecure memory in computer systems.
Over the past decade, public awareness of such cyberattacks has intensified, as their impacts have harmed individuals, corporations, and governments. Today, this awareness is coinciding with technologies that are finally mature enough to eliminate vulnerabilities in memory safety.
"We are at a tipping point — now is the right time to move to memory-safe systems," says Hamed Okhravi, a cybersecurity expert in MIT Lincoln Laboratory’s Secure Resilient Systems and Technology Group.
In an op-ed earlier this year in Communications of the ACM, Okhravi joined 20 other luminaries in the field of computer security to lay out a plan for achieving universal memory safety. They argue for a standardized framework as an essential next step to adopting memory-safety technologies throughout all forms of computer systems, from fighter jets to cell phones.
Memory-safety vulnerabilities occur when a program performs unintended or erroneous operations in memory. Such operations are prevalent, accounting for an estimated 70 percent of software vulnerabilities. If attackers gain access to memory, they can potentially steal sensitive information, alter program execution, or even take control of the computer system.
These vulnerabilities exist largely because common software programming languages, such as C or C++, are inherently memory-insecure. A simple error by a software engineer, perhaps one line in a system’s multimillion lines of code, could be enough for an attacker to exploit. In recent years, new memory-safe languages, such as Rust, have been developed. But rewriting legacy systems in new, memory-safe languages can be costly and complicated.
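As a minimal illustration (a generic sketch, not code from the op-ed or from any DoD system), the kind of out-of-bounds access that silently corrupts memory in C is either refused by the Rust compiler or stopped by a run-time bounds check:

```rust
fn main() {
    let buffer = [0u8; 8];

    // A constant out-of-bounds index is refused at compile time:
    // let byte = buffer[12]; // error: this operation will panic at runtime

    // An index computed at run time is bounds-checked, so an overflow
    // becomes a controlled failure instead of silent memory corruption
    // that an attacker could exploit.
    let index = buffer.len() + 4;
    match buffer.get(index) {
        Some(byte) => println!("read byte {byte}"),
        None => println!("index {index} is out of bounds; access refused"),
    }
}
```

The equivalent array write in C compiles without complaint, which is why a single slip in a system’s multimillion lines of C or C++ code can become an exploitable vulnerability.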
Okhravi focuses on the national security implications of memory-safety vulnerabilities. For the U.S. Department of Defense (DoD), whose systems comprise billions of lines of legacy C or C++ code, memory safety has long been a known problem. The National Security Agency (NSA) and the federal government have recently urged technology developers to eliminate memory-safety vulnerabilities from their products. Security concerns extend beyond military systems to widespread consumer products.
"Cell phones, for example, are not immediately important for defense or war-fighting, but if we have 200 million vulnerable cell phones in the nation, that’s a serious matter of national security," Okhravi says.
Memory-safe technology
In recent years, several technologies have emerged to help patch memory vulnerabilities in legacy systems. As the guest editor for a special issue of IEEE Security and Privacy, Okhravi solicited articles from top contributors in the field to highlight these technologies and the ways they can build on one another.
Some of these memory-safety technologies have been developed at Lincoln Laboratory, with sponsorship from DoD agencies. These technologies include TRACER and TASR, which are software products for Windows and Linux systems, respectively, that reshuffle the location of code in memory each time a program accesses it, making it very difficult for attackers to find exploits. These moving-target solutions have since been licensed by cybersecurity and cloud services companies.
"These technologies are quick wins, enabling us to make a lot of immediate impact without having to rebuild the whole system. But they are only a partial solution, a way of securing legacy systems while we are transitioning to safer languages," Okhravi says.
Innovative work is underway to make that transition easier. For example, the TRACTOR program at the U.S. Defense Advanced Research Projects Agency is developing artificial intelligence tools to automatically translate legacy C code to Rust. Lincoln Laboratory researchers will test and evaluate the translator for use in DoD systems.
Okhravi and his coauthors acknowledged in their op-ed that the timeline for full adoption of memory-safe systems is long — likely decades. It will require the deployment of a combination of new hardware, software, and techniques, each with their own adoption paths, costs, and disruptions. Organizations should prioritize mission-critical systems first.
"For example, the most important components in a fighter jet, such as the flight-control algorithm or the munition-handling logic, would be made memory-safe, say, within five years," Okhravi says. Subsystems less important to critical functions would have a longer time frame.
Use of memory-safe programming languages at Lincoln Laboratory
As Lincoln Laboratory continues its leadership in advancing memory-safety technologies, the Secure Resilient Systems and Technology Group has prioritized adopting memory-safe programming languages. "We’ve been investing in the group-wide use of Rust for the past six years as part of our broader strategy to prototype cyber-hardened mission systems and high-assurance cryptographic implementations for the DoD and intelligence community," says Roger Khazan, who leads the group. "Memory safety is fundamental to trustworthiness in these systems."
Rust’s strong guarantees around memory safety, along with its speed and ability to catch bugs early during development, make it especially well-suited for building secure and reliable systems. The laboratory has been using Rust to prototype and transition secure components for embedded, distributed, and cryptographic systems where resilience, performance, and correctness are mission-critical.
These efforts support both immediate U.S. government needs and a longer-term transformation of the national security software ecosystem. "They reflect Lincoln Laboratory’s broader mission of advancing technology in service to national security, grounded in technical excellence, innovation, and trust," Khazan adds.
A technology-agnostic framework
As new computer systems are designed, developers need a framework of memory-safety standards guiding them. Today, attempts to request memory safety in new systems are hampered by the lack of a clear set of definitions and practices.
Okhravi emphasizes that this standardized framework should be technology-agnostic and provide specific timelines with sets of requirements for different types of systems.
"In the acquisition process for the DoD, and even the commercial sector, when we are mandating memory safety, it shouldn’t be tied to a specific technology. It should be generic enough that different types of systems can apply different technologies to get there," he says.
Filling this gap requires not only building industrial consensus on technical approaches, but also collaborating with government and academia to bring this effort to fruition.
The need for collaboration was an impetus for the op-ed, and Okhravi says that the consortium of experts will push for standardization from their positions across industry, government, and academia. Contributors to the paper represent a wide range of institutes, from the University of Cambridge and SRI International to Microsoft and Google. Together, they are building momentum to finally root out memory vulnerabilities and the costly damages associated with them.
"We are seeing this cost-risk trade-off mindset shifting, partly because of the maturation of technology and partly because of such consequential incidents,” Okhravi says. "We hear all the time that such-and-such breach cost billions of dollars. Meanwhile, making the system secure might have cost 10 million dollars. Wouldn’t we have been better off making that effort?"
The MIT Press acquires University Science Books from AIP Publishing
The MIT Press announces the acquisition of textbook publisher University Science Books from AIP Publishing, a subsidiary of the American Institute of Physics (AIP).
University Science Books was founded in 1978 to publish intermediate- and advanced-level science and reference books by respected authors, published with the highest design and production standards, and priced as affordably as possible. Over the years, USB’s authors have acquired international followings, and its textbooks in chemistry, physics, and astronomy have been recognized as the gold standard in their respective disciplines. USB was acquired by AIP Publishing in 2021.
Bestsellers include John Taylor’s “Classical Mechanics,” the No. 1 adopted text for undergrad mechanics courses in the United States and Canada, and his “Introduction to Error Analysis;” and Don McQuarrie’s “Physical Chemistry: A Molecular Approach” (commonly known as “Big Red”), the second-most adopted physical chemistry textbook in the U.S.
“We are so pleased to have found a new home for USB’s prestigious list of textbooks in the sciences,” says Alix Vance, CEO of AIP Publishing. “With its strong STEM focus, academic rigor, and high production standards, the MIT Press is the perfect partner to continue the publishing legacy of University Science Books.”
“This acquisition is both a brand and content fit for the MIT Press,” says Amy Brand, director and publisher of the MIT Press. “USB’s respected science list will complement our long-established history of publishing foundational texts in computer science, finance, and economics.”
The MIT Press will take over the USB list as of July 1, with inventory transferring to Penguin Random House Publishing Services, the MIT Press’ sales and distribution partner.
For details regarding University Science Books titles, inventory, and how to order, please contact the MIT Press.
Established in 1962, The MIT Press is one of the largest and most distinguished university presses in the world and a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.
AIP Publishing is a wholly owned not-for-profit subsidiary of AIP and supports the charitable, scientific, and educational purposes of AIP through scholarly publishing activities on its behalf and on behalf of its publishing partners.
Supercharged vaccine could offer strong protection with just one dose
Researchers at MIT and the Scripps Research Institute have shown that they can generate a strong immune response to HIV with just one vaccine dose, by adding two powerful adjuvants — materials that help stimulate the immune system.
In a study of mice, the researchers showed that this approach produced a much wider diversity of antibodies against an HIV antigen, compared to the vaccine given on its own or with just one of the adjuvants. The dual-adjuvant vaccine accumulated in the lymph nodes and remained there for up to a month, allowing the immune system to build up a much greater number of antibodies against the HIV protein.
This strategy could lead to the development of vaccines that only need to be given once, for infectious diseases including HIV or SARS-CoV-2, the researchers say.
“This approach is compatible with many protein-based vaccines, so it offers the opportunity to engineer new formulations for these types of vaccines across a wide range of different diseases, such as influenza, SARS-CoV-2, or other pandemic outbreaks,” says J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering at MIT, and a member of the Koch Institute for Integrative Cancer Research and the Ragon Institute of MGH, MIT, and Harvard.
Love and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the study, which appears today in Science Translational Medicine. Kristen Rodrigues PhD ’23 and Yiming Zhang PhD ’25 are the lead authors of the paper.
More powerful vaccines
Most vaccines are delivered along with adjuvants, which help to stimulate a stronger immune response to the antigen. One adjuvant commonly used with protein-based vaccines, including those for hepatitis A and B, is aluminum hydroxide, also known as alum. This adjuvant works by activating the innate immune response, helping the body to form a stronger memory of the vaccine antigen.
Several years ago, Irvine developed another adjuvant based on saponin, an FDA-approved adjuvant derived from the bark of the Chilean soapbark tree. His work showed that nanoparticles containing both saponin and a molecule called MPLA, which promotes inflammation, worked better than saponin on its own. That nanoparticle, known as SMNP, is now being used as an adjuvant for an HIV vaccine that is currently in clinical trials.
Irvine and Love then tried combining alum and SMNP and showed that vaccines containing both of those adjuvants could generate even more powerful immune responses against either HIV or SARS-CoV-2.
In the new paper, the researchers wanted to explore why these two adjuvants work so well together to boost the immune response, specifically the B cell response. B cells produce antibodies that can circulate in the bloodstream and recognize a pathogen if the body is exposed to it again.
For this study, the researchers used an HIV protein called MD39 as their vaccine antigen, and anchored dozens of these proteins to each alum particle, along with SMNP.
After vaccinating mice with these particles, the researchers found that the vaccine accumulated in the lymph nodes — structures where B cells encounter antigens and undergo rapid mutations that generate antibodies with high affinity for a particular antigen. This process takes place within clusters of cells known as germinal centers.
The researchers showed that SMNP and alum helped the HIV antigen to penetrate through the protective layer of cells surrounding the lymph nodes without being broken down into fragments. The adjuvants also helped the antigens to remain intact in the lymph nodes for up to 28 days.
“As a result, the B cells that are cycling in the lymph nodes are constantly being exposed to the antigen over that time period, and they get the chance to refine their solution to the antigen,” Love says.
This approach may mimic what occurs during a natural infection, when antigens can remain in the lymph nodes for weeks, giving the body time to build up an immune response.
Antibody diversity
Single-cell RNA sequencing of B cells from the vaccinated mice revealed that the vaccine containing both adjuvants generated a much more diverse repertoire of B cells and antibodies. Mice that received the dual-adjuvant vaccine produced two to three times more unique B cells than mice that received just one of the adjuvants.
That increase in B cell number and diversity boosts the chances that the vaccine could generate broadly neutralizing antibodies — antibodies that can recognize a variety of strains of a given virus, such as HIV.
“When you think about the immune system sampling all of the possible solutions, the more chances we give it to identify an effective solution, the better,” Love says. “Generating broadly neutralizing antibodies is something that likely requires both the kind of approach that we showed here, to get that strong and diversified response, as well as antigen design to get the right part of the immunogen shown.”
Using these two adjuvants together could also contribute to the development of more potent vaccines against other infectious diseases, with just a single dose.
“What’s potentially powerful about this approach is that you can achieve long-term exposures based on a combination of adjuvants that are already reasonably well-understood, so it doesn’t require a different technology. It’s just combining features of these adjuvants to enable low-dose or potentially even single-dose treatments,” Love says.
The research was funded by the National Institutes of Health; the Koch Institute Support (core) Grant from the National Cancer Institute; the Ragon Institute of MGH, MIT, and Harvard; and the Howard Hughes Medical Institute.
New 3D chips could make electronics faster and more energy-efficient
The advanced semiconductor material gallium nitride will likely be key for the next generation of high-speed communication systems and the power electronics needed for state-of-the-art data centers.
Unfortunately, the high cost of gallium nitride (GaN) and the specialization required to incorporate this semiconductor material into conventional electronics have limited its use in commercial applications.
Now, researchers from MIT and elsewhere have developed a new fabrication process that integrates high-performance GaN transistors onto standard silicon CMOS chips in a way that is low-cost and scalable, and compatible with existing semiconductor foundries.
Their method involves building many tiny transistors on the surface of a GaN chip, cutting out each individual transistor, and then bonding just the necessary number of transistors onto a silicon chip using a low-temperature process that preserves the functionality of both materials.
The cost remains minimal since only a tiny amount of GaN material is added to the chip, but the resulting device can receive a significant performance boost from compact, high-speed transistors. In addition, by separating the GaN circuit into discrete transistors that can be spread over the silicon chip, the new technology is able to reduce the temperature of the overall system.
The researchers used this process to fabricate a power amplifier, an essential component in mobile phones, that achieves higher signal strength and efficiencies than devices with silicon transistors. In a smartphone, this could improve call quality, boost wireless bandwidth, enhance connectivity, and extend battery life.
Because their method fits into standard procedures, it could improve electronics that exist today as well as future technologies. Down the road, the new integration scheme could even enable quantum applications, as GaN performs better than silicon at the cryogenic temperatures essential for many types of quantum computing.
“If we can bring the cost down, improve the scalability, and, at the same time, enhance the performance of the electronic device, it is a no-brainer that we should adopt this technology. We’ve combined the best of what exists in silicon with the best possible gallium nitride electronics. These hybrid chips can revolutionize many commercial markets,” says Pradyot Yadav, an MIT graduate student and lead author of a paper on this method.
He is joined on the paper by fellow MIT graduate students Jinchen Wang and Patrick Darmawi-Iskandar; MIT postdoc John Niroula; senior authors Ulrich L. Rohde, a visiting scientist at the Microsystems Technology Laboratories (MTL), and Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and member of MTL; and Tomás Palacios, the Clarence J. LeBel Professor of EECS, and director of MTL; as well as collaborators at Georgia Tech and the Air Force Research Laboratory. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
Swapping transistors
Gallium nitride is the second most widely used semiconductor in the world, just after silicon, and its unique properties make it ideal for applications such as lighting, radar systems, and power electronics.
The material has been around for decades and, to get access to its maximum performance, it is important for chips made of GaN to be connected to digital chips made of silicon, also called CMOS chips. To enable this, some integration methods bond GaN transistors onto a CMOS chip by soldering the connections, but this limits how small the GaN transistors can be. The tinier the transistors, the higher the frequency at which they can work.
Other methods integrate an entire gallium nitride wafer on top of a silicon wafer, but using so much material is extremely costly, especially since the GaN is only needed in a few tiny transistors. The rest of the material in the GaN wafer is wasted.
“We wanted to combine the functionality of GaN with the power of digital chips made of silicon, but without having to compromise on either cost or bandwidth. We achieved that by adding super-tiny discrete gallium nitride transistors right on top of the silicon chip,” Yadav explains.
The new chips are the result of a multistep process.
First, a tightly packed collection of minuscule transistors is fabricated across the entire surface of a GaN wafer. Using very fine laser technology, the researchers cut each one down to just the size of the transistor, which is 240 by 410 microns, forming what they call a dielet. (A micron is one millionth of a meter.)
Each transistor is fabricated with tiny copper pillars on top, which they use to bond directly to the copper pillars on the surface of a standard silicon CMOS chip. Copper to copper bonding can be done at temperatures below 400 degrees Celsius, which is low enough to avoid damaging either material.
Current GaN integration techniques require bonds that utilize gold, an expensive material that needs much higher temperatures and stronger bonding forces than copper. Since gold can contaminate the tools used in most semiconductor foundries, it typically requires specialized facilities.
“We wanted a process that was low-cost, low-temperature, and low-force, and copper wins on all of those compared to gold. At the same time, it has better conductivity,” Yadav says.
A new tool
To enable the integration process, they created a specialized new tool that can carefully place the extremely tiny GaN transistors onto a silicon chip. The tool uses a vacuum to hold a dielet as it moves on top of the silicon chip, zeroing in on the copper bonding interface with nanometer precision.
They use advanced microscopy to monitor the interface, and when the dielet is in the right position, they apply heat and pressure to bond the GaN transistor to the chip.
“For each step in the process, I had to find a new collaborator who knew how to do the technique that I needed, learn from them, and then integrate that into my platform. It was two years of constant learning,” Yadav says.
Once the researchers had perfected the fabrication process, they demonstrated it by developing power amplifiers, which are radio frequency circuits that boost wireless signals.
Their devices achieved higher bandwidth and better gain than devices made with traditional silicon transistors. Each compact chip has an area of less than half a square millimeter.
In addition, because the silicon chip they used in their demonstration is based on Intel 16, a state-of-the-art 22-nanometer FinFET process with advanced metallization and passive options, they were able to incorporate components often used in silicon circuits, such as neutralization capacitors. This significantly improved the gain of the amplifier, bringing it one step closer to enabling the next generation of wireless technologies.
“To address the slowdown of Moore’s Law in transistor scaling, heterogeneous integration has emerged as a promising solution for continued system scaling, reduced form factor, improved power efficiency, and cost optimization. Particularly in wireless technology, the tight integration of compound semiconductors with silicon-based wafers is critical to realizing unified systems of front-end integrated circuits, baseband processors, accelerators, and memory for next-generation antennas-to-AI platforms. This work makes a significant advancement by demonstrating 3D integration of multiple GaN chips with silicon CMOS and pushes the boundaries of current technological capabilities,” says Atom Watanabe, a research scientist at IBM who was not involved with this paper.
This work is supported, in part, by the U.S. Department of Defense through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program and CHIMES, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation Program by the Department of Defense and the Defense Advanced Research Projects Agency (DARPA). Fabrication was carried out using facilities at MIT.nano, the Air Force Research Laboratory, and Georgia Tech.
Combining technology, education, and human connection to improve online learning
MIT Morningside Academy for Design (MAD) Fellow Caitlin Morris is an architect, artist, researcher, and educator who has studied psychology and used online learning tools to teach herself coding and other skills. She’s a soft-spoken observer, with a keen interest in how people use space and respond to their environments. Combining her observational skills with active community engagement, she works at the intersection of technology, education, and human connection to improve digital learning platforms.
Morris grew up in rural upstate New York in a family of makers. She learned to sew, cook, and build things with wood at a young age. One of her earlier memories is of a small handsaw she made — with the help of her father, a professional carpenter. It had wooden handles on both sides to make sawing easier for her.
Later, when she needed to learn something, she’d turn to project-based communities, rather than books. She taught herself to code late at night, taking advantage of community-oriented platforms where people answer questions and post sketches, allowing her to see the code behind the objects people made.
“For me, that was this huge, wake-up moment of feeling like there was a path to expression that was not a traditional computer-science classroom,” she says. “I think that’s partly why I feel so passionate about what I’m doing now. That was the big transformation: having that community available in this really personal, project-based way.”
Subsequently, Morris has become involved in community-based learning in diverse ways: She’s a co-organizer of the MIT Media Lab’s Festival of Learning; she leads creative coding community meetups; and she’s been active in the open-source software development community.
“My years of organizing learning and making communities — both in person and online — have shown me firsthand how powerful social interaction can be for motivation and curiosity,” Morris says. “My research is really about identifying which elements of that social magic are most essential, so we can design digital environments that better support those dynamics.”
Even in her artwork, Morris sometimes works with a collective. She’s contributed to the creation of about 10 large art installations that combine movement, sound, imagery, lighting, and other technologies to immerse the visitor in an experience evoking some aspect of nature, such as flowing water, birds in flight, or crowd kinetics. These marvelous installations are commanding and calming at the same time, possibly because they focus the mind, eye, and sometimes the ear.
She did much of this work with New York-based Hypersonic, a company of artists and technologists specializing in large kinetic installations in public spaces. Before that, she earned a BS in psychology and a BS in architectural building sciences from Rensselaer Polytechnic Institute, then an MFA in design and technology from the Parsons School of Design at The New School.
During, between, and after these programs, and sometimes concurrently, she taught design, coding, and other technologies at the high school, undergraduate, and graduate levels.
“I think what kind of got me hooked on teaching was that the way I learned as a child was not the same as in the classroom,” Morris explains. “And I later saw this in many of my students. I got the feeling that the normal way of learning things was not working for them. And they thought it was their fault. They just didn’t really feel welcome within the traditional education model.”
Morris says that when she worked with those students, tossing aside tradition and instead saying — “You know, we’re just going to do this animation. Or we’re going to make this design or this website or these graphics, and we’re going to approach it in this totally different way” — she saw people “kind of unlock and be like, ‘Oh my gosh. I never thought I could do that.’
“For me, that was the hook, that’s the magic of it. Because I was coming from that experience of having to figure out those unlock mechanisms for myself, it was really exciting to be able to share them with other people, those unlock moments.”
For her doctoral work with the MIT Media Lab’s Fluid Interfaces Group, she’s focusing on the personal space and emotional gaps associated with learning, particularly online and AI-assisted learning. This research builds on her experience increasing human connection in both physical and virtual learning environments.
“I’m developing a framework that combines AI-driven behavioral analysis with human expert assessment to study social learning dynamics,” she says. “My research investigates how social interaction patterns influence curiosity development and intrinsic motivation in learning, with particular focus on understanding how these dynamics differ between real peers and AI-supported environments.”
The first step in her research is determining which elements of social interaction are not replaceable by an AI-based digital tutor. Following that assessment, her goal is to build a prototype platform for experiential learning.
“I’m creating tools that can simultaneously track observable behaviors — like physical actions, language cues, and interaction patterns — while capturing learners’ subjective experiences through reflection and interviews,” Morris explains. “This approach helps connect what people do with how they feel about their learning experience.
“I aim to make two primary contributions: first, analysis tools for studying social learning dynamics; and second, prototype tools that demonstrate practical approaches for supporting social curiosity in digital learning environments. These contributions could help bridge the gap between the efficiency of digital platforms and the rich social interaction that occurs in effective in-person learning.”
Her goals make Morris a perfect fit for the MIT MAD Fellowship. One statement in MAD’s mission is: “Breaking away from traditional education, we foster creativity, critical thinking, making, and collaboration, exploring a range of dynamic approaches to prepare students for complex, real-world challenges.”
Morris wants to help community organizations deal with the rapid AI-powered changes in education, once she finishes her doctorate in 2026. “What should we do with this ‘physical space versus virtual space’ divide?” she asks. That is the space currently captivating Morris’s thoughts.
Unpacking the bias of large language models
Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.
This “position bias” means that, if a lawyer is using an LLM-powered virtual assistant to retrieve a certain phrase in a 30-page affidavit, the LLM is more likely to find the right text if it is on the initial or final pages.
MIT researchers have discovered the mechanism behind this phenomenon.
They created a theoretical framework to study how information flows through the machine-learning architecture that forms the backbone of LLMs. They found that certain design choices which control how the model processes input data can cause position bias.
Their experiments revealed that model architectures, particularly those affecting how information is spread across input words within the model, can give rise to or intensify position bias, and that training data also contribute to the problem.
In addition to pinpointing the origins of position bias, their framework can be used to diagnose and correct it in future model designs.
This could lead to more reliable chatbots that stay on topic during long conversations, medical AI systems that reason more fairly when handling a trove of patient data, and code assistants that pay closer attention to all parts of a program.
“These models are black boxes, so as an LLM user, you probably don’t know that position bias can cause your model to be inconsistent. You just feed it your documents in whatever order you want and expect it to work. But by understanding the underlying mechanism of these black-box models better, we can improve them by addressing these limitations,” says Xinyi Wu, a graduate student in the MIT Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems (LIDS), and first author of a paper on this research.
Her co-authors include Yifei Wang, an MIT postdoc; and senior authors Stefanie Jegelka, an associate professor of electrical engineering and computer science (EECS) and a member of IDSS and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering, a core faculty member of IDSS, and a principal investigator in LIDS. The research will be presented at the International Conference on Machine Learning.
Analyzing attention
LLMs like Claude, Llama, and GPT-4 are powered by a type of neural network architecture known as a transformer. Transformers are designed to process sequential data, encoding a sentence into chunks called tokens and then learning the relationships between tokens to predict what word comes next.
These models have gotten very good at this because of the attention mechanism, which uses interconnected layers of data processing nodes to make sense of context by allowing tokens to selectively focus on, or attend to, related tokens.
But if every token can attend to every other token in a 30-page document, that quickly becomes computationally intractable. So, when engineers build transformer models, they often employ attention masking techniques which limit the words a token can attend to.
For instance, a causal mask only allows words to attend to those that came before them.
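To make the masking idea concrete, here is a minimal NumPy sketch (not the researchers' code) in which each token's attention weights are softmax-normalized only over itself and earlier tokens:

```python
# Minimal sketch (not the researchers' code): a causal mask applied to raw
# attention scores so each token can attend only to itself and earlier tokens.
import numpy as np

def causal_softmax_attention(scores):
    """scores: (seq_len, seq_len) raw query-key attention scores."""
    seq_len = scores.shape[0]
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # token i may see tokens 0..i
    masked = np.where(mask, scores, -np.inf)                  # block all future tokens
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)      # row-wise softmax

scores = np.zeros((5, 5))                 # uniform scores, purely for illustration
print(causal_softmax_attention(scores))   # row i spreads its weight over tokens 0..i only
```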
Engineers also use positional encodings to help the model understand the location of each word in a sentence, improving performance.
The MIT researchers built a graph-based theoretical framework to explore how these modeling choices (attention masks and positional encodings) could affect position bias.
“Everything is coupled and tangled within the attention mechanism, so it is very hard to study. Graphs are a flexible language to describe the dependent relationship among words within the attention mechanism and trace them across multiple layers,” Wu says.
Their theoretical analysis suggested that causal masking gives the model an inherent bias toward the beginning of an input, even when that bias doesn’t exist in the data.
If the earlier words are relatively unimportant for a sentence’s meaning, causal masking can cause the transformer to pay more attention to its beginning anyway.
“While it is often true that earlier words and later words in a sentence are more important, if an LLM is used on a task that is not natural language generation, like ranking or information retrieval, these biases can be extremely harmful,” Wu says.
As a model grows, with additional attention layers stacked on top of one another, this bias is amplified because earlier parts of the input are used more frequently in the model’s reasoning process.
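One way to build intuition for that amplification is a toy simulation. Under the simplifying assumption that each causally masked layer just averages uniformly over a token's allowed context (far cruder than the paper's graph-based analysis), stacking layers steadily shifts influence toward the earliest positions:

```python
# Toy depth-amplification demo under a strong simplifying assumption:
# every layer averages uniformly over each token's causal context.
import numpy as np

n_tokens, n_layers = 8, 6
A = np.tril(np.ones((n_tokens, n_tokens)))
A /= A.sum(axis=1, keepdims=True)            # row i: uniform attention over tokens 0..i

influence = np.eye(n_tokens)
for layer in range(1, n_layers + 1):
    influence = influence @ A                # pass through one more masked layer
    # how much the last token's representation traces back to each input position
    print(f"layer {layer}: last-token influence = {np.round(influence[-1], 3)}")
```

With each added layer, the printed distribution piles more weight onto position 0, mirroring the beginning-of-input bias described above.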
They also found that using positional encodings to link words more strongly to nearby words can mitigate position bias. The technique refocuses the model’s attention in the right place, but its effect can be diluted in models with more attention layers.
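A rough sketch of that mitigation, using an ALiBi-style distance penalty as a stand-in for the distance-sensitive positional encodings described above (the penalty form and slope are illustrative assumptions, not the paper's choices):

```python
# Sketch: penalize attention scores by distance so weight shifts toward nearby
# tokens, while keeping the causal mask. The slope value is arbitrary.
import numpy as np

def decayed_causal_weights(seq_len, slope=0.5):
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    scores = -slope * np.abs(i - j).astype(float)    # nearby tokens score higher
    scores[j > i] = -np.inf                          # future tokens stay blocked
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

print(np.round(decayed_causal_weights(6), 3))        # weight concentrates on recent, nearby tokens
```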
And these design choices are only one cause of position bias — some bias can also come from the training data the model uses to learn how to prioritize words in a sequence.
“If you know your data are biased in a certain way, then you should also finetune your model on top of adjusting your modeling choices,” Wu says.
Lost in the middle
After they’d established a theoretical framework, the researchers performed experiments in which they systematically varied the position of the correct answer in text sequences for an information retrieval task.
The experiments showed a “lost-in-the-middle” phenomenon, where retrieval accuracy followed a U-shaped pattern. Models performed best if the right answer was located at the beginning of the sequence. Performance declined the closer the correct answer got to the middle, before rebounding a bit when it was near the end.
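A sketch of that protocol, with a hypothetical model_answers callable standing in for the LLM under test (the helper names are illustrative, not the authors' code):

```python
# Sketch of a position-sweep retrieval experiment. `model_answers` is a
# hypothetical stand-in for querying the LLM being evaluated.
import random

def build_context(answer_doc, distractors, position):
    """Place the answer-bearing passage at a chosen index among distractor passages."""
    docs = distractors[:position] + [answer_doc] + distractors[position:]
    return "\n\n".join(docs)

def accuracy_by_position(model_answers, question, answer_doc, distractors, trials=20):
    results = {}
    for position in range(len(distractors) + 1):
        hits = 0
        for _ in range(trials):
            random.shuffle(distractors)
            context = build_context(answer_doc, distractors, position)
            hits += int(model_answers(question, context) == answer_doc)
        results[position] = hits / trials
    return results   # plotted against position, this traces the U-shaped curve
```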
Ultimately, their work suggests that using a different masking technique, removing extra layers from the attention mechanism, or strategically employing positional encodings could reduce position bias and improve a model’s accuracy.
“By doing a combination of theory and experiments, we were able to look at the consequences of model design choices that weren’t clear at the time. If you want to use a model in high-stakes applications, you must know when it will work, when it won’t, and why,” Jadbabaie says.
In the future, the researchers want to further explore the effects of positional encodings and study how position bias could be strategically exploited in certain applications.
“These researchers offer a rare theoretical lens into the attention mechanism at the heart of the transformer model. They provide a compelling analysis that clarifies longstanding quirks in transformer behavior, showing that attention mechanisms, especially with causal masks, inherently bias models toward the beginning of sequences. The paper achieves the best of both worlds — mathematical clarity paired with insights that reach into the guts of real-world systems,” says Amin Saberi, professor and director of the Stanford University Center for Computational Market Design, who was not involved with this work.
This research is supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, and an Alexander von Humboldt Professorship.
This compact, low-power receiver could give a boost to 5G smart devices
MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
“This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.
A new standard
A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
“This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.
One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
Shrinking the circuit
Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
“This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
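For readers unfamiliar with the Miller effect, a toy calculation (with made-up component values, not the chip's actual parameters) shows how a small feedback capacitor across an inverting amplifier appears much larger from the input:

```python
# Back-of-the-envelope Miller effect: a capacitor C_f in the feedback path of an
# amplifier with gain -A looks like C_f * (1 + A) at the input. Values are hypothetical.

def miller_input_capacitance(c_feedback_pf, gain_magnitude):
    """Effective input capacitance, in pF, of a feedback capacitor across a gain of -A."""
    return c_feedback_pf * (1 + gain_magnitude)

c_f = 2.0    # hypothetical 2 pF on-chip feedback capacitor
gain = 20.0  # hypothetical amplifier gain magnitude
print(f"{c_f} pF in feedback behaves like {miller_input_capacitance(c_f, gain):.0f} pF at the input")
```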
Their receiver has an active area of less than 0.05 square millimeters.
One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
“Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
This research is supported, in part, by the National Science Foundation.
Gaspare LoDuca named VP for information systems and technology and CIO
Gaspare LoDuca has been appointed MIT’s vice president for information systems and technology (IS&T) and chief information officer, effective Aug. 18. Currently vice president for information technology and CIO at Columbia University, LoDuca has held IT leadership roles in or related to higher education for more than two decades. He succeeds Mark Silis, who led IS&T from 2019 until 2024, when he left MIT to return to the entrepreneurial ecosystem in the San Francisco Bay area.
Executive Vice President and Treasurer Glen Shor announced the appointment today in an email to MIT faculty and staff.
“I believe that Gaspare will be an incredible asset to MIT, bringing wide-ranging experience supporting faculty, researchers, staff, and students and a highly collaborative style,” says Shor. “He is eager to start his work with our talented IS&T team to chart and implement their contributions to the future of information technology at MIT.”
LoDuca will lead the IS&T organization and oversee MIT’s information technology infrastructure and services that support its research and academic enterprise across student and administrative systems, network operations, cloud services, cybersecurity, and customer support. As co-chair of the Information Technology Governance Committee, he will guide the development of IT policy and strategy at the Institute. He will also play a key role in MIT’s effort to modernize its business processes and administrative systems, working in close collaboration with the Business and Digital Transformation Office.
“Gaspare brings to his new role extensive experience leading a complex IT organization,” says Provost Cynthia Barnhart, who served as one of Shor’s advisors during the search process. “His depth of experience, coupled with his vision for the future state of information technology and digital transformation at MIT, is compelling, and I am excited to see the positive impact he will have here.”
“As I start my new role, I plan to learn more about MIT’s culture and community to ensure that any decisions or changes we make are shaped by the community’s needs and carried out in a way that fits the culture. I’m also looking forward to learning more about the research and work being done by students and faculty to advance MIT’s mission. It’s inspiring, and I’m eager to support their success,” says LoDuca.
In his role at Columbia, LoDuca has overseen the IT department, headed IT governance committees for school and department-level IT functions, and ensured the secure operation of the university’s enterprise-class systems since 2015. During his tenure, he has crafted a culture of customer service and innovation — building a new student information system, identifying emerging technologies for use in classrooms and labs, and creating a data-sharing platform for university researchers and a grants dashboard for principal investigators. He also revamped Columbia’s technology infrastructure and implemented tools to ensure the security and reliability of its technology resources.
Before joining Columbia, LoDuca was the technology managing director for the education practice at Accenture from 1998 to 2015. In that role, he helped universities to develop and implement technology strategies and adopt modern applications and systems. His projects included overseeing the implementation of finance, human resources, and student administration systems for clients such as Columbia University, University of Miami, Carnegie Mellon University, the University System of Georgia, and Yale University.
“At a research institution, there’s a wide range of activities happening every day, and our job in IT is to support them all while also managing cybersecurity risks. We need to be creative and thoughtful in our solutions, and consider the needs and expectations of our community,” he says.
LoDuca holds a bachelor’s degree in chemical engineering from Michigan State University. He and his wife are recent empty nesters, and are in the process of relocating to Boston.
Closing in on superconducting semiconductors
In 2023, data centers, which are essential for processing large quantities of information, accounted for about 4.4 percent (176 terawatt-hours) of total electricity consumption in the United States. Of that 176 TWh, approximately 100 TWh (57 percent) was used by CPU and GPU equipment. Energy requirements have escalated substantially in the past decade and will only continue to grow, making the development of energy-efficient computing crucial.
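A quick back-of-the-envelope check of those figures (simple arithmetic, not taken from the underlying report):

```python
# Sanity-check the data center figures quoted above.
datacenter_twh = 176        # 2023 U.S. data center electricity use
cpu_gpu_twh = 100           # portion used by CPU and GPU equipment
datacenter_share = 0.044    # data centers' share of U.S. electricity consumption

print(f"CPU/GPU share of data center use: {cpu_gpu_twh / datacenter_twh:.0%}")          # ~57%
print(f"Implied total U.S. consumption: {datacenter_twh / datacenter_share:,.0f} TWh")  # ~4,000 TWh
```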
Superconducting electronics have arisen as a promising alternative for classical and quantum computing, although their full exploitation for high-end computing requires a dramatic reduction in the amount of wiring linking ambient temperature electronics and low-temperature superconducting circuits. To make systems that are both larger and more streamlined, replacing commonplace components such as semiconductors with superconducting versions could be of immense value. It’s a challenge that has captivated MIT Plasma Science and Fusion Center senior research scientist Jagadeesh Moodera and his colleagues, who described a significant breakthrough in a recent Nature Electronics paper, “Efficient superconducting diodes and rectifiers for quantum circuitry.”
Moodera was working on a stubborn problem. One of the critical long-standing requirements is the need for the efficient conversion of AC currents into DC currents on a chip while operating at the extremely cold cryogenic temperatures required for superconductors to work efficiently. For example, in superconducting “energy-efficient rapid single flux quantum” (ERSFQ) circuits, the AC-to-DC issue is limiting ERSFQ scalability and preventing their use in larger circuits with higher complexities. To respond to this need, Moodera and his team created superconducting diode (SD)-based superconducting rectifiers — devices that can convert AC to DC on the same chip. These rectifiers would allow for the efficient delivery of the DC current necessary to operate superconducting classical and quantum processors.
Quantum computer circuits can only operate at temperatures close to 0 kelvins (absolute zero), and the way power is supplied must be carefully controlled to limit the effects of interference introduced by too much heat or electromagnetic noise. Most unwanted noise and heat come from the wires connecting cold quantum chips to room-temperature electronics. Instead, using superconducting rectifiers to convert AC currents into DC within a cryogenic environment reduces the number of wires, cutting down on heat and noise and enabling larger, more stable quantum systems.
In a 2023 experiment, Moodera and his co-authors developed SDs that are made of very thin layers of superconducting material that display nonreciprocal (or unidirectional) flow of current and could be the superconducting counterpart to standard semiconductors. Even though SDs have garnered significant attention, especially since 2020, up until this point the research has focused only on individual SDs for proof of concept. The group’s 2023 paper outlined how they created and refined a method by which SDs could be scaled for broader application.
Now, by building a diode bridge circuit, they demonstrated the successful integration of four SDs and realized AC-to-DC rectification at cryogenic temperatures.
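As a rough numerical illustration of what a diode bridge does, here is a toy model with ideal diodes (it captures only the AC-to-DC role, not the superconducting device physics):

```python
# Toy full-wave bridge rectifier with ideal diodes: the four-diode arrangement
# delivers current of a single polarity to the load on both AC half-cycles.
import numpy as np

t = np.linspace(0, 3 * 2 * np.pi, 600)   # three cycles of a normalized AC input
v_ac = np.sin(t)

def ideal_diode(v):
    """Ideal diode: passes forward excursions, blocks reverse ones."""
    return np.maximum(v, 0.0)

v_out = ideal_diode(v_ac) + ideal_diode(-v_ac)   # equivalent to full-wave rectification

print(f"AC input mean: {v_ac.mean():+.3f}   rectified output mean: {v_out.mean():+.3f}")
```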
The new approach described in their recent Nature Electronics paper will significantly cut down on the thermal and electromagnetic noise traveling from ambient into cryogenic circuitry, enabling cleaner operation. The SDs could also potentially serve as isolators/circulators, assisting in insulating qubit signals from external influence. The successful assimilation of multiple SDs into the first integrated SD circuit represents a key step toward making superconducting computing a commercial reality.
“Our work opens the door to the arrival of highly energy-efficient, practical superconductivity-based supercomputers in the next few years,” says Moodera. “Moreover, we expect our research to enhance qubit stability while boosting the quantum computing program, bringing its realization closer.” Given the multiple beneficial roles these components could play, Moodera and his team are already working toward the integration of such devices into actual superconducting logic circuits, including in dark matter detection circuits that are essential to the operation of experiments at CERN and LUX-ZEPLIN at the Berkeley National Lab.
This work was partially funded by MIT Lincoln Laboratory’s Advanced Concepts Committee, the U.S. National Science Foundation, U.S. Army Research Office, and U.S. Air Force Office of Scientific Research.
A brief history of the global economy, through the lens of a single barge
In 1989, New York City opened a new jail. But not on dry land. The city leased a barge, then called the “Bibby Resolution,” which had been topped with five stories of containers made into housing, and anchored it in the East River. For five years, the vessel lodged inmates.
A floating detention center is a curiosity. But then, the entire history of this barge is curious. Built in 1979 in Sweden, it housed British troops during the Falkland Islands war with Argentina, became worker housing for Volkswagen employees in West Germany, got sent to New York, also became a detention center off the coast of England, then finally was deployed as oil worker housing off the coast of Nigeria. The barge has had nine names, several owners, and flown the flags of five countries.
In this one vessel, then, we can see many currents: globalization, the transience of economic activity, and the hazy world of transactions many analysts and observers call “the offshore,” the lightly regulated sphere of economic activity that encourages short-term actions.
“The offshore presents a quick and potentially cheap solution to a crisis,” says MIT lecturer Ian Kumekawa. “It is not a durable solution. The story of the barge is the story of it being used as a quick fix in all sorts of crises. Then these expediences become the norm, and people get used to them and have an expectation that this is the way the world works.”
Now Kumekawa, a historian who started teaching as a lecturer at MIT earlier this year, explores the ship’s entire history in “Empty Vessel: The Global Economy in One Barge,” just published by Knopf and John Murray. In it, he traces the barge’s trajectory and the many economic and geopolitical changes that helped create the ship’s distinctive deployments around the world.
“The book is about a barge, but it’s also about the developing, emerging offshore world, where you see these layers of globalization, financialization, privatization, and the dissolution of territoriality and orders,” Kumekawa says. “The barge is a vehicle through which I can tell the story of those layers together.”
“Never meant to be permanent”
Kumekawa first found out about the vessel several years ago; New York City obtained another floating detention center in the 1990s, which prompted Kumekawa to start looking into the past of the older jail ship from the 1990s, the former “Bibby Resolution.” The more he found out about its distinctive past, the more curious he became.
“You start pulling on a thread, and you realize you can keep pulling,” Kumekawa says.
The barge Kumekawa follows in the book was built in Sweden in 1979 as the “Balder Scapa.” Even then, commerce was plenty globalized: The vessel was commissioned by a Norwegian shell company, with negotiations run by an expatriate Swedish shipping agent whose firm was registered in Panama and used a Miami bank.
The barge was built at an inflection point following the economic slowdown and oil shocks of the 1970s. Manufacturing was on the verge of declining in both Western Europe and the U.S.; about half as many people now work in manufacturing in those regions, compared to 1960. Companies were looking to find cheaper global locations for production, reinforcing the sense that economic activity was now less durable in any given place.
The barge became part of this transience. The five-story accommodation block was added in the early 1980s; in 1983 it was re-registered in the UK and sent to the Falkland Islands as a troop accommodation named the “COASTEL 3.” Then it was re-registered in the Bahamas and sent to Emden, West Germany, as housing for Volkswagen workers. The vessel then served its stints as inmate housing — first in New York, then off the coast of England from 1997 to 2005. By 2010, it had been re-re-re-registered, in St. Vincent and Grenadines, and was housing oil workers off the coast of Nigeria.
“Globalization is more about flow than about stocks, and the barge is a great example of that,” Kumekawa says. “It’s always on the move, and never meant to be a permanent container. It’s understood people are going to be passing through.”
As Kumekawa explores in the book, this sense of social dislocation overlapped with the shrinking of state capacity, as many states increasingly encouraged companies to pursue globalized production and lightly regulated financial activities in numerous jurisdictions, in the hope it would enhance growth. And it has, albeit with unresolved questions about who the benefits accrue to, the social dislocation of workers, and more.
“In a certain sense it’s not an erosion of state power at all,” Kumekawa says. “These states are making very active choices to use offshore tools, to circumvent certain roadblocks.” He adds: “What happens in the 1970s and certainly in the 1980s is that the offshore comes into its own as an entity, and didn’t exist in the same way even in the 1950s and 1960s. There’s a money interest in that, and there’s a political interest as well.”
Abstract forces, real materials and people
Kumekawa is a scholar with a strong interest in economic history; his previous book, “The First Serious Optimist: A.C. Pigou and the Birth of Welfare Economics,” was published in 2017. This coming fall, Kumekawa will be team-teaching a class on the relationship between economics and history, along with MIT economists Abhijit Banerjee and Jacob Moscona.
Working on “Empty Vessel” also necessitated that Kumekawa use a variety of research techniques, from archival work to journalistic interviews with people who knew the vessel well.
“I had a wonderful set of conversations with the man who was the last bargemaster,” Kumekawa says. “He was the person in effect steering the vessel for many years. He was so aware of all of the forces at play — the market for oil, the prices of accommodations, the regulations, the fact no one had reinforced the frame.”
“Empty Vessel” has already received critical acclaim. Reviewing it in The New York Times, Jennifer Szalai writes that this “elegant and enlightening book is an impressive feat.”
For his part, Kumekawa also took inspiration from a variety of writings about ships, voyages, commerce, and exploration, recognizing that these vessels contain stories and vignettes that illuminate the wider world.
“Ships work very well as devices connecting the global and the local,” he says. Using the barge as the organizing principle of his book, Kumekawa adds, “makes a whole bunch of abstract processes very concrete. The offshore itself is an abstraction, but it’s also entirely dependent on physical infrastructure and physical places. My hope for the book is it reinforces the material dimension of these abstract global forces.”
Students and staff work together for MIT’s first “No Mow May”
In recent years, some grass lawns around the country have grown a little taller in springtime thanks to No Mow May, a movement originally launched by U.K. nonprofit Plantlife in 2019 designed to raise awareness about the ecological impacts of the traditional, resource-intensive, manicured grass lawn. No Mow May encourages people to skip spring mowing to allow for grass to grow tall and provide food and shelter for beneficial creatures including bees, beetles, and other pollinators.
This year, MIT took part in the practice for the first time, with portions of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard forgoing mowing from May 1 through June 6 to make space for local pollinators, decrease water use, and encourage new thinking about the traditional lawn. MIT’s first No Mow May was the result of championing by the Graduate Student Council Sustainability Subcommittee (GSC Sustain) and made possible by the Office of the Vice Provost for Campus Space Management and Planning.
A student idea sprouts
Despite being a dense urban campus, MIT has no shortage of green spaces — from pocket gardens and community-managed vegetable plots to thousands of shade trees — and interest in these spaces continues to grow. In recent years, student-led initiatives supported by Institute leadership and operational staff have transformed portions of campus by increasing the number of native pollinator plants and expanding community gardens, like the Hive Garden. With No Mow May, these efforts stepped out of the garden and into MIT’s many grassy open spaces.
“The idea behind it was to raise awareness for more sustainable and earth-friendly lawn practices,” explains Gianmarco Terrones, GSC Sustain member. Those practices include reducing the burden of mowing, limiting use of fertilizers, and providing shelter and food for pollinators. “The insects that live in these spaces are incredibly important in terms of pollination, but they’re also part of the food chain for a lot of animals,” says Terrones.
Research has shown that holding off on mowing in spring, even in small swaths of green space, can have an impact. The early months of spring have the lowest number of flowers in regions like New England, and providing a resource and refuge — even for a short duration — can support fragile pollinators like bees. Additionally, No Mow May aims to help people rethink their yards and practices, which are not always beneficial for local ecosystems.
Signage at each No Mow site on campus highlighted information on local pollinators, the impact of the project, and questions for visitors to ask themselves. “Having an active sign there to tell people, ‘look around. How many butterflies do you see after six weeks of not mowing? Do you see more? Do you see more bees?’ can cause subtle shifts in people’s awareness of ecosystems,” says GSC Sustain member Mingrou Xie. A mowed barrier around each project also helped visitors know that areas of tall grass at No Mow sites are intentional.
Campus partners bring sustainable practices to life
To make MIT’s No Mow May possible, GSC Sustain members worked with the Office of the Vice Provost and the Open Space Working Group, co-chaired by Vice Provost for Campus Space Management and Planning Brent Ryan and Director of Sustainability Julie Newman. The Working Group, which also includes staff from Open Space Programming and Campus Planning, as well as faculty in the School of Architecture and Planning, helped to identify potential No Mow locations and develop strategies for educational signage and any needed maintenance. “Massachusetts is a biodiverse state, and No Mow May provides an exciting opportunity for MIT to support that biodiversity on its own campus,” says Ryan.
Students were eager for space on campus with high visibility, and the chosen locations of the Kendall/MIT Open Space, Bexley Garden, and the Tang Courtyard fit the bill. “We wanted to set an example and empower the community to feel like they can make a positive change to an environment they spend so much time in,” says Xie.
For GSC Sustain, that positive change also takes the form of the Native Plant Project, which they launched in 2022 to increase the number of Massachusetts-native pollinator plants on campus — plants like swamp milkweed, zigzag goldenrod, big leaf aster, and red columbine, with which native pollinators have co-evolved. Partnering with the Open Space Working Group, GSC Sustain is currently focused on two locations for new native plant gardens — the President’s Garden and the terrace gardens at the E37 Graduate Residence. “Our short-term goal is to increase the number of native [plants] on campus, but long term we want to foster a community of students and staff interested in supporting sustainable urban gardening,” says Xie.
Campus as a test bed continues to grow
After just a few weeks of growing, the campus No Mow May locations sprouted buttercups, mouse ear chickweed, and small tree saplings, highlighting the diversity waiting dormant in the average lawn. Terrones also notes other discoveries: “It’s been exciting to see how much the grass has sprung up these last few weeks. I thought the grass would all grow at the same rate, but as May has gone on the variations in grass height have become more apparent, leading to non-uniform lawns with a clearly unmanicured feel,” he says. “We hope that members of MIT noticed how these lawns have evolved over the span of a few weeks and are inspired to implement more earth-friendly lawn practices in their own homes/spaces.”
No Mow May and the Native Plant Project fit into MIT’s overall focus on creating resilient ecosystems that support and protect the MIT community and the beneficial critters that call it home. MIT Grounds Services has long included native plants in the mix of what is grown on campus, and native pollinator gardens, like the Hive Garden, have been developed and cared for through partnerships between students and Grounds Services in recent years. Grounds Services, along with the consultants that design and install campus landscape projects, strives to select plants that help meet sustainability goals, such as managing stormwater runoff and cooling. No Mow May can provide one more data point for the iterative process of choosing the best plants and practices for a unique microclimate like the MIT campus.
“We are always looking for new ways to use our campus as a test bed for sustainability,” says Director of Sustainability Julie Newman. “Community-led projects like No Mow May help us to learn more about our campus and share those lessons with the larger community.”
The Office of the Vice Provost, the Open Space Working Group, and GSC Sustain will plan to reconnect in the fall for a formal debrief of the project and its success. Given the positive community feedback, future possibilities of expanding or extending No Mow May will be discussed.
Professor Emeritus Hank Smith honored for pioneering work in nanofabrication
Nanostructures are a stunning array of intricate patterns that are imperceptible to the human eye, yet they help power modern life. They are the building blocks of microchip transistors, are etched onto the grating substrates of space-based X-ray telescopes, and drive innovations in medicine, sustainability, and quantum computing.
Since the 1970s, Henry “Hank” Smith, MIT professor emeritus of electrical engineering, has been a leading force in this field. He pioneered the use of proximity X-ray lithography, proving that X-rays’ short optical wavelength could produce high-resolution patterns at the nanometer scale. Smith also made significant advancements in phase-shifting masks (PSMs), a technique that disrupts light waves to enhance contrast. His design of attenuated PSMs, which he co-created with graduate students Mark Schattenburg PhD ʼ84 and Erik H. Anderson ʼ81, SM ʼ84, PhD ʼ88, is still used today in the semiconductor industry.
In recognition of these contributions, as well as highly influential achievements in liquid-immersion lithography, achromatic-interference lithography, and zone-plate array lithography, Smith recently received the 2025 SPIE Frits Zernike Award for Microlithography. Given by the Society of Photo-Optical Instrumentation Engineers (SPIE), the accolade recognizes scientists for their outstanding accomplishments in microlithographic technology.
“The Zernike Award is an impressive honor that aptly recognizes Hank’s pioneering contributions,” says Karl Berggren, MIT’s Joseph F. and Nancy P. Keithley Professor in Electrical Engineering and faculty head of electrical engineering. “Whether it was in the classroom, at a research conference, or in the lab, Hank approached his work with a high level of scientific rigor that helped make him decades ahead of industry practices.”
Now 88 years old, Smith has garnered many other honors. He was also awarded the SPIE BACUS Prize, named a member of the National Academy of Engineering, and is a fellow of the American Academy of Arts and Sciences, IEEE, the National Academy of Inventors, and the International Society for Nanomanufacturing.
Jump-starting the nano frontier
From an early age, Smith was fascinated by the world around him. He took apart clocks to see how they worked, explored the outdoors, and even observed the movement of water. After graduating from high school in New Jersey, Smith majored in physics at College of the Holy Cross. From there, he pursued his doctorate at Boston College and served three years as an officer in the U.S. Air Force.
It was his job at MIT Lincoln Laboratory that ultimately changed Smith’s career trajectory. There, he met visitors from MIT and Harvard University who shared their big ideas for electronic and surface acoustic wave devices but were stymied by the physical limitations of fabrication. Yet, few were inclined to tackle this challenge.
“The job of making things was usually brushed off the table with, ‘oh well, we’ll get some technicians to do that,’” Smith said in his oral history for the Center for Nanotechnology in Society. “And the intellectual content of fabrication technology was not appreciated by people who had been ‘traditionally educated,’ I guess.”
More interested in solving problems than maintaining academic rank, Smith set out to understand the science of fabrication. His breakthrough in X-ray lithography signaled to the world the potential and possibilities of working on the nanometer scale, says Schattenburg, who is a senior research scientist at MIT Kavli Institute for Astrophysics and Space Research.
“His early work proved to people at MIT and researchers across the country that nanofabrication had some merit,” Schattenburg says. “By showing what was possible, Hank really jump-started the nano frontier.”
Cracking open lithography’s black box
By 1980, Smith left Lincoln Lab for MIT’s main campus and continued to push forward new ideas in his NanoStructures Laboratory (NSL), formerly the Submicron Structures Laboratory. NSL served as both a research lab and a service shop that provided optical gratings, which are pieces of glass engraved with sub-micron periodic patterns, to the MIT community and outside scientists. It was a busy time for the lab; NSL attracted graduate students and international visitors. Still, Smith and his staff ensured that anyone visiting NSL would also receive a primer on nanotechnology.
“Hank never wanted anything we produced to be treated as a black box,” says Mark Mondol, MIT.nano e-beam lithography domain expert who spent 23 years working with Smith in NSL. “Hank was always very keen on people understanding our work and how it happens, and he was the perfect person to explain it because he talked in very clear and basic terms.”
The physical NSL space in MIT Building 39 shuttered in 2023, a decade after Smith became an emeritus faculty member. NSL’s knowledgeable staff and unique capabilities transferred to MIT.nano, which now serves as MIT’s central hub for supporting nanoscience and nanotechnology advancements. Unstoppable, Smith continues to contribute his wisdom to the ever-expanding nano community by giving talks at the NSL Community Meetings at MIT.nano focused on lithography, nanofabrication, and their future.
Smith’s career is far from complete. Through his startup LumArray, Smith continues to push the boundaries of knowledge. He recently devised a maskless lithography method, known as X-ray Maskless Lithography (XML), that has the potential to lower manufacturing costs of microchips and thwart the sale of counterfeit microchips.
Dimitri Antoniadis, MIT professor emeritus of electrical engineering and computer science, is Smith’s longtime collaborator and friend. According to him, Smith’s commitment to research is practically unheard-of.
“Once professors reach emeritus status, we usually inspire and supervise research,” Antoniadis says. “It’s very rare for retired professors to do all the work themselves, but he loves it.”
Enduring influence
Smith’s legacy extends far beyond the groundbreaking tools and techniques he pioneered, say his friends, colleagues, and former students. His relentless curiosity and commitment to his graduate students helped propel his field forward.
He earned a reputation for sitting in the front row at research conferences, ready to ask the first question. Fellow researchers sometimes dreaded seeing him there.
“Hank kept us honest,” Berggren says. “Scientists and engineers knew that they couldn’t make a claim that was a little too strong, or use data that didn’t support the hypothesis, because Hank would hold them accountable.”
Smith never saw himself as playing the good cop or bad cop — he was simply a curious learner unafraid to look foolish.
“There are famous people, Nobel Prize winners, that will sit through research presentations and not have a clue as to what’s going on,” Smith says. “That is an utter waste of time. If I don’t understand something, I’m going to ask a question.”
As an advisor, Smith held his graduate students to high standards. If they came unprepared or lacked understanding of their research, he would challenge them with tough, unrelenting questions. Yet, he was also their biggest advocate, helping students such as Lisa Su SB/SM ʼ91, PhD ʼ94, who is now the chair and chief executive officer of AMD, and Dario Gil PhD ʼ03, who is now the chair of the National Science Board and senior vice president and director of research at IBM, succeed in the lab and beyond.
Research Specialist James Daley has spent nearly three decades at MIT, most of them working with Smith. In that time, he has seen hundreds of advisees graduate and return to offer their thanks. “Hank’s former students are all over the world,” Daley says. “Many are now professors mentoring their own graduate students and bringing with them some of Hank’s style. They are his greatest legacy.”
Celebrating an academic-industry collaboration to advance vehicle technology
On May 6, MIT AgeLab’s Advanced Vehicle Technology (AVT) Consortium, part of the MIT Center for Transportation and Logistics, celebrated 10 years of its global academic-industry collaboration. AVT was founded with the aim of developing new data that contribute to automotive manufacturers, suppliers, and insurers’ real-world understanding of how drivers use and respond to increasingly sophisticated vehicle technologies, such as assistive and automated driving, while accelerating the applied insight needed to advance design and development. The celebration event brought together stakeholders from across the industry for a set of keynote addresses and panel discussions on critical topics significant to the industry and its future, including artificial intelligence, automotive technology, collision repair, consumer behavior, sustainability, vehicle safety policy, and global competitiveness.
Bryan Reimer, founder and co-director of the AVT Consortium, opened the event by remarking that over the decade AVT has collected hundreds of terabytes of data, presented and discussed research with its over 25 member organizations, supported members’ strategic and policy initiatives, published select outcomes, and built AVT into a global influencer with tremendous impact in the automotive industry. He noted that current opportunities and challenges for the industry include distracted driving, a lack of consumer trust and concerns around transparency in assistive and automated driving features, and high consumer expectations for vehicle technology, safety, and affordability. How will industry respond? Major players in attendance weighed in.
In a powerful exchange on vehicle safety regulation, John Bozzella, president and CEO of the Alliance for Automotive Innovation, and Mark Rosekind, former chief safety innovation officer of Zoox, former administrator of the National Highway Traffic Safety Administration, and former member of the National Transportation Safety Board, challenged industry and government to adopt a more strategic, data-driven, and collaborative approach to safety. They asserted that regulation must evolve alongside innovation, not lag behind it by decades. Appealing to the automakers in attendance, Bozzella cited the success of voluntary commitments on automatic emergency braking as a model for future progress. “That’s a way to do something important and impactful ahead of regulation.” They advocated for shared data platforms, anonymous reporting, and a common regulatory vision that sets safety baselines while allowing room for experimentation. The 40,000 annual road fatalities demand urgency — what’s needed is a move away from tactical fixes and toward a systemic safety strategy. “Safety delayed is safety denied,” Rosekind stated. “Tell me how you’re going to improve safety. Let’s be explicit.”
Drawing inspiration from aviation’s exemplary safety record, Kathy Abbott, chief scientific and technical advisor for the Federal Aviation Administration, pointed to a culture of rigorous regulation, continuous improvement, and cross-sectoral data sharing. Aviation’s model, built on highly trained personnel and strict predictability standards, contrasts sharply with the fragmented approach in the automotive industry. The keynote emphasized that a foundation of safety culture — one that recognizes that technological ability alone isn’t justification for deployment — must guide the auto industry forward. Just as aviation doesn’t equate absence of failure with success, vehicle safety must be measured holistically and proactively.
With assistive and automated driving top of mind in the industry, Pete Bigelow of Automotive News offered a pragmatic diagnosis. With companies like Ford and Volkswagen stepping back from full-autonomy efforts such as Argo AI, the industry is now focused on Level 2 and Level 3 technologies, which refer to assisted and automated driving, respectively. Tesla, GM, and Mercedes are experimenting with subscription models for driver assistance systems, yet consumer confusion remains high. J.D. Power reports that many drivers do not grasp the differences between L2 and L2+, or whether these technologies offer safety or convenience features. Safety benefits have yet to manifest in reduced traffic deaths, which have risen by 20 percent since 2020. The recurring challenge: L3 systems require human drivers to take over when the technology runs into difficulty, even though letting drivers disengage is their primary selling point, a handoff that can worsen outcomes. Bigelow cited a quote from Bryan Reimer as one of the best he has received in his career: “Level 3 systems are an engineer’s dream and a plaintiff attorney’s next yacht,” highlighting the legal and design complexity of systems that demand handoffs between machine and human.
In terms of the impact of AI on the automotive industry, Mauricio Muñoz, senior research engineer at AI Sweden, underscored that despite AI’s transformative potential, the automotive industry cannot rely on general AI megatrends to solve domain-specific challenges. While landmark achievements like AlphaFold demonstrate AI’s prowess, automotive applications require domain expertise, data sovereignty, and targeted collaboration. Energy constraints, data firewalls, and the high costs of AI infrastructure all pose limitations, making it critical that companies fund purpose-driven research that can reduce costs and improve implementation fidelity. Muñoz warned that while excitement abounds — with some predicting artificial superintelligence by 2028 — real progress demands organizational alignment and a deep understanding of the automotive context, not just computational power.
Turning the focus to consumers, a collision repair panel featuring Richard Billyeald of Thatcham Research, Hami Ebrahimi of Caliber Collision, and Mike Nelson of Nelson Law explored the unintended consequences of vehicle technology advances: spiraling repair costs, labor shortages, and a lack of repairability standards. Panelists warned that even minor repairs on advanced vehicles now require costly and complex sensor recalibrations, compounded by inconsistent manufacturer guidance and no clear consumer alerts when systems are out of calibration. The panel called for greater standardization, consumer education, and repair-friendly design. As insurance premiums climb and more people forgo insurance claims, the lack of coordination among automakers, regulators, and service providers threatens consumer safety and undermines trust. The group warned that until Level 2 systems function reliably and affordably, moving toward Level 3 autonomy is premature and risky.
While the repair panel emphasized today’s urgent challenges, other speakers looked to the future. Honda’s Ryan Harty, for example, highlighted the company’s aggressive push toward sustainability and safety. Honda aims for zero environmental impact and zero traffic fatalities, with plans to be 100 percent electric by 2040 and to lead in energy storage and clean power integration. The company has developed tools to coach young drivers and is investing in charging infrastructure, grid-aware battery usage, and green hydrogen storage. “What consumers buy in the market dictates what the manufacturers make,” Harty noted, underscoring the importance of aligning product strategy with user demand and environmental responsibility. He stressed that manufacturers can only decarbonize as fast as the industry allows, and emphasized the need to shift from cost-based to life-cycle-based product strategies.
Finally, a panel involving Laura Chace of ITS America, Jon Demerly of Qualcomm, Brad Stertz of Audi/VW Group, and Anant Thaker of Aptiv covered the near-, mid-, and long-term future of vehicle technology. Panelists emphasized that consumer expectations, infrastructure investment, and regulatory modernization must evolve together. Despite record bicycle fatality rates and persistent distracted driving, features like school bus detection and stop sign alerts remain underutilized due to skepticism and cost. Panelists stressed that systems must be designed for proactive safety rather than reactive response. The slow integration of digital infrastructure, including sensors, edge computing, and data analytics, stems not only from technical hurdles but also from procurement and policy challenges.
Reimer concluded the event by urging industry leaders to re-center the consumer in all conversations — from affordability to maintenance and repair. With the rising costs of ownership, growing gaps in trust in technology, and misalignment between innovation and consumer value, the future of mobility depends on rebuilding trust and reshaping industry economics. He called for global collaboration, greater standardization, and transparent innovation that consumers can understand and afford. He highlighted that global competitiveness and public safety both hang in the balance. As Reimer noted, “success will come through partnerships” — between industry, academia, and government — that work toward shared investment, cultural change, and a collective willingness to prioritize the public good.