MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
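The limit in question is usually identified as the thermionic ("Boltzmann") bound on subthreshold swing: at room temperature, a conventional silicon transistor needs at least roughly 60 millivolts of gate voltage for every tenfold change in current, which puts a floor on how low its operating voltage can go. A minimal sketch of that standard calculation (an illustration of the underlying physics, not code from the researchers):

```python
import math

# Boltzmann constant divided by the electron charge, in volts per kelvin
K_B_OVER_Q = 8.617e-5  # V/K

def subthreshold_swing_limit(temperature_k: float) -> float:
    """Minimum gate-voltage swing (in mV) needed to change the drain
    current of a conventional transistor by a factor of 10."""
    return K_B_OVER_Q * temperature_k * math.log(10) * 1e3  # mV per decade

print(round(subthreshold_swing_limit(300.0), 1))  # prints 59.5 (mV/decade at room temperature)
```

Alternative switching mechanisms, such as the magnetic one described here, aim to sidestep this floor and operate at lower voltages.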

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
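The combined memory-plus-readout behavior described above can be caricatured in a few lines: the stored bit is simply the device's magnetic state, which persists until rewritten, and reading it amounts to measuring whether the channel conducts. This toy model is purely illustrative; the class and its interface are invented here, with only the factor-of-10 current contrast taken from the article:

```python
class MagneticTransistor:
    """Toy model of a transistor whose magnetic state doubles as a stored bit.
    The state is nonvolatile, and reading it is just sensing the current."""

    ON_OFF_RATIO = 10.0  # current contrast between the two magnetic states

    def __init__(self) -> None:
        self.magnetic_state = 0  # stored bit: 0 or 1

    def write(self, bit: int) -> None:
        # In the real device this is done with a magnetic field or, eventually,
        # an electric current; here it just records the state.
        self.magnetic_state = bit

    def read(self, drive_current: float = 1.0) -> float:
        # A stored 1 passes roughly 10x more current than a stored 0,
        # so the two states are easy to distinguish quickly and reliably.
        return drive_current * (self.ON_OFF_RATIO if self.magnetic_state else 1.0)

cell = MagneticTransistor()
cell.write(1)
print(cell.read())  # prints 10.0
```

The large readout contrast is what makes the combined cell attractive: a few-percent difference between states would demand slow, careful sensing, while a tenfold difference can be read quickly.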

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

From nanoscale to global scale: Advancing MIT’s special initiatives in manufacturing, health, and climate

Thu, 11/13/2025 - 3:45pm

“MIT.nano is essential to making progress in high-priority areas where I believe that MIT has a responsibility to lead,” said MIT President Sally Kornbluth, opening the 2025 Nano Summit. “If we harness our collective efforts, we can make a serious positive impact.”

It was these collective efforts that drove discussions at the daylong event hosted by MIT.nano and focused on the importance of nanoscience and nanotechnology across MIT's special initiatives — projects deemed critical to MIT’s mission to help solve the world’s greatest challenges. With each new talk, common themes were reemphasized: collaboration across fields, solutions that can scale up from lab to market, and the use of nanoscale science to enact grand-scale change.

“MIT.nano has truly set itself apart, in the Institute's signature way, with an emphasis on cross-disciplinary collaboration and open access,” said Kornbluth. “Today, you're going to hear about the transformative impact of nanoscience and nanotechnology, and how working with the very small can help us do big things for the world together.”

Collaborating on health

Angela Koehler, faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS) and the Charles W. and Jennifer C. Johnson Professor of Biological Engineering, opened the first session with a question: How can we build a community across campus to tackle some of the most transformative problems in human health? In response, three speakers shared their work enabling new frontiers in medicine.

Ana Jaklenec, principal research scientist at the Koch Institute for Integrative Cancer Research, spoke about single-injection vaccines, and how her team looked to the techniques used in fabrication of electrical engineering components to see how multiple pieces could be packaged into a tiny device. “MIT.nano was instrumental in helping us develop this technology,” she said. “We took something that you can do in microelectronics and the semiconductor industry and brought it to the pharmaceutical industry.”

While Jaklenec applied insight from electronics to her work in health care, Giovanni Traverso, the Karl Van Tassel Career Development Professor of Mechanical Engineering, who is also a gastroenterologist at Brigham and Women’s Hospital, found inspiration in nature, studying squid and remora fish to design ingestible drug delivery systems. Representing the industry side of life sciences, Mirai Bio senior vice president Jagesh Shah SM ’95, PhD ’99 presented his company’s precision-targeted lipid nanoparticles for therapeutic delivery. Shah, as well as the other speakers, emphasized the importance of collaboration between industry and academia to make meaningful impact, and the need to strengthen the pipeline for young scientists.

Manufacturing, from the classroom to the workforce

Paving the way for future generations was similarly emphasized in the second session, which highlighted MIT’s Initiative for New Manufacturing (MIT INM). “MIT’s dedication to manufacturing is not only about technology research and education, it’s also about understanding the landscape of manufacturing, domestically and globally,” said INM co-director A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering. “It’s about getting people — our graduates who are budding enthusiasts of manufacturing — out of campus and starting and scaling new companies,” he said.

On progressing from lab to market, Dan Oran PhD ’21 shared his career trajectory from technician to PhD student to founding his own company, Irradiant Technologies. “How are companies like Dan’s making the move from the lab to prototype to pilot production to demonstration to commercialization?” asked the next speaker, Elisabeth Reynolds, professor of the practice in urban studies and planning at MIT. “The U.S. capital market has not historically been well organized for that kind of support.” She emphasized the challenge of scaling innovations from prototype to production, and the need for workforce development.

“Attracting and retaining workforce is a major pain point for manufacturing businesses,” agreed John Liu, principal research scientist in mechanical engineering at MIT. To keep new ideas flowing from the classroom to the factory floor, Liu proposes a new worker type in advanced manufacturing — the technologist — someone who can be a bridge to connect the technicians and the engineers.

Bridging ecosystems with nanoscience

Bridging people, disciplines, and markets to effect meaningful change was also emphasized by Benedetto Marelli, mission director for the MIT Climate Project and associate professor of civil and environmental engineering at MIT.

“If we’re going to have a tangible impact on the trajectory of climate change in the next 10 years, we cannot do it alone,” he said. “We need to take care of ecology, health, mobility, the built environment, food, energy, policies, and trade and industry — and think about these as interconnected topics.”

Faculty speakers in this session offered a glimpse of nanoscale solutions for climate resiliency. Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering, presented his group’s work on using nanoparticles to turn waste methane and urea into renewable materials. Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor, spoke about scaling carbon dioxide removal systems. Mechanical engineering professor Kripa Varanasi highlighted, among other projects, his lab’s work on improving agricultural spraying so pesticides adhere to crops, reducing agricultural pollution and cost.

In all of these presentations, the MIT faculty highlighted the tie between climate and the economy. “The economic systems that we have today are depleting to our resources, inherently polluting,” emphasized Plata. “The goal here is to use sustainable design to transition the global economy.”

What do people do at MIT.nano?

This is where MIT.nano comes in, offering shared access facilities where researchers can design creative solutions to these global challenges. “What do people do at MIT.nano?” asked associate director for Fab.nano Jorg Scholvin ’00, MNG ’01, PhD ’06 in the session on MIT.nano’s ecosystem. With 1,500 individuals and over 20 percent of MIT faculty labs using MIT.nano, it’s a difficult question to answer quickly. However, in a rapid-fire research showcase, students and postdocs gave answers spanning topics from 3D transistors and quantum devices to solar solutions and art restoration. Their work reflects the challenges and opportunities shared at the Nano Summit: developing technologies ready to scale, uniting disciplines to tackle complex problems, and gaining hands-on experience that prepares them to contribute to the future of hard tech.

The researchers’ enthusiasm carried the excitement and curiosity that President Kornbluth mentioned in her opening remarks, and that many faculty emphasized throughout the day. “The solutions to the problems we heard about today may come from inventions that don't exist yet,” said Strano. “These are some of the most creative people, here at MIT. I think we inspire each other.”

Robert N. Noyce (1953) Cleanroom at MIT.nano

Collaborative inspiration is not new to the MIT culture. The Nano Summit sessions focused on where we are today, and where we might be going in the future, but also reflected on how we arrived at this moment. Honoring visionaries of nanoscience and nanotechnology, President Emeritus L. Rafael Reif delivered the closing remarks and an exciting announcement — the dedication of the MIT.nano cleanroom complex. Made possible through a gift by Ray Stata SB ’57, SM ’58, this research space, 45,000 square feet of ISO 5, 6, and 7 cleanrooms, will be named the Robert N. Noyce (1953) Cleanroom.

“Ray Stata was — and is — the driving force behind nanoscale research at MIT,” said Reif. “I want to thank Ray, whose generosity has allowed MIT to honor Robert Noyce in such a fitting way.”

Ray Stata co-founded Analog Devices in 1965; Noyce co-founded Fairchild Semiconductor in 1957 and later Intel in 1968. Noyce, widely regarded as the “Mayor of Silicon Valley,” became chair of the Semiconductor Industry Association in 1977, and over the next 40 years, semiconductor technology advanced a thousandfold, from micrometers to nanometers.

“Noyce was a pioneer of the semiconductor industry,” said Stata. “It is due to his leadership and remarkable contributions that electronics technology is where it is today. It is an honor to be able to name the MIT.nano cleanroom after Bob Noyce, creating a permanent tribute to his vision and accomplishments in the heart of the MIT campus.”

To conclude his remarks and the 2025 Nano Summit, Reif brought the nano journey back to today, highlighting technology giants such as Lisa Su ’90, SM ’91, PhD ’94, for whom Building 12, the home of MIT.nano, is named. “MIT has educated a large number of remarkable leaders in the semiconductor space,” said Reif. “Now, with the Robert Noyce Cleanroom, this amazing MIT community is ready to continue to shape the future with the next generation of nano discoveries — and the next generation of nano leaders, who will become living legends in their own time.”

Green bananas can’t throw 3.091 Fun Run off course

Thu, 11/13/2025 - 3:00pm

The night before the Department of Materials Science and Engineering (DMSE)’s 3.091 Fun Run, organizer Bianca Sinausky opened a case of bananas she’d ordered and was met with a surprise: the fruit was bright green.

“I looked around for paper bags, but I only found a few,” says Sinausky, graduate academic administrator for the department, referring to a common hack for speeding up ripening. “It was hopeless.”

That is, until facilities manager Kevin Rogers came up with a plan: swap the green bananas for ripe ones from MIT’s Banana Lounge, a free campus snack and study space stocked with fruit.

“It was genius,” Sinausky says. “The runners would have their snack, and the race could go on.”

DMSE checked in with the Banana Lounge a little late, but senior Colin Clark, the lounge’s logistics lead, approved the swap anyway. “So that’s where that box came from,” he says.

On a bright fall morning, ripe bananas awaited 20 DMSE students and faculty in the Oct. 15 run, which started and finished at the Zesiger Sports and Fitness Center and wound along pedestrian paths across the MIT campus. Department head Polina Anikeeva, an avid runner, says the goal was to build community, enjoy the outdoors, and celebrate 3.091 (Introduction to Solid-State Chemistry), a popular first-year class and General Institute Requirement.

“We realized 3.091 was so close to 5 kilometers — 3.1 miles — it was the perfect opportunity,” Anikeeva says, admitting she made the initial connection. “I think about things like that.”

For many participants, running is a regular hobby, but doing it with colleagues made it even more enjoyable. “I usually run a few times a week, and I thought it would be fun to log some more miles in my training block with the DMSE community,” says graduate student Jessica Dong, who is training for the Cambridge Half Marathon this month.

Fellow graduate student Rishabh Kothari agrees. “I was excited to support a department event that aligns with my general hobbies,” says Kothari, who recently ran the Chicago Marathon and tied for first in his age category in the DMSE run. “I find running to be a great community-building activity.”

While fun runs are usually noncompetitive, organizers still recognized the fastest runners by age group.

Unlike an official road race, which would be organized by a race company — the City of Cambridge currently isn’t allowing new races — the DMSE run was managed internally by an informal group of colleagues, Sinausky says, which meant a fair amount of work.

“The hardest part was walking the route and putting the mileage out, and also putting out arrows,” she says. “When a race company does it, they do it properly.”

There were a few minor snags — some runners went the wrong way, and two walkers got lost. “So I think we need to mark the course better,” Sinausky says.

Others found charm in the run’s rough edges.

“My favorite part of the run was when a group of us got confused about the route, so we cut through the lawn in front of Tang Hall,” Dong says. At the finish line, she showed off a red DMSE hat — one of the giveaways laid out alongside ripe bananas and bottles of water.

Looking ahead to what organizers hope will be an annual event, the team is considering purchasing race timing equipment. Modern road races distribute bibs outfitted with RFID chips, which track each runner’s start and finish. Sinausky’s method — a smartphone timer, with Anikeeva tracking finish times on a clipboard — was less high-tech, but effective for the small number of participants.

“We would see the runners coming, and Polina would say, ‘OK, bib 21.’ And then I would yell out the time,” she says. “I think that if more people showed up, it would’ve been harder.”

Sinausky hopes to boost participation in coming years. Early interest was strong, with 63 registering, but fewer than a third showed up on race day. The week’s delay due to rain — and several straight days of rain since — likely didn’t help, she says.

Overall, she says, the run was a success, with participants saying they hope it will become a new DMSE tradition.

“It was great to see everyone finish and enjoy themselves,” Kothari says. “A nice morning to be around friends.”

Transforming complex research into compelling stories

Thu, 11/13/2025 - 2:25pm

For students, postdocs, and early-career researchers, communicating complex ideas in a clear and compelling manner has become an essential skill. Whether applying for academic positions, pitching research to funders, or collaborating across disciplines, the ability to present work clearly and effectively can be as critical as the work itself.

Recognizing this need, the MIT Office of Graduate Education (OGE) has partnered with the Writing and Communication Center (WCC) to launch the WCC Communication Studio: a self-service recording and editing space designed to help users sharpen their oral presentation and communication skills. Open to all members of the MIT community as of this fall, the studio offers a first-of-its-kind resource at MIT for developing and refining research presentations, mock interview conversations, elevator pitches, and more.

Housed in WCC’s Ames Street office, the studio is equipped with high-quality microphones and user-friendly video recording and editing tools, all designed to be used with the PitchVantage software.

How does it work? Users can access tutorials, example videos, and a reservation system through the WCC’s website. After completing a short orientation on how to use the technology and space responsibly, users are ready to pitch to simulated audiences, who react in real time to various elements of delivery. Users can also watch their recorded presentations and receive personalized feedback on multiple elements of presentation delivery, including pitch, pace, volume variability, verbal distractors, eye contact, engagement, and pauses.

Designed with students in mind

“Through years of individual and group consultations with MIT students and scholars, we realized that developing strong presentation skills requires more than feedback — it requires sustained, embodied practice,” explains Elena Kallestinova, director of the WCC. “The Oral Communication Studio was created to fill that gap.”

Those who have used the studio since its launch say that its interactive format provides real-time, actionable feedback on their verbal delivery. Additionally, the program offers notes on overall stage presence, including subtle actions such as hand gestures and eye contact. For students, this can be the key to ensuring that their delivery is both confident and clearly accessible once it comes time to present.

“I’ve been using the studio to practice for conferences and job interviews,” says Fabio Castro, a PhD student studying civil engineering. His favorite feature? The instant feedback from the virtual figures watching the presentation, which allows him to not only prepare to speak in front of an audience, but to read their nonverbal cues and adjust his delivery accordingly.

The studio also addresses a practical challenge facing many PhD students and postdocs in their role as emerging researchers: the high stakes of presenting. For many, their first major talk may be in front of a hiring committee, research institute, or funding body — audiences that may heavily influence their next career step. The studio gives them a low-pressure environment in which to rehearse so that they enter these spaces confidently.

Aditi Ramakrishnan, an MBA student in the MIT Sloan School of Management, acknowledges the importance of this tool for emerging professionals. As a business student, she explains, “a lot of your job involves pitching.” She credits the WCC with helping to take her pitching game “from good to excellent,” identifying small details such as unnecessary “filler” words and understanding the difference between a strong stage presence and a distracting one. 

A new frontier in communication support at MIT

While MIT has long been recognized for its excellence in technical education, the studio represents a broader focus on arming students and researchers alike with the tools that they need to amplify their work to larger audiences. 

“The WCC Communication Studio gives students a place to rehearse, get immediate feedback, and iterate until their ideas land clearly and confidently,” explains Denzil Streete, OGE’s senior associate dean and director. “It’s not just about better slides or smoother delivery; it’s about unlocking and scaling access to more modern tools so more graduate students can translate breakthrough research into real-world impact.”

“The studio is a resource for the entire MIT community,” says Kallestinova, emphasizing that this new resource serves as a support for not only graduate students, but also undergrads, researchers, and even faculty. “Whether used as a supplement to classroom instruction or as a follow-up to coaching sessions, the studio offers a dedicated space for rehearsal, reflection, and growth, helping all users build confidence, clarity, and command in their communication.”

The studio joins an array of existing resources within the WCC, including a Public Speaking Certificate Program, a peer-review group for creative writers, and a number of revolving workshops throughout the year. 

A culture of communication

From grant funding and academic collaboration to public outreach and policy impact, effective speaking skills are more important than ever.

“No matter how brilliant the idea, it has to be clearly communicated by the researcher or scholar in order to have impact,” says Amanda Cornwall, associate director of graduate student professional development at Career Advising and Professional Development (CAPD). 

“Explaining complex concepts to a broader audience takes practice and skill. When a researcher can build confidence in their speaking abilities, they have the power to transport their audience and show the way to new possibilities,” she adds. “This is why communication is one of the professional development competencies that we emphasize at MIT; it matters in every context, from small conversations to teaching to speeches that might change the world.”

The studio’s launch comes amid a broader institutional focus on communication. CAPD, the Teaching and Learning Lab, the OGE, and academic departments have recognized the value of, and provided increasing levels of support for, professional development training alongside technical expertise.

Workshops already offered by the WCC, CAPD, and other campus partners work to highlight best practices for conference talks, long-form interviews, and more. The WCC Communication Studio provides a practical extension of these efforts. Looking ahead, the studio aims to not only serve as a training space, but also help foster a culture of communication excellence among researchers and educators.

Returning farming to city centers

Thu, 11/13/2025 - 9:15am

A new class is giving MIT students the opportunity to examine the historical and practical considerations of urban farming while developing a real-world understanding of its value by working alongside a local farm’s community.

Course 4.182 (Resilient Urbanism: Green Commons in the City) is taught in two sections by instructors in the Program in Science, Technology, and Society and the School of Architecture and Planning, in collaboration with The Common Good Co-op in Dorchester.

The first section was completed in spring 2025 and the second section is scheduled for spring 2026. The course is taught by STS professor Kate Brown, visiting lecturer Justin Brazier MArch ’24, and Kafi Dixon, lead farmer and executive director of The Common Good.

“This project is a way for students to investigate the real political, financial, and socio-ecological phenomena that can help or hinder an urban farm’s success,” says Brown, the Thomas M. Siebel Distinguished Professor in History of Science. 

Brown teaches environmental history, the history of food production, and the history of plants and people. She describes a history of urban farming that centered sustainable practices, financial investment and stability, and lasting connections among participants. 

Brown says urban farms have sustained cities for decades.

“Cities are great places to grow produce,” Brown asserts. “City dwellers produce lots of compostable materials.”

Brazier’s research ranges from affordable housing to urban agricultural gardens, exploring topics like sustainable architecture, housing, and food security.

“My work designing vacant lots as community gardens offered a link between Kafi’s work with Common Good and my interests in urban design,” Brazier says. “Urban farms offer opportunities to eliminate food deserts in underserved areas while also empowering historically marginalized communities.”

Before they agreed to collaborate on the course, Dixon reached out to Brown asking for help with several challenges related to her urban farm including zoning, location, and infrastructure.

“As the lead farmer and executive director of Common Good Co-op, I happened upon Kate Brown’s research and work and saw that it aligned with our cooperative model’s intentions,” Dixon says. “I reached out to Kate, and she replied, which humbled and excited me.” 

“Design itself is a form of communication,” Dixon adds, describing the collaborative nature of farming sustenance and development. “For many under-resourced communities, innovating requires a research-based approach.”

The project is among the inaugural cohort of initiatives to receive support from the SHASS Education Innovation Fund, which is administered by the MIT Human Insight Collaborative (MITHIC).

Community development, investment, and collaboration

The class’s first section paired students with community members and the City of Boston to change the farm’s zoning status and create a green space for long-term farming and community use. Students spent time at Common Good during the course, including one weekend during which they helped with weeding the garden beds for spring planting.

One objective of the class is to help Common Good avoid potential pitfalls associated with gentrification. “A study in Philadelphia showed that gentrification occurs within 1,000 feet of a community garden,” Brown says. 

“Farms and gardens are a key part of community and public health,” Dixon continues. 

Students in the second section will design and build infrastructure — including a mobile chicken coop and a pavilion to protect farmers from the elements — for Common Good. The course also aims to secure a green space designation for the farm and ensure it remains an accessible community space. “We want to prevent developers from acquiring the land and displacing the community,” Brown says, citing past scenarios in which governments seized inhabitants’ property while offering little or no compensation.

Students in the 2025 course also produced a guide on how to navigate the complex rules surrounding zoning and related development. Students in the next STS section will research the history of food sovereignty and Black feminist movements in Dorchester and Roxbury. Using that research, they will construct an exhibit focused on community activism for incorporation into the co-op’s facade.

Imani Bailey, a second-year master’s student in the Department of Architecture’s MArch program, was among the students in the course’s first section.

“By taking this course, I felt empowered to directly engage with the community in a way no other class I have taken so far has afforded me the ability to,” she says.

Bailey argues for urban farms’ value as both a financial investment and space for communal interaction, offering opportunities for engagement and the implementation of sustainable practices. 

“Urban farms are important in the same way a neighbor is,” she adds. “You may not necessarily need them to own your home, but a good one makes your property more valuable, sometimes financially, but most importantly in ways that cannot be assigned a monetary value.”

The intersection of agriculture, community, and technology

Technology, the course’s participants believe, can offer solutions to some of the challenges related to ensuring urban farms’ viability. 

“Cities like Amsterdam are redesigning themselves to improve walkability, increase the appearance of small gardens in the city, and increase green space,” Brown says. By creating spaces that center community and a collective approach to farming, it’s possible to reduce both greenhouse emissions and impacts related to climate change.

Additionally, engineers, scientists, and others can partner with communities to develop solutions to transportation and public health challenges. By redesigning sewer systems, empowering microbiologists to design microbial inoculants that can break down urban food waste at the neighborhood level, and centering agriculture-related transportation in the places being served, it’s possible to sustain community support and related infrastructure.

“Community is cultivated, nurtured, and grown from prolonged interaction, sharing ideas, and the creation of place through a shared sense of ownership,” Bailey argues. “Urban farms present the conditions for communities to develop.” 

Bailey values the course because it leaves the theoretical behind, instead focusing on practical solutions. “We seldom see our design ideas become tangible," she says. “This class offered an opportunity to design and build for a real client in the real world.”

Brazier says the course and its projects prove everyone has something to contribute and can have a voice in what happens with their neighborhoods. “Despite these communities’ distrust of some politicians, we partnered to work on solutions related to zoning,” he says, “and supported community members’ advocacy efforts.”

How drones are altering contemporary warfare

Thu, 11/13/2025 - 12:00am

In recent months, Russia has frequently flown drones into NATO territory, where NATO countries typically try to shoot them down. By contrast, when three Russian fighter jets made an incursion into Estonian airspace in September, they were intercepted and no attempt was made to shoot them down — although the incident did make headlines and led to a Russian diplomat being expelled from Estonia.

Those incidents follow a global pattern of recent years. Drone operations, to this point, seem to provoke different responses compared to other kinds of military action, especially the use of piloted warplanes. Drone warfare is expanding but not necessarily provoking major military responses, either by the countries being attacked or by the aggressor countries that have drones shot down.

“There was a conventional wisdom that drones were a slippery slope that would enable leaders to use force in all kinds of situations, with a massively destabilizing effect,” says MIT political scientist Erik Lin-Greenberg. “People thought if drones were used all over the place, this would lead to more escalation. But in many cases where drones are being used, we don’t see that escalation.”

On the other hand, drones have made military action more pervasive. It is at least possible that in the future, drone-oriented combat will be both more common and more self-contained.

“There is a revolutionary effect of these systems, in that countries are essentially increasing the range of situations in which leaders are willing to deploy military force,” Lin-Greenberg says. To this point, though, he adds, “these confrontations are not necessarily escalating.”

Now Lin-Greenberg examines these dynamics in a new book, “The Remote Revolution: Drones and Modern Statecraft,” published by Cornell University Press. Lin-Greenberg is an associate professor in MIT’s Department of Political Science.

Lin-Greenberg brings a distinctive professional background to the subject of drone warfare. Before returning to graduate school, he served as a U.S. Air Force officer; today he commands a U.S. Air Force reserve squadron. His thinking is informed by his experiences as both a scholar and practitioner.

“The Remote Revolution” also has a distinctive methodology that draws on multiple ways of studying the topic. In writing the book, Lin-Greenberg conducted experiments based on war games played by national security professionals; conducted surveys of expert and public thinking about drones; developed in-depth case studies from history; and dug into archives broadly to fully understand the history of drone use, which in fact goes back several decades.

The book’s focus is drone use during the 2000s, as the technology has become more readily available; today about 100 countries have access to military drones. Many have used them during tensions and skirmishes with other countries.

“Where I argue this is actually revolutionary is during periods of crises, which fall below the threshold of war, in that these new technologies take human operators out of harm’s way and enable states to do things they wouldn’t otherwise do,” Lin-Greenberg says.

Indeed, a key point is that drones lower the costs of military action for countries — and not just financial costs, but human and political costs, too. Incidents and problems that might plague leaders if they involved military personnel, forcing major responses, seem to lessen when drones are involved.

“Because these systems don’t have a human on board, they’re inherently cheaper and different in the minds of decision-makers,” Lin-Greenberg says. “That means they’re willing to use these systems during disputes, and if other states are shooting them down, the side sending them is less likely to retaliate, because they’re losing a machine but not a man or woman on board.”

In this sense, the uses of drones “create new rungs on the escalation ladder,” as Lin-Greenberg writes in the book. Drone incidents don’t necessarily lead to wider military action, and may not even lead to the same kinds of international relations issues as incidents involving piloted aircraft.

Consider a counterfactual that Lin-Greenberg raises in the book. One of the most notorious episodes of Cold War tension between the U.S. and U.S.S.R. occurred in 1960, when U.S. pilot Gary Powers was shot down and captured in the Soviet Union, leading to a diplomatic standoff and a canceled summit between U.S. President Dwight Eisenhower and Soviet leader Nikita Khrushchev.

“Had that been a drone, it’s very likely the summit would have continued,” Lin-Greenberg says. “No one would have said anything. The Soviet Union would have been embarrassed to admit their airspace was violated and the U.S. would have just [publicly] ignored what was going on, because there would not have been anyone sitting in a prison. There are a lot of exercises where you can ask how history could have been different.”

None of this is to say that drones present straightforward solutions to international relations problems. They may present the appearance of low-cost military engagement, but as Lin-Greenberg underlines in the book, the effects are more complicated.

“To be clear, the remote revolution does not suggest that drones prevent war,” Lin-Greenberg writes. Indeed, one of the problems they raise, he emphasizes, is the “moral hazard” that arises from leaders viewing drones as less costly, which can lead to even more military confrontations.

Moreover, the trends in drone warfare so far yield predictions for the future that are “probabilistic rather than deterministic,” as Lin-Greenberg writes. Perhaps some political or military leaders will start to use drones to attack new targets that will inevitably generate major responses and quickly escalate into broad wars. Current trends do not guarantee future outcomes.

“There are a lot of unanswered questions in this area,” Lin-Greenberg says. “So much is changing. What does it look like when more drones are more autonomous? I still hope this book lays a foundation for future discussions, even as drones are used in different ways.”

Other scholars have praised “The Remote Revolution.” Joshua Kertzer, a professor of international studies and government at Harvard University, has hailed Lin-Greenberg’s “rich expertise, methodological rigor, and creative insight,” while Michael Horowitz, a political scientist and professor of international relations at the University of Pennsylvania, has called it “an incredible book about the impact of drones on the international security environment.”

For his part, Lin-Greenberg says, “My hope is the book will be read by academics and practitioners and people who choose to focus on parts of it they’re interested in. I tried to write the book in a way that’s approachable.”

Publication of the book was supported by funding from MIT’s Security Studies Program. 

MIT senior turns waste from the fishing industry into biodegradable plastic

Wed, 11/12/2025 - 4:25pm

Sometimes the answers to seemingly intractable environmental problems are found in nature itself.
 
Take the growing challenge of plastic waste. Jacqueline Prawira, an MIT senior in the Department of Materials Science and Engineering (DMSE), has developed biodegradable, plastic-like materials from fish offal, as featured in a recent segment on the CBS show “The Visioneers with Zay Harding.”
 
“We basically made plastics to be too good at their job. That also means the environment doesn’t know what to do with this, because they simply won’t degrade,” Prawira told Harding. “And now we’re literally drowning in plastic. By 2050, plastics are expected to outweigh fish in the ocean.”
 
“The Visioneers” regularly highlights environmental innovators. The episode featuring Prawira premiered during a special screening at Climate Week NYC on Sept. 24.

Her inspiration came from the Asian fish market her family visits. Once the fish they buy are butchered, the scales are typically discarded.
 
“But I also started noticing they’re actually fairly strong. They’re thin, somewhat flexible, and pretty lightweight, too, for their strength,” Prawira says. “And that got me thinking: Well, what other material has these properties? Plastics.”
 
She transformed this waste product into a transparent, thin-film material that can be used for disposable products such as grocery bags, packaging, and utensils.
 
Both her fish-scale material and a composite she developed don’t just mimic plastic — they address one of its biggest flaws. “If you put them in composting environments, [they] will degrade on their own naturally without needing much, if any, external help,” Prawira says.
 
This isn’t Prawira’s first environmental innovation. Working in DMSE Professor Yet-Ming Chiang’s lab, she helped develop a low-carbon process for making cement — the world’s most widely used construction material, and a major emitter of carbon dioxide. The process, called silicate subtraction, enables compounds to form at lower temperatures, cutting fossil fuel use.
 
Prawira and her co-inventors in the Chiang lab are also using the method to extract valuable lithium with zero waste. The process is patented and is being commercialized through the startup Rock Zero.
 
For her achievements, Prawira recently received the Barry Goldwater Scholarship, awarded to undergraduates pursuing careers in science, mathematics, or engineering.
 
In her “Visioneers” interview, she shared her hope for more sustainable ways of living. 

“I’m hoping that we can have daily lives that can be more in sync with the environment,” Prawira said. “So you don’t always have to choose between the convenience of daily life and having to help protect the environment.”

New lightweight polymer film can prevent corrosion

Wed, 11/12/2025 - 11:00am

MIT researchers have developed a lightweight polymer film that is nearly impenetrable to gas molecules, raising the possibility that it could be used as a protective coating to prevent solar cells and other infrastructure from corrosion, and to slow the aging of packaged food and medicines.

The polymer, which can be applied as a film mere nanometers thick, completely repels nitrogen and other gases, as far as can be detected by laboratory equipment, the researchers found. That degree of impermeability has never been seen before in any polymer, and rivals the impermeability of molecularly thin crystalline materials such as graphene.

“Our polymer is quite unusual. It’s obviously produced from a solution-phase polymerization reaction, but the product behaves like graphene, which is gas-impermeable because it’s a perfect crystal. However, when you examine this material, one would never confuse it with a perfect crystal,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT.

The polymer film, which the researchers describe today in Nature, is made using a process that can be scaled up to large quantities and applied to surfaces much more easily than graphene.

Strano and Scott Bunch, an associate professor of mechanical engineering at Boston University, are the senior authors of the new study. The paper’s lead authors are Cody Ritt, a former MIT postdoc who is now an assistant professor at the University of Colorado at Boulder; Michelle Quien, an MIT graduate student; and Zitang Wei, an MIT research scientist.

Bubbles that don’t collapse

Strano’s lab first reported the novel material — a two-dimensional polymer called a 2D polyaramid that self-assembles into molecular sheets using hydrogen bonds — in 2022. To create such 2D polymer sheets, which had never been done before, the researchers used a building block called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can expand in two dimensions, forming nanometer-sized disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.

That polymer, which the researchers call 2DPA-1, is stronger than steel but has only one-sixth the density of steel.

In their 2022 study, the researchers focused on testing the material’s strength, but they also did some preliminary studies of its gas permeability. For those studies, they created “bubbles” out of the films and filled them with gas. With most polymers, such as plastics, gas that is trapped inside will seep out through the material, causing the bubble to deflate quickly.

However, the researchers found that bubbles made of 2DPA-1 did not collapse — in fact, bubbles that they made in 2021 are still inflated. “I was quite surprised initially,” Ritt says. “The behavior of the bubbles didn’t follow what you’d expect for a typical, permeable polymer. This required us to rethink how to properly study and understand molecular transport across this new material.”  

“We set up a series of careful experiments to first prove that the material is molecularly impermeable to nitrogen,” Strano says. “It could be considered tedious work. We had to make micro-bubbles of the polymer and fill them with a pure gas like nitrogen, and then wait. We had to repeatedly check over an exceedingly long period of time that they weren’t collapsed, in order to report the record impermeability value.”

Traditional polymers allow gases through because they consist of a tangle of spaghetti-like molecules that are loosely joined together. This leaves tiny gaps between the strands. Gas molecules can seep through these gaps, which is why polymers always have at least some degree of gas permeability.

However, the new 2D polymer is essentially impermeable because of the way that the layers of disks stick to each other.

“The fact that they can pack flat means there’s no volume between the two-dimensional disks, and that’s unusual. With other polymers, there’s still space between the one-dimensional chains, so most polymer films allow at least a little bit of gas to get through,” Strano says.

George Schatz, a professor of chemistry and chemical and biological engineering at Northwestern University, described the results as “remarkable.”

“Normally polymers are reasonably permeable to gases, but the polyaramids reported in this paper are orders of magnitude less permeable to most gases under conditions with industrial relevance,” says Schatz, who was not involved in the study.

A protective coating

In addition to nitrogen, the researchers also exposed the polymer to helium, argon, oxygen, methane, and sulfur hexafluoride. They found that 2DPA-1’s permeability to those gases was at least 1/10,000 that of any other existing polymer. That makes it nearly as impermeable as graphene, which is completely impermeable to gases because of its defect-free crystalline structure.

Scientists have been working on developing graphene coatings as a barrier to prevent corrosion in solar cells and other devices. However, scaling up the creation of graphene films is difficult, in large part because they can’t be simply painted onto surfaces.

“We can only make crystal graphene in very small patches,” Strano says. “A little patch of graphene is molecularly impermeable, but it doesn’t scale. People have tried to paint it on, but graphene does not stick to itself but slides when sheared. Graphene sheets moving past each other are considered almost frictionless.”

On the other hand, the 2DPA-1 polymer sticks easily because of the strong hydrogen bonds between the layered disks. In this paper, the researchers showed that a layer just 60 nanometers thick could extend the lifetime of a perovskite crystal by weeks. Perovskites are materials that hold promise as cheap and lightweight solar cells, but they tend to break down much faster than the silicon solar panels that are now widely used.

A 60-nanometer coating extended the perovskite’s lifetime to about three weeks, but a thicker coating would offer longer protection, the researchers say. The films could also be applied to a variety of other structures.

“Using an impermeable coating such as this one, you could protect infrastructure such as bridges, buildings, rail lines — basically anything outside exposed to the elements. Automotive vehicles, aircraft and ocean vessels could also benefit. Anything that needs to be sheltered from corrosion. The shelf life of food and medications can also be extended using such materials,” Strano says.

The other application demonstrated in this paper is a nanoscale resonator — essentially a tiny drum that vibrates at a particular frequency. Larger resonators, with sizes around 1 millimeter or less, are found in cell phones, where they allow the phone to pick up the frequency bands it uses to transmit and receive signals.

“In this paper, we made the first polymer 2D resonator, which you can do with our material because it’s impermeable and quite strong, like graphene,” Strano says. “Right now, the resonators in your phone and other communications devices are large, but there’s an effort to shrink them using nanotechnology. To make them less than a micron in size would be revolutionary. Cell phones and other devices could be smaller and reduce the power expenditures needed for signal processing.”

Resonators can also be used as sensors to detect very tiny molecules, including gas molecules. 

The research was funded, in part, by the Center for Enhanced Nanofluidic Transport-Phase 2, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, as well as the National Science Foundation.

This research was carried out, in part, using MIT.nano’s facilities.

Teaching large language models how to absorb new knowledge

Wed, 11/12/2025 - 12:00am

In an MIT classroom, a professor lectures while students diligently write down notes they will reread later to study and internalize key information ahead of an exam.

Humans know how to learn new information, but large language models can’t do this in the same way. Once a fully trained LLM has been deployed, its “brain” is static and can’t permanently adapt itself to new knowledge.

This means that if a user tells an LLM something important today, it won’t remember that information the next time this person starts a new conversation with the chatbot.

Now, a new approach developed by MIT researchers enables LLMs to update themselves in a way that permanently internalizes new information. Just like a student, the LLM generates its own study sheets from a user’s input, which it uses to memorize the information by updating its inner workings.

The model generates multiple self-edits to learn from one input, then applies each one to see which improves its performance the most. This trial-and-error process teaches the model the best way to train itself.

The researchers found this approach improved the accuracy of LLMs at question-answering and pattern-recognition tasks, and it enabled a small model to outperform much larger LLMs.

While there are still limitations that must be overcome, the technique could someday help artificial intelligence agents consistently adapt to new tasks and achieve changing goals in evolving environments.   

“Just like humans, complex AI systems can’t remain static for their entire lifetimes. These LLMs are not deployed in static environments. They are constantly facing new inputs from users. We want to make a model that is a bit more human-like — one that can keep improving itself,” says Jyothish Pari, an MIT graduate student and co-lead author of a paper on this technique.

He is joined on the paper by co-lead author Adam Zweiger, an MIT undergraduate; graduate students Han Guo and Ekin Akyürek; and senior authors Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, an associate professor in EECS and member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems.

Teaching the model to learn

LLMs are neural network models that have billions of parameters, called weights, that contain the model’s knowledge and process inputs to make predictions. During training, the model adapts these weights to learn new information contained in its training data.

But once it is deployed, the weights are static and can’t be permanently updated anymore.

However, LLMs are very good at a process called in-context learning, in which a trained model learns a new task by seeing a few examples. These examples guide the model’s responses, but the knowledge disappears before the next conversation.

The MIT researchers wanted to leverage a model’s powerful in-context learning capabilities to teach it how to permanently update its weights when it encounters new knowledge.

The framework they developed, called SEAL for “self-adapting LLMs,” enables an LLM to generate new synthetic data based on an input, and then determine the best way to adapt itself and learn from that synthetic data. Each piece of synthetic data is a self-edit the model can apply.

In the case of language, the LLM creates synthetic data by rewriting the information, and its implications, in an input passage. This is similar to how students make study sheets by rewriting and summarizing original lecture content.

The LLM does this multiple times, then quizzes itself on each self-edit to see which led to the biggest boost in performance on a downstream task like question answering. It uses a trial-and-error method known as reinforcement learning, where it receives a reward for the greatest performance boost.

Then the model memorizes the best study sheet by updating its weights to internalize the information in that self-edit.

“Our hope is that the model will learn to make the best kind of study sheet — one that is the right length and has the proper diversity of information — such that updating the model based on it leads to a better model,” Zweiger explains.
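
The propose-evaluate-keep loop described above can be sketched in a few lines of Python. Everything below is a toy stand-in: the real SEAL system fine-tunes an actual LLM on each candidate self-edit and scores it on tasks like question answering, whereas this sketch “rewrites” a passage by subsampling its sentences and uses lexical diversity as a hypothetical scoring proxy.

```python
import random

def generate_self_edit(passage: str, seed: int) -> str:
    # Toy stand-in: a real model would rewrite the passage and its
    # implications, like a student making a study sheet.
    rng = random.Random(seed)
    sentences = passage.split(". ")
    rng.shuffle(sentences)
    return ". ".join(sentences[: max(1, len(sentences) // 2)])

def downstream_score(edit: str) -> float:
    # Toy stand-in: a real system would update a copy of the model on
    # this edit and measure downstream (e.g., question-answering) accuracy.
    return len(set(edit.lower().split()))

def select_best_self_edit(passage: str, n_candidates: int = 4):
    # Generate several candidate "study sheets", score each, keep the best.
    candidates = [generate_self_edit(passage, s) for s in range(n_candidates)]
    scores = [downstream_score(c) for c in candidates]
    best = max(range(n_candidates), key=scores.__getitem__)
    return candidates[best], scores[best]

passage = ("The moon has no atmosphere. Tides follow the moon. "
           "Eclipses align the sun and moon. Its gravity is one-sixth of Earth's.")
edit, score = select_best_self_edit(passage)
print(score)
```

The structure mirrors the description above: propose multiple self-edits, evaluate the effect of each, and keep the winner — the signal that a reinforcement-learning step would then reward.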

Choosing the best method

Their framework also allows the model to choose the way it wants to learn the information. For instance, the model can select the synthetic data it wants to use, the rate at which it learns, and how many iterations it wants to train on.

In this case, not only does the model generate its own training data, but it also configures the optimization that applies that self-edit to its weights.

“As humans, we know how we learn best. We want to grant that same ability to large language models. By providing the model with the ability to control how it digests this information, it can figure out the best way to parse all the data that are coming in,” Pari says.

SEAL outperformed several baseline methods across a range of tasks, including learning a new skill from a few examples and incorporating knowledge from a text passage. On question answering, SEAL improved model accuracy by nearly 15 percent and on some skill-learning tasks, it boosted the success rate by more than 50 percent.

But one limitation of this approach is a problem called catastrophic forgetting: As the model repeatedly adapts to new information, its performance on earlier tasks slowly declines.

The researchers plan to mitigate catastrophic forgetting in future work. They also want to apply this technique in a multi-agent setting where several LLMs train each other.

“One of the key barriers to LLMs that can do meaningful scientific research is their inability to update themselves based on their interactions with new information. Though fully deployed self-adapting models are still far off, we hope systems able to learn this way could eventually overcome this and help advance science,” Zweiger says.

This work is supported, in part, by the U.S. Army Research Office, the U.S. Air Force AI Accelerator, the Stevens Fund for MIT UROP, and the MIT-IBM Watson AI Lab. 

Understanding the nuances of human-like intelligence

Tue, 11/11/2025 - 12:00am

What can we learn about human intelligence by studying how machines “think”? Can we better understand ourselves if we better understand the artificial intelligence systems that are becoming a more significant part of our everyday lives?

These questions may be deeply philosophical, but for Phillip Isola, finding the answers is as much about computation as it is about cogitation.

Isola, the newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS), studies the fundamental mechanisms involved in human-like intelligence from a computational perspective.

While understanding intelligence is the overarching goal, his work focuses mainly on computer vision and machine learning. Isola is particularly interested in exploring how intelligence emerges in AI models, how these models learn to represent the world around them, and what their “brains” share with the brains of their human creators.

“I see all the different kinds of intelligence as having a lot of commonalities, and I’d like to understand those commonalities. What is it that all animals, humans, and AIs have in common?” says Isola, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

To Isola, a better scientific understanding of the intelligence that AI agents possess will help the world integrate them safely and effectively into society, maximizing their potential to benefit humanity.

Asking questions

Isola began pondering scientific questions at a young age.

While growing up in San Francisco, he and his father frequently went hiking along the northern California coastline or camping around Point Reyes and in the hills of Marin County.

He was fascinated by geological processes and often wondered what made the natural world work. In school, Isola was driven by an insatiable curiosity, and while he gravitated toward technical subjects like math and science, there was no limit to what he wanted to learn.

Not entirely sure what to study as an undergraduate at Yale University, Isola dabbled until he came upon cognitive sciences.

“My earlier interest had been with nature — how the world works. But then I realized that the brain was even more interesting, and more complex than even the formation of the planets. Now, I wanted to know what makes us tick,” he says.

As a first-year student, he started working in the lab of his cognitive sciences professor and soon-to-be mentor, Brian Scholl, a member of the Yale Department of Psychology. He remained in that lab throughout his time as an undergraduate.

After spending a gap year working with some childhood friends at an indie video game company, Isola was ready to dive back into the complex world of the human brain. He enrolled in the graduate program in brain and cognitive sciences at MIT.

“Grad school was where I felt like I finally found my place. I had a lot of great experiences at Yale and in other phases of my life, but when I got to MIT, I realized this was the work I really loved and these are the people who think similarly to me,” he says.

Isola credits his PhD advisor, Ted Adelson, the John and Dorothy Wilson Professor of Vision Science, as a major influence on his future path. He was inspired by Adelson’s focus on understanding fundamental principles, rather than only chasing new engineering benchmarks, which are formalized tests used to measure the performance of a system.

A computational perspective

At MIT, Isola’s research drifted toward computer science and artificial intelligence.

“I still loved all those questions from cognitive sciences, but I felt I could make more progress on some of those questions if I came at it from a purely computational perspective,” he says.

His thesis was focused on perceptual grouping, which involves the mechanisms people and machines use to organize discrete parts of an image as a single, coherent object.

If machines can learn perceptual groupings on their own, that could enable AI systems to recognize objects without human intervention. This type of self-supervised learning has applications in areas such as autonomous vehicles, medical imaging, robotics, and automatic language translation.

After graduating from MIT, Isola completed a postdoc at the University of California at Berkeley so he could broaden his perspectives by working in a lab solely focused on computer science.

“That experience helped my work become a lot more impactful because I learned to balance understanding fundamental, abstract principles of intelligence with the pursuit of some more concrete benchmarks,” Isola recalls.

At Berkeley, he developed image-to-image translation frameworks, an early form of generative AI model that could turn a sketch into a photographic image, for instance, or turn a black-and-white photo into a color one.

He entered the academic job market and accepted a faculty position at MIT, but Isola deferred for a year to work at a then-small startup called OpenAI.

“It was a nonprofit, and I liked the idealistic mission at that time. They were really good at reinforcement learning, and I thought that seemed like an important topic to learn more about,” he says.

He enjoyed working in a lab with so much scientific freedom, but after a year Isola was ready to return to MIT and start his own research group.

Studying human-like intelligence

Running a research lab instantly appealed to him.

“I really love the early stage of an idea. I feel like I am a sort of startup incubator where I am constantly able to do new things and learn new things,” he says.

Building on his interest in cognitive sciences and desire to understand the human brain, his group studies the fundamental computations involved in the human-like intelligence that emerges in machines.

One primary focus is representation learning, or the ability of humans and machines to represent and perceive the sensory world around them.

In recent work, he and his collaborators observed that the many varied types of machine-learning models, from LLMs to computer vision models to audio models, seem to represent the world in similar ways.

These models are designed to do vastly different tasks, but there are many similarities in their architectures. And as they get bigger and are trained on more data, their internal structures become more alike.

This led Isola and his team to introduce the Platonic Representation Hypothesis (drawing its name from the Greek philosopher Plato), which posits that the representations all these models learn are converging toward a shared, underlying representation of reality.

“Language, images, sound — all of these are different shadows on the wall from which you can infer that there is some kind of underlying physical process — some kind of causal reality — out there. If you train models on all these different types of data, they should converge on that world model in the end,” Isola says.
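One way to make this convergence concrete is to check whether two models agree on which inputs are neighbors of which. The sketch below is an illustrative reconstruction of that idea using a mutual nearest-neighbor score in NumPy — the function names and the toy data are assumptions for demonstration, not the authors' actual code or metric.

```python
import numpy as np

def knn_indices(feats, k):
    """Indices of each embedding's k nearest neighbors by cosine similarity."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude each point from its own neighbor list
    return np.argsort(-sims, axis=1)[:, :k]

def mutual_knn_alignment(feats_a, feats_b, k=5):
    """Average fraction of k-nearest neighbors two models agree on,
    given each model's embeddings of the same set of inputs."""
    a_nn, b_nn = knn_indices(feats_a, k), knn_indices(feats_b, k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(a_nn, b_nn)]
    return float(np.mean(overlaps))

# Toy check: compare features with an orthogonally rotated copy of themselves.
# Rotation preserves distances and angles, so alignment should be ~1.0.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
rotation, _ = np.linalg.qr(rng.normal(size=(16, 16)))
print(mutual_knn_alignment(feats, feats @ rotation))
```

The appeal of a neighborhood-based score is that it ignores superficial differences in coordinate systems — two models can "agree" about the world even if their raw embedding spaces look nothing alike.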

A related area his team studies is self-supervised learning. This involves the ways in which AI models learn to group related pixels in an image or words in a sentence without having labeled examples to learn from.

Because data are expensive and labels are limited, using only labeled data to train models could hold back the capabilities of AI systems. With self-supervised learning, the goal is to develop models that can come up with an accurate internal representation of the world on their own.

“If you can come up with a good representation of the world, that should make subsequent problem solving easier,” he explains.
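Contrastive learning is one common self-supervised recipe: a model is trained so that two augmented views of the same input land close together in representation space while views of different inputs are pushed apart. Below is a minimal NumPy sketch of the standard InfoNCE objective used for this — illustrative only, and not a description of any specific method from Isola's group.

```python
import numpy as np

def info_nce_loss(views_a, views_b, temperature=0.1):
    """Contrastive loss: row i of views_a should match row i of views_b
    (two augmented views of the same input), and no other row."""
    a = views_a / np.linalg.norm(views_a, axis=1, keepdims=True)
    b = views_b / np.linalg.norm(views_b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature  # similarity of every view pair
    # Softmax cross-entropy where the correct "label" for row i is column i
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

The loss is small when matched views are more similar to each other than to any mismatched pair — no human labels are required, only a way to generate two views of the same input, which is what lets the model build its internal representation of the world on its own.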

The focus of Isola’s research is more about finding something new and surprising than about building complex systems that can outdo the latest machine-learning benchmarks.

While this approach has yielded much success in uncovering innovative techniques and architectures, it means the work sometimes lacks a concrete end goal, which can lead to challenges.

For instance, keeping a team aligned and the funding flowing can be difficult when the lab is focused on searching for unexpected results, he says.

“In a sense, we are always working in the dark. It is high-risk and high-reward work. Every once in a while, we find some kernel of truth that is new and surprising,” he says.

In addition to pursuing knowledge, Isola is passionate about imparting knowledge to the next generation of scientists and engineers. Among his favorite courses to teach is 6.7960 (Deep Learning), which he and several other MIT faculty members launched four years ago.

The class has seen exponential growth, from 30 students in its initial offering to more than 700 this fall.

And while the popularity of AI means there is no shortage of interested students, the speed at which the field moves can make it difficult to separate the hype from truly significant advances.

“I tell the students they have to take everything we say in the class with a grain of salt. Maybe in a few years we’ll tell them something different. We are really on the edge of knowledge with this course,” he says.

But Isola also emphasizes to students that, for all the hype surrounding the latest AI models, intelligent machines are far simpler than most people suspect.

“Human ingenuity, creativity, and emotions — many people believe these can never be modeled. That might turn out to be true, but I think intelligence is fairly simple once we understand it,” he says.

Even though his current work focuses on deep-learning models, Isola is still fascinated by the complexity of the human brain and continues to collaborate with researchers who study cognitive sciences.

All the while, he has remained captivated by the beauty of the natural world that inspired his first interest in science.

Although he has less time for hobbies these days, Isola enjoys hiking and backpacking in the mountains or on Cape Cod, skiing and kayaking, or finding scenic places to spend time when he travels for scientific conferences.

And while he looks forward to exploring new questions in his lab at MIT, Isola can’t help but contemplate how the role of intelligent machines might change the course of his work.

He believes that artificial general intelligence (AGI), or the point where machines can learn and apply their knowledge as well as humans can, is not that far off.

“I don’t think AIs will just do everything for us and we’ll go and enjoy life at the beach. I think there is going to be this coexistence between smart machines and humans who still have a lot of agency and control. Now, I’m thinking about the interesting questions and applications once that happens. How can I help the world in this post-AGI future? I don’t have any answers yet, but it’s on my mind,” he says.

Leading quantum at an inflection point

Mon, 11/10/2025 - 10:00am

Danna Freedman is seeking the early adopters.

She is the faculty director of the nascent MIT Quantum Initiative, or QMIT. In this new role, Freedman is giving shape to an ambitious, Institute-wide effort to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.

The interdisciplinary endeavor, the newest of MIT President Sally Kornbluth’s strategic initiatives, will bring together MIT researchers and domain experts from a range of industries to identify and tackle practical challenges wherever quantum solutions could achieve the greatest impact.

“We’ve already seen how the breadth of progress in quantum has created opportunities to rethink the future of security and encryption, imagine new modes of navigation, and even measure gravitational waves more precisely to observe the cosmos in an entirely new way,” says Freedman, the Frederick George Keyes Professor of Chemistry. “What can we do next? We’re investing in the promise of quantum, and where the legacy will be in 20 years.”

QMIT — the name is a nod to the “qubit,” the basic unit of quantum information — will formally launch on Dec. 8 with an all-day event on campus. Over time, the initiative plans to establish a physical home in the heart of campus for academic, public, and corporate engagement with state-of-the-art integrated quantum systems. Beyond MIT’s campus, QMIT will also work closely with the U.S. government and MIT Lincoln Laboratory, applying the lab’s capabilities in quantum hardware development, systems engineering, and rapid prototyping to national security priorities.

“The MIT Quantum Initiative seizes a timely opportunity in service to the nation’s scientific, economic, and technological competitiveness,” says Ian A. Waitz, MIT’s vice president for research. “With quantum capabilities approaching an inflection point, QMIT will engage students and researchers across all our schools and the college, as well as companies around the world, in thinking about what a step change in sensing and computational power will mean for a wide range of fields. Incredible opportunities exist in health and life sciences, fundamental physics research, cybersecurity, materials science, sensing the world around us, and more.”

Identifying the right questions

Quantum phenomena are as foundational to our world as light or gravity. At an extremely small scale, the interactions of atoms and subatomic particles are controlled by a different set of rules than the physical laws of the macro-sized world. These rules are called quantum mechanics.

“Quantum, in a sense, is what underlies everything,” says Freedman.

By leveraging quantum properties, quantum devices can process information at incredible speed to solve complex problems that aren’t feasible on classical supercomputers, and to enable ultraprecise sensing and measurement. Those improvements in speed and precision will become most powerful when optimized in relation to specific use cases, and as part of a complete quantum system. QMIT will focus on collaboration across domains to co-develop quantum tools, such as computers, sensors, networks, simulations, and algorithms, alongside the intended users of these systems.

As it develops, QMIT will be organized into programmatic pillars led by top researchers in quantum including Paola Cappellaro, Ford Professor of Engineering and professor of nuclear science and engineering and of physics; Isaac Chuang, Julius A. Stratton Professor in Electrical Engineering and Physics; Pablo Jarillo-Herrero, Cecil and Ida Green Professor of Physics; William Oliver, Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics; Vladan Vuletić, Lester Wolfe Professor of Physics; and Jonilyn Yoder, associate leader of the Quantum-Enabled Computation Group at MIT Lincoln Laboratory.

While supporting the core of quantum research in physics, engineering, mathematics, and computer science, QMIT promises to expand the community at its frontiers, into astronomy, biology, chemistry, materials science, and medicine.

“If you provide a foundation that somebody can integrate with, that accelerates progress a lot,” says Freedman. “Perhaps we want to figure out how a quantum simulator we’ve built can model photosynthesis, if that’s the right question — or maybe the right question is to study 10 failed catalysts to see why they failed.”

“We are going to figure out what real problems exist that we could approach with quantum tools, and work toward them in the next five years,” she adds. “We are going to change the forward momentum of quantum in a way that supports impact.”

The MIT Quantum Initiative will be administratively housed in the Research Laboratory of Electronics (RLE), with support from the Office of the Vice President for Research (VPR) and the Office of Innovation and Strategy.

QMIT is a natural expansion of MIT’s Center for Quantum Engineering (CQE), a research powerhouse that engages more than 80 principal investigators across the MIT campus and Lincoln Laboratory to accelerate the practical application of quantum technologies.

“CQE has cultivated a tremendously strong ecosystem of students and researchers, engaging with U.S. government sponsors and industry collaborators, including through the popular Quantum Annual Research Conference (QuARC) and professional development classes,” says Marc Baldo, the Dugald C. Jackson Professor in Electrical Engineering and director of RLE.

“With the backing of former vice president for research Maria Zuber, former Lincoln Lab director Eric Evans, and Marc Baldo, we launched CQE and its industry membership group in 2019 to help bridge MIT’s research efforts in quantum science and engineering,” says Oliver, CQE’s director, who also spent 20 years at Lincoln Laboratory, most recently as a Laboratory Fellow. “We have an important opportunity now to deepen our commitment to quantum research and education, and especially in engaging students from across the Institute in thinking about how to leverage quantum science and engineering to solve hard problems.”

Two years ago, Peter Fisher, the Thomas A. Frank (1977) Professor of Physics, in his role as associate vice president for research computing and data, assembled a faculty group led by Cappellaro and involving Baldo, Oliver, Freedman, and others, to begin to build an initiative that would span the entire Institute. Now, capitalizing on CQE’s success, Oliver will lead the new MIT Quantum Initiative’s quantum computing pillar, which will broaden the work of CQE into a larger effort that focuses on quantum computing, industry engagement, and connecting with end users.

The “MIT-hard” problem

QMIT will build upon the Institute’s historic leadership in quantum science and engineering. In the spring of 1981, MIT hosted the first Physics of Computation Conference at the Endicott House, bringing together nearly 50 physics and computing researchers to consider the practical promise of quantum — an intellectual moment that is now widely regarded as the kickoff of the second quantum revolution. (The first was the fundamental articulation of quantum mechanics 100 years ago.)

Today, research in quantum science and engineering produces a steady stream of “firsts” in the lab and a growing number of startup companies.

In collaboration with partners in industry and government, MIT researchers develop advances in areas like quantum sensing, which involves the use of atomic-scale systems to measure certain properties, like distance and acceleration, with extreme precision. Quantum sensing could be used in applications like brain imaging devices that capture more detail, or air traffic control systems with greater positional accuracy.

Another key area of research is quantum simulation, which uses the power of quantum computers to accurately emulate complex systems. This could fuel the discovery of new materials for energy-efficient electronics or streamline the identification of promising molecules for drug development.

“Historically, when we think about the most well-articulated challenges that quantum will solve,” Freedman says, “the best ones have come from inside of MIT. We’re open to technological solutions to problems, and nontraditional approaches to science. In many respects, we are the early adopters.”

But she also draws a sharp distinction between blue-sky thinking about what quantum might do, and the deeply technical, deeply collaborative work of actually drawing the roadmap. “That’s the ‘MIT-hard’ problem,” she says.

The QMIT launch event on Dec. 8 will include talks and discussions featuring MIT faculty, among them Nobel laureates, as well as industry leaders.

MIT Energy Initiative launches Data Center Power Forum

Fri, 11/07/2025 - 2:55pm

With global power demand from data centers expected to more than double by 2030, the MIT Energy Initiative (MITEI) in September launched an effort that brings together MIT researchers and industry experts to explore innovative solutions for powering the data-driven future. At its annual research conference, MITEI announced the Data Center Power Forum, a targeted research effort for MITEI member companies interested in addressing the challenges of data center power demand. The Data Center Power Forum builds on lessons from MITEI’s May 2025 symposium on the energy to power the expansion of artificial intelligence (AI) and focus panels related to data centers at the fall 2024 research conference.

In the United States, data centers consumed 4 percent of the country’s electricity in 2023, with demand expected to increase to 9 percent by 2030, according to the Electric Power Research Institute. Much of the growth in demand is from the increasing use of AI, which is placing an unprecedented strain on the electric grid. This surge in demand presents a serious challenge for the technology and energy sectors, government policymakers, and everyday consumers, who may see their electric bills skyrocket as a result.

“MITEI has long supported research on ways to produce more efficient and cleaner energy and to manage the electric grid. In recent years, MITEI has also funded dozens of research projects relevant to data center energy issues. Building on this history and knowledge base, MITEI’s Data Center Power Forum is convening a specialized community of industry members who have a vital stake in the sustainable growth of AI and the acceleration of solutions for powering data centers and expanding the grid,” says William H. Green, the director of MITEI and the Hoyt C. Hottel Professor of Chemical Engineering.

MITEI’s mission is to advance zero- and low-carbon solutions to expand energy access and mitigate climate change. MITEI works with companies from across the energy innovation chain, including in the infrastructure, automotive, electric power, energy, natural resources, and insurance sectors. MITEI member companies have expressed strong interest in the Data Center Power Forum and are committing to support focused research on a wide range of energy issues associated with data center expansion, Green says.

MITEI’s Data Center Power Forum will provide its member companies with reliable insights into energy supply, grid load operations and management, the built environment, and electricity market design and regulatory policy for data centers. The forum complements MIT’s deep expertise in adjacent topics such as low-power processors, efficient algorithms, task-specific AI, photonic devices, quantum computing, and the societal consequences of data center expansion. As part of the forum, MITEI’s Future Energy Systems Center is funding projects relevant to data center energy in its upcoming proposal cycles. MITEI Research Scientist Deep Deka has been named the program manager for the forum.

“Figuring out how to meet the power demands of data centers is a complicated challenge. Our research is coming at this from multiple directions, from looking at ways to expand transmission capacity within the electrical grid in order to bring power to where it is needed, to ensuring that the quality of electrical service for existing users is not diminished when new data centers come online, and to shifting computing tasks to times and places when and where energy is available on the grid,” says Deka.

MITEI currently sponsors substantial research related to data center energy topics across several MIT departments. The existing research portfolio includes more than a dozen projects related to data centers, including low- or zero-carbon solutions for energy supply and infrastructure, electrical grid management, and electricity market policy. MIT researchers funded through MITEI’s industry consortium are also designing more energy-efficient power electronics and processors and investigating behind-the-meter low-/no-carbon power plants and energy storage. MITEI-supported experts are studying how to use AI to optimize electrical distribution and the siting of data centers and conducting techno-economic analyses of data center power schemes. MITEI’s consortium projects are also bringing fresh perspectives to data center cooling challenges and considering policy approaches to balance the interests of shareholders. 

By drawing together industry stakeholders from across the AI and grid value chain, the Data Center Power Forum enables a richer dialog about solutions to power, grid, and carbon management problems in a noncommercial and collaborative setting.

“The opportunity to meet and to hold discussions on key data center challenges with other forum members from different sectors, as well as with MIT faculty members and research scientists, is a unique benefit of this MITEI-led effort,” Green says.

MITEI addressed the issue of data center power needs with its company members during its fall 2024 Annual Research Conference with a panel session titled, “The extreme challenge of powering data centers in a decarbonized way.” MITEI Director of Research Randall Field led a discussion with representatives from large technology companies Google and Microsoft, known as “hyperscalers,” as well as Madrid-based infrastructure developer Ferrovial S.E. and utility company Exelon Corp. Another conference session addressed the related topic, “Energy storage and grid expansion.” This past spring, MITEI focused its annual Spring Symposium on data centers, hosting faculty members and researchers from MIT and other universities, business leaders, and a representative of the Federal Energy Regulatory Commission for a full day of sessions on the topic, “AI and energy: Peril and promise.” 

Particles that enhance mRNA delivery could reduce vaccine dosage and costs

Fri, 11/07/2025 - 5:00am

A new delivery particle developed at MIT could make mRNA vaccines more effective and potentially lower the cost per vaccine dose.

In studies in mice, the researchers showed that an mRNA influenza vaccine delivered with their new lipid nanoparticle could generate the same immune response as mRNA delivered by nanoparticles made with FDA-approved materials, but at around 1/100 the dose.

“One of the challenges with mRNA vaccines is the cost,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES). “When you think about the cost of making a vaccine that could be distributed widely, it can really add up. Our goal has been to try to make nanoparticles that can give you a safe and effective vaccine response but at a much lower dose.”

While the researchers used their particles to deliver a flu vaccine, they could also be used for vaccines for Covid-19 and other infectious diseases, they say.

Anderson is the senior author of the study, which appears today in Nature Nanotechnology. The lead authors of the paper are Arnab Rudra, a visiting scientist at the Koch Institute; Akash Gupta, a Koch Institute research scientist; and Kaelan Reed, an MIT graduate student.

Efficient delivery

To protect mRNA vaccines from breaking down in the body after injection, they are packaged inside a lipid nanoparticle, or LNP. These fatty spheres help mRNA get into cells so that it can be translated into a fragment of a protein from a pathogen such as influenza or SARS-CoV-2.

In the new study, the MIT team sought to develop particles that can induce an effective immune response at a lower dose than the particles now used to deliver Covid-19 mRNA vaccines. That could not only reduce the cost per vaccine dose but also help lessen potential side effects, the researchers say.

LNPs typically consist of five elements: an ionizable lipid, cholesterol, a helper phospholipid, a polyethylene glycol lipid, and mRNA. In this study, the researchers focused on the ionizable lipid, which plays a key role in vaccine strength.

Based on their knowledge of chemical structures that might improve delivery efficiency, the researchers designed a library of new ionizable lipids. These contained cyclic structures, which can help enhance mRNA delivery, as well as chemical groups called esters, which the researchers believed could also help improve biodegradability.

The researchers then created and screened many combinations of these particle structures in mice to see which could most effectively deliver the gene for luciferase, a bioluminescent protein. Then, they took their top-performing particle and created a library of new variants, which they tested in another round of screening.

From these screens, the top LNP that emerged is one that the researchers called AMG1541. One key feature of these new LNPs is that they are more effective at overcoming a major barrier for delivery particles, known as endosomal escape. After LNPs enter cells, they are isolated in cellular compartments called endosomes, which they need to break out of to deliver their mRNA. The new particles did this more effectively than existing LNPs.

Another advantage of the new LNPs is that the ester groups in the tails make the particles degradable once they have delivered their cargo. This means they can be cleared from the body quickly, which the researchers believe could reduce side effects from the vaccine.

More powerful vaccines

To demonstrate the potential applications of the AMG1541 LNP, the researchers used it to deliver an mRNA influenza vaccine in mice. They compared this vaccine’s effectiveness to a flu vaccine made with a lipid called SM-102, which is FDA-approved and was used by Moderna in its Covid-19 vaccine.

Mice vaccinated with the new particles generated the same antibody response as mice vaccinated with the SM-102 particle, but only 1/100 of the dose was needed to generate that response, the researchers found.

“It’s almost a hundredfold lower dose, but you generate the same amount of antibodies, so that can significantly lower the dose. If it translates to humans, it should significantly lower the cost as well,” Rudra says.

Further experiments revealed that the new LNPs are better able to deliver their cargo to a critical type of immune cells called antigen-presenting cells. These cells chop up foreign antigens and display them on their surfaces, which signals other immune cells such as B and T cells to become activated against that antigen.

The new LNPs are also more likely to accumulate in the lymph nodes, where they encounter many more immune cells.

Using these particles to deliver mRNA flu vaccines could allow vaccine developers to better match the strains of flu that circulate each winter, the researchers say. “With traditional flu vaccines, they have to start being manufactured almost a year ahead of time,” Reed says. “With mRNA, you can start producing it much later in the season and get a more accurate guess of what the circulating strains are going to be, and it may help improve the efficacy of flu vaccines.”

The particles could also be adapted for vaccines for Covid-19, HIV, or any other infectious disease, the researchers say.

“We have found that they work much better than anything that has been reported so far. That’s why, for any intramuscular vaccines, we think that our LNP platforms could be used to develop vaccines for a number of diseases,” Gupta says.

The research was funded by Sanofi, the National Institutes of Health, the Marble Center for Cancer Nanomedicine, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Giving buildings an “MRI” to make them more energy-efficient and resilient

Fri, 11/07/2025 - 12:00am

Older buildings let thousands of dollars’ worth of energy go to waste each year through leaky roofs, old windows, and insufficient insulation. But even as building owners face mounting pressure to comply with stricter energy codes, making smart decisions about how to invest in efficiency is a major challenge.

Lamarr.AI, born in part from MIT research, is making the process of finding ways to improve the energy efficiency of buildings as easy as clicking a button. When customers order a building review, it triggers a coordinated symphony of drones, thermal and visible-range cameras, and artificial intelligence designed to identify problems and quantify the impact of potential upgrades. Lamarr.AI’s technology also assesses structural conditions, creates detailed 3D models of buildings, and recommends retrofits. The solution is already being used by leading organizations across facilities management as well as by architecture, engineering, and construction firms.

“We identify the root cause of the anomalies we find,” says CEO and co-founder Tarek Rakha PhD ’15. “Our platform doesn’t just say, ‘This is a hot spot and this is a cold spot.’ It specifies ‘This is infiltration or exfiltration. This is missing insulation. This is water intrusion.’ The detected anomalies are also mapped to a 3D model of the building, and there are deeper analytics, such as the cost of each retrofit and the return on investment.”

To date, the company estimates its platform has helped clients across health care, higher education, and multifamily housing avoid over $3 million in unnecessary construction and retrofit costs by recommending targeted interventions over costly full-system replacements, while improving energy performance and extending asset life. For building owners managing portfolios worth hundreds of millions of dollars, Lamarr.AI’s approach represents a fundamental shift from reactive maintenance to strategic asset management.

The founders, who also include MIT Professor John Fernández and Research Scientist Norhan Bayomi SM ’17, PhD ’21, are thrilled to see their technology accelerating the transition to more energy-efficient and higher-performing buildings.

“Reducing carbon emissions in buildings gets you the greatest return on investment in terms of climate interventions, but what has been needed are the technologies and tools to help the real estate and construction sectors make the right decisions in a timely and economical way,” Fernández says.

Automating building scans

Bayomi and Rakha completed their PhDs in the MIT Department of Architecture’s Building Technology Program. For her thesis, Bayomi developed technology to detect features of building exteriors and classify thermal anomalies through scans of buildings, with a specific focus on the impact of heat waves on low-income communities. Bayomi and her collaborators eventually deployed the system to detect air leaks as part of a partnership with a community in New York City.

After graduating from MIT, Rakha became an assistant professor at Syracuse University. In 2015, together with fellow Syracuse University Professor Senem Velipasalar, he began developing his concept for drone-based building analytics — an idea that later received support through a grant from New York State’s Department of Economic Development. In 2019, Bayomi and Fernández joined the project, and the team received a $1.8 million research award from the U.S. Department of Energy.

“The technology is like giving a building an MRI using drones, infrared imaging, visible light imaging, and proprietary AI that we developed through computer vision technology, along with large language models for report generation,” Rakha explains.

“When we started the research, we saw firsthand how vulnerable communities were suffering from inefficient buildings, but couldn’t afford comprehensive diagnostics,” Bayomi says. “We knew that if we could automate this process and reduce costs while improving accuracy, we’d unlock a massive market. Now we’re seeing demand from everyone, from municipal buildings to major institutional portfolios.”

Lamarr.AI was officially founded in 2021 to commercialize the technology, and the founders wasted no time tapping into MIT’s entrepreneurial ecosystem. First, they received a small seed grant from the MIT Sandbox Innovation Fund. In 2022, they won the MITdesignX prize and were semifinalists in the MIT $100K Entrepreneurship Competition. The founders named the company after Hedy Lamarr, the famous actress and inventor of a patented technology that became the basis for many modern secure communications.

Current methods for detecting air leaks in buildings utilize fan pressurizers or smoke. Contractors or building engineers may also spot-check buildings with handheld infrared cameras to manually identify temperature differences across individual walls, windows, and ductwork.

Lamarr.AI’s system can perform building inspections far more quickly. Building managers can order the company’s scans online and select when they’d like the drone to fly. Lamarr.AI partners with drone companies worldwide to fly off-the-shelf drones around buildings, providing them with flight plans and specifications for success. Images are then uploaded onto Lamarr.AI’s platform for automated analysis.

“As an example, a survey of a 180,000-square-foot building like the MIT Schwarzman College of Computing, which we scanned, produces around 2,000 images,” Fernández says. “For someone to go through those manually would take a couple of weeks. Our models autonomously analyze those images in a few seconds.”

After the analysis, Lamarr.AI’s platform generates a report that includes the suspected root cause of every weak point found, an estimated cost to correct that problem, and its estimated return on investment using advanced building energy simulations.

“We knew if we were able to quickly, inexpensively, and accurately survey the thermal envelope of buildings and understand their performance, we would be addressing a huge need in the real estate, building construction, and built environment sectors,” Fernández explains. “Thermal anomalies are a huge cause of unwanted heat loss, and more than 45 percent of construction defects are tied to envelope failures.”

The ability to operate at scale is especially attractive to building owners and operators, who often manage large portfolios of buildings across multiple campuses.

“We see Lamarr.AI becoming the premier solution for building portfolio diagnostics and prognosis across the globe, where every building can be equipped not just for the climate crisis, but also to minimize energy losses and be more efficient, safer, and sustainable,” Rakha says.

Building science for everyone

Lamarr.AI has worked with building operators across the U.S. as well as in Canada, the United Kingdom, and the United Arab Emirates.

In June, Lamarr.AI partnered with the City of Detroit, with support from Newlab and Michigan Central, to inspect three municipal buildings to identify areas for improvement. Across two of the buildings, the system identified more than 460 problems like insulation gaps and water leaks. The findings were presented in a report that also utilized energy simulations to demonstrate that upgrades, such as window replacements and targeted weatherization, could reduce HVAC energy use by up to 22 percent.

The entire process took a few days. The founders note that it was the first building inspection drone flight to utilize an off-site operator, an approach that further enhances the scalability of their platform. It also helps further reduce costs, which could make building scans available to a broader swath of people around the world.

“We’re democratizing access to very high-value building science expertise that previously cost tens of thousands per audit,” Bayomi says. “Our platform makes advanced diagnostics affordable enough for routine use, not just one-time assessments. The bigger vision is automated, regular building health monitoring that keeps facilities teams informed in real-time, enabling proactive decisions rather than reactive crisis management. When building intelligence becomes continuous and accessible, operators can optimize performance systematically rather than waiting for problems to emerge.”

Charting the future of AI, from safer answers to faster thinking

Thu, 11/06/2025 - 4:40pm

New tools and technologies are adopted when users perceive them as reliable, accessible, and an improvement over existing methods and workflows for the cost. Five PhD students from the inaugural class of the MIT-IBM Watson AI Lab Summer Program are utilizing state-of-the-art resources, alleviating AI pain points, and creating new features and capabilities to promote AI usefulness and deployment — from learning when to trust a model that predicts another’s accuracy to more effectively reasoning over knowledge bases. Together, the efforts from the students and their mentors form a through-line, where practical and technically rigorous research leads to more dependable and valuable models across domains.

Building probes, routers, new attention mechanisms, synthetic datasets, and program-synthesis pipelines, the students’ work spans safety, inference efficiency, multimodal data, and knowledge-grounded reasoning. Their techniques emphasize scaling and integration, with impact always in sight.

Learning to trust, and when

MIT math graduate student Andrey Bryutkin’s research prioritizes the trustworthiness of models. He seeks out internal structures within problems, such as equations governing a system and conservation laws, to understand how to leverage them to produce more dependable and robust solutions. Armed with this approach and working with the lab, Bryutkin developed a method to peer into the behavior of large language models (LLMs). Together with the lab’s Veronika Thost of IBM Research and Marzyeh Ghassemi — associate professor and the Germeshausen Career Development Professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems — Bryutkin explored the “uncertainty of uncertainty” of LLMs.

Classically, tiny feed-forward neural networks two-to-three layers deep, called probes, are trained alongside LLMs and employed to flag untrustworthy answers from the larger model to developers; however, these classifiers can also produce false negatives and only provide point estimates, which don’t offer much information about when the LLM is failing. Investigating safe/unsafe prompts and question-answer tasks, the MIT-IBM team used prompt-label pairs, as well as the hidden states like activation vectors and last tokens from an LLM, to measure gradient scores, sensitivity to prompts, and out-of-distribution data to determine how reliable the probe was and learn areas of data that are difficult to predict. Their method also helps identify potential labeling noise. This is a critical function, as the trustworthiness of AI systems depends entirely on the quality and accuracy of the labeled data they are built upon. More accurate and consistent probes are especially important for domains with critical data in applications like IBM’s Granite Guardian family of models.
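The probe idea described above can be sketched in a few lines. Everything below is illustrative (the activation vectors are synthetic stand-ins, and the sizes, labels, and training details are not the team's): a small feed-forward classifier is trained on a model's hidden states to flag answers likely to be untrustworthy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for LLM activation vectors of safe (label 0) and
# unsafe (label 1) prompts; a real probe consumes the model's hidden states.
d = 32
H = np.vstack([rng.normal(-1.0, 1.0, (50, d)), rng.normal(1.0, 1.0, (50, d))])
y = np.array([0.0] * 50 + [1.0] * 50)[:, None]

# A two-layer feed-forward probe, trained with plain gradient descent
# on binary cross-entropy.
W1 = rng.normal(0.0, 0.1, (d, 16))
W2 = rng.normal(0.0, 0.1, (16, 1))
lr = 1.0
for _ in range(300):
    z = np.tanh(H @ W1)                        # hidden layer
    pred = 1.0 / (1.0 + np.exp(-(z @ W2)))     # P(answer is untrustworthy)
    dlogit = (pred - y) / len(y)               # gradient of mean BCE wrt logits
    dW2 = z.T @ dlogit
    dW1 = H.T @ ((dlogit @ W2.T) * (1.0 - z ** 2))
    W1 -= lr * dW1
    W2 -= lr * dW2

acc = (((1.0 / (1.0 + np.exp(-(np.tanh(H @ W1) @ W2)))) > 0.5) == y).mean()
print(f"probe training accuracy: {acc:.2f}")
```

Because the probe only outputs a point estimate, the team's contribution is precisely to characterize when such a classifier itself can be trusted (gradient scores, prompt sensitivity, out-of-distribution inputs).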

Another way to ensure trustworthy responses to queries from an LLM is to augment them with external, trusted knowledge bases to eliminate hallucinations. For structured data, such as social media connections, financial transactions, or corporate databases, knowledge graphs (KGs) are natural fits; however, communications between the LLM and KGs often use fixed, multi-agent pipelines that are computationally inefficient and expensive. Addressing this, physics graduate student Jinyeop Song, along with lab researchers Yada Zhu of IBM Research and EECS Associate Professor Julian Shun, created a single-agent, multi-turn, reinforcement learning framework that streamlines this process. Here, the group designed an API server hosting the Freebase and Wikidata KGs, which consist of general web-based knowledge, and an LLM agent that issues targeted retrieval actions to fetch pertinent information from the server. Then, through continuous back-and-forth, the agent appends the gathered data from the KGs to the context and responds to the query. Crucially, the system uses reinforcement learning to train itself to deliver answers that strike a balance between accuracy and completeness. The framework pairs an API server with a single reinforcement learning agent to orchestrate data-grounded reasoning with improved accuracy, transparency, efficiency, and transferability.
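A toy sketch may help make the multi-turn loop concrete. The in-memory graph, the lookup action, and the fixed retrieval plan below are all invented for illustration; in the real system a reinforcement-learning-trained LLM chooses the retrievals against Freebase and Wikidata and composes the final answer.

```python
# A toy in-memory "knowledge graph": (subject, relation) -> object
KG = {
    ("MIT", "located_in"): "Cambridge",
    ("Cambridge", "state"): "Massachusetts",
}

def kg_lookup(subject, relation):
    """Stand-in for a targeted retrieval action against the KG server."""
    return KG.get((subject, relation))

def answer_with_kg(question, plan, max_turns=4):
    """Each turn: issue one retrieval and append the fact to the context.
    A real agent would decide the next action from the growing context."""
    context = []
    for subject, relation in plan[:max_turns]:
        fact = kg_lookup(subject, relation)
        if fact is not None:
            context.append((subject, relation, fact))
    # Toy answer rule: return the object of the last retrieved fact.
    return context[-1][2] if context else None

# "In which state is MIT?" requires chaining two hops through the graph.
plan = [("MIT", "located_in"), ("Cambridge", "state")]
print(answer_with_kg("In which state is MIT?", plan))  # Massachusetts
```

The point of the single-agent design is that this retrieve-append-decide loop lives inside one model rather than being split across a fixed pipeline of specialized agents.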

Spending computation wisely

The timeliness and completeness of a model’s response carry similar weight to the importance of its accuracy. This is especially true for handling long input texts and those where elements, like the subject of a story, evolve over time, so EECS graduate student Songlin Yang is re-engineering what models can handle at each step of inference. Focusing on transformer limitations, like those in LLMs, the lab’s Rameswar Panda of IBM Research and Yoon Kim, the NBX Professor and associate professor in EECS, joined Yang to develop next-generation language model architectures beyond transformers.

Transformers face two key limitations: high computational complexity in long-sequence modeling due to the softmax attention mechanism, and limited expressivity resulting from the weak inductive bias of RoPE (rotary positional encoding). This means that as the input length doubles, the computational cost quadruples. RoPE allows transformers to understand the sequence order of tokens (i.e., words); however, it does not do a good job capturing internal state changes over time, like variable values, and is limited to the sequence lengths seen during training.

To address this, the MIT-IBM team explored theoretically grounded yet hardware-efficient algorithms. As an alternative to softmax attention, they adopted linear attention, reducing the quadratic complexity that limits the feasible sequence length. They also investigated hybrid architectures that combine softmax and linear attention to strike a better balance between computational efficiency and performance.
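The complexity difference is easy to see in code. This minimal sketch (the feature map and shapes are illustrative, not the team's design) contrasts softmax attention, which materializes an n-by-n score matrix, with linear attention, which exploits associativity so that the per-step cost no longer grows quadratically with sequence length.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # O(n^2) in sequence length n: builds the full n x n score matrix.
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

def linear_attention(Q, K, V):
    # O(n) in sequence length: with a positive feature map phi,
    # phi(Q) @ (phi(K).T @ V) reassociates the product so the only
    # matrix that must be stored is d x d, independent of n.
    phi = lambda X: np.maximum(X, 0.0) + 1e-6  # one possible feature map
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                   # d x d summary of keys and values
    Z = Qp @ Kp.sum(axis=0)         # per-query normalizer
    return (Qp @ KV) / Z[:, None]

n, d = 8, 4
rng = np.random.default_rng(1)
Q, K, V = rng.normal(size=(3, n, d))
out_soft = softmax_attention(Q, K, V)
out_lin = linear_attention(Q, K, V)
print(out_soft.shape, out_lin.shape)  # both (8, 4)
```

The hybrid architectures mentioned above interleave layers of both kinds, trading some of softmax attention's expressivity for linear attention's cheaper long-sequence handling.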

To increase expressivity, they replaced RoPE with a dynamic reflective positional encoding based on the Householder transform. This approach enables richer positional interactions for deeper understanding of sequential information, while maintaining fast and efficient computation. The MIT-IBM team’s advancement reduces the need for transformers to break problems into many steps, instead enabling them to handle more complex subproblems with fewer inference tokens.
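The mathematical object behind this encoding can be illustrated directly; the snippet below shows only the generic Householder transform H = I - 2vvᵀ (for unit v), not the team's actual encoding. The relevant properties are that H is orthogonal, so applying it preserves vector norms just as rotary encodings do, while the reflection direction v can be made dynamic, i.e., computed from the data rather than fixed by position alone.

```python
import numpy as np

def householder(v):
    # H = I - 2 v v^T / (v^T v): an orthogonal reflection about the
    # hyperplane perpendicular to v.
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

H = householder(np.array([1.0, 2.0, 3.0]))
x = np.array([0.5, -1.0, 2.0])

# Reflections are orthogonal and involutive (their own inverse),
# so they preserve lengths of the vectors they transform.
print(np.allclose(H @ H.T, np.eye(3)))                        # True
print(np.allclose(np.linalg.norm(H @ x), np.linalg.norm(x)))  # True
```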

Visions anew

Visual data contain multitudes that the human brain can quickly parse, internalize, and then imitate. Using vision-language models (VLMs), two graduate students are exploring ways to do this through code.

Over the past two summers and under the advisement of Aude Oliva, MIT director of the MIT-IBM Watson AI Lab and a senior research scientist in the Computer Science and Artificial Intelligence Laboratory; and IBM Research’s Rogerio Feris, Dan Gutfreund, and Leonid Karlinsky (now at Xero), Jovana Kondic of EECS has explored visual document understanding, specifically charts. These contain elements, such as data points, legends, and axis labels, that require optical character recognition and numerical reasoning, which models still struggle with. To improve performance on such tasks, Kondic’s group set out to create a large, open-source, synthetic chart dataset from code that could be used for training and benchmarking.

With their prototype, ChartGen, the researchers created a pipeline that passes seed chart images through a VLM, which is prompted to read the chart and generate a Python script that was likely used to create the chart in the first place. The LLM component of the framework then iteratively augments the code from many charts to ultimately produce over 200,000 unique pairs of charts and their codes, spanning nearly 30 chart types, as well as supporting data and annotation like descriptions and question-answer pairs about the charts. The team is further expanding their dataset, helping to enable critical multimodal understanding to data visualizations for enterprise applications like financial and scientific reports, blogs, and more.
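The chart-plus-code pairing at the heart of such a dataset can be shown with a toy generator. The function below is hypothetical and vastly simplified (ChartGen's actual pipeline is driven by a VLM reading seed images and an LLM augmenting the recovered scripts), but it illustrates the format of one synthetic pair: a plotting script, its chart type, and a question-answer annotation derived from the same underlying data.

```python
import random

def make_chart_pair(seed):
    # Illustrative only: emit a plotting script plus annotations that a
    # model could be trained or benchmarked against.
    rng = random.Random(seed)
    labels = ["Q1", "Q2", "Q3", "Q4"]
    values = [rng.randint(10, 100) for _ in labels]
    code = (
        "import matplotlib.pyplot as plt\n"
        f"plt.bar({labels!r}, {values!r})\n"
        "plt.title('Quarterly revenue')\n"
        "plt.savefig('chart.png')\n"
    )
    # The annotation is computed from the data, so it is guaranteed to
    # agree with the rendered chart.
    top = labels[values.index(max(values))]
    qa = {"question": "Which quarter has the highest bar?", "answer": top}
    return {"code": code, "chart_type": "bar", "annotations": qa}

pair = make_chart_pair(0)
print(pair["annotations"]["answer"] in {"Q1", "Q2", "Q3", "Q4"})  # True
```

Because both the image and the annotations come from the same script, the dataset's ground truth is exact, which is what makes code-generated charts attractive for benchmarking.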

Instead of charts, EECS graduate student Leonardo Hernandez Cano has his eyes on digital design, specifically visual texture generation for CAD applications, with the goal of discovering efficient ways to enable these capabilities in VLMs. Teaming up with the lab groups led by Armando Solar-Lezama, EECS professor and Distinguished Professor of Computing in the MIT Schwarzman College of Computing, and IBM Research’s Nathan Fulton, Hernandez Cano created a program synthesis system that learns to refine code on its own. The system starts with a texture description given by a user in the form of an image. It then generates an initial Python program, which produces visual textures, and iteratively refines the code with the goal of finding a program that produces a texture matching the target description, learning to search for new programs from the data the system itself produces. Through these refinements, the novel program can create visualizations with the desired luminosity, color, iridescence, etc., mimicking real materials.
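The refine-by-search loop can be sketched abstractly. Everything below is a toy stand-in (the "program" is just a parameter vector, the score is a squared-error match to a made-up target, and the refinement is random local search rather than learned code edits), but it captures the structure: propose a candidate, render it, keep it only if it matches the target description better.

```python
import random

TARGET = [0.8, 0.2, 0.5]  # desired (luminosity, color, iridescence); invented

def render(program):
    # Toy: the program *is* its rendered texture features. The real system
    # executes a Python program to produce an image.
    return program

def score(program):
    # Higher is better: negative squared distance to the target description.
    return -sum((a - b) ** 2 for a, b in zip(render(program), TARGET))

def refine(program, rng, step=0.1):
    # One candidate edit: a small random perturbation, clamped to [0, 1].
    return [max(0.0, min(1.0, p + rng.uniform(-step, step))) for p in program]

rng = random.Random(0)
best = [0.5, 0.5, 0.5]  # initial program proposed from the description
for _ in range(500):    # iterative refinement: keep only improvements
    candidate = refine(best, rng)
    if score(candidate) > score(best):
        best = candidate
print(f"final match score: {score(best):.4f}")
```

The actual system replaces the random perturbation with learned program edits, using the programs and scores it generates along the way as training data for its own search.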

When viewed together, these projects, and the people behind them, are making a cohesive push toward more robust and practical artificial intelligence. By tackling the core challenges of reliability, efficiency, and multimodal reasoning, the work paves the way for AI systems that are not only more powerful, but also more dependable and cost-effective, for real-world enterprise and scientific applications.

Where climate meets community

Thu, 11/06/2025 - 4:20pm

The MIT Living Climate Futures Lab (LCFL) centers the human dimensions of climate change, bringing together expertise from across MIT to address one of the world’s biggest challenges.

The LCFL has three main goals: “addressing how climate change plays out in everyday life, focusing on community-oriented partnerships, and encouraging cross-disciplinary conversations around climate change on campus,” says Chris Walley, the SHASS Dean’s Distinguished Professor of Anthropology and head of MIT’s Anthropology Section. “We think this is a crucial direction for MIT and will make a strong statement about the kind of human-centered, interdisciplinary work needed to tackle this issue.”

Walley is faculty lead of LCFL, working in collaboration with a group of 19 faculty colleagues and researchers. The LCFL began to coalesce in 2022 when MIT faculty and affiliates already working with communities dealing with climate change issues organized a symposium, inviting urban farmers, place-based environmental groups, and others to MIT. Since then, the lab has consolidated the efforts of faculty and affiliates representing disciplines from across the MIT School of Humanities, Arts, and Social Sciences (SHASS) and the Institute.

Amah Edoh, a cultural anthropologist and managing director of LCFL, says the lab’s collaboration with community organizations and development of experiential learning classes aims to bridge the gap that can exist between the classroom and the real world.

“Sometimes we can find ourselves in a bubble where we’re only in conversation with other people from within academia or our own field of practice. There can be a disconnect between what students are learning somewhat abstractly and the ‘real world’ experience of the issues,” Edoh says. “By taking up topics from the multidimensional approach that experiential learning makes possible, students learn to take complexity as a given, which can help to foster more critical thinking in them, and inform their future practice in profound ways.”

Edoh points out that the effects of climate change play out in a huge array of areas: health, food security, livelihoods, housing, and governance structures, to name a few.

“The Living Climate Futures Lab supports MIT researchers in developing the long-term collaborations with community partners that are essential to adequately identifying and responding to the challenges that climate change creates in everyday life,” she says.

Manduhai Buyandelger, professor of anthropology and one of the participants in LCFL, developed the class 21A.S01 (Anthro-Engineering: Decarbonization at the Million-Person Scale), which has in turn sparked related classes. The goal is “to merge technological innovation with people-centered environments.” Working closely with residents of Ulaanbaatar, Mongolia, Buyandelger and collaborator Mike Short, the Class of 1941 Professor of Nuclear Science and Engineering, helped develop a molten salt heat bank as a reusable energy source.

“My work with Mike Short on energy and alternative heating in Mongolia helps to cultivate a new generation of creative and socially minded engineers who prioritize people in thinking about technical solutions,” Buyandelger says, adding, “In our course, we collaborate on creating interdisciplinary methods where we fuse anthropological methods with engineering innovations so that we can expand and deepen our approach to mitigate climate change.”

Iselle Barrios ’25 says 21A.S01 was her first anthropology course. She traveled to Mongolia and was able to experience firsthand all the ways in which the air pollution and heating problem was much larger and more complicated than it seemed from MIT’s Cambridge, Massachusetts, campus.

“It was my first exposure to anthropological and STS critiques of science and engineering, as well as international development,” says Barrios, a chemical engineering major. “It fundamentally reshaped the way I see the role of technology and engineers in the broader social context in which they operate. It really helped me learn to think about problems in a more holistic and people-centered way.”

LCFL participant Alvin Harvey, a postdoc in the MIT Media Lab’s Space Enabled Research Group and a citizen of the Navajo Nation, works to incorporate traditional knowledge in engineering and science to “support global stewardship of earth and space ecologies.”

“I envision the Living Climate Futures Lab as a collaborative space that can be an igniter and sustainer of relationships, especially between MIT and those who have generational and cultural ties to land and space that is being impacted by climate change,” Harvey says. “I think everyone in our lab understands that protecting our climate future is a collective journey.”

Kate Brown, the Thomas M. Siebel Distinguished Professor in History of Science, is also a participant in LCFL. Her current interest is urban food sovereignty movements, in which working-class city dwellers used waste to create “the most productive agriculture in recorded human history,” Brown says. While pursuing that work, Brown has developed relationships and worked with urban farmers in Mansfield, Ohio, as well as in Washington and Amsterdam.

Brown and Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, teach a class called STS.055 (Living Dangerously: Environmental Programs from 1900 to Today) that presents the environmental problems and solutions of the 20th century, and how some “solutions” created more problems over time. Brown also plans to teach a class on the history of global food production once she gets access to a small plot of land on campus for a lab site.

“The Living Climate Futures Lab gives us the structure and flexibility to work with communities that are struggling to find solutions to the problems being created by the climate crisis,” says Brown.

Earlier this year, the MIT Human Insight Collaborative (MITHIC) selected the Living Climate Futures Lab as its inaugural Faculty-Driven Initiative (FDI), which comes with a $500,000 seed grant.

MIT Provost Anantha Chandrakasan, co-chair of MITHIC, says the LCFL exemplifies how we can confront the climate crisis by working in true partnership with the communities most affected.

“By combining scientific insight with cultural understanding and lived experience, this initiative brings a deeper dimension to MIT’s climate efforts — one grounded in collaboration, empathy, and real-world impact,” says Chandrakasan.

Agustín Rayo, the Kenan Sahin Dean of SHASS and co-chair of MITHIC, says the LCFL is precisely the type of interdisciplinary collaboration the FDI program was designed to support.

“By bringing together expertise from across MIT, I am confident the Living Climate Futures Lab will make significant contributions to the Institute’s effort to address the climate crisis,” says Rayo.

Walley says the seed grant will support a second symposium in 2026 to be co-designed with community groups, a suite of experiential learning classes, workshops, a speaker series, and other programming. Throughout this development phase, the lab will solicit donor support to build it into an ongoing MIT initiative and a leader in the response to climate change.

MIT physicists observe key evidence of unconventional superconductivity in magic-angle graphene

Thu, 11/06/2025 - 2:00pm

Superconductors are like the express trains in a metro system. Any electricity that “boards” a superconducting material can zip through it without stopping and losing energy along the way. As such, superconductors are extremely energy efficient, and are used today to power a variety of applications, from MRI machines to particle accelerators.

But these “conventional” superconductors are somewhat limited in terms of uses because they must be brought down to ultra-low temperatures using elaborate cooling systems to keep them in their superconducting state. If superconductors could work at higher, room-like temperatures, they would enable a new world of technologies, from zero-energy-loss power cables and electricity grids to practical quantum computing systems. And so scientists at MIT and elsewhere are studying “unconventional” superconductors — materials that exhibit superconductivity in ways that are different from, and potentially more promising than, today’s superconductors.

In a promising breakthrough, MIT physicists have today reported their observation of new key evidence of unconventional superconductivity in “magic-angle” twisted tri-layer graphene (MATTG) — a material that is made by stacking three atomically thin sheets of graphene at a specific angle, or twist, that then allows exotic properties to emerge.

MATTG has shown indirect hints of unconventional superconductivity and other strange electronic behavior in the past. The new discovery, reported in the journal Science, offers the most direct confirmation yet that the material exhibits unconventional superconductivity.

In particular, the team was able to measure MATTG’s superconducting gap — a property that describes how resilient a material’s superconducting state is at given temperatures. They found that MATTG’s superconducting gap looks very different from that of the typical superconductor, meaning that the mechanism by which the material becomes superconductive must also be different, and unconventional.

“There are many different mechanisms that can lead to superconductivity in materials,” says study co-lead author Shuwen Sun, a graduate student in MIT’s Department of Physics. “The superconducting gap gives us a clue to what kind of mechanism can lead to things like room-temperature superconductors that will eventually benefit human society.”

The researchers made their discovery using a new experimental platform that allows them to essentially “watch” the superconducting gap in real time, as superconductivity emerges in two-dimensional materials. They plan to apply the platform to further probe MATTG, and to map the superconducting gap in other 2D materials — an effort that could reveal promising candidates for future technologies.

“Understanding one unconventional superconductor very well may trigger our understanding of the rest,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT and the senior author of the study. “This understanding may guide the design of superconductors that work at room temperature, for example, which is sort of the Holy Grail of the entire field.”

The study’s other co-lead author is Jeong Min Park PhD ’24; Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan are also co-authors.

The ties that bind

Graphene is a material that comprises a single layer of carbon atoms that are linked in a hexagonal pattern resembling chicken wire. A sheet of graphene can be isolated by carefully exfoliating an atom-thin flake from a block of graphite (the same material as pencil lead). In the 2010s, theorists predicted that if two graphene layers were stacked at a very special angle, the resulting structure should be capable of exotic electronic behavior.

In 2018, Jarillo-Herrero and his colleagues became the first to produce magic-angle graphene in experiments, and to observe some of its extraordinary properties. That discovery sprouted an entire new field known as “twistronics”: the study of atomically thin, precisely twisted materials. Jarillo-Herrero’s group has since studied other configurations of magic-angle graphene with two, three, and more layers, as well as stacked and twisted structures of other two-dimensional materials. Their work, along with that of other groups, has revealed signatures of unconventional superconductivity in some structures.

Superconductivity is a state that a material can exhibit under certain conditions (usually at very low temperatures). When a material is a superconductor, any electrons that pass through can pair up, rather than repelling and scattering away. When they couple up in what is known as “Cooper pairs,” the electrons can glide through a material without friction, instead of knocking against each other and flying away as lost energy. This pairing up of electrons is what enables superconductivity, though the way in which they are bound can vary.

“In conventional superconductors, the electrons in these pairs are very far away from each other, and weakly bound,” says Park. “But in magic-angle graphene, we could already see signatures that these pairs are very tightly bound, almost like a molecule. There were hints that there is something very different about this material.”

Tunneling through

In their new study, Jarillo-Herrero and his colleagues aimed to directly observe and confirm unconventional superconductivity in a magic-angle graphene structure. To do so, they would have to measure the material’s superconducting gap.

“When a material becomes superconducting, electrons move together as pairs rather than individually, and there’s an energy ‘gap’ that reflects how they’re bound,” Park explains. “The shape and symmetry of that gap tells us the underlying nature of the superconductivity.”

Scientists have measured the superconducting gap in materials using specialized techniques, such as tunneling spectroscopy. The technique takes advantage of a quantum mechanical property known as “tunneling.” At the quantum scale, an electron behaves not just as a particle, but also as a wave; as such, its wave-like properties enable an electron to travel, or “tunnel,” through a material, as if it could move through walls.

Such tunneling spectroscopy measurements can give an idea of how easy it is for an electron to tunnel into a material, and in some sense, how tightly packed and bound the electrons in the material are. When performed in a superconducting state, it can reflect the properties of the superconducting gap. However, tunneling spectroscopy alone cannot always tell whether the material is, in fact, in a superconducting state. Directly linking a tunneling signal to a genuine superconducting gap is both essential and experimentally challenging.

In their new work, Park and her colleagues developed an experimental platform that combines electron tunneling with electrical transport — a technique that is used to gauge a material’s superconductivity, by sending current through and continuously measuring its electrical resistance (zero resistance signals that a material is in a superconducting state).

The team applied the new platform to measure the superconducting gap in MATTG. By combining tunneling and transport measurements in the same device, they could unambiguously identify the superconducting tunneling gap, one that appeared only when the material exhibited zero electrical resistance, which is the hallmark of superconductivity. They then tracked how this gap evolved under varying temperature and magnetic fields. Remarkably, the gap displayed a distinct V-shaped profile, which was clearly different from the flat and uniform shape of conventional superconductors.

This V shape reflects a certain unconventional mechanism by which electrons in MATTG pair up to superconduct. Exactly what that mechanism is remains unknown. But the fact that the shape of the superconducting gap in MATTG stands out from that of the typical superconductor provides key evidence that the material is an unconventional superconductor.
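For readers who want the textbook version of this distinction (the comparison below is standard superconductivity theory, not a result from the paper): a conventional, fully gapped superconductor has a quasiparticle density of states that vanishes entirely below the gap edge, giving the flat-bottomed profile, while a gap with nodes lets the density of states rise linearly with energy, producing the V shape seen in tunneling spectra.

```latex
% Conventional (fully gapped) BCS density of states: flat-bottomed gap
\frac{N_s(E)}{N_0} =
\begin{cases}
  \dfrac{|E|}{\sqrt{E^2 - \Delta^2}}, & |E| > \Delta \\[6pt]
  0, & |E| \le \Delta
\end{cases}
\qquad
% Nodal gap (d-wave-like): states fill in linearly at low energy,
% giving the V-shaped tunneling profile
\frac{N_d(E)}{N_0} \propto \frac{|E|}{\Delta_0}
\quad \text{for } |E| \ll \Delta_0
```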

In conventional superconductors, electrons pair up through vibrations of the surrounding atomic lattice, which effectively jostle the particles together. But Park suspects that a different mechanism could be at work in MATTG.

“In this magic-angle graphene system, there are theories explaining that the pairing likely arises from strong electronic interactions rather than lattice vibrations,” she posits. “That means electrons themselves help each other pair up, forming a superconducting state with special symmetry.”

Going forward, the team will test other two-dimensional twisted structures and materials using the new experimental platform.

“This allows us to both identify and study the underlying electronic structures of superconductivity and other quantum phases as they happen, within the same sample,” Park says. “This direct view can reveal how electrons pair and compete with other states, paving the way to design and control new superconductors and quantum materials that could one day power more efficient technologies or quantum computers.”

This research was supported, in part, by the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the MIT/MTL Samsung Semiconductor Research Fund, the Sagol WIS-MIT Bridge Program, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Ramon Areces Foundation.

Q&A: How folk ballads explain the world

Thu, 11/06/2025 - 12:00am

Traditional folk ballads are one of our most enduring forms of cultural expression. They can also be lost to society, forgotten over time. That’s why, in the mid-1700s, when a Scottish woman named Anna Gordon was found to know three dozen ancient ballads, collectors tried to document all of these songs — a volume of work that became a kind of sensation in its time, a celebrated piece of cultural heritage.

That story is told in MIT Professor Emerita Ruth Perry’s latest book, “The Ballad World of Anna Gordon, Mrs. Brown of Falkland,” published this year by Oxford University Press. In it, Perry details what we know about the ways folk ballads were created and transmitted; how Anna Gordon came to know so many; the social and political climate in which they existed; and why these songs meant so much in Scotland and elsewhere in the Atlantic world. Indeed, Scottish immigrants brought their music to the U.S., among other places.

MIT News sat down with Perry, who is MIT’s Ann Fetter Friedlaender Professor of Humanities, Emerita, to talk about the book.

Q: This is fascinating topic with a lot of threads woven together. To you, what is the book about?

A: It’s really three books. It’s a book about Anna Gordon and her family, a very interesting middle-class family living in Aberdeen in the middle of the 18th century. And it’s a book about balladry and what a ballad is — a story told in song, and ballads are the oldest known poetry in English. Some of them are gorgeous. Third, it’s a book about the relationship between Scotland and England, the effects of the Jacobite uprising in 1745, social attitudes, how people lived, what they ate, education — it’s very much about 18th century Scotland.

Q: Okay, who was Anna Gordon, and what was her family milieu?

A: Anna’s father, Thomas Gordon, was a professor at King’s College, now the University of Aberdeen. He was a professor of humanity, which in those days meant Greek and Latin, and was well-connected to the intellectual community of the Scottish Enlightenment. A friend of his, an Edinburgh writer, lawyer, and judge, William Tytler, who heard cases all over the country and always stayed with Thomas Gordon and his family when he came to Aberdeen, was intensely interested in Scottish traditional music. He found out that Anna Gordon had learned all these ballads as a child, from her mother and aunt and some servants. Tytler asked if she would write them down, both tunes and words.

That was the earliest manuscript of ballads ever collected from a named person in Scotland. Once it was in existence, all kinds of people wanted to see it; it got spread throughout the country. In my book, I detail much of the excitement over this manuscript.

The thing about Anna’s ballads is: It’s not just that there are more of them, and more complete versions that are fuller, with more verses. They’re more beautiful. The language is more archaic, and there are marvelous touches. It is thought, and I agree, that Anna Gordon was an oral poet. As she remembered ballads and reproduced them, she improved on them. She had a great memory for the best bits and would improve other parts.

Q: How did it come about that at this time, a woman such as Anna Gordon would be the keeper and creator of cultural knowledge?

A: Women were more literate in Scotland than elsewhere. The Scottish Parliament passed an act in 1695 requiring every parish in the Church of Scotland to have not only a minister, but a teacher. Scotland was the most literate country in Europe in the 18th century. And those parish schoolmasters taught local kids. The parents did have to pay a few pennies for their classes, and, true, more parents paid for sons than for daughters. But there were daughters who took classes. And there were no opportunities like this in England at the time. Education was better for women in Scotland. So was their legal position under common law in Scotland. When the Act of Union was passed in 1707, Scotland retained its own legal system, which had more extensive rights for women than in England.

Q: I know it’s complex, but generally, why was this?

A: Scotland was a much more democratic country, culture, and society than England, period. When Elizabeth I died in 1603, the person who inherited the throne was the King of Scotland James VI, who went to England with his court — which included the Scottish aristocracy. So, the Scottish aristocracy ended up in London. I’m sure they went back to their hunting lodges for the hunting season, but they didn’t live there [in Scotland] and they didn’t set the tone of the country. It was democratized because all that was left were a lot of lawyers and ministers and teachers.

Q: What is distinctive about the ballads in this corpus of songs Anna Gordon knew and documented?

A: A common word about ballads is that there’s a high body count, and they’re all about people dying and killing each other. But that is not true of Anna Gordon’s ballads. They’re about younger women triumphing in the world, often against older women, which is interesting, and even more often against fathers. The ballads are about family discord, inheritance, love, fidelity, lack of fidelity, betrayal. There are ballads about fighting and bloodshed, but not so many. They’re about the human condition. And they have interesting qualities because they’re oral poetry, composed and remembered and changed and transmitted from mouth to ear and not written down. There are repetitions and parallelisms, and other hallmarks of oral poetry. The sort of thing you learned when you read Homer.

Q: So is this a form of culture generated in opposition to those controlling society? Or at least, one that’s popular regardless of what some elites thought?

A: It is in Scotland, because of the enmity between Scotland and England. We’re talking about the period of Great Britain when England is trying to gobble up Scotland and some Scottish folks don’t want that. They want to retain their Scottishness. And the ballad was a Scottish tradition that was not influenced by England. That’s one reason balladry was so important in 18th-century Scotland. Everybody was into balladry partly because it was a unique part of Scottish culture.

Q: To that point, it seems like an unexpected convergence, for the time, to see a more middle-class woman like Anna Gordon transmitting ballads that had often been created and sung by people of all classes.

A: Yes. At first I thought I was just working on a biography of Anna Gordon. But it’s fascinating how the culture was transmitted, how intellectually rich that society was, how much there is to examine in Scottish culture and society of the 18th century. Today people may watch “Outlander,” but they still wouldn’t know anything about this!

MIT researchers invent new human brain model to enable disease research, drug discovery

Wed, 11/05/2025 - 5:15pm

A new 3D human brain tissue platform developed by MIT researchers is the first to integrate all major brain cell types, including neurons, glial cells, and the vasculature, into a single culture. 

Grown from individual donors’ induced pluripotent stem cells, these models — dubbed Multicellular Integrated Brains (miBrains) — replicate key features and functions of human brain tissue, are readily customizable through gene editing, and can be produced in quantities that support large-scale research.

Although each unit is smaller than a dime, miBrains may be worth a great deal to researchers and drug developers who need more complex living lab models to better understand brain biology and treat diseases.

“The miBrain is the only in vitro system that contains all six major cell types that are present in the human brain,” says Li-Huei Tsai, Picower Professor, director of The Picower Institute for Learning and Memory, and a senior author of the open-access study describing miBrains, published Oct. 17 in the Proceedings of the National Academy of Sciences.

“In their first application, miBrains enabled us to discover how one of the most common genetic markers for Alzheimer’s disease alters cells’ interactions to produce pathology,” she adds.

Tsai’s co-senior authors are Robert Langer, David H. Koch (1962) Institute Professor, and Joel Blanchard, associate professor in the Icahn School of Medicine at Mt. Sinai in New York, and a former Tsai Laboratory postdoc. The study is led by Alice Stanton, former postdoc in the Langer and Tsai labs and now assistant professor at Harvard Medical School and Massachusetts General Hospital, and Adele Bubnys, a former Tsai lab postdoc and current senior scientist at Arbor Biotechnologies.

Benefits from two kinds of models

The more closely a model recapitulates the brain’s complexity, the better suited it is for extrapolating how human biology works and how potential therapies may affect patients. In the brain, neurons interact with each other and with various helper cells, all of which are arranged in a three-dimensional tissue environment that includes blood vessels and other components. All of these interactions are necessary for health, and any of them can contribute to disease.

Simple cultures of just one or a few cell types can be created in quantity relatively easily and quickly, but they cannot tell researchers about the myriad interactions that are essential to understanding health or disease. Animal models embody the brain’s complexity, but can be difficult and expensive to maintain, slow to yield results, and different enough from humans to yield occasionally divergent results.

MiBrains combine advantages from each type of model, retaining much of the accessibility and speed of lab-cultured cell lines while allowing researchers to obtain results that more closely reflect the complex biology of human brain tissue. Moreover, they are derived from individual patients, making them personalized to an individual’s genome. In the model, the six cell types self-assemble into functioning units, including blood vessels, immune defenses, and nerve signal conduction, among other features. Researchers ensured that miBrains also possess a blood-brain barrier capable of gatekeeping which substances may enter the brain, including most traditional drugs.

“The miBrain is very exciting as a scientific achievement,” says Langer. “Recent trends toward minimizing the use of animal models in drug development could make systems like this one increasingly important tools for discovering and developing new human drug targets.”

Two ideal blends for functional brain models

Designing a model integrating so many cell types presented challenges that required many years to overcome. Among the most crucial was identifying a substrate able to provide physical structure for cells and support their viability. The research team drew inspiration from the environment that surrounds cells in natural tissue, the extracellular matrix (ECM). The miBrain’s hydrogel-based “neuromatrix” mimics the brain’s ECM with a custom blend of polysaccharides, proteoglycans, and basement membrane that provide a scaffold for all the brain’s major cell types while promoting the development of functional neurons.

A second blend would also prove critical: the proportions of cell types that would result in functional neurovascular units. The actual ratios of cell types in the brain have been a matter of debate for decades, with even the more advanced methodologies providing only rough brushstrokes for guidance — for example, estimates of 45 to 75 percent of all cells for oligodendroglia, or 19 to 40 percent for astrocytes.

The researchers developed the six cell types from patient-donated induced pluripotent stem cells, verifying that each cultured cell type closely recreated naturally-occurring brain cells. Then, the team experimentally iterated until they hit on a balance of cell types that resulted in functional, properly structured neurovascular units. This laborious process would turn out to be an advantageous feature of miBrains: because cell types are cultured separately, they can each be genetically edited so that the resulting model is tailored to replicate specific health and disease states.

“Its highly modular design sets the miBrain apart, offering precise control over cellular inputs, genetic backgrounds, and sensors — useful features for applications such as disease modeling and drug testing,” says Stanton.

Alzheimer’s discovery using miBrain

To test miBrain’s capabilities, the researchers embarked on a study of the gene variant APOE4, which is the strongest genetic predictor for the development of Alzheimer’s disease. Although one brain cell type, astrocytes, is known to be a primary producer of the APOE protein, the role that astrocytes carrying the APOE4 variant play in disease pathology is poorly understood.

MiBrains were well-suited to the task for two reasons. First of all, they integrate astrocytes with the brain’s other cell types, so that their natural interactions with other cells can be mimicked. Second, because the platform allowed the team to integrate cell types individually, APOE4 astrocytes could be studied in cultures where all other cell types carried APOE3, a gene variant that does not increase Alzheimer’s risk. This enabled the researchers to isolate the contribution APOE4 astrocytes make to pathology.

In one experiment, the researchers examined APOE4 astrocytes cultured alone, versus ones in APOE4 miBrains. They found that only in the miBrains did the astrocytes express many measures of immune reactivity associated with Alzheimer’s disease, suggesting the multicellular environment contributes to that state.

The researchers also tracked the Alzheimer’s-associated proteins amyloid and phosphorylated tau, and found that all-APOE4 miBrains accumulated them, whereas all-APOE3 miBrains did not, as expected. However, they found that APOE3 miBrains containing APOE4 astrocytes still exhibited amyloid and tau accumulation.

Then the team dug deeper into how APOE4 astrocytes’ interactions with other cell types might lead to their contribution to disease pathology. Prior studies have implicated molecular cross-talk with the brain’s microglia immune cells. Notably, when the researchers cultured APOE4 miBrains without microglia, their production of phosphorylated tau was significantly reduced. When the researchers dosed APOE4 miBrains with culture media from astrocytes and microglia combined, phosphorylated tau increased, whereas when they dosed them with media from cultures of astrocytes or microglia alone, the tau production did not increase. The results therefore provided new evidence that molecular cross-talk between microglia and astrocytes is indeed required for phosphorylated tau pathology.

In the future, the research team plans to add new features to miBrains to more closely model characteristics of working brains, such as leveraging microfluidics to add flow through blood vessels, or single-cell RNA sequencing methods to improve profiling of neurons.

Researchers expect that miBrains could advance research discoveries and treatment modalities for Alzheimer’s disease and beyond. 

“Given its sophistication and modularity, there are limitless future directions,” says Stanton. “Among them, we would like to harness it to gain new insights into disease targets, advanced readouts of therapeutic efficacy, and optimization of drug delivery vehicles.”

“I’m most excited by the possibility to create individualized miBrains for different individuals,” adds Tsai. “This promises to pave the way for developing personalized medicine.”

Funding for the study came from the BT Charitable Foundation, Freedom Together Foundation, the Robert A. and Renee E. Belfer Family, Lester A. Gimpelson, Eduardo Eurnekian, Kathleen and Miguel Octavio, David B. Emmes, the Halis Family, the Picower Institute, and an anonymous donor.

MIT study finds targets for a new tuberculosis vaccine

Wed, 11/05/2025 - 2:00pm

A large-scale screen of tuberculosis proteins has revealed several possible antigens that could be developed as a new vaccine for TB, the world’s deadliest infectious disease.

In the new study, a team of MIT biological engineers was able to identify a handful of immunogenic peptides, out of more than 4,000 bacterial proteins, that appear to stimulate a strong response from a type of T cells responsible for orchestrating immune cells’ response to infection.

There is currently only one vaccine for tuberculosis, known as BCG, which is a weakened version of a bacterium that causes TB in cows. This vaccine is widely administered in some parts of the world, but it offers poor protection to adults against pulmonary TB. Worldwide, tuberculosis kills more than 1 million people every year.

“There’s still a huge TB burden globally that we’d like to make an impact on,” says Bryan Bryson, an associate professor of biological engineering at MIT and a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard. “What we’ve tried to do in this initial TB vaccine is focus on antigens that we saw frequently in our screen and also appear to stimulate a response in T cells from people with prior TB infection.”

Bryson and Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering at MIT, and a member of the Koch Institute for Integrative Cancer Research, are the senior authors of the study, which appears today in Science Translational Medicine. Owen Leddy PhD ’25 is the paper’s lead author.

Identifying vaccine targets

Since the BCG vaccine was developed more than 100 years ago, no other TB vaccines have been approved for use. Mycobacterium tuberculosis produces more than 4,000 proteins, which makes it a daunting challenge to pick out proteins that might elicit a strong immune response if used as a vaccine.

In the new study, Bryson and his students set out to narrow the field of candidates by identifying TB proteins presented on the surface of infected human cells. When an immune cell such as a phagocyte is infected with Mycobacterium tuberculosis, some of the bacterial proteins get chopped into fragments called peptides, which are then displayed on the surface of the cell by MHC proteins. These MHC-peptide complexes act as a signal that can activate T cells.

MHCs, or major histocompatibility complexes, come in two types known as class I and class II. Class I MHCs activate killer T cells, while class II MHCs stimulate helper T cells. In human cells, there are three genes that can encode MHC-II proteins, and each of these comes in hundreds of variants. This means that any two people can have a very different repertoire of MHC-II molecules, which present different antigens.

“Instead of looking at all of those 4,000 TB proteins, we wanted to ask which of those proteins from TB actually end up being displayed to the rest of the immune system via MHC,” Bryson says. “If we could just answer that question, then we could design vaccines to match that.”

To try to answer the question, the researchers infected human phagocytes with Mycobacterium tuberculosis. After three days, they extracted MHC-peptide complexes from the cell surfaces, then identified the peptides using mass spectrometry.

Focusing on peptides bound to MHC-II, the researchers found 27 TB peptides, from 13 proteins, that appeared most often in the infected cells. Then, they further tested those peptides by exposing them to T cells donated by people who had previously been infected with TB.

They found that 24 of these peptides did elicit a T cell response in at least some of the samples. None of the proteins from which these peptides came worked for every single donor, but Bryson believes that a vaccine using a combination of these peptides would likely work for most people.

“In a perfect world, if you were trying to design a vaccine, you would pick one protein and that protein would be presented across every donor. It should work for every person,” Bryson says. “However, using our measurements, we’ve not yet found a TB protein that covers every donor we’ve analyzed thus far.”

Enter mRNA vaccines

Among the vaccine candidates that the researchers identified are several peptides from a class of proteins called type 7 secretion systems (T7SSs). Some of these peptides also turned up in an earlier study from Bryson’s lab on MHC-I.

“Type 7 secretion system substrates are a very small sliver of the overall TB proteome, but when you look at MHC class I or MHC class II, it seems as though the cells are preferentially presenting these,” Bryson says.

Two of the best-known of these proteins, EsxA and EsxB, are secreted by bacteria to help them escape from the membranes that phagocytes use to envelop them within the cell. Neither protein can break through the membrane on its own, but when joined together to form a heterodimer, they can poke holes, which also allow other T7SS proteins to escape.

To evaluate whether the proteins they identified could make a good vaccine, the researchers created mRNA vaccines encoding two protein sequences — EsxB and EsxG. The researchers designed several versions of the vaccine, which were targeted to different compartments within the cells.

The researchers then delivered this vaccine into human phagocytes, where they found that vaccines that targeted cell lysosomes — organelles that break down molecules — were the most effective. These vaccines induced 1,000 times more MHC presentation of TB peptides than any of the others.

They later found that the presentation was even higher if they added EsxA to the vaccine, because it allows the formation of the heterodimers that can poke through the lysosomal membrane.

The researchers currently have a mix of eight proteins that they believe could offer protection against TB for most people, but they are continuing to test the combination with blood samples from people around the world. They also hope to run additional studies to explore how much protection this vaccine offers in animal models. Tests in humans are likely several years away.

The research was funded by the MIT Center for Precision Cancer Research at the Koch Institute, the National Institutes of Health, the National Institute of Environmental Health Sciences, and the Frederick National Laboratory for Cancer Research.
