MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT-affiliated physicists win McMillan Award for discovery of exotic electronic state

Thu, 10/02/2025 - 4:15pm

Last year, MIT physicists reported in the journal Nature that electrons can become fractions of themselves in graphene, an atomically thin form of carbon. This exotic electronic state, called the fractional quantum anomalous Hall effect (FQAHE), could enable more robust forms of quantum computing.

Now two young MIT-affiliated physicists involved in the discovery of FQAHE have been named the 2025 recipients of the McMillan Award from the University of Illinois for their work. Jiaqi Cai and Zhengguang Lu won the award “for the discovery of fractional anomalous quantum hall physics in 2D moiré materials.”

Cai is currently a Pappalardo Fellow at MIT working with Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, and collaborating with several other labs at MIT including Long Ju, the Lawrence and Sarah W. Biedenharn Career Development Associate Professor in the MIT Department of Physics. He discovered FQAHE while working in the laboratory of Professor Xiaodong Xu at the University of Washington.

Lu discovered FQAHE while working as a postdoc in Ju's lab and has since become an assistant professor at Florida State University.

The two independent discoveries were made in the same year.
 
“The McMillan award is the highest honor that a young condensed matter physicist can receive,” says Ju. “My colleagues and I in the Condensed Matter Experiment and the Condensed Matter Theory Group are very proud of Zhengguang and Jiaqi.” 

Ju and Jarillo-Herrero are both also affiliated with the Materials Research Laboratory. 

In addition to a monetary prize and a plaque, Lu and Cai will give a colloquium on their work at the University of Illinois this fall.

Martin Trust Center for MIT Entrepreneurship welcomes Ana Bakshi as new executive director

Thu, 10/02/2025 - 3:55pm

The Martin Trust Center for MIT Entrepreneurship announced that Ana Bakshi has been named its new executive director. Bakshi stepped into the role at the start of the fall semester and will collaborate closely with the managing director, Ethernet Inventors Professor of the Practice Bill Aulet, to take the center to the next level.

“Ana is uniquely qualified for this role. She brings a deep and highly decorated background in entrepreneurship education at the highest levels, along with exceptional leadership and execution skills,” says Aulet. “Since I first met her 12 years ago, I have been extraordinarily impressed with her commitment to create the highest-quality centers and institutes for entrepreneurs, first at King’s College London and then at Oxford University. This ideal skill set is compounded by her experience in leading high-growth companies, most recently as the chief operating officer in an award-winning AI startup. I’m honored and thrilled to welcome her to MIT — her knowledge and energy will greatly elevate our community, and the field as a whole.”

A rapidly changing environment creates an imperative to raise the bar for entrepreneurship education

The need to raise the bar for innovation-driven entrepreneurship education is both timely and urgent. The pace of change is accelerating, especially with artificial intelligence, generating new problems to solve while exacerbating existing ones in climate, health care, manufacturing, the future of work, education, and economic stratification, to name but a few. The world needs more entrepreneurs, and better entrepreneurs.

Bakshi joins the Trust Center at an exciting time in its history. MIT is at the forefront of helping to develop people and systems that can turn challenges into opportunities using an entrepreneurial mindset, skill set, and way of operating. Bakshi’s deep experience and success will be key to unlocking this opportunity. “I am truly honored to join the Trust Center at such a pivotal moment,” Bakshi says. “In an era defined by both extraordinary challenges and extraordinary possibilities, the future will be built by those bold enough to try, and MIT will be at the forefront of this.”

Translating academic research into real-world impact

Bakshi has a decade of experience building two world-class entrepreneurship centers from the ground up, serving as founding director first at King’s College London and then at Oxford. In these roles, she was responsible for all aspects of the centers, including fundraising.

While at Oxford, she developed a data-driven approach to evaluating the efficacy of the centers’ programs, documented in a 61-page study, “Universities: Drivers of Prosperity and Economic Recovery.”

As the director of the Oxford Foundry (Oxford’s cross-university entrepreneurship center), Bakshi focused on investing in ambitious founders and talent. The center was backed by global entrepreneurial leaders such as the founders of LinkedIn and Twitter, with corporate partnerships including Santander and EY, and investment funds including Oxford Science Enterprises (OSE). As of 2021, the startups supported by the Foundry and King’s College had raised over $500 million and created nearly 3,000 jobs, spanning diverse industries including health tech, climate tech, cybersecurity, fintech, and deep tech spinouts focused on world-class science.

In addition, she built the highly successful and economically sustainable Entrepreneurship School, Oxford’s first online learning platform for entrepreneurship.

Bakshi comes to MIT after nearly two years in the private sector at Quench.ai, a rapidly growing artificial intelligence startup with offices in London and New York City. She was the company’s first C-suite employee, serving as chief operating officer (COO) and now senior advisor, helping companies unlock value from their knowledge through AI.

Right place, right time, right person moving at the speed of MIT AI

Entrepreneurship has been at the core of MIT’s identity and mission since the Institute’s inception; it was turbocharged in the 1940s with the creation and operation of the RadLab, and it continues to this day.

"MIT has been a leader in entrepreneurship for decades. It’s now the third leg of the school, alongside teaching and research,” says Mark Gorenberg ’76, chair of the MIT Corporation. “I’m excited to have such a transformative leader as Ana join the Trust Center team, and I look forward to the impact she will have on the students and the wider academic community at MIT as we enter an exciting new phase in company building, driven by the accelerated use of AI and emerging technologies."

“At a time when we are rethinking management education, entrepreneurship as an interdisciplinary field to create impact is even more important to our future. To have such an experienced and accomplished leader in academia and the startup world, especially in AI, reinforces our commitment to be a global leader in this field,” says Richard M. Locke, John C Head III Dean at the MIT Sloan School of Management.

“MIT is a unique hub of research, innovation, and entrepreneurship, and that special mix creates massive positive impact that ripples around the world,” says Frederic Kerrest, MIT Sloan MBA ’09, co-founder of Okta, and member of the MIT Corporation. “In a rapidly changing, AI-driven world, Ana has the skills and experience to further accelerate MIT’s global leadership in entrepreneurship education to ensure that our students launch and scale the next generation of groundbreaking, innovation-driven startups.”

Prior to her time at Oxford and King’s College, Bakshi served as an elected councilor representing 6,000-plus constituents, held roles in international nongovernmental organizations, and led product execution strategy at MAHI, an award-winning family-led craft sauce startup, available in thousands of major retailers across the U.K. Bakshi sits on the advisory council for conservation charity Save the Elephants, leveraging AI-driven and scientific approaches to reduce human-wildlife conflict and protect elephant populations. Her work and impact have been featured across FT, Forbes, BBC, The Times, and The Hill. Bakshi was twice honored as a Top 50 Woman in Tech (U.K.), most recently in 2025.

“As AI changes how we learn, how we build, and how we scale, my focus will be on helping MIT expand its support for phenomenal talent — students and faculty — with the skills, ecosystem, and backing to turn knowledge into impact,” Bakshi says.

35 years of impact to date

The Trust Center was founded in 1990 by the late Professor Edward Roberts and serves all MIT students across all schools and all disciplines. It supports 60-plus courses and extensive extracurricular programming, including the delta v academic accelerator. Much of the work of the center is generated through the Disciplined Entrepreneurship methodology, which offers a proven approach to create new ventures. Over a thousand schools and other organizations across the world use Disciplined Entrepreneurship books and resources to teach entrepreneurship. 

Now, with AI-powered tools like Orbit and JetPack, the Trust Center is changing the way that entrepreneurship is taught and practiced. Its mission is to produce the next generation of innovation-driven entrepreneurs while advancing the field more broadly to make it both rigorous and practical. This approach — leveraging a proven, evidence-based methodology, emerging technology, and the ingenuity of MIT students while responding to industry shifts — is similar to how MIT established the field of chemical engineering in the 1890s. The desired result in both cases was a comprehensive, integrated, scalable, rigorous, and practical curriculum to create a new workforce to address the nation’s and world’s greatest challenges.

Lincoln Lab unveils the most powerful AI supercomputer at any US university

Thu, 10/02/2025 - 3:30pm

The new TX-Generative AI Next (TX-GAIN) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) is the most powerful AI supercomputer at any U.S. university. With its recent ranking from TOP500, which biannually publishes a list of the top supercomputers in various categories, TX-GAIN joins the ranks of other powerful systems at the LLSC, all supporting research and development at Lincoln Laboratory and across the MIT campus.

"TX-GAIN will enable our researchers to achieve scientific and engineering breakthroughs. The system will play a large role in supporting generative AI, physical simulation, and data analysis across all research areas," says Lincoln Laboratory Fellow Jeremy Kepner, who heads the LLSC. 

The LLSC is a key resource for accelerating innovation at Lincoln Laboratory. Thousands of researchers tap into the LLSC to analyze data, train models, and run simulations for federally funded research projects. The supercomputers have been used, for example, to simulate billions of aircraft encounters to develop collision-avoidance systems for the Federal Aviation Administration, and to train models in the complex tasks of autonomous navigation for the Department of Defense. Over the years, LLSC capabilities have been essential to numerous award-winning technologies, including those that have improved airline safety, prevented the spread of new diseases, and aided in hurricane responses.

As its name suggests, TX-GAIN is especially equipped for developing and applying generative AI. Whereas traditional AI focuses on categorization tasks, like identifying whether a photo depicts a dog or cat, generative AI produces entirely new outputs. Kepner describes it as a mathematical combination of interpolation (filling in the gaps between known data points) and extrapolation (extending data beyond known points). Today, generative AI is widely known for its use of large language models to create human-like responses to user prompts. 

At Lincoln Laboratory, teams are applying generative AI to various domains beyond large language models. They are using the technology, for instance, to evaluate radar signatures, supplement weather data where coverage is missing, root out anomalies in network traffic, and explore chemical interactions to design new medicines and materials.

To enable such intense computations, TX-GAIN is powered by more than 600 NVIDIA graphics processing unit accelerators specially designed for AI operations, in addition to traditional high-performance computing hardware. With a peak performance of two AI exaflops (two quintillion floating-point operations per second), TX-GAIN is the top AI system at a university, and in the Northeast. Since TX-GAIN came online this summer, researchers have taken notice. 

"TX-GAIN is allowing us to model not only significantly more protein interactions than ever before, but also much larger proteins with more atoms. This new computational capability is a game-changer for protein characterization efforts in biological defense," says Rafael Jaimes, a researcher in Lincoln Laboratory's Counter–Weapons of Mass Destruction Systems Group.

The LLSC's focus on interactive supercomputing makes it especially useful to researchers. For years, the LLSC has pioneered software that lets users access its powerful systems without needing to be experts in configuring algorithms for parallel processing.  

"The LLSC has always tried to make supercomputing feel like working on your laptop," Kepner says. "The amount of data and the sophistication of analysis methods needed to be competitive today are well beyond what can be done on a laptop. But with our user-friendly approach, people can run their model and get answers quickly from their workspace."

Beyond supporting programs solely at Lincoln Laboratory, TX-GAIN is enhancing research collaborations with MIT's campus. Such collaborations include the Haystack Observatory, the Center for Quantum Engineering, Beaver Works, and the Department of the Air Force–MIT AI Accelerator. The latter initiative is rapidly prototyping, scaling, and applying AI technologies for the U.S. Air Force and Space Force, optimizing flight scheduling for global operations as one fielded example.

The LLSC systems are housed in an energy-efficient data center and facility in Holyoke, Massachusetts. Research staff in the LLSC are also tackling the immense energy needs of AI and leading research into various power-reduction methods. One software tool they developed can reduce the energy of training an AI model by as much as 80 percent.

"The LLSC provides the capabilities needed to do leading-edge research in a cost-effective and energy-efficient manner," Kepner says.

All of the supercomputers at the LLSC use the "TX" nomenclature in homage to Lincoln Laboratory's Transistorized Experimental Computer Zero (TX-0) of 1956. TX-0 was one of the world's first transistor-based machines, and its 1958 successor, TX-2, is storied for its role in pioneering human-computer interaction and AI. With TX-GAIN, the LLSC continues this legacy.

A simple formula could guide the design of faster-charging, longer-lasting batteries

Thu, 10/02/2025 - 2:00pm

At the heart of all lithium-ion batteries is a simple reaction: Lithium ions dissolved in an electrolyte solution “intercalate” or insert themselves into a solid electrode during battery discharge. When they de-intercalate and return to the electrolyte, the battery charges.

This process happens thousands of times throughout the life of a battery. The amount of power that the battery can generate, and how quickly it can charge, depend on how fast this reaction happens. However, little is known about the exact mechanism of this reaction, or the factors that control its rate.

In a new study, MIT researchers have measured lithium intercalation rates in a variety of different battery materials and used that data to develop a new model of how the reaction is controlled. Their model suggests that lithium intercalation is governed by a process known as coupled ion-electron transfer, in which an electron is transferred to the electrode along with a lithium ion.

Insights gleaned from this model could guide the design of more powerful, faster-charging lithium-ion batteries, the researchers say.

“What we hope is enabled by this work is to get the reactions to be faster and more controlled, which can speed up charging and discharging,” says Martin Bazant, the Chevron Professor of Chemical Engineering and a professor of mathematics at MIT.

The new model may also help scientists understand why tweaking electrodes and electrolytes in certain ways leads to increased energy, power, and battery life — a process that has mainly been done by trial and error.

“This is one of these papers where now we began to unify the observations of reaction rates that we see with different materials and interfaces, in one theory of coupled electron and ion transfer for intercalation, building on previous work on reaction rates,” says Yang Shao-Horn, the J.R. East Professor of Engineering at MIT and a professor of mechanical engineering, materials science and engineering, and chemistry.

Shao-Horn and Bazant are the senior authors of the paper, which appears today in Science. The paper’s lead authors are Yirui Zhang PhD ’22, who is now an assistant professor at Rice University; Dimitrios Fraggedakis PhD ’21, who is now an assistant professor at Princeton University; Tao Gao, a former MIT postdoc who is now an assistant professor at the University of Utah; and MIT graduate student Shakul Pathak.

Modeling lithium flow

For many decades, scientists have hypothesized that the rate of lithium intercalation at a lithium-ion battery electrode is determined by how quickly lithium ions can diffuse from the electrolyte into the electrode. This reaction, they believed, was governed by a model known as the Butler-Volmer equation, originally developed almost a century ago to describe the rate of charge transfer during an electrochemical reaction.
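For reference, the Butler-Volmer equation has the following textbook form (stated here from standard electrochemistry, not reproduced from the paper):

```latex
% Butler-Volmer relation for current density at an electrode
i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
             - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```

Here $i_0$ is the exchange current density, $\eta$ the overpotential, $\alpha_a$ and $\alpha_c$ the anodic and cathodic transfer coefficients, $F$ Faraday's constant, $R$ the gas constant, and $T$ the temperature. The reaction rate grows exponentially with overpotential, which is the behavior the MIT team's measurements failed to reproduce.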

However, when researchers have tried to measure lithium intercalation rates, the measurements they obtained were not always consistent with the rates predicted by the Butler-Volmer equation. Furthermore, obtaining consistent measurements across labs has been difficult, with different research teams reporting measurements for the same reaction that varied by a factor of up to 1 billion.

In the new study, the MIT team measured lithium intercalation rates using an electrochemical technique that involves applying repeated, short bursts of voltage to an electrode. They generated these measurements for more than 50 combinations of electrolytes and electrodes, including lithium nickel manganese cobalt oxide, which is commonly used in electric vehicle batteries, and lithium cobalt oxide, which is found in the batteries that power most cell phones, laptops, and other portable electronics.

For these materials, the measured rates are much lower than has previously been reported, and they do not correspond to what would be predicted by the traditional Butler-Volmer model.

The researchers used the data to come up with an alternative theory of how lithium intercalation occurs at the surface of an electrode. This theory is based on the assumption that in order for a lithium ion to enter an electrode, an electron from the electrolyte solution must be transferred to the electrode at the same time.

“The electrochemical step is not lithium insertion, which you might think is the main thing, but it’s actually electron transfer to reduce the solid material that is hosting the lithium,” Bazant says. “Lithium is intercalated at the same time that the electron is transferred, and they facilitate one another.”

This coupled ion-electron transfer (CIET) lowers the energy barrier that must be overcome for the intercalation reaction to occur, making it more likely to happen. The mathematical framework of CIET allowed the researchers to make reaction rate predictions, which were validated by their experiments and substantially different from those made by the Butler-Volmer model.

Faster charging

In this study, the researchers also showed that they could tune intercalation rates by changing the composition of the electrolyte. For example, swapping in different anions can lower the amount of energy needed to transfer the lithium and electron, making the process more efficient.

“Tuning the intercalation kinetics by changing electrolytes offers great opportunities to enhance the reaction rates, alter electrode designs, and therefore enhance the battery power and energy,” Shao-Horn says.

Shao-Horn’s lab and their collaborators have been using automated experiments to make and test thousands of different electrolytes; the resulting data are used to develop machine-learning models that predict electrolytes with enhanced functions.

The findings could also help researchers to design batteries that would charge faster, by speeding up the lithium intercalation reaction. Another goal is reducing the side reactions that can cause battery degradation, in which electrons are picked off the electrode and lost to the electrolyte.

“If you want to do that rationally, not just by trial and error, you need some kind of theoretical framework to know what are the important material parameters that you can play with,” Bazant says. “That’s what this paper tries to provide.”

The research was funded by Shell International Exploration and Production and the Toyota Research Institute through the D3BATT Center for Data-Driven Design of Rechargeable Batteries.

Accounting for uncertainty to help engineers design complex systems

Thu, 10/02/2025 - 12:00am

Designing a complex electronic device like a delivery drone involves juggling many choices, such as selecting motors and batteries that minimize cost while maximizing the payload the drone can carry or the distance it can travel.

Unraveling that conundrum is no easy task, but what happens if the designers don’t know the exact specifications of each battery and motor? On top of that, the real-world performance of these components will likely be affected by unpredictable factors, like changing weather along the drone’s route.

MIT researchers developed a new framework that helps engineers design complex systems in a way that explicitly accounts for such uncertainty. The framework allows them to model the performance tradeoffs of a device with many interconnected parts, each of which could behave in unpredictable ways.

Their technique captures the likelihood of many outcomes and tradeoffs, giving designers more information than many existing approaches, which typically model only best-case and worst-case scenarios.

Ultimately, this framework could help engineers develop complex systems like autonomous vehicles, commercial aircraft, or even regional transportation networks that are more robust and reliable in the face of real-world unpredictability.

“In practice, the components in a device never behave exactly like you think they will. If someone has a sensor whose performance is uncertain, and an algorithm that is uncertain, and the design of a robot that is also uncertain, now they have a way to mix all these uncertainties together so they can come up with a better design,” says Gioele Zardini, the Rudge and Nancy Allen Assistant Professor of Civil and Environmental Engineering at MIT, a principal investigator in the Laboratory for Information and Decision Systems (LIDS), an affiliate faculty with the Institute for Data, Systems, and Society (IDSS), and senior author of a paper on this framework.

Zardini is joined on the paper by lead author Yujun Huang, an MIT graduate student; and Marius Furter, a graduate student at the University of Zurich. The research will be presented at the IEEE Conference on Decision and Control.

Considering uncertainty

The Zardini Group studies co-design, a method for designing systems made of many interconnected components, from robots to regional transportation networks.

The co-design language breaks a complex problem into a series of boxes, each representing one component, that can be combined in different ways to maximize outcomes or minimize costs. This allows engineers to solve complex problems in a feasible amount of time.

In prior work, the researchers modeled each co-design component without considering uncertainty. For instance, the performance of each sensor the designers could choose for a drone was fixed.

But engineers often don’t know the exact performance specifications of each sensor, and even if they do, it is unlikely the sensor will perfectly follow its spec sheet. At the same time, they don’t know how each sensor will behave once integrated into a complex device, or how performance will be affected by unpredictable factors like weather.

“With our method, even if you are unsure what the specifications of your sensor will be, you can still design the robot to maximize the outcome you care about,” says Furter.

To accomplish this, the researchers incorporated this notion of uncertainty into an existing framework based on category theory.

Using some mathematical tricks, they simplified the problem into a more general structure. This allows them to use the tools of category theory to solve co-design problems in a way that considers a range of uncertain outcomes.

By reformulating the problem, the researchers can capture how multiple design choices affect one another even when their individual performance is uncertain.

This approach is also simpler than many existing tools that typically require extensive domain expertise. With their plug-and-play system, one can rearrange the components in the system without violating any mathematical constraints.

And because no specific domain expertise is required, the framework could be used by a multidisciplinary team where each member designs one component of a larger system.

“Designing an entire UAV isn’t feasible for just one person, but designing a component of a UAV is. By providing the framework for how these components work together in a way that considers uncertainty, we’ve made it easier for people to evaluate the performance of the entire UAV system,” Huang says.

More detailed information

The researchers used this new approach to choose perception systems and batteries for a drone that would maximize its payload while minimizing its lifetime cost and weight.

While each perception system may offer a different detection accuracy under varying weather conditions, the designer doesn’t know exactly how its performance will fluctuate. This new system allows the designer to take these uncertainties into consideration when thinking about the drone’s overall performance.

And unlike other approaches, their framework reveals distinct advantages of each battery technology.

For instance, their results show that at lower payloads, nickel-metal hydride batteries provide the lowest expected lifetime cost. This insight would be impossible to fully capture without accounting for uncertainty, Zardini says.

While another method might only be able to show the best-case and worst-case performance scenarios of lithium polymer batteries, their framework gives the user more detailed information.

For example, it shows that if the drone’s payload is 1,750 grams, there is a 12.8 percent chance the battery design would be infeasible.
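An infeasibility probability of this kind can be illustrated with a toy Monte Carlo sketch. This is not the authors' category-theoretic framework, and every component specification below is hypothetical; it only shows how sampling uncertain component performance yields a probability that a design fails its requirements.

```python
import random

random.seed(0)

PAYLOAD_G = 1750  # hypothetical payload requirement, grams

def design_is_feasible():
    # Hypothetical uncertain component specs, drawn fresh each trial:
    battery_wh = random.gauss(100, 10)    # battery capacity, watt-hours
    max_lift_g = random.gauss(2000, 200)  # motor lift capacity, grams
    # Crude assumed model: energy needed scales with payload
    required_wh = 0.05 * PAYLOAD_G
    return max_lift_g >= PAYLOAD_G and battery_wh >= required_wh

trials = 100_000
infeasible = sum(not design_is_feasible() for _ in range(trials))
p_infeasible = infeasible / trials
print(f"Estimated infeasibility probability: {p_infeasible:.3f}")
```

With enough samples, the estimate converges to the joint probability that either component falls short; the paper's framework computes such tradeoffs compositionally rather than by brute-force sampling.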

“Our system provides the tradeoffs, and then the user can reason about the design,” he adds.

In the future, the researchers want to improve the computational efficiency of their problem-solving algorithms. They also want to extend this approach to situations where a system is designed by multiple parties that are both collaborative and competitive, like a transportation network in which rail companies operate using the same infrastructure.

“As the complexity of systems grows and involves more disparate components, we need a formal framework in which to design these systems. This paper presents a way to compose large systems from modular components, understand design trade-offs, and importantly do so with a notion of uncertainty. This creates an opportunity to formalize the design of large-scale systems with learning-enabled components,” says Aaron Ames, the Bren Professor of Mechanical and Civil Engineering, Control and Dynamical Systems, and Aerospace at Caltech, who was not involved with this research.

MIT OpenCourseWare is “a living testament to the nobility of open, unbounded learning”

Wed, 10/01/2025 - 4:30pm

Mostafa Fawzy became interested in physics in high school. It was the “elegance and paradox” of quantum theory that got his attention and led to his studies at the undergraduate and graduate level. But even with a solid foundation of coursework and supportive mentors, Fawzy wanted more. MIT Open Learning’s OpenCourseWare was just the thing he was looking for.  

Now a doctoral candidate in atomic physics at Alexandria University and an assistant lecturer of physics at Alamein International University in Egypt, Fawzy reflects on how MIT OpenCourseWare bolstered his learning early in his graduate studies in 2019.  

Part of MIT Open Learning, OpenCourseWare offers free, online, open educational resources from more than 2,500 courses that span the MIT undergraduate and graduate curriculum. Fawzy was looking for advanced resources to supplement his research in quantum mechanics and theoretical physics, and he was immediately struck by the quality, accessibility, and breadth of MIT’s resources. 

“OpenCourseWare was transformative in deepening my understanding of advanced physics,” Fawzy says. “I found the structured lectures and assignments in quantum physics particularly valuable. They enhanced both my theoretical insight and practical problem-solving skills — skills I later applied in research on atomic systems influenced by magnetic fields and plasma environments.”  

He completed educational resources including Quantum Physics I and Quantum Physics II, calling them “dense and mathematically sophisticated.” He met the challenge by engaging with the content in different ways: first, by simply listening to lectures, then by taking detailed notes, and finally by working through problem sets. Although initially he struggled to keep up, this methodical approach paid off, he says.

Fawzy is now in the final stages of his doctoral research on high-precision atomic calculations under extreme conditions. While in graduate school, he has published eight peer-reviewed international research papers, making him one of the most prolific doctoral researchers in physics working in Egypt currently. He served as an ambassador for the United Nations International Youth Conference (IYC), and he was nominated for both the African Presidential Leadership Program and the Davisson–Germer Prize in Atomic or Surface Physics, a prestigious annual prize offered by the American Physical Society.  

He is grateful to his undergraduate mentors, professors M. Sakr and T. Bahy of Alexandria University, as well as to MIT OpenCourseWare, calling it a “steadfast companion through countless solitary nights of study, a beacon in times when formal resources were scarce, and a living testament to the nobility of open, unbounded learning.”  

Recognizing the power of mentorship and teaching, Fawzy serves as an academic mentor with the African Academy of Sciences, supporting early-career researchers across the continent in theoretical and atomic physics.  

“Many of these mentees lack access to advanced academic resources,” he explains. “I regularly incorporate OpenCourseWare into our mentorship sessions, using it as a foundational teaching and reference tool. It’s an equalizer, providing the same high-caliber content to students regardless of geographical or institutional limitations.” 

As he looks toward the future, Fawzy has big plans, influenced by MIT. 

“I aspire to establish a regional center for excellence in atomic and plasma physics, blending cutting-edge research with open-access education in the Global South,” he says. 

As he continues his research and teaching, he also hopes to influence science policy and contribute to international partnerships that shine the spotlight on research and science in emerging nations.  

Along the way, he says, “OpenCourseWare remains a cornerstone resource that I will return to again and again.”  

Fawzy says he’s also interested in MIT Open Learning resources in computational physics and energy and sustainability. He’s following MIT’s Energy Initiative, calling it increasingly relevant to his current work and future plans.  

Fawzy is a proponent of open learning and a testament to its power. 

“The intellectual seeds sown by Open Learning resources such as MIT OpenCourseWare have flourished within me, shaping my identity as a physicist and affirming my deep belief in the transformative power of knowledge shared freely, without barriers,” he says. 

Concrete “battery” developed at MIT now packs 10 times the power

Wed, 10/01/2025 - 4:25pm

Concrete already builds our world, and now it’s one step closer to powering it, too. Made by combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, electron-conducting carbon concrete (ec3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy. In other words, the concrete around us could one day double as giant “batteries.”

As MIT researchers report in a new PNAS paper, optimized electrolytes and manufacturing processes have increased the energy storage capacity of the latest ec3 supercapacitors by an order of magnitude. In 2023, storing enough energy to meet the daily needs of the average home would have required about 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement. Now, with the improved electrolyte, that same task can be achieved with about 5 cubic meters, the volume of a typical basement wall.

“A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration. Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?” asks Admir Masic, lead author of the new study, MIT Electron-Conducting Carbon-Cement-Based Materials Hub (EC³ Hub) co-director, and associate professor of civil and environmental engineering (CEE) at MIT.

The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. Using focused ion beams for the sequential removal of thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the team across the EC³ Hub and MIT Concrete Sustainability Hub was able to reconstruct the conductive nanonetwork at the highest resolution yet. This approach allowed the team to discover that the network is essentially a fractal-like “web” that surrounds ec3 pores, allowing the electrolyte to infiltrate and current to flow through the system. 

“Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.

Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.

The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3 — about the size of a refrigerator — can store over 2 kilowatt-hours of energy. That’s about enough to power an actual refrigerator for a day.
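The volume figures quoted above can be checked with quick arithmetic. The sketch below takes the reported ~2 kilowatt-hours per cubic meter and the “order of magnitude” improvement at face value; the household daily-need figure is not stated in the article and is inferred from those numbers.

```python
# Back-of-the-envelope check of the ec3 storage figures reported above.
# Assumption: the "average home" daily need is inferred, not stated.

energy_density_new = 2.0   # kWh per cubic meter (latest ec3)
improvement_factor = 10    # "an order of magnitude" since 2023
energy_density_2023 = energy_density_new / improvement_factor  # ~0.2 kWh/m^3

volume_2023 = 45.0  # cubic meters needed in 2023 (about a basement's worth)
daily_need = volume_2023 * energy_density_2023  # implied household need, kWh

# Volume needed with the improved electrolyte:
volume_new = daily_need / energy_density_new

print(f"Implied daily household need: {daily_need:.0f} kWh")
print(f"Volume needed today: {volume_new:.1f} m^3")  # ~4.5, i.e. "about 5"
```

The two independently reported numbers (45 cubic meters then, about 5 now) are consistent with a tenfold gain in energy density.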

While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements — from slabs and walls to domes and vaults — and last as long as the structure itself.

“The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.

Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light.

However, something unique happened when the load on the arch increased: the light flickered. This is likely due to the way stress impacts electrical contacts or the distribution of charges. “There may be a kind of self-monitoring capacity here. If we think of an ec3 arch at architectural scale, its output may fluctuate when it’s impacted by a stressor like high winds. We may be able to use this as a signal of when and to what extent a structure is stressed, or monitor its overall health in real time,” envisions Masic.

The latest developments in ec3 technology bring it a step closer to real-world scalability. It’s already been used to heat sidewalk slabs in Sapporo, Japan, due to its thermally conductive properties, representing a potential alternative to salting. “With these higher energy densities and demonstrated value across a broader application space, we now have a powerful and flexible tool that can help us address a wide range of persistent energy challenges,” explains Stefaniuk. “One of our biggest motivations was to help enable the renewable energy transition. Solar power, for example, has come a long way in terms of efficiency. However, it can only generate power when there’s enough sunlight. So, the question becomes: How do you meet your energy needs at night, or on cloudy days?”

Franz-Josef Ulm, EC³ Hub co-director and CEE professor, continues the thread: “The answer is that you need a way to store and release energy. This has usually meant a battery, which often relies on scarce or harmful materials. We believe that ec3 is a viable substitute, letting our buildings and infrastructure meet our energy storage needs.” The team is working toward applications like parking spaces and roads that could charge electric vehicles, as well as homes that can operate fully off the grid.

“What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”

Palladium filters could enable cheaper, more efficient generation of hydrogen fuel

Wed, 10/01/2025 - 2:00pm

Palladium is one of the keys to jump-starting a hydrogen-based energy economy. The silvery metal is a natural gatekeeper against every gas except hydrogen, which it readily lets through. For its exceptional selectivity, palladium is considered one of the most effective materials at filtering gas mixtures to produce pure hydrogen.

Today, palladium-based membranes are used at commercial scale to provide pure hydrogen for semiconductor manufacturing, food processing, and fertilizer production, among other applications in which the membranes operate at modest temperatures. If palladium membranes get much hotter than around 800 kelvins, they can break down.

Now, MIT engineers have developed a new palladium membrane that remains resilient at much higher temperatures. Rather than being made as a continuous film, as most membranes are, the new design is made from palladium that is deposited as “plugs” into the pores of an underlying supporting material. At high temperatures, the snug-fitting plugs remain stable and continue separating out hydrogen, rather than degrading as a surface film would.

The thermally stable design opens opportunities for membranes to be used in hydrogen-fuel-generating technologies such as compact steam methane reforming and ammonia cracking — technologies that are designed to operate at much higher temperatures to produce hydrogen for zero-carbon-emitting fuel and electricity.

“With further work on scaling and validating performance under realistic industrial feeds, the design could represent a promising route toward practical membranes for high-temperature hydrogen production,” says Lohyun Kim PhD ’24, a former graduate student in MIT’s Department of Mechanical Engineering.

Kim and his colleagues report details of the new membrane in a study appearing today in the journal Advanced Functional Materials. The study’s co-authors are Randall Field, director of research at the MIT Energy Initiative (MITEI); former MIT chemical engineering graduate student Chun Man Chow PhD ’23; Rohit Karnik, the Jameel Professor in the Department of Mechanical Engineering at MIT and the director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS); and Aaron Persad, a former MIT research scientist in mechanical engineering who is now an assistant professor at the University of Maryland Eastern Shore.

Compact future

The team’s new design came out of a MITEI project related to fusion energy. Future fusion power plants, such as the one MIT spinout Commonwealth Fusion Systems is designing, will involve circulating hydrogen isotopes of deuterium and tritium at extremely high temperatures to produce energy from the isotopes’ fusing. The reactions inevitably produce other gases that will have to be separated, and the hydrogen isotopes will be recirculated into the main reactor for further fusion.

Similar issues arise in a number of other processes for producing hydrogen, where gases must be separated and recirculated back into a reactor. Concepts for such recirculating systems would require first cooling down the gas before it can pass through hydrogen-separating membranes — an expensive and energy-intensive step that would involve additional machinery and hardware.

“One of the questions we were thinking about is: Can we develop membranes which could be as close to the reactor as possible, and operate at higher temperatures, so we don’t have to pull out the gas and cool it down first?” Karnik says. “It would enable more energy-efficient, and therefore cheaper and compact, fusion systems.”

The researchers looked for ways to improve the temperature resistance of palladium membranes. Palladium is the most effective metal used today to separate hydrogen from a variety of gas mixtures. It naturally attracts hydrogen molecules (H2) to its surface, where the metal’s electrons interact with and weaken the molecule’s bonds, causing H2 to temporarily break apart into its respective atoms. The individual atoms then diffuse through the metal and join back up on the other side as pure hydrogen.

Palladium is highly permeable to hydrogen, and only hydrogen, making it possible to extract the gas from streams of various mixtures. But conventional membranes typically can operate at temperatures of only up to 800 kelvins before the film starts to form holes or clump up into droplets, allowing other gases to flow through.

Plugging in

Karnik, Kim, and their colleagues took a different design approach. They observed that at high temperatures, palladium starts to pull together. In engineering terms, the material is acting to reduce its surface energy. To do this, palladium, like most other materials and even water, will pull apart and form droplets with the smallest surface energy. The lower the surface energy, the more stable the material is against further heating.

This gave the team an idea: If a supporting material’s pores could be “plugged” with deposits of palladium — essentially already forming a droplet with the lowest surface energy — the tight quarters might substantially increase palladium’s heat tolerance while preserving the membrane’s selectivity for hydrogen.

To test this idea, they fabricated small chip-sized samples of membrane using a porous silica supporting layer (each pore measuring about half a micron wide), onto which they deposited a very thin layer of palladium. They applied techniques to essentially grow the palladium into the pores, and polished down the surface to remove the palladium layer and leave palladium only inside the pores.

They then placed samples in a custom-built apparatus in which they flowed hydrogen-containing gas mixtures of various compositions and temperatures to test the membranes’ separation performance. The membranes remained stable and continued to separate hydrogen from other gases even after experiencing temperatures of up to 1,000 kelvins for over 100 hours — a significant improvement over conventional film-based membranes.

“The use of palladium film membranes is generally limited to below around 800 kelvins, at which point they degrade,” Kim says. “Our plug design therefore extends palladium’s effective heat resilience by at least 200 kelvins and maintains integrity far longer under extreme conditions.”

These conditions are within the range of hydrogen-generating technologies such as steam methane reforming and ammonia cracking.

Steam methane reforming is an established process that has required complex, energy-intensive systems to preprocess methane to a form where pure hydrogen can be extracted. Such preprocessing steps could be replaced with a compact “membrane reactor,” through which a methane gas would directly flow, and the membrane inside would filter out pure hydrogen. Such reactors would significantly cut down the size, complexity, and cost of producing hydrogen from steam methane reforming, and Kim estimates a membrane would have to work reliably in temperatures of up to nearly 1,000 kelvins. The team’s new membrane could work well within such conditions.

Ammonia cracking is another way to produce hydrogen, by “cracking” or breaking apart ammonia. As ammonia is very stable in liquid form, scientists envision that it could be used as a carrier for hydrogen and be safely transported to a hydrogen fuel station, where ammonia could be fed into a membrane reactor that again pulls out hydrogen and pumps it directly into a fuel cell vehicle. Ammonia cracking is still largely in pilot and demonstration stages, and Kim says any membrane in an ammonia cracking reactor would likely operate at temperatures of around 800 kelvins — within the range of the group’s new plug-based design.

Karnik emphasizes that their results are just a start. Adopting the membrane into working reactors will require further development and testing to ensure it remains reliable over much longer periods of time.

“We showed that instead of making a film, if you make discretized nanostructures you can get much more thermally stable membranes,” Karnik says. “It provides a pathway for designing membranes for extreme temperatures, with the added possibility of using smaller amounts of expensive palladium, toward making hydrogen production more efficient and affordable. There is potential there.”

This work was supported by Eni S.p.A. via the MIT Energy Initiative.

A cysteine-rich diet may promote regeneration of the intestinal lining, study suggests

Wed, 10/01/2025 - 11:00am

A diet rich in the amino acid cysteine may have rejuvenating effects in the small intestine, according to a new study from MIT. This amino acid, the researchers discovered, can turn on an immune signaling pathway that helps stem cells to regrow new intestinal tissue.

This enhanced regeneration may help to heal injuries from radiation, which often occur in patients undergoing radiation therapy for cancer. The research was conducted in mice, but if future research shows similar results in humans, then delivering elevated quantities of cysteine, through diet or supplements, could offer a new strategy to help damaged tissue heal faster, the researchers say.

“The study suggests that if we give these patients a cysteine-rich diet or cysteine supplementation, perhaps we can dampen some of the chemotherapy or radiation-induced injury,” says Omer Yilmaz, director of the MIT Stem Cell Initiative, an associate professor of biology at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research. “The beauty here is we’re not using a synthetic molecule; we’re exploiting a natural dietary compound.”

While previous research has shown that certain types of diets, including low-calorie diets, can enhance intestinal stem cell activity, the new study is the first to identify a single nutrient that can help intestinal cells to regenerate.

Yilmaz is the senior author of the study, which appears today in Nature. Koch Institute postdoc Fangtao Chi is the paper’s lead author.

Boosting regeneration

It is well-established that diet can affect overall health: High-fat diets can lead to obesity, diabetes, and other health problems, while low-calorie diets have been shown to extend lifespans in many species. In recent years, Yilmaz’s lab has investigated how different types of diets influence stem cell regeneration, and found that high-fat diets, as well as short periods of fasting, can enhance stem cell activity in different ways.

“We know that macro diets such as high-sugar diets, high-fat diets, and low-calorie diets have a clear impact on health. But at the granular level, we know much less about how individual nutrients impact stem cell fate decisions, as well as tissue function and overall tissue health,” Yilmaz says.

In their new study, the researchers began by feeding mice a diet high in one of 20 different amino acids, the building blocks of proteins. For each group, they measured how the diet affected intestinal stem cell regeneration. Among these amino acids, cysteine had the most dramatic effects on stem cells and progenitor cells (immature cells that differentiate into adult intestinal cells).

Further studies revealed that cysteine initiates a chain of events leading to the activation of a population of immune cells called CD8 T cells. When cells in the lining of the intestine absorb cysteine from digested food, they convert it into CoA, a cofactor that is released into the mucosal lining of the intestine. There, CD8 T cells absorb CoA, which stimulates them to begin proliferating and producing a cytokine called IL-22.

IL-22 is an important player in the regulation of intestinal stem cell regeneration, but until now, it wasn’t known that CD8 T cells can produce it to boost intestinal stem cells. Once activated, those IL-22-releasing T cells are primed to help combat any kind of injury that could occur within the intestinal lining.

“What’s really exciting here is that feeding mice a cysteine-rich diet leads to the expansion of an immune cell population that we typically don’t associate with IL-22 production and the regulation of intestinal stemness,” Yilmaz says. “What happens in a cysteine-rich diet is that the pool of cells that make IL-22 increases, particularly the CD8 T-cell fraction.”

These T cells tend to congregate within the lining of the intestine, so they are already in position when needed. The researchers found that the stimulation of CD8 T cells occurred primarily in the small intestine, not in any other part of the digestive tract, which they believe is because most of the protein that we consume is absorbed by the small intestine.

Healing the intestine

In this study, the researchers showed that regeneration stimulated by a cysteine-rich diet could help to repair radiation damage to the intestinal lining. Also, in work that has not been published yet, they showed that a high-cysteine diet had a regenerative effect following treatment with a chemotherapy drug called 5-fluorouracil. This drug, which is used to treat colon and pancreatic cancers, can also damage the intestinal lining.

Cysteine is found in many high-protein foods, including meat, dairy products, legumes, and nuts. The body can also synthesize its own cysteine, by converting the amino acid methionine to cysteine — a process that takes place in the liver. However, cysteine produced in the liver is distributed through the entire body and doesn’t lead to a buildup in the small intestine the way that consuming cysteine in the diet does.

“With our high-cysteine diet, the gut is the first place that sees a high amount of cysteine,” Chi says.

Cysteine has been previously shown to have antioxidant effects, which are also beneficial, but this study is the first to demonstrate its effect on intestinal stem cell regeneration. The researchers now hope to study whether it may also help other types of stem cells regenerate new tissues. In one ongoing study, they are investigating whether cysteine might stimulate hair follicle regeneration.

They also plan to further investigate some of the other amino acids that appear to influence stem cell regeneration.

“I think we’re going to uncover multiple new mechanisms for how these amino acids regulate cell fate decisions and gut health in the small intestine and colon,” Yilmaz says.

The research was funded, in part, by the National Institutes of Health, the V Foundation, the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund, the Bridge Project — a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center, the American Federation for Aging Research, the MIT Stem Cell Initiative, and the Koch Institute Support (core) Grant from the National Cancer Institute.

System lets people personalize online social spaces while staying connected with others

Wed, 10/01/2025 - 10:00am

Say a local concert venue wants to engage its community by giving social media followers an easy way to share and comment on new music from emerging artists. Rather than working within the constraints of existing social platforms, the venue might want to create its own social app with the functionality that would be best for its community. But building a new social app from scratch involves many complicated programming steps, and even if the venue can create a customized app, the organization’s followers may be unwilling to join the new platform because it could mean leaving their connections and data behind.

Now, researchers from MIT have launched a framework called Graffiti that makes building personalized social applications easier, while allowing users to migrate between multiple applications without losing their friends or data.

“We want to empower people to have control over their own designs rather than having them dictated from the top down,” says electrical engineering and computer science graduate student Theia Henderson.

Henderson and her colleagues designed Graffiti with a flexible structure so individuals have the freedom to create a variety of customized applications, from messenger apps like WhatsApp to microblogging platforms like X to location-based social networking sites like Nextdoor, all using only front-end development tools like HTML.

The protocol ensures all applications can interoperate, so content posted on one application can appear on any other application, even those with disparate designs or functionality. Importantly, Graffiti users retain control of their data, which is stored on a decentralized infrastructure rather than being held by a specific application.

While the pros and cons of implementing Graffiti at scale remain to be fully explored, the researchers hope this new approach can someday lead to healthier online interactions.

“We’ve shown that you can have a rich social ecosystem where everyone owns their own data and can use whatever applications they want to interact with whoever they want in whatever way they want. And they can have their own experiences without losing connection with the people they want to stay connected with,” says David Karger, professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Henderson, the lead author, and Karger are joined by MIT Research Scientist David D. Clark on a paper about Graffiti, which will be presented at the ACM Symposium on User Interface Software and Technology.

Personalized, integrated applications

With Graffiti, the researchers had two main goals: to lower the barrier to creating personalized social applications and to enable those personalized applications to interoperate without requiring permission from developers.

To make the design process easier, they built a collective back-end infrastructure that all applications access to store and share content. This means developers don’t need to write any complex server code. Instead, designing a Graffiti application is more like making a website using popular tools like Vue.

Developers can also easily introduce new features and new types of content, giving them more freedom and fostering creativity.

“Graffiti is so straightforward that we used it as the infrastructure for the intro to web design class I teach, and students were able to write the front-end very easily to come up with all sorts of applications,” Karger says.

The open, interoperable nature of Graffiti means no one entity has the power to set a moderation policy for the entire platform. Instead, multiple competing and contradictory moderation services can operate, and people can choose the ones they like. 

Graffiti uses the idea of “total reification,” where every action taken in Graffiti, such as liking, sharing, or blocking a post, is represented and stored as its own piece of data. A user can configure their social application to interpret or ignore those data using its own rules.

For instance, if an application is designed so a certain user is a moderator, posts blocked by that user won’t appear in the application. But for an application with different rules where that person isn’t considered a moderator, other users might just see a warning or no flag at all.
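The moderator example above can be illustrated with a minimal sketch of total reification. The data model here is hypothetical (field names like `kind`, `actor`, and `target` are illustrative, not Graffiti’s actual schema): the point is that a “block” is just stored data, and each application decides whose blocks to honor.

```python
# Minimal sketch of "total reification": every action, including a block,
# is stored as its own piece of data, and each application interprets
# that data with its own rules. Field names are illustrative only.

objects = [
    {"kind": "post", "id": 1, "actor": "alice", "text": "New single out!"},
    {"kind": "post", "id": 2, "actor": "bob", "text": "Spam spam spam"},
    {"kind": "block", "actor": "venue_mod", "target": 2},
    {"kind": "like", "actor": "carol", "target": 1},
]

def visible_posts(objects, trusted_moderators):
    """Return posts, hiding any post blocked by a moderator this
    application (or this user) has chosen to trust."""
    blocked = {o["target"] for o in objects
               if o["kind"] == "block" and o["actor"] in trusted_moderators}
    return [o for o in objects
            if o["kind"] == "post" and o["id"] not in blocked]

# An app that treats venue_mod as a moderator hides post 2...
strict = visible_posts(objects, trusted_moderators={"venue_mod"})
# ...while an app with different rules shows everything.
permissive = visible_posts(objects, trusted_moderators=set())
```

Both applications read the same shared data; only the interpretation differs, which is what lets contradictory moderation policies coexist.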

“Theia’s system lets each person pick their own moderators, avoiding the one-size-fits-all approach to moderation taken by the major social platforms,” Karger says.

But at the same time, having no central moderator means there is no one to remove content from the platform that might be offensive or illegal.

“We need to do more research to understand if that is going to provide real, damaging consequences or if the kind of personal moderation we created can provide the protections people need,” he adds.

Empowering social media users

The researchers also had to overcome a problem known as context collapse, which conflicts with their goal of interoperation.

For instance, context collapse would occur if a person’s Tinder profile appeared on LinkedIn, or if a post intended for one group, like close friends, created conflict with another group, such as family members. Context collapse can lead to anxiety and have social repercussions for the user and their different communities.

“We realize that interoperability can sometimes be a bad thing. People have boundaries between different social contexts, and we didn’t want to violate those,” Henderson says.

To avoid context collapse, the researchers designed Graffiti so all content is organized into distinct channels. Channels are flexible and can represent a variety of contexts, such as people, applications, locations, etc.

If a user’s post appears in an application channel but not their personal channel, others using that application will see the post, but those who only follow this user will not.
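The channel behavior described above can be sketched in a few lines. The channel names and post structure here are hypothetical, not Graffiti’s actual format: a post is discoverable only through the channels it was explicitly published to.

```python
# Sketch of channel-scoped visibility: content posted to an application's
# channel but not to the author's personal channel is visible inside that
# application, yet invisible to people who only follow the author.
# Channel names and the post structure are hypothetical.

posts = [
    {"text": "Show tonight at 8!", "channels": {"app:venue", "user:alice"}},
    {"text": "Open mic signups", "channels": {"app:venue"}},  # app-only
]

def feed(posts, channel):
    """Content published to a given channel, and nothing else."""
    return [p["text"] for p in posts if channel in p["channels"]]

venue_feed = feed(posts, "app:venue")        # sees both posts
alice_followers = feed(posts, "user:alice")  # sees only the first
```

Because visibility is determined by where the author chose to publish, different social contexts stay separated even though all the data lives in one shared infrastructure.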

“Individuals should have the power to choose the audience for whatever they want to say,” Karger adds.

The researchers created multiple Graffiti applications to showcase personalization and interoperability, including a community-specific application for a local concert venue, a text-centric microblogging platform patterned after X, a Wikipedia-like application that enables collective editing, and a real-time messaging app with multiple moderation schemes patterned after WhatsApp and Slack.

“It also leaves room to create so many social applications people haven’t thought of yet. I’m really excited to see what people come up with when they are given full creative freedom,” Henderson says.

In the future, she and her colleagues want to explore additional social applications they could build with Graffiti. They also intend to incorporate tools like graphical editors to simplify the design process. In addition, they want to strengthen Graffiti’s security and privacy.

And while there is still a long way to go before Graffiti could be implemented at scale, the researchers are currently running a user study as they explore the potential positive and negative impacts the system could have on the social media landscape. 

MIT cognitive scientists reveal why some sentences stand out from others

Wed, 10/01/2025 - 12:00am

“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, not any other trait, is the most important feature when it comes to memorability.

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as more distinctive if they have a single meaning and few or no synonyms — for example, words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated through the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates sentence-level embeddings that can be used for tasks like judging the similarity in meaning between sentences, and it provided the researchers with a distinctness score for each sentence based on its semantic similarity to the other sentences.

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.
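The distinctness idea at the heart of these analyses can be sketched in a few lines: represent each sentence as an embedding vector, then score it by how dissimilar it is, on average, from all the others. The toy vectors and scoring formula below are illustrative assumptions for exposition, not the study’s actual Sentence BERT pipeline.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def distinctness_scores(embeddings):
    """Score each sentence as 1 minus its mean cosine similarity to
    every other sentence: higher means a more distinctive meaning."""
    scores = []
    for i, u in enumerate(embeddings):
        sims = [cosine(u, v) for j, v in enumerate(embeddings) if j != i]
        scores.append(1.0 - sum(sims) / len(sims))
    return scores

# Toy embeddings: the first two "sentences" point the same way
# (similar meanings); the third points elsewhere (a distinctive meaning).
vecs = [[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]]
scores = distinctness_scores(vecs)
print(max(range(3), key=scores.__getitem__))  # index of the most distinctive
```

On this toy data, the third sentence earns the highest distinctness score because it sits far from the crowded region of the embedding space, mirroring the paper’s account of why semantically isolated sentences are easier to recognize later.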

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry a similar meaning, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest Initiative for Intelligence.

3 Questions: How a new mission to Uranus could be just around the corner

Tue, 09/30/2025 - 8:00am

The successful test of SpaceX’s Starship launch vehicle, following a series of engineering challenges and failed launches, has reignited excitement over the possibilities this massive rocket may unlock for humanity’s greatest ambitions in space. The largest rocket ever built, Starship, with its 33-engine Super Heavy booster, completed a full launch into Earth orbit on Aug. 26, deployed eight test prototype satellites, and survived reentry for a simulated landing before coming down, mostly intact, in the Indian Ocean. The 400-foot rocket is designed to carry up to 150 tons of cargo to low Earth orbit, dramatically increasing potential payload volume compared with rockets currently in operation. In addition to the planned Artemis III mission to the lunar surface and proposed missions to Mars in the near future, Starship also presents an opportunity for large-scale scientific missions throughout the solar system.

The National Academy of Sciences Planetary Science Decadal Survey published a recommendation in 2022 outlining exploration of Uranus as its highest-priority flagship mission. This proposed mission was envisioned for the 2030s, assuming use of a Falcon Heavy expendable rocket and anticipating arrival at the planet before 2050. Earlier this summer, a paper from researchers in MIT’s Engineering Systems Lab found that Starship may enable this flagship mission to Uranus in half the flight time. 

In this 3Q, Chloe Gentgen, a PhD student in aeronautics and astronautics and co-author on the recent study, describes the significance of Uranus as a flagship mission and what the current trajectory of Starship means for scientific exploration.

Q: Why has Uranus been identified as the highest-priority flagship mission? 

A: Uranus is one of the most intriguing and least-explored planets in our solar system. The planet is tilted on its side, is extremely cold, presents a highly dynamic atmosphere with fast winds, and has an unusual and complex magnetic field. A few of Uranus’ many moons could be ocean worlds, making them potential candidates in the search for life in the solar system. The ice giants Uranus and Neptune also represent the closest match to most of the exoplanets discovered. A mission to Uranus would therefore radically transform our understanding of ice giants, the solar system, and exoplanets. 

What we know about Uranus largely dates back to Voyager 2’s brief flyby nearly 40 years ago. No spacecraft has visited Uranus or Neptune since, making them the only planets yet to have a dedicated orbital mission. One of the main obstacles has been the sheer distance. Uranus is 19 times farther from the sun than the Earth is, and nearly twice as far as Saturn. Reaching it requires a heavy-lift launch vehicle and trajectories involving gravity assists from other planets. 

Today, such heavy-lift launch vehicles are available, and trajectories have been identified for launch windows throughout the 2030s, which resulted in selecting a Uranus mission as the highest-priority flagship in the 2022 decadal survey. The proposed concept, called Uranus Orbiter and Probe (UOP), would release a probe into the planet’s atmosphere and then embark on a multiyear tour of the system to study the planet’s interior, atmosphere, magnetosphere, rings, and moons. 

Q: How do you envision your work on the Starship launch vehicle being deployed for further development?

A: Our study assessed the feasibility and potential benefits of launching a mission to Uranus with a Starship refueled in Earth’s orbit, instead of a Falcon Heavy (another SpaceX launch vehicle, currently operational). The Uranus decadal study showed that launching on a Falcon Heavy Expendable results in a cruise time of at least 13 years. Long cruise times present challenges, such as loss of team expertise and a higher operational budget. With the mission not yet underway, we saw an opportunity to evaluate launch vehicles currently in development, particularly Starship. 

When refueled in orbit, Starship could launch a spacecraft directly to Uranus, without detours past other planets for gravity-assist maneuvers. The proposed spacecraft could then arrive at Uranus in just over six years, less than half the time currently envisioned. These high-energy trajectories require significant deceleration at Uranus to capture into orbit. If the spacecraft slows down propulsively, the burn would require 5 km/s of delta-v (the change in velocity, which quantifies the energy needed for the maneuver) — a much larger burn than spacecraft typically perform, which might result in a very complex design. A more conservative approach, assuming a maximum burn of 2 km/s at Uranus, would result in a cruise time of 8.5 years. 
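To get a feel for why a 5 km/s capture burn is so demanding, the Tsiolkovsky rocket equation gives the fraction of a spacecraft’s mass that must be propellant for a given burn. The exhaust velocity used below (about 3.1 km/s, typical of storable bipropellant engines) is an illustrative assumption, not a figure from the study.

```python
import math

def propellant_fraction(delta_v_km_s, exhaust_velocity_km_s):
    """Tsiolkovsky rocket equation: fraction of initial spacecraft
    mass that must be propellant to achieve a given delta-v."""
    return 1.0 - math.exp(-delta_v_km_s / exhaust_velocity_km_s)

v_e = 3.1  # km/s, assumed exhaust velocity of a storable bipropellant engine
for dv in (2.0, 5.0):
    frac = propellant_fraction(dv, v_e)
    print(f"{dv} km/s burn -> {frac:.0%} of spacecraft mass is propellant")
```

Under these assumptions, a 2 km/s burn needs roughly half the spacecraft’s mass in propellant, while 5 km/s pushes that toward 80 percent — one way to see why alternatives like aerocapture become attractive.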

An alternative to propulsive orbit insertion at Uranus is aerocapture, where the spacecraft, enclosed in a thermally protective aeroshell, dips into the planet’s atmosphere and uses aerodynamic drag to decelerate. We examined whether Starship itself could perform aerocapture, rather than being separated from the spacecraft shortly after launch. Starship is already designed to withstand atmospheric entry at Earth and Mars, and thus already has a thermal protection system that could, potentially, be modified for aerocapture at Uranus. While bringing a Starship vehicle all the way to Uranus presents significant challenges, our analysis showed that aerocapture with Starship would produce deceleration and heating loads similar to those of other Uranus aerocapture concepts and would enable a cruise time of six years.

In addition to launching the proposed spacecraft on a faster trajectory that would reach Uranus sooner, Starship’s capabilities could also be leveraged to deploy larger masses to Uranus, enabling an enhanced mission with additional instruments or probes.

Q: What does the recent successful test of Starship tell us about the viability and timeline for a potential mission to the outer solar system?

A: The latest Starship launch marked an important milestone for the company after three failed launches in recent months, renewing optimism about the rocket’s future capabilities. Looking ahead, the program will need to demonstrate on-orbit refueling, a capability central to both SpaceX’s long-term vision of deep-space exploration and this proposed mission.

Launch vehicle selection for flagship missions typically occurs approximately two years after the official mission formulation process begins, which has not yet commenced for the Uranus mission. As such, Starship still has a few more years to demonstrate its on-orbit refueling architecture before a decision has to be made.

Overall, Starship is still under development, and significant uncertainty remains about its performance, timelines, and costs. Even so, our initial findings paint a promising picture of the benefits that could be realized by using Starship for a flagship mission to Uranus.

3 Questions: Addressing the world’s most pressing challenges

Tue, 09/30/2025 - 8:00am

The Center for International Studies (CIS) empowers students, faculty, and scholars to bring MIT’s interdisciplinary style of research and scholarship to address complex global challenges. 

In this Q&A, Mihaela Papa, the center's director of research and a principal research scientist at MIT, describes her role as well as research within the BRICS Lab at MIT — a reference to the BRICS intergovernmental organization, which comprises the nations of Brazil, Russia, India, China, South Africa, Egypt, Ethiopia, Indonesia, Iran and the United Arab Emirates. She also discusses the ongoing mission of CIS to tackle the world's most complex challenges in new and creative ways.

Q: What is your role at CIS, and some of your key accomplishments since joining the center just over a year ago?

A: I serve as director of research and principal research scientist at CIS, a role that bridges management and scholarship. I oversee grant and fellowship programs, spearhead new research initiatives, build research communities across our center's area programs and MIT schools, and mentor the next generation of scholars. My academic expertise is in international relations, and I publish on global governance and sustainable development, particularly through my new BRICS Lab. 

This past year, I focused on building collaborative platforms that highlight CIS’ role as an interdisciplinary hub and expand its research reach. With Evan Lieberman, the director of CIS, I launched the CIS Global Research and Policy Seminar series to address current challenges in global development and governance, foster cross-disciplinary dialogue, and connect theoretical insights to policy solutions. We also convened a Climate Adaptation Workshop, which examined promising strategies for financing adaptation and advancing policy innovation. We documented the outcomes in a workshop report that outlines a broader research agenda contributing to MIT’s larger climate mission.

In parallel, I have been reviewing CIS’ grant-making programs to improve how we serve our community, while also supporting regional initiatives such as research planning related to Ukraine. Together with the center's MIT-Brazil faculty director Brad Olsen, I secured a MITHIC [MIT Human Insight Collaboration] Connectivity grant to build an MIT Amazonia research community that connects MIT scholars with regional partners and strengthens collaboration across the Amazon. Finally, I launched the BRICS Lab to analyze transformations in global governance and have ongoing research on BRICS and food security and data centers in BRICS. 

Q: Tell us more about the BRICS Lab.

A: The BRICS countries comprise the majority of the world’s population and an expanding share of the global economy. [Originally comprising Brazil, Russia, India, and China, BRICS currently includes 11 nations.] As a group, they carry the collective weight to shape international rules, influence global markets, and redefine norms — yet the question remains: Will they use this power effectively? The BRICS Lab explores the implications of the bloc’s rise for international cooperation and its role in reshaping global politics. Our work focuses on three areas: the design and strategic use of informal groups like BRICS in world affairs; the coalition’s potential to address major challenges such as food security, climate change, and artificial intelligence; and the implications of U.S. policy toward BRICS for the future of multilateralism.

Q: What are the center’s biggest research priorities right now?

A: Our center was founded in response to rising geopolitical tensions and the urgent need for policy rooted in rigorous, evidence-based research. Since then, we have grown into a hub that combines interdisciplinary scholarship and actively engages with policymakers and the public. Today, as in our early years, the center brings together exceptional researchers with the ambition to address the world’s most pressing challenges in new and creative ways.

Our core focus spans security, development, and human dignity. Security studies have been a priority for the center, and our new nuclear security programming advances this work while training the next generation of scholars in this critical field. On the development front, our work has explored how societies manage diverse populations, navigate international migration, and engage with human rights and the changing patterns of regime dynamics.

We are pursuing new research in three areas. First, on climate change, we seek to understand how societies confront environmental risks and harms, from insurance to water and food security in the international context. Second, we examine shifting patterns of global governance as rising powers set new agendas and take on greater responsibilities in the international system. Finally, we are initiating research on the impact of AI — how it reshapes governance across international relations, what role AI corporations play, and how AI-related risks can be managed.

As we approach our 75th anniversary in 2026, we are excited to bring researchers together to spark bold ideas that open new possibilities for the future.

Saab 340 becomes permanent flight-test asset at Lincoln Laboratory

Tue, 09/30/2025 - 8:00am

A Saab 340 aircraft recently became a permanent fixture of the fleet at the MIT Lincoln Laboratory Flight Test Facility, which supports R&D programs across the lab. 

Over the past five years, the facility leased and operated the twin-engine turboprop, once commercially used for the regional transport of passengers and cargo. During this time, staff modified the aircraft with a suite of radar, sensing, and communications capabilities. Transitioning the aircraft from a leased to a government-owned asset retains the aircraft's capabilities for present and future R&D in support of national security and reduces costs for Lincoln Laboratory sponsors. 

With the acquisition of the Saab, the Flight Test Facility currently maintains five government-owned aircraft — including three Gulfstream IVs and a Cessna 206 — as well as a leased Twin Otter, all housed on Hanscom Air Force Base, just over a mile from the laboratory's main campus.

"Of all our aircraft, the Saab is the most multi-mission-capable," says David Culbertson, manager of the Flight Test Facility. "It's highly versatile and adaptable, like a Swiss Army knife. Researchers from across the laboratory have conducted flight tests on the Saab to develop all kinds of technologies for national security."

For example, the Saab was modified to host the Airborne Radar Testbed (ARTB), a high-performance radar system based on a computer-controlled array of antennas that can be electronically steered (instead of physically moved) in different directions. With the ARTB, researchers have matured innovative radio-frequency technology; prototyped advanced system concepts; and demonstrated concepts of operation for intelligence, surveillance, and reconnaissance (ISR) missions. With its open-architecture design and compliance with open standards, the ARTB can easily be reconfigured to suit specific R&D needs.
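Electronic steering of the kind described above works by applying a progressive phase shift across the antenna elements so their wavefronts reinforce in the chosen direction, with no moving parts. Below is a minimal sketch of that phase arithmetic for a uniform linear array; the element count, spacing, and frequency are illustrative assumptions, not ARTB parameters.

```python
import math

def steering_phases(n_elements, spacing_m, freq_hz, steer_deg):
    """Per-element phase shifts (radians) that point a uniform linear
    array's beam at steer_deg away from broadside."""
    c = 3.0e8                      # speed of light, m/s
    wavelength = c / freq_hz
    k = 2 * math.pi / wavelength   # wavenumber
    angle = math.radians(steer_deg)
    # Each successive element is delayed a bit more, tilting the wavefront.
    return [-k * n * spacing_m * math.sin(angle) for n in range(n_elements)]

# Illustrative X-band array: 8 elements at half-wavelength spacing (10 GHz),
# beam steered 30 degrees off broadside.
phases = steering_phases(8, spacing_m=0.015, freq_hz=10e9, steer_deg=30)
print([round(p, 2) for p in phases])
```

Updating these phase values in software repoints the beam in microseconds, which is what lets a system like the ARTB scan without physically moving the antenna.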

"The Saab has enabled us to rapidly prototype and mature the complex system-of-systems solutions needed to realize critical warfighter capabilities," says Ramu Bhagavatula, an assistant leader of the laboratory's Embedded and Open Systems Group. "Recently, the Saab participated in a major national exercise as a surrogate multi-INT [intelligence] ISR platform. We demonstrated machine-to-machine cueing of our multi-INT payload to automatically recognize targets designated by an operational U.S. Air Force platform. The Saab's flexibility was key to integrating diverse technologies to develop this important capability."

In anticipation of the expiration of the Saab's lease, the Flight Test Facility and Financial Services Department conducted an extensive analysis of alternatives. Comparing the operational effectiveness, suitability, and life-cycle cost of various options, this analysis determined that the optimal solution for the laboratory and the government was to purchase the aircraft.

"Having the Saab in our permanent inventory allows research groups from across the laboratory to continuously leverage each other's test beds and expertise," says Linda McCabe, a project manager in the laboratory's Communication Networks and Analysis Group. "In addition, we can invest in long-term infrastructure updates that will benefit a wide range of users. For instance, my group helped obtain authorizations from various agencies to equip the Saab with Link 16, a secure communications network used by NATO and its allies to share tactical information."

The Saab acquisition is part of a larger recapitalization effort at the Flight Test Facility to support emerging technology development for years to come. This 10-year effort, slated for completion in 2026, is retiring aging, obsolete aircraft and replacing them with newer platforms that will be more cost-effective to maintain, easier to integrate rapidly prototyped systems into, and able to operate under expanded flight envelopes (the performance limits within which an aircraft can safely fly, defined by parameters such as speed, altitude, and maneuverability).

MIT joins in constructing the Giant Magellan Telescope

Tue, 09/30/2025 - 6:00am

The following article is adapted from a joint press release issued today by MIT and the Giant Magellan Telescope.

MIT is lending its support to the Giant Magellan Telescope, joining the international consortium to advance the $2.6 billion observatory in Chile. The Institute’s participation, enabled by a transformational gift from philanthropists Phillip (Terry) Ragon ’72 and Susan Ragon, adds to the momentum to construct the Giant Magellan Telescope, whose 25.4-meter aperture will have five times the light-collecting area and up to 200 times the power of existing observatories.

“As philanthropists, Terry and Susan have an unerring instinct for finding the big levers: those interventions that truly transform the scientific landscape,” says MIT President Sally Kornbluth. “We saw this with their founding of the Ragon Institute, which pursues daring approaches to harnessing the immune system to prevent and cure human diseases. With today’s landmark gift, the Ragons enable an equally lofty mission to better understand the universe — and we could not be more grateful for their visionary support.”

MIT will be the 16th member of the international consortium advancing the Giant Magellan Telescope and the 10th participant based in the United States. Together, the consortium has invested $1 billion in the observatory — the largest-ever private investment in ground-based astronomy. The Giant Magellan Telescope is already 40 percent under construction, with major components being designed and manufactured across 36 U.S. states.

“MIT is honored to join the consortium and participate in this exceptional scientific endeavor,” says Ian A. Waitz, MIT’s vice president for research. “The Giant Magellan Telescope will bring tremendous new capabilities to MIT astronomy and to U.S. leadership in fundamental science. The construction of this uniquely powerful telescope represents a vital private and public investment in scientific excellence for decades to come.”

MIT brings to the consortium powerful scientific capabilities and a legacy of astronomical excellence. MIT’s departments of Physics and of Earth, Atmospheric and Planetary Sciences, and the MIT Kavli Institute for Astrophysics and Space Research, are internationally recognized for research in exoplanets, cosmology, and environments of extreme gravity, such as black holes and compact binary stars. MIT’s involvement will strengthen the Giant Magellan Telescope’s unique capabilities in high-resolution spectroscopy, adaptive optics, and the search for life beyond Earth. It also deepens a long-standing scientific relationship: MIT is already a partner in the existing twin Magellan Telescopes at Las Campanas Observatory in Chile — one of the most scientifically valuable observing sites on Earth, and the same site where the Giant Magellan Telescope is now under construction.

“Since Galileo’s first spyglass, the world’s largest telescope has doubled in aperture every 40 to 50 years,” says Robert A. Simcoe, director of the MIT Kavli Institute and the Francis L. Friedman Professor of Physics. “Each generation’s leading instruments have resolved important scientific questions of the day and then surprised their builders with new discoveries not yet even imagined, helping humans understand our place in the universe. Together with the Giant Magellan Telescope, MIT is helping to realize our generation’s contribution to this lineage, consistent with our mission to advance the frontier of fundamental science by undertaking the most audacious and advanced engineering challenges.”

Contributing to the national strategy

MIT’s support comes at a pivotal time for the observatory. In June 2025, the National Science Foundation (NSF) advanced the Giant Magellan Telescope into its Final Design Phase, one of the final steps before it becomes eligible for federal construction funding. To demonstrate readiness and a strong commitment to U.S. leadership, the consortium offered to privately fund this phase, which is traditionally supported by the NSF.

MIT’s investment is an integral part of the national strategy to secure U.S. access to the next generation of research facilities known as “extremely large telescopes.” The Giant Magellan Telescope is a core partner in the U.S. Extremely Large Telescope Program, the nation’s top priority in astronomy. The National Academies’ Astro2020 Decadal Survey called the program “absolutely essential if the United States is to maintain a position as a leader in ground-based astronomy.” This long-term strategy also includes the recently commissioned Vera C. Rubin Observatory in Chile. Rubin is scanning the sky to detect rare, fast-changing cosmic events, while the Giant Magellan Telescope will provide the sensitivity, resolution, and spectroscopic instruments needed to study them in detail. Together, these Southern Hemisphere observatories will give U.S. scientists the tools they need to lead 21st-century astrophysics.

“Without direct access to the Giant Magellan Telescope, the U.S. risks falling behind in fundamental astronomy, as Rubin’s most transformational discoveries will be utilized by other nations with access to their own ‘extremely large telescopes’ under development,” says Walter Massey, board chair of the Giant Magellan Telescope.

MIT’s participation brings the United States a step closer to completing the promise of this powerful new observatory on a globally competitive timeline. With federal construction funding, it is expected that the observatory could reach 90 percent completion in less than two years and become operational by the 2030s.

“MIT brings critical expertise and momentum at a time when global leadership in astronomy hangs in the balance,” says Robert Shelton, president of the Giant Magellan Telescope. “With MIT, we are not just adding a partner; we are accelerating a shared vision for the future and reinforcing the United States’ position at the forefront of science.”

Other members of the Giant Magellan Telescope consortium include the University of Arizona, Carnegie Institution for Science, The University of Texas at Austin, Korea Astronomy and Space Science Institute, University of Chicago, São Paulo Research Foundation (FAPESP), Texas A&M University, Northwestern University, Harvard University, Astronomy Australia Ltd., Australian National University, Smithsonian Institution, Weizmann Institute of Science, Academia Sinica Institute of Astronomy and Astrophysics, and Arizona State University.

A boon for astrophysics research and education

Access to the world’s best optical telescopes is a critical resource for MIT researchers. More than 150 individual science programs at MIT have relied on major astronomical observatories in the past three years, engaging faculty, researchers, and students in investigations into the marvels of the universe. Recent research projects have included chemical studies of the universe’s oldest stars, led by Professor Anna Frebel; spectroscopy of stars shredded by dormant black holes, led by Professor Erin Kara; and measurements of a white dwarf teetering on the precipice of a black hole, led by Professor Kevin Burdge. 

“Over many decades, researchers at the MIT Kavli Institute have used unparalleled instruments to discover previously undetected cosmic phenomena from both ground-based observations and spaceflight missions,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “I have no doubt our brilliant colleagues will carry on that tradition with the Giant Magellan Telescope, and I can’t wait to see what they will discover next.”

The Giant Magellan Telescope will also provide a platform for advanced R&D in remote sensing, creating opportunities to build custom infrared and optical spectrometers and high-speed imagers to further study our universe.

“One cannot have a leading physics program without a leading astrophysics program. Access to time on the Giant Magellan Telescope will ensure that future generations of MIT researchers will continue to work at the forefront of astrophysical discovery for decades to come,” says Deepto Chakrabarty, head of the MIT Department of Physics, the William A. M. Burden Professor in Astrophysics, and principal investigator at the MIT Kavli Institute. “Our institutional access will help attract and retain top researchers in astrophysics, planetary science, and advanced optics, and will give our PhD students and postdocs unrivaled educational opportunities.”

Responding to the climate impact of generative AI

Tue, 09/30/2025 - 12:00am

In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.

The energy demands of generative AI are expected to continue increasing dramatically over the next decade.

For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.

Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.

These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.

Considering carbon emissions

Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions produced by the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.

Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, generates a huge amount of carbon emissions. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)

Plus, data centers are enormous buildings — the world’s largest, the China Telecom-Inner Mongolia Information Park, covers roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds. 

“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.

Reducing operational carbon emissions

When it comes to reducing operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.

“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.

In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.

Another strategy is to use less energy-intensive computing hardware.

Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.

But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.

There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.

Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.
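The early-stopping idea can be sketched in a few lines. This is a hypothetical illustration, not the Supercomputing Center’s actual tooling; `train_one_epoch` and `evaluate` stand in for whatever training and validation routines a given project uses:

```python
# Minimal sketch of accuracy-targeted early stopping: halt training once
# accuracy is "good enough" for the application, saving the energy that
# the last few percentage points of accuracy would otherwise consume.
def train_with_early_exit(train_one_epoch, evaluate,
                          target_accuracy=0.70, max_epochs=100):
    """Train until `target_accuracy` is reached or `max_epochs` elapse."""
    for epoch in range(max_epochs):
        train_one_epoch()
        accuracy = evaluate()
        if accuracy >= target_accuracy:
            # Skip the long, energy-hungry tail of training.
            return epoch + 1, accuracy
    return max_epochs, accuracy

# Toy illustration: accuracy improves quickly at first, then plateaus.
acc_curve = iter([0.40, 0.55, 0.65, 0.72, 0.74])
epochs_run, final_acc = train_with_early_exit(lambda: None,
                                              lambda: next(acc_curve))
# Training stops after 4 epochs at 72 percent accuracy, never paying
# for the expensive final climb toward the plateau.
```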

“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.

Researchers can also take advantage of efficiency-boosting measures.

For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project.

By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.

Leveraging efficiency improvements

Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.

Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.

“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.

Even more significant, his group’s research indicates that efficiency gains from new model architectures, which can solve complex problems faster and consume less energy to achieve the same or better results, are doubling every eight or nine months.

Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements.

These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
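Pruning, one source of such negaflops, can be illustrated with a toy magnitude-pruning sketch. This is an assumption-laden illustration, not Thompson’s group’s method: weights with small absolute values are zeroed, and the multiply-add operations they would have required never need to run when sparse-aware kernels are used:

```python
import numpy as np

def prune_by_magnitude(weights, fraction=0.5):
    """Zero out the smallest `fraction` of weights by absolute value."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * fraction)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))          # stand-in for one network layer
w_pruned = prune_by_magnitude(w, fraction=0.5)
skipped = int(np.sum(w_pruned == 0)) # multiply-adds avoidable per input
```

Here half of a 16-weight layer is zeroed, so 8 multiply-adds per input become negaflops — operations that simply never have to be performed.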

“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.

Maximizing energy savings

While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.

“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.

Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.

Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist at the MIT Energy Initiative (MITEI).
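A carbon-aware scheduler of this kind can be sketched simply. The hourly numbers below are invented for illustration; a real system would pull grid carbon-intensity forecasts from its utility or a forecasting service:

```python
# Minimal sketch of carbon-aware scheduling: given an hourly forecast of
# grid carbon intensity (gCO2 per kWh), run a deferrable job in the
# window where its total emissions would be lowest.
def lowest_carbon_window(forecast, job_hours):
    """Return the start hour minimizing summed intensity over job_hours."""
    return min(
        range(len(forecast) - job_hours + 1),
        key=lambda start: sum(forecast[start:start + job_hours]),
    )

# Hypothetical 24-hour forecast: dirtier overnight, cleaner around midday
# when solar generation peaks.
forecast = [450, 460, 470, 465, 455, 440, 400, 350,
            280, 220, 180, 160, 150, 155, 170, 210,
            280, 350, 420, 450, 460, 465, 470, 460]
start_hour = lowest_carbon_window(forecast, job_hours=4)
```

With this toy forecast, a four-hour deferrable job lands in the late-morning solar peak rather than running overnight on a fossil-heavy grid.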

Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.

“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.

He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.

The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.

With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.

“Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.

In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.

Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Lulea, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.

Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon, where they could potentially be operated with nearly all renewable energy.

AI-based solutions

Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.

The local, state, and federal review processes required for new renewable energy projects can take years.

Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.

For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.

And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.

“Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.

For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.

It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.

By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.

To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.

The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.

At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.

“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.

A beacon of light

Mon, 09/29/2025 - 4:00pm

Placing a lit candle in a window to welcome friends and strangers is an old Irish tradition that took on greater significance when Mary Robinson was elected president of Ireland in 1990. At the time, Robinson placed a lamp in Áras an Uachtaráin — the official residence of Ireland’s presidents — noting that the Irish diaspora and all others are always welcome in Ireland. Decades later, a lit lamp remains in a window in Áras an Uachtaráin.

The symbolism of Robinson’s lamp was shared by Hashim Sarkis, dean of the MIT School of Architecture and Planning (SA+P), at the school’s graduation ceremony in May, where Robinson addressed the class of 2025. To replicate the generous intentions of Robinson’s lamp and commemorate her visit to MIT, Sarkis commissioned a unique lantern as a gift for Robinson. He commissioned an identical one for his office, which is in the front portico of MIT at 77 Massachusetts Ave.

“The lamp will welcome all citizens of the world to MIT,” says Sarkis.

No ordinary lantern

The bespoke lantern was created by Marcelo Coelho SM ’08, PhD ’12, director of the Design Intelligence Lab and associate professor of the practice in the Department of Architecture.

One of several projects in the Geolectric research at the Design Intelligence Lab, the lantern showcases the use of geopolymers as a sustainable material alternative for embedded computers and consumer electronics.

“The materials that we use to make computers have a negative impact on climate, so we’re rethinking how we make products with embedded electronics — such as a lamp or lantern — from a climate perspective,” says Coelho.

Consumer electronics rely on materials that are high in carbon emissions and difficult to recycle. As the demand for embedded computing increases, so too does the need for alternative materials that have a reduced environmental impact while supporting electronic functionality.

The Geolectric lantern advances the formulation and application of geopolymers — a class of inorganic materials that form covalently bonded, non-crystalline networks. Unlike traditional ceramics, geopolymers do not require high-temperature firing, allowing electronic components to be embedded seamlessly during production.

Geopolymers are similar to ceramics, but have a lower carbon footprint and present a sustainable alternative for consumer electronics, product design, and architecture. The minerals Coelho uses to make the geopolymers — aluminum silicate and sodium silicate — are those regularly used to make ceramics.

“Geopolymers aren’t particularly new, but are becoming more popular,” says Coelho. “They have high strength in both tension and compression, superior durability, fire resistance, and thermal insulation. Compared to concrete, geopolymers don’t release carbon dioxide. Compared to ceramics, you don’t have to worry about firing them. What’s even more interesting is that they can be made from industrial byproducts and waste materials, contributing to a circular economy and reducing waste.”

The lantern is embedded with custom electronics that serve as a proximity and touch sensor. When a hand is placed over the top, light shines down the glass tubes.

The timeless design of the Geolectric lantern — minimalist, composed of natural materials — belies its future-forward function. Coelho’s academic background is in fine arts and computer science. Much of his work, he says, “bridges these two worlds.”

Working at the Design Intelligence Lab with Coelho on the lanterns are Jacob Payne, a graduate architecture student, and Jean-Baptiste Labrune, a research affiliate.

A light for MIT

A few weeks before commencement, Sarkis saw the Geolectric lantern at Palazzo Diedo Berggruen Arts and Culture in Venice, Italy. The exhibition, a collateral event of the Venice Biennale’s 19th International Architecture Exhibition, featured the work of 40 MIT architecture faculty.

The sustainability feature of Geolectric is the key reason Sarkis regarded the lantern as the perfect gift for Robinson. After her career in politics, Robinson founded the Mary Robinson Foundation — Climate Justice, an international center addressing the impacts of climate change on marginalized communities.

The third iteration of Geolectric for Sarkis’ office is currently underway. While the lantern was a technical prototype and an opportunity to showcase his lab’s research, Coelho — an immigrant from Brazil — was profoundly touched by how Sarkis created the perfect symbolism to both embody the welcoming spirit of the school and honor President Robinson.

“When the world feels most fragile, we need to urgently find sustainable and resilient solutions for our built environment. It’s in the darkest times when we need light the most,” says Coelho. 

The first animals on Earth may have been sea sponges, study suggests

Mon, 09/29/2025 - 3:00pm

A team of MIT geochemists has unearthed new evidence in very old rocks suggesting that some of the first animals on Earth were likely ancestors of the modern sea sponge.

In a study appearing today in the Proceedings of the National Academy of Sciences, the researchers report that they have identified “chemical fossils” that may have been left by ancient sponges in rocks that are more than 541 million years old. A chemical fossil is a remnant of a biomolecule that originated from a living organism that has since been buried, transformed, and preserved in sediment, sometimes for hundreds of millions of years.

The newly identified chemical fossils are special types of steranes, which are the geologically stable form of sterols, such as cholesterol, that are found in the cell membranes of complex organisms. The researchers traced these special steranes to a class of sea sponges known as demosponges. Today, demosponges come in a huge variety of sizes and colors, and live throughout the oceans as soft and squishy filter feeders. Their ancient counterparts may have shared similar characteristics.

“We don’t know exactly what these organisms would have looked like back then, but they absolutely would have lived in the ocean, they would have been soft-bodied, and we presume they didn’t have a silica skeleton,” says Roger Summons, the Schlumberger Professor of Geobiology Emeritus in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

The group’s discovery of sponge-specific chemical fossils offers strong evidence that the ancestors of demosponges were among the first animals to evolve, and that they likely did so much earlier than the rest of Earth’s major animal groups.

The study’s authors, including Summons, are lead author and former MIT EAPS Crosby Postdoctoral Fellow Lubna Shawar, who is now a research scientist at Caltech, along with Gordon Love from the University of California at Riverside, Benjamin Uveges of Cornell University, Alex Zumberge of GeoMark Research in Houston, Paco Cárdenas of Uppsala University in Sweden, and José-Luis Giner of the State University of New York College of Environmental Science and Forestry.

Sponges on steroids

The new study builds on findings that the group first reported in 2009. In that study, the team identified the first chemical fossils that appeared to derive from ancient sponges. They analyzed rock samples from an outcrop in Oman and found a surprising abundance of steranes that they determined were the preserved remnants of 30-carbon (C30) sterols — a rare form of steroid that they showed was likely derived from ancient sea sponges.

The steranes were found in rocks that were very old and formed during the Ediacaran Period — which spans from roughly 541 million to about 635 million years ago. This period took place just before the Cambrian, when the Earth experienced a sudden and global explosion of complex multicellular life. The team’s discovery suggested that ancient sponges appeared much earlier than most multicellular life, and were possibly one of Earth’s first animals.

However, soon after these findings were released, alternative hypotheses swirled to explain the C30 steranes’ origins, including that the chemicals could have been generated by other groups of organisms or by nonliving geological processes.

The team says the new study reinforces their earlier hypothesis that ancient sponges left behind this special chemical record, as they have identified a new chemical fossil in the same Precambrian rocks that is almost certainly biological in origin.

Building evidence

Just as in their previous work, the researchers looked for chemical fossils in rocks that date back to the Ediacaran Period. They acquired samples from drill cores and outcrops in Oman, western India, and Siberia, and analyzed the rocks for signatures of steranes, the geologically stable form of sterols found in all eukaryotes (plants, animals, and any organism with a nucleus and membrane-bound organelles).

“You’re not a eukaryote if you don’t have sterols or comparable membrane lipids,” Summons says.

A sterol’s core structure consists of four fused carbon rings. Additional carbon side chains and chemical add-ons can attach to and extend a sterol’s structure, depending on what an organism’s particular genes can produce. In humans, for instance, the sterol cholesterol contains 27 carbon atoms, while the sterols in plants generally have 29 carbon atoms.

“It’s very unusual to find a sterol with 30 carbons,” Shawar says.

The chemical fossil the researchers identified in 2009 was a 30-carbon sterol. What’s more, the team determined that the compound could be synthesized thanks to a distinctive enzyme, which is encoded by a gene common to demosponges.

In their new study, the team focused on the chemistry of these compounds and realized the same sponge-derived gene could produce an even rarer sterol, with 31 carbon atoms (C31). When they analyzed their rock samples for C31 steranes, they found them in surprising abundance, along with the aforementioned C30 steranes.

“These special steranes were there all along,” Shawar says. “It took asking the right questions to seek them out and to really understand their meaning and from where they come.”

The researchers also obtained samples of modern-day demosponges and analyzed them for C31 sterols. They found that, indeed, the sterols — biological precursors of the C31 steranes found in rocks — are present in some species of contemporary demosponges. Going a step further, they chemically synthesized eight different C31 sterols in the lab as reference standards to verify their chemical structures. Then, they processed the molecules in ways that simulate how the sterols would change when deposited, buried, and pressurized over hundreds of millions of years. They found that the products of only two such sterols were an exact match with the C31 steranes that they found in ancient rock samples. The presence of two and the absence of the other six demonstrate that these compounds were not produced by a random nonbiological process.

The findings, reinforced by multiple lines of inquiry, strongly support the idea that the steranes that were found in ancient rocks were indeed produced by living organisms, rather than through geological processes. What’s more, those organisms were likely the ancestors of demosponges, which to this day have retained the ability to produce the same series of compounds.

“It’s a combination of what’s in the rock, what’s in the sponge, and what you can make in a chemistry laboratory,” Summons says. “You’ve got three supportive, mutually agreeing lines of evidence, pointing to these sponges being among the earliest animals on Earth.”

“In this study we show how to authenticate a biomarker, verifying that a signal truly comes from life rather than contamination or non-biological chemistry,” Shawar adds.

Now that the team has shown C30 and C31 sterols are reliable signals of ancient sponges, they plan to look for the chemical fossils in ancient rocks from other regions of the world. They can only tell from the rocks they’ve sampled so far that the sediments, and the sponges, formed some time during the Ediacaran Period. With more samples, they will have a chance to narrow in on when some of the first animals took form.

This research was supported, in part, by the MIT Crosby Fund, the Distinguished Postdoctoral Fellowship program, the Simons Foundation Collaboration on the Origins of Life, and the NASA Exobiology Program. 

How the brain splits up vision without you even noticing

Fri, 09/26/2025 - 3:50pm

The brain divides vision between its two hemispheres — what’s on your left is processed by your right hemisphere, and vice versa — but your experience with every bike or bird that you see zipping by is seamless. A new study by neuroscientists at The Picower Institute for Learning and Memory at MIT reveals how the brain handles the transition.

“It’s surprising to some people to hear that there’s some independence between the hemispheres, because that doesn’t really correspond to how we perceive reality,” says Earl K. Miller, Picower Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “In our consciousness, everything seems to be unified.”

There are advantages to separately processing vision on either side of the brain, including the ability to keep track of more things at once, Miller and other researchers have found, but neuroscientists have been eager to fully understand how perception ultimately appears so unified.

Led by Picower Fellow Matthew Broschard and Research Scientist Jefferson Roy, the research team measured neural activity in the brains of animals as they tracked objects crossing their field of view. The results reveal that different frequencies of brain waves encoded and then transferred information from one hemisphere to the other in advance of the crossing, and then held on to the object representation in both hemispheres until after the crossing was complete. The process is analogous to how relay racers hand off a baton, how a child swings from one monkey bar to the next, and how cellphone towers hand off a call from one to the next as a train passenger travels through their area. In all cases, both towers or hands actively hold what’s being transferred until the handoff is confirmed.

Witnessing the handoff

To conduct the study, published Sept. 19 in the Journal of Neuroscience, the researchers measured both the electrical spiking of individual neurons and the various frequencies of brain waves that emerge from the coordinated activity of many neurons. They studied the dorsal and ventrolateral prefrontal cortex in both hemispheres, brain areas associated with executive brain functions.

The power fluctuations of the wave frequencies in each hemisphere told the researchers a clear story about how the subjects’ brains transferred information from the “sending” to the “receiving” hemisphere whenever a target object crossed the middle of their field of view. In the experiments, the target was accompanied by a distractor object on the opposite side of the screen to confirm that the subjects were consciously paying attention to the target object’s motion, and not just indiscriminately glancing at whatever happened to pop up on to the screen.

The highest-frequency “gamma” waves, which encode sensory information, peaked in both hemispheres when the subjects first looked at the screen and again when the two objects appeared. When a color change signaled which object was the target to track, the gamma increase was only evident in the “sending” hemisphere (on the opposite side as the target object), as expected. Meanwhile, the power of somewhat lower-frequency “beta” waves, which regulate when gamma waves are active, varied inversely with the gamma waves. These sensory encoding dynamics were stronger in the ventrolateral locations compared to the dorsolateral ones.

Meanwhile, two distinct bands of lower-frequency waves showed greater power in the dorsolateral locations at key moments related to achieving the handoff. About a quarter of a second before a target object crossed the middle of the field of view, “alpha” waves ramped up in both hemispheres and then peaked just after the object crossed. Meanwhile, “theta” band waves peaked after the crossing was complete, only in the “receiving” hemisphere (opposite from the target’s new position).

Accompanying the pattern of wave peaks, neuron spiking data showed how the brain’s representation of the target’s location traveled. Using decoder software, which interprets what information the spikes represent, the researchers could see the target representation emerge in the sending hemisphere’s ventrolateral location when it was first cued by the color change. Then they could see that as the target neared the middle of the field of view, the receiving hemisphere joined the sending hemisphere in representing the object, so that they both encoded the information during the transfer.

Doing the wave

Taken together, the results showed that after the sending hemisphere initially encoded the target with a ventrolateral interplay of beta and gamma waves, a dorsolateral ramp up of alpha waves caused the receiving hemisphere to anticipate the handoff by mirroring the sending hemisphere’s encoding of the target information. Alpha peaked just after the target crossed the middle of the field of view, and when the handoff was complete, theta peaked in the receiving hemisphere as if to say, “I got it.”

And in trials where the target never crossed the middle of the field of view, these handoff dynamics were not apparent in the measurements.

The study shows that the brain is not simply tracking objects in one hemisphere and then just picking them up anew when they enter the field of view of the other hemisphere.

“These results suggest there are active mechanisms that transfer information between cerebral hemispheres,” the authors wrote. “The brain seems to anticipate the transfer and acknowledge its completion.”

But they also note, based on other studies, that the system of interhemispheric coordination can sometimes appear to break down in certain neurological conditions including schizophrenia, autism, depression, dyslexia, and multiple sclerosis. The new study may lend insight into the specific dynamics needed for it to succeed.

In addition to Broschard, Roy, and Miller, the paper’s other authors are Scott Brincat and Meredith Mahnke.

Funding for the study came from the Office of Naval Research, the National Eye Institute of the National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.

An adaptable evaluation of justice and interest groups

Fri, 09/26/2025 - 12:00am

In 2024, an association of female senior citizens in Switzerland won a case at the European Court of Human Rights. Their country, the women contended, needed to do more to protect them from climate change, since heat waves can make the elderly particularly vulnerable. The court ruled in favor of the group, saying that states belonging to the Council of Europe have a “positive obligation” to protect citizens from “serious adverse effects of climate change on lives, health, well-being, and quality of life.”

The exact policy implications of such rulings can be hard to assess. But there are still subtle civic implications related to the ruling that bear consideration.

For one thing, although the case was brought by a particular special-interest association, its impact could benefit everyone in society. Yet the people in the group had not always belonged to it and are not wholly defined by being part of it. In a sense, while the senior-citizen association brought the case as a minority group of sorts, being a senior citizen is not the sole identity marker of the people in it.

These kinds of situations underline the complexity of interest-group dynamics as they engage with legal and political systems. Much public discourse on particularistic groups focuses on them as seemingly fixed entities with clear definitions, but being a member of a minority group is not a static thing.

“What I want to insist on is that it’s not like an absolute property. It’s a dynamic,” says MIT Professor Bruno Perreau. “It is both a complex situation and a mobile situation. You can be a member of a minority group vis-à-vis one category and not another.”

Now Perreau explores these dynamics in a book, “Spheres of Injustice,” published this year by the MIT Press. Perreau is the Cynthia L. Reed Professor of French Studies and Language in MIT’s Literature program. The French-language edition of the book was published in 2023.

Around the world, Perreau observes, much of the political contestation over interest-group politics and policies to protect minorities arrives at a similar tension point: Policies or legal rulings are sometimes crafted to redress problems, but when political conditions shift, those same policies can be discarded with claims that they themselves are unfair. In many places, this dynamic has become familiar through the contestation of policies regarding ethnic identity, gender, sexual orientation, and more.

But this is not the only paradigm of minority group politics. One aim of Perreau’s book is to add breadth to the subject, grounded in the empirical realities people experience.

After all, when it comes to being regarded as a member of a minority group, “in a given situation, some people will claim this label for themselves, whereas others will reject it,” Perreau writes. “Some consider this piece of their identity to be fundamental; others regard it as secondary. … The work of defining it is the very locus of its power.”

“Spheres of Injustice” both lays out that complexity and seeks to find ways to rethink group-oriented politics as part of an expansion of rights generally. The book arises partly out of previous work Perreau has published, often concerning France. It also developed partly in response to Perreau thinking about how rights might evolve in a time of climate change. But it arrived at its exact form as a rethinking of “Spheres of Justice,” a prominent 1980s text by political philosopher Michael Walzer.

Instead of there being a single mechanism through which justice could be applied throughout society, Walzer contended, there are many spheres of life, and the meaning of justice depends on where it is being applied.

“Because of the complexities of social relations, inequalities are impossible to fully erase,” Perreau says. “Even in the act of trying to resist an injustice, we may create other forms of injustice. Inequality is unavoidable, but his [Walzer’s] goal is to reduce injustice to the minimum, in the form of little inequalities that do not matter that much.”

Walzer’s work, however, never grapples with the kinds of political dynamics in which minority groups try to establish rights. To be clear, Perreau notes, in some cases the categorization as a minority is foisted upon people, and in other cases, it is developed by the group itself. In either case, he thinks we should consider how complex the formation and activities of the group may be.

As another example, consider disability rights: while they are a contested issue in some countries and ignored in others, they also involve fluidity in terms of who advocates for them and who benefits from them. Imagine, Perreau says, you break a leg. Temporarily, he says, “you experience a little bit of what people with a permanent disability experience.” If you lobby for, say, better school building access or better transit access, you could be helping kids, the elderly, families with kids, and more — including people and groups not styling themselves as part of a disability-rights movement.

“One goal of the book is to enhance awareness about the virtuous circle that can emerge from this kind of minority politics,” Perreau says. “It’s often regarded by many privileged people as a protection that removes something from them. But that’s not the case.”

Indeed, the politics Perreau envisions in “Spheres of Injustice” have an alternate framework, in which developing rights for some better protects others, to the point where minority rights translate into universal rights. That is not, again, meant to minimize the experience of core members of a group that has been discriminated against, but to encourage thinking about how solidifying rights for a particular group overlaps with the greater expansion of rights generally.

“I’m walking a fine line between different perspectives on what it means to belong,” Perreau says. “But this is indispensable today.”

Indeed, due to the senior citizens in Switzerland, he notes, “There will be better rights in Europe. Politics is not just a matter of diplomacy and majority decision-making. Sharing a complex world means drawing on the minority parts of our lives because it is these parts that most fundamentally connect us to others, intentionally or unintentionally. Thinking in these terms today is an essential civic virtue.”
