MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Updated: 32 min 34 sec ago

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Transforming deep-space signals into cathedral sound

3 hours 5 min ago

A new immersive sound installation at Oulu Cathedral, Finland, brings the research of MIT astrophysicist and associate professor of physics Kiyoshi Masui into a striking sensory form, transforming more than 4,000 cosmic signals into spatial audio.

With its grand opening on April 4, “The Logos” project invites visitors to experience deep-space phenomena not as distant abstractions, but as something immediate and resonant. The work is led by artist and creative technologist Andrew Melchior in collaboration with Masui, philosopher Timothy Morton, and cathedral dean Satu Saarinen. Together, they treat the cathedral, originally completed in 1777 and rebuilt in 1832 after a fire, not just as a setting but as part of the instrument itself. Its stone surfaces and reverberant acoustics give physical presence to signals that have traveled from distant galaxies.

At the heart of the installation are data gathered by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) radio telescope, which detects fast radio bursts (FRBs). FRBs are immensely energetic flashes lasting only milliseconds and originating in distant galaxies across the observable universe. The Logos represents one of the most extensive artistic sonifications of FRB data to date. Each day at noon, the cathedral is filled with a one-hour procedural composition derived from these bursts. Some bursts are singular events, never repeating, while others pulse again and again from unknown sources. These patterns remain one of astrophysics’ most compelling mysteries.

“The fast flashes will echo as snare-like beats bouncing through the cathedral,” says Masui. “The sweeping dispersion of the signal — where different radio frequencies arrive at slightly different times — creates harmonies between high and low tones. It should feel rich and layered, while also revealing something real about how these signals travel across billions of years of cosmic space before reaching Earth.”
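The frequency-dependent arrival times Masui describes follow a well-known dispersion relation: a burst's delay grows with the inverse square of radio frequency, scaled by the dispersion measure (DM) of the plasma the signal crossed. A minimal sketch of that physics, with an illustrative DM value and channel list rather than the installation's actual sound mapping:

```python
# Sketch of how a fast radio burst's dispersion could drive sound:
# lower radio frequencies arrive later, so each frequency channel
# becomes a tone with a proportional onset delay. The dispersion
# constant (~4.149 ms GHz^2 per pc cm^-3) is standard; the DM and
# the channel list below are illustrative only.

K_DISP_MS = 4.149  # ms, for frequencies in GHz and DM in pc cm^-3

def channel_delays_ms(dm, freqs_ghz, ref_ghz=0.8):
    """Arrival delay of each frequency channel relative to ref_ghz."""
    return [K_DISP_MS * dm * (f ** -2 - ref_ghz ** -2) for f in freqs_ghz]

# CHIME observes roughly 400-800 MHz; a burst with DM = 500 pc cm^-3
# is smeared across more than a second over that band.
freqs = [0.8, 0.7, 0.6, 0.5, 0.4]         # GHz, high to low
delays = channel_delays_ms(500.0, freqs)  # ms; lower channels lag
print([round(d) for d in delays])
```

Mapping those staggered onsets to pitches is what would produce the "harmonies between high and low tones" described in the quote above.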

By converting FRB data into a shared listening experience, the collaboration suggests a different way of understanding the universe: not only through analysis, but through attention.

Running through April 2027 to mark the cathedral’s 250th anniversary, The Logos will feature as part of Oulu2026 European Capital of Culture and the Lumo Art and Tech Festival. 

A month in Panama: Rethinking what real estate development can be

3 hours 45 min ago

Cherry Tang, a master of science in real estate development student at the MIT Center for Real Estate, recently participated in an experiential learning opportunity in Panama working with Conservatorio, a development firm based in Casco Viejo. What began as a modeling exercise quickly became a deeper exploration of how development, community, and environment intersect, shaped as much by people and culture as by the work itself.

“I went in expecting to build a financial model. I didn’t expect that the experience would fundamentally reshape how I think about development,” Tang reflects.

The project centered on Santa Catalina, a remote surf town on Panama’s Pacific coast. The development comprises approximately 140 residential units across condos, villas, and homes, along with vacant lots, four retail spaces, a surf school with a stadium, and a restaurant with a pool — all envisioned as the town’s first true center.

At first glance, Tang says, Santa Catalina didn’t resemble a typical “prime” development market. It had limited infrastructure, low density, and no established core.

“What it does have is something powerful: world-class surf and access to Coiba National Park, a premier diving destination,” Tang says. “Here, the ocean becomes the anchor tenant.”

The project is designed as an open, walkable master-planned community that integrates seamlessly with the existing town. Anchored by surfing and diving, it introduces a diverse product mix and a 600-meter linear park, positioning it as the future heart of Santa Catalina and a differentiated alternative to both local developments and traditional resort-style communities.

Tang saw this as a different vision of place-making. “It wasn’t about building a resort. It was about building a center of gravity for a community that has never really had one.”

Tang’s primary role was to build the project’s financial model from the ground up. The capital structure, with land contributed as equity and sales deposits used to fund construction, required a different way of thinking than the institutional frameworks she had used in previous roles in Toronto and Boston.

“It was more than a technical exercise,” she explains. “It reinforced how financial, physical, and strategic decisions are deeply interconnected, and how thoughtful structuring can unlock projects that might otherwise not be feasible.”
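As a rough illustration of why that capital structure matters, a toy sources-and-uses calculation shows how land equity and presale deposits shrink the cash a developer must raise. All figures below are hypothetical, not the Santa Catalina project's actual numbers:

```python
# Hypothetical sources-and-uses sketch: land contributed as equity
# and buyer deposits funding construction reduce the up-front cash
# (and debt) a project needs. Every number here is invented.

construction_cost = 30_000_000  # total development cost, excl. land
land_value        = 8_000_000   # contributed by landowner as equity
presales          = 20_000_000  # units sold before completion
deposit_rate      = 0.30        # share of price collected as deposits

deposits = presales * deposit_rate
# Cash still needed after deposits are applied to construction.
cash_gap = construction_cost - deposits
print(f"deposits fund ${deposits:,.0f}; remaining to finance ${cash_gap:,.0f}")

# Land equity raises the sponsor's stake without a cash outlay,
# improving the loan-to-cost ratio lenders underwrite against.
loan_to_cost = cash_gap / (construction_cost + land_value)
print(f"loan-to-cost if the gap is debt: {loan_to_cost:.0%}")
```

In this toy case, deposits and contributed land turn a project that would otherwise need full debt financing into one with a far more conventional loan-to-cost profile.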

Working directly with KC Hardin, founder and CEO of Conservatorio, and the broader leadership team, Tang gained firsthand exposure to real-time development decision-making. She presented her financial model to leadership and prospective investors, and her assumptions helped shape conversations around phasing, design, and construction.

“Development is a feedback loop between underwriting and the built environment,” Tang says.

Throughout the month, Tang and her colleagues met with a range of people shaping the project’s future. They spent time with local developers and brokers, learning about infrastructure improvements and ongoing real estate activity in the region. 

Tang described meeting one family with long-standing ties to the area as one of the more memorable moments.

“Their coastline conservation work in Panama is deeply inspiring,” she says.

They also met with scientists from the Smithsonian Tropical Research Institute, trekking through mangroves and learning about coastal ecosystems and the long-term environmental implications of development.

“It was a vivid reminder that development decisions don’t exist in isolation,” says Tang.

Outside of work, Panama had its way of leaving an impression. Sailing through the Panama Canal ... watching cargo ships pass through landscapes filled with monkeys and sloths ... living in Casco Viejo — each added another layer to the experience for Tang. The neighborhood itself served as a real-life case study in thoughtful, community-oriented development.

“What stayed with me most was Conservatorio’s approach to revitalization, not through displacement, but through deep engagement, trust-building, and creating pathways for local residents to be part of the area’s transformation.”

That same spirit was reflected in everyday moments, such as co-workers who went out of their way to make interns feel welcome.

“Strangers greeted us like neighbors,” says Tang. “The level of warmth and hospitality defined the experience as much as the work itself.”

By the end of the month, the experience had left her with more than technical skills — it had shifted her perspective.

“I began to see development less as a formula and more as a system,” she explains. “One that sits at the intersection of finance, design, environment, and community.”

Her takeaway is that value can be created in unconventional ways, and leadership in real estate is grounded in trust, curiosity, and a deep respect for place.

Tang arrived in Panama to build a model. She left with a deeper understanding of what it means to build thoughtfully — as a developer, and as a steward of place.

The MIT-IBM Computing Research Lab launches to shape the future of AI and quantum computing

11 hours 35 min ago

The following is a joint announcement by the MIT Schwarzman College of Computing and IBM.

IBM and MIT today announced the launch of the MIT-IBM Computing Research Lab, advancing their long-standing collaboration to shape the next era of computing. The new lab expands its scope to include quantum computing, alongside foundational artificial intelligence research, with the goal of unlocking new computational approaches that go beyond the limits of today’s classical systems.

The MIT-IBM Computing Research Lab builds on a distinguished history of scientific excellence at the intersection of research and academia. Evolving from the MIT-IBM Watson AI Lab, which originated in 2017 on MIT’s campus, the new lab reflects a transformed technology landscape — one in which AI has entered mainstream deployment, and quantum computing is rapidly advancing toward practical impact. Together, MIT and IBM aim to help lead research in AI and quantum and to redefine mathematical foundations across both domains.

“We expect the MIT-IBM Computing Research Lab to emerge as one of the world’s premier academic and industrial hubs accelerating the future of computing,” says Jay Gambetta, director of IBM Research and IBM Fellow, and IBM chair of the MIT-IBM Computing Research Lab. “Together, the brightest minds at MIT and IBM will rethink how models, algorithms, and systems are designed for an era that will be defined by the sum of what’s possible when AI and quantum computing come together.”

“For a decade, the collaboration between MIT and IBM has produced leading-edge research and innovation, and provided mentorship and supported the professional growth of researchers both at MIT and IBM,” says Anantha Chandrakasan, MIT’s provost, who, as then-dean of the School of Engineering, spearheaded the creation of the MIT-IBM Watson AI Lab and will continue as MIT chair of the lab. “The incredible technical achievements set the bar high for our work together over the next 10 years. I look forward to another decade of impact.”

Addressing the next frontiers in computation

The MIT-IBM Computing Research Lab will serve as a focal point for joint research between MIT and IBM in AI, algorithms, and quantum computing, as well as the integration of these technologies into hybrid computing systems. The lab is designed to accelerate progress toward powerful new computational approaches that take advantage of rapid advances in AI and quantum-centric supercomputing, including those that combine maturing quantum hardware with classical systems and advanced AI methods.

This research initiative will include improving capabilities and integrating AI with traditional computing, alongside pursuing advances in small, efficient, modular language model architectures, novel AI computing paradigms, and enterprise-focused AI systems designed for deployment in real-world environments, where reliability, transparency, and trust are essential.

In parallel, the lab will rethink the mathematical and algorithmic foundations that underpin the next era of computing by accelerating the development of novel quantum algorithms for complex problems, with impacts in areas such as materials science, chemistry, and biology.

Additionally, the lab will investigate mathematical and algorithmic foundations of machine learning, optimization, Hamiltonian simulations, and partial differential equations, which are used to approximate the behaviors of dynamical systems that currently stump classical systems beyond limited scales and accuracy. Innovations from the lab could have wide implications for global industries, from more accurate weather and air turbulence prediction to better forecasts of financial market performance. Similarly, with improved optimization approaches, research from the lab could help lower risks in areas like finance, predict protein structures for more targeted medicine, and streamline global supply chains.

With its focus on AI, algorithms, and quantum, the MIT-IBM Computing Research Lab will complement and enhance the work of two of MIT’s strategic initiatives, the MIT Generative AI Impact Consortium and the MIT Quantum Initiative. MIT President Sally Kornbluth launched these strategic initiatives to broaden and deepen MIT’s impact in developing solutions to serious global challenges. The MIT-IBM Computing Research Lab will also leverage IBM’s longtime leadership and expertise in quantum computing. As part of its ambitious roadmap, IBM has laid out a clear path to delivering the world’s first fault-tolerant quantum computer by 2029, and is working across industries to drive value from quantum-centric supercomputing, tightly integrating quantum computers with high-performance computing and AI accelerators to solve the world’s toughest problems.

Deep integration with scientific domains

The MIT-IBM Computing Research Lab will also continue to serve as a foundation for training the next generation of computational scientists and innovators. It will do so by engaging faculty and students across MIT departments, enabling new computational approaches to accelerate discoveries in the physical and life sciences.

The lab will continue to be co-directed by Aude Oliva, senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, and David Cox, vice president of AI Foundations at IBM Research. MIT and IBM have appointed leads for each of the lab’s three focus areas — AI, algorithms, and quantum. Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science (EECS), and Kenney Ng, principal research scientist at IBM Research and the MIT-IBM science program manager, will co-lead AI; Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in EECS, and Vasileios Kalantzis, IBM Research senior research scientist, will co-lead algorithms; and Aram Harrow, professor of physics, and Hanhee Paik, IBM director of Quantum Algorithm Centers, will co-lead quantum.

“The MIT-IBM Computing Research Lab reflects an important expansion of the collaboration between MIT and IBM and the increasing connections across AI, algorithms, and quantum. This deepened focus also underscores a strong alignment with the MIT Schwarzman College of Computing’s mission to advance the forefront of computing and its integration across disciplines,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and MIT co-chair of the lab. “I’m excited about what this next chapter will enable in these three areas, and their impact broadly.”

Building on nearly a decade of collaboration

The MIT-IBM Watson AI Lab helped pioneer a model for academic-industry research collaboration, aligning long-term scientific inquiry with real-world impact. Since its inception, the lab has funded over 210 research projects involving over 150 MIT faculty members and over 200 IBM researchers. Collectively, the projects have led to over 1,500 peer-reviewed articles. The lab also helped shape the career growth of a number of MIT students and junior researchers, funding more than 500 students and postdocs.

“The true measure of this lab is not just innovation, but transformation of a field. Hundreds of students have contributed to thousands of publications in top conferences and journals, demonstrating their capabilities to address meaningful problems,” says Oliva. “The MIT-IBM Computing Research Lab builds on an extraordinary legacy of impact to advance a trusted collaboration that will redefine the future of AI and quantum computing in a way never seen before.”

“By coupling academic rigor with industrial scale, the lab aims to define the computational foundations that will power the next generation of AI, quantum, and scientific breakthroughs,” says Cox. “By bringing together advances in AI, algorithms, and quantum computing under one integrated research effort, we’re creating the conditions to rethink the mathematical and computational foundations of science and engineering.”

The MIT-IBM Computing Research Lab will capitalize on this foundation, expanding both the scientific scope and the ecosystem of collaborators across the Cambridge-Boston region and beyond.

MIT engineers’ virtual violin produces realistic sounds

12 hours 35 min ago

There is no question that violin-making is an art form. It requires a musician’s ear, a craftsperson’s skill, and an historian’s appreciation of lessons learned over time. Making a violin also takes trust: Violin makers, or luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound.

But a new tool developed by MIT engineers could help luthiers play around with a violin’s design and tweak its sound even before a single part is carved.

In a study appearing today in the journal npj Acoustics, the MIT team reports on a new “computational violin” — a computer simulation that captures the detailed physics of the instrument and realistically produces the sound of a violin when its strings are plucked.

While there are software programs and plug-ins that enable users to play around with virtual violins, their sounds are typically the result of sampling and averaging over thousands of notes played by actual violins.

In contrast, the new computational violin takes a physics-based approach: It produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.

As a demonstration, the researchers applied the computational violin to play two short excerpts: one from Bach’s “Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.

The computational violin currently simulates the sound of plucked strings — a type of playing that musicians know as “pizzicato.” Violin bowing, the researchers say, is a much more complicated interaction to model. However, the computational violin represents the first physics-based foundation of a plucked violin sound that could one day be paired with a model of bowing to produce realistic, bowed violin music.

For now, the team says the new virtual violin could be used in the initial stages of violin design. Luthiers can tweak certain parameters such as a violin’s wood type or the thickness of its body, and then listen to the sound that the instrument would make in response.

“These days, people try to improve designs little by little by building a violin, comparing the sound, then making a change to the next instrument,” says Yuming Liu, senior research scientist at MIT. “It’s very slow and expensive. Now they can make a change virtually and see what the sound would be.”

“We’re not saying that we can reproduce the artisan’s magic,” adds Nicholas Makris, professor of mechanical engineering at MIT. “We’re just trying to understand the physics of violin sound, and perhaps help luthiers in the design process.”

Makris and Liu’s MIT co-authors include Arun Krishnadas PhD ’23 and former postdoc Bryce Campbell, along with Roman Barnas of the North Bennet Street School.

Sound matrix

The quality of a violin’s sound is determined by its dimensions and design. The instrument is made from thoughtfully crafted parts and materials that all work to generate and amplify sound. In recent years, scientists have sought to understand what artisans have intuited for centuries, in terms of what specific parameters shape a violin’s sound.

In one early effort in 2006, scientists, as part of the Strad3D project, put a rare Stradivarius violin through a CT scanner. The violin was crafted in 1715 by the master violinmaker Antonio Stradivari, during what is considered the “Golden Age” of violin making. To better understand the violin’s anatomy and its relation to sound, the scientists scanned the instrument and produced 600 “slices,” or views, of the violin.

The CT scans are available online for people to view and use as data for their own experiments. For their study, Makris and his colleagues first imported the CT scans into a solid modeling software program to generate a detailed three-dimensional model of the violin. They then ran a finite element simulation, essentially dividing the violin into millions of tiny individual cubes, or “elements.”

For each cube, they noted its material type, such as whether a cube from the violin’s back plate is made from maple or spruce, or whether a string is made from steel or natural fibers. They then applied physics-based equations of stress and motion to predict how each material element would move in relation to every other element across the instrument.

They also carried out a similar process for the air surrounding the violin, dividing up a roughly cubic-meter volume of air and applying acoustic wave equations to predict how each tiny parcel of air would move and contribute to generating sound.

“The entire thing is a matrix of millions of individual elements,” explains Krishnadas. “And ultimately, you see this whole three-dimensional being, which is the violin and the air all connected and interacting with each other.”

A plucky model

The team then simulated how the new computational violin would sound when plucked. When a violinist plucks a string, they pull the string sideways and let it go, causing the string to vibrate. These vibrations travel across the instrument and inside it; the air’s vibrations are amplified as they travel out of the violin and into the surroundings, where a listener hears the vibrations as sound.

For their purposes, the engineers simulated a simple string pluck by directing one of the virtual violin’s strings to stretch out and then rebound. The simulation computed all the resulting motions and vibrations of the millions of elements in the violin, and the sound that the pluck would produce.
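The full simulation couples elastic and acoustic equations over millions of 3D elements, far beyond a short sketch, but the core step of dividing a domain into elements and advancing equations of motion can be shown with a 1D plucked string. This toy finite-difference model (illustrative parameters, not the team's code) releases a triangular pluck and lets it vibrate between fixed ends:

```python
# Minimal 1D analogue of the pluck simulation: discretize the string
# into segments, set a triangular "pulled sideways" initial shape,
# and step the wave equation forward in time. All parameters are
# illustrative and normalized.

N = 200              # number of string segments
c = 1.0              # wave speed (normalized)
dx = 1.0 / N
dt = 0.5 * dx / c    # time step within the CFL stability limit
r2 = (c * dt / dx) ** 2

# Triangular initial shape: string pulled sideways at the pluck
# point and released from rest, as in a pizzicato pluck.
pluck = N // 4
u = [min(i / pluck, (N - i) / (N - pluck)) for i in range(N + 1)]
u_prev = u[:]        # released from rest
u[0] = u[N] = 0.0    # fixed ends (nut and bridge)

midpoint = []        # track the string's midpoint displacement
for _ in range(2000):
    u_next = [0.0] * (N + 1)
    for i in range(1, N):  # leapfrog update of the wave equation
        u_next[i] = (2 * u[i] - u_prev[i]
                     + r2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next
    midpoint.append(u[N // 2])

print(round(max(midpoint), 2), round(min(midpoint), 2))
```

The midpoint swings between positive and negative displacement as the pluck's traveling waves reflect off the fixed ends; in the real model, those vibrations would then excite the body and the surrounding air.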

For notes that require pressing down on a violin’s fingerboard, they simulated the same plucking, and in addition, included a condition in which the string is held fixed in the section of the fingerboard where a violinist’s finger would press down.

The researchers carried out this computational process to virtually pluck out the notes in several measures of “Daisy Bell” and Bach’s “Fugue in G Minor.”

“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”

As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics. The researchers say that violin makers could use the model to test how a violin might sound when certain dimensions or properties are changed. For instance, when the researchers varied the thickness of the virtual violin’s back plate or changed its wood type, they could hear clear differences in the resulting sounds.

“You can tweak the model, to hear the effect on the sound,” Makris says. “Since everything obeys the laws of physics, including a violin and the music it makes, this approach can add to our appreciation of what makes a violin sound the way it does. But ultimately, we get most of our inspiration from the artisans.”

This work was supported, in part, by an MIT Bose Research Fellowship.

Enabling privacy-preserving AI training on everyday devices

17 hours 35 min ago

A new method developed by MIT researchers can accelerate a privacy-preserving artificial intelligence training method by about 81 percent. This advance could enable a wider array of resource-constrained edge devices, like sensors and smartwatches, to deploy more accurate AI models while keeping user data secure.

The MIT researchers boosted the efficiency of a technique known as federated learning, which involves a network of connected devices that work together to train a shared AI model.

In federated learning, the model is broadcast from a central server to wireless devices. Each device trains the model using its local data and then transfers model updates back to the server. Data are kept secure because they remain on each device.
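One round of that standard loop can be sketched in a few lines, with the "model" reduced to a single weight and local training to a toy gradient step (purely illustrative; not any production federated learning API):

```python
# Toy sketch of one federated learning round: the server broadcasts
# the model, each device trains on its own private data, and only
# model updates (never the data) travel back to be averaged.

def local_update(weights, local_data, lr=0.1):
    """One toy gradient step on a device's private data (least squares)."""
    new = weights[:]
    for x, y in local_data:        # this data never leaves the device
        err = new[0] * x - y
        new[0] -= lr * err * x
    return new

def federated_round(global_weights, devices):
    updates = [local_update(global_weights, data) for data in devices]
    # The server averages the devices' updates into the shared model.
    return [sum(w[i] for w in updates) / len(updates)
            for i in range(len(global_weights))]

# Three devices, each holding private samples of y = 2x.
devices = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = [0.0]
for _ in range(50):
    w = federated_round(w, devices)
print(round(w[0], 2))  # → 2.0
```

Even this toy version shows why laggards hurt: the averaging step cannot complete until every device has reported back.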

But not all devices in the network have enough capacity, computational capability, and connectivity to store, train, and transfer the model back and forth with the server in a timely manner. This causes delays that worsen training performance.

The MIT researchers developed a technique to overcome these memory constraints and communication bottlenecks. Their method is designed to handle a heterogeneous network of wireless devices with varied limitations.

This new approach could make it more feasible for AI models to be used in high-stakes applications with strict security and privacy standards, like health care and finance.

“This work is about bringing AI to small devices where it is not currently possible to run these kinds of powerful models. We carry these devices around with us in our daily lives. We need AI to be able to run on these devices, not just on giant servers and GPUs, and this work is an important step toward enabling that,” says Irene Tenison, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

Her co-authors include Anna Murphy ’25, a machine-learning engineer at Lincoln Laboratory; Charles Beauville, a visiting student from École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and a machine-learning engineer at Flower Labs; and senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the IEEE International Joint Conference on Neural Networks. 

Reducing lag time

Many federated learning approaches assume all devices in the network have enough memory to train the full AI model, and stable connectivity to transmit updates back to the server quickly.

But these assumptions fall short with a network of heterogeneous devices, like smartwatches, wireless sensors, and mobile phones. These edge devices have limited memory and computational power, and often face intermittent network connectivity.

The central server usually waits to receive model updates from all devices, then averages them to complete the training round. This process repeats until training is complete.

“This lag time can slow down the training procedure or even cause it to fail,” Tenison says.

To overcome these limitations, the MIT researchers developed a new framework called FTTE (Federated Tiny Training Engine) that reduces the memory and communication overhead needed by each mobile device.

Their framework involves three main innovations.

First, rather than broadcasting the entire model to all devices, FTTE sends a smaller subset of model parameters instead, reducing the memory requirement for each device. Parameters are internal variables the model adjusts during training.

FTTE uses a special search procedure to identify parameters that will maximize the model’s accuracy while staying within a certain memory budget. That limit is set based on the most memory-constrained device.
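In code, this budgeted selection step might look like the following sketch. The function name, the per-group granularity, and the greedy ranking rule are illustrative assumptions, not FTTE's actual search procedure:

```python
# Hypothetical sketch of FTTE-style partial-model selection: rank parameter
# groups by an importance score and keep as many as fit within the memory
# budget of the most constrained device. (Names and scoring are assumed.)

def select_parameter_subset(param_groups, importance, memory_budget_bytes):
    """Greedily pick the most important parameter groups that fit the budget.

    param_groups: dict of group name -> size in bytes
    importance: dict of group name -> score (e.g., from a sensitivity search)
    memory_budget_bytes: limit set by the most memory-constrained device
    """
    chosen, used = [], 0
    # Consider the highest-scoring groups first.
    for name in sorted(param_groups, key=lambda n: importance[n], reverse=True):
        if used + param_groups[name] <= memory_budget_bytes:
            chosen.append(name)
            used += param_groups[name]
    return chosen

# Toy model with four parameter groups and an 800-byte device budget.
groups = {"embedding": 400, "layer1": 300, "layer2": 300, "head": 100}
scores = {"embedding": 0.9, "layer1": 0.7, "layer2": 0.4, "head": 0.8}
print(select_parameter_subset(groups, scores, memory_budget_bytes=800))
# -> ['embedding', 'head', 'layer1']
```

Only the chosen groups would then be sent to devices and trained; the rest would stay fixed on the server.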

Second, the server updates the model using an asynchronous approach. Rather than waiting for responses from all devices, the server accumulates incoming updates until it reaches a fixed capacity, then proceeds with the training round.

Third, the server weights updates from each device based on when it received them. In this way, older updates don’t contribute as much to the training process. These outdated data can hold the model back, slowing the training process and reducing accuracy.

“We use this semi-asynchronous approach because we want to involve the least powerful devices in the training process so they can contribute their data to the model, but we don’t want the more powerful devices in the network to stay idle for a long time and waste resources,” Tenison says.
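The second and third ideas together suggest a server-side aggregation step along these lines. This is a minimal sketch; the buffer capacity, the inverse-staleness decay rule, and all names are assumptions, not the paper's exact formulation:

```python
# Sketch of a semi-asynchronous, staleness-weighted aggregation step.
# The buffer capacity, decay rule, and names are illustrative assumptions.

BUFFER_CAPACITY = 3   # proceed with the round once this many updates arrive
DECAY = 0.5           # how strongly older updates are discounted

def aggregate(buffer, current_round):
    """Weighted average of buffered updates; staler updates count for less.

    buffer: list of (update_vector, round_sent) tuples
    """
    agg = [0.0] * len(buffer[0][0])
    total_weight = 0.0
    for update, round_sent in buffer:
        staleness = current_round - round_sent
        weight = 1.0 / (1.0 + DECAY * staleness)  # assumed decay rule
        total_weight += weight
        for i, value in enumerate(update):
            agg[i] += weight * value
    return [v / total_weight for v in agg]

# Two fresh updates from round 5 and one stale update sent back in round 3.
buffer = [([1.0, 2.0], 5), ([3.0, 0.0], 5), ([2.0, 2.0], 3)]
if len(buffer) >= BUFFER_CAPACITY:             # server stops waiting here
    print(aggregate(buffer, current_round=5))  # -> [2.0, 1.2]
```

In this toy run, the two fresh updates carry full weight while the update sent two rounds earlier contributes at half weight, so it pulls the average less.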

Achieving acceleration

The researchers tested their framework in simulations with hundreds of heterogeneous devices and a variety of models and datasets. On average, FTTE enabled the training procedure to reach completion 81 percent faster than standard federated learning approaches.

Their method reduced the on-device memory overhead by 80 percent and the communication payload by 69 percent, while achieving nearly the same accuracy as other techniques.

“Because we want the model to train as fast as possible to save the battery life of these resource-constrained devices, we do have a tradeoff in accuracy. But a small drop in accuracy could be acceptable in some applications, especially since our method performs so much faster,” she says.

FTTE also demonstrated effective scalability and delivered higher performance gains for larger groups of devices.

In addition to these simulations, the researchers tested FTTE on a small network of real devices with varying computational capabilities.

“Not everyone has the latest Apple iPhone. In many developing countries, for instance, users might have less powerful mobile phones. With our technique, we can bring the benefits of federated learning to these settings,” she says.

In the future, the researchers want to study how their method could be used to increase the personalized performance of AI models on each device, rather than focusing on the average performance of the model. They also want to conduct larger experiments on real hardware.

This work was funded, in part, by a Takeda PhD Fellowship.

With a swipe of a magnet, microscopic “magno-bots” perform complex maneuvers

Tue, 04/28/2026 - 11:00am

Under a microscope, a bouquet of lollipop-like structures, each smaller than a grain of sand, waves gently in a petri dish of liquid. Suddenly, they snap together, like the jaws of a Venus flytrap, as a scientist waves a small magnet over the dish. What was previously an assemblage of tiny passive structures has transformed instantly into an active robotic gripper.

The lollipop gripper is one demonstration of a new type of soft magnetic hydrogel developed by engineers at MIT and their collaborators at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the University of Cincinnati. In a study appearing today in the journal Matter, the MIT team reports on a new method to print and fabricate the gel, which can be made into complex, magnetically activated three-dimensional structures.

The new gel could be the basis for soft, microscopic, magnetically responsive robots and materials. Such magno-bots could be used in medicine, for instance to release drugs or take biopsies when directed by an external magnet.

Making objects move with magnets is nothing new, at least at the macroscale. We can, for example, wave a refrigerator magnet over a pile of paper clips that will trail the magnet in response. And at the microscale, scientists have designed a variety of magnetic “micro-swimmers” — components that are smaller than a millimeter and can be directed remotely by a magnet to squeeze through small spaces. For the most part, these designs work by mixing magnetic particles into a printable resin and pulling the entire swimmer in the direction of an external magnet.

In contrast, the MIT team’s new material can be made into even more complex and deformable structures with micron-scale precision. These capabilities could enable a magnetic millibot to move individual features and perform more complex maneuvers.

“We can now make a soft, intricate 3D architecture with components that can move and deform in complex ways within the same microscopic structure,” says study author Carlos Portela, the Robert N. Noyce Career Development Associate Professor of Mechanical Engineering at MIT. “For soft microscopic robotics, or stimuli-responsive matter, that could be a game-changing capability.”

The study’s MIT co-authors include graduate students Rachel Sun and Andrew Chen, along with Yiming Ji and Daryl Yee of EPFL and Eric Stewart of the University of Cincinnati.

In a flash

At MIT, Portela’s group develops new metamaterials — materials engineered with unique, microscopic architectures that give rise to beyond-normal material properties. Portela has fabricated a variety of such metamaterials, including extremely tough and stretchy architectures and designs that can manipulate sound and withstand violent impacts.

Most recently, he’s expanded his research to “programmable” materials, which can be engineered to change their properties in response to stimuli, such as certain chemicals, light, and electric and magnetic fields.

From the team’s perspective, magnetic stimuli stand out from the rest.

“With a magnetically responsive material, we have control at a distance and the response is instantaneous,” says co-lead author Andrew Chen. “We don’t have to wait for a slow chemical reaction or physical process, and we can manipulate the material without touching it.”

For the new study, the team aimed to create a magnetically responsive metamaterial that can be made into structures smaller than a millimeter. Researchers typically fabricate microstructures by using two-photon lithography — a high-resolution 3D printing technique that flashes a laser into a small pool of resin. With repeated flashes, the laser traces a microscopic pattern into the resin, which solidifies into the same pattern, ultimately creating a tiny, three-dimensional structure, layer by layer.

While 3D resin printing produces intricate microstructures, using the same process to print magnetic structures has been a challenge. Researchers have tried to combine the resin with magnetic nanoparticles before printing the mixture. But magnetic particles are essentially bits of metal that inherently scatter light away or agglomerate and sediment unintentionally. Scientists have found that any magnetic particles in the resin can reduce the laser’s power at a given spot and weaken the resulting structure or prevent its printing altogether.

“Directly 3D printing deformable micron-scale structures with a high fraction of magnetic particles is extremely difficult, often involving a tradeoff between magnetic functionality and structural integrity,” says Sun, a co-lead author on the work.

A printed double-dip

The researchers created a new way to fabricate magnetic microstructures by combining 3D resin printing with a double-dip process. They first applied conventional resin printing to create a microstructure using a typical polymer gel, with no added magnetic particles. Then they dipped the printed gel into a solution containing iron ions, which the gel can absorb. Finally, they dipped the iron-soaked structure into a second solution containing hydroxide ions. The iron ions in the gel bond with the hydroxide ions, creating iron-oxide nanoparticles that are inherently magnetic.

With this new process, the team can print intricate structures smaller than a millimeter and add magnetic properties to the structures after printing. What’s more, they can control how magnetic a structure’s individual features are. They found that, by tuning the laser’s power as they print certain features, they can set how cross-linked, or “tight,” the gel is when printed. The tighter the gel, the fewer magnetic particles it can form. In this way, the researchers can determine how magnetic each tiny feature will be.

“This provides unprecedented design freedom to print multifunctional structures and materials at the microscale,” Sun says.

As a demonstration, the team fabricated ball-and-stick structures resembling tiny lollipops. The structures were less than a millimeter in height, with balls that were smaller than a grain of sand. The researchers printed the lollipops out of polymer gel and infused each ball with different amounts of magnetic particles, giving them various degrees of magnetism. Under a microscope, they observed that when they passed an ordinary refrigerator magnet over the structures, the lollipops pulled toward the magnet in various degrees, in a configuration that mimicked gripping fingers.

“You could imagine a magnetic architecture like this could act as a small robot that you could guide through the body with an external magnet, and it could latch onto something, for instance to take a biopsy,” Portela says. “That is a vision that others can take from this work.”

The team also fabricated a magnetically responsive, “bistable” switch. They first printed a small, millimeter-long rectangle of polymer gel and attached four tiny, oar-like magnetic structures to either side. Each oar measured about 8 microns thick — about the size of a red blood cell. When the team applied a magnet on one end of the rectangle, the oars flipped toward the magnet, pulling the rectangle in the same direction and locking it in that position. When the magnet was applied to the other side, the oars flipped again, pulling the rectangle, like a switch, in the opposite direction.

“We think this is a new kind of bistable mechanism that could be used, for instance, in a microfluidic device, as a magnetic valve to open or shut some flow,” Portela says. “For now, we’ve figured out how to fabricate magnetic complex architectures at the microscale and also spatially tune their properties. That opens up a lot of interesting ideas for soft miniature robots going forward.”

This research was supported, in part, by the National Science Foundation and the MathWorks seed grant program.

This work was performed, in part, in the MIT.nano fabrication and characterization facilities.

Robotically assembled building blocks could make construction more efficient and sustainable

Tue, 04/28/2026 - 12:00am

Robotically assembled building blocks could be a more environmentally friendly method for erecting large-scale structures than some existing construction techniques, according to a new study by MIT researchers.

The team conducted a feasibility study to evaluate the efficiency of constructing a simple building using “voxels,” which are modular 3D subunits that assemble into complex, durable structures.

After studying the performance of multiple voxels, the researchers developed three new designs intended to streamline building construction. They also produced a robotic assembler and a user-friendly interface for generating voxel-based building layouts and feeding instructions to the robots.

Their results indicate this voxel-based robotic assembly system could reduce embodied carbon — all of the carbon emitted during the lifecycle of building materials — by as much as 82 percent, compared with popular techniques like 3D concrete printing, precast modular concrete, and steel framing. The system would also be competitive in terms of cost and construction time. However, the choice of materials used to manufacture the voxels does play a major role in their carbon footprint and cost.

While scalability, durability, long-term robustness, and important considerations like fire resistance remain to be explored before such a system could be widely deployed, the researchers say these initial results highlight the potential of this approach for automated, on-site construction.

“I’m particularly excited about how the robotic assembly of discrete lattices can enable a practical way to apply digital fabrication to the built environment in a way that can let us build much more efficiently and sustainably,” says Miana Smith, a graduate student in the Center for Bits and Atoms (CBA) at MIT and lead author of the study.

She is joined on the paper by Paul Richard, a graduate student at École Polytechnique Fédérale de Lausanne in Switzerland and former visiting researcher at MIT; Alfonso Parra Rubio, a CBA graduate student; and senior author Neil Gershenfeld, an MIT professor and the director of the CBA. The research appears in Automation in Construction.

Designing better building blocks

Over the past several years, researchers in the Center for Bits and Atoms have been developing voxels, which are lattice-structured building blocks that can be assembled into objects with high strength and stiffness, like airplane wings, wind turbine blades, and space structures.

“Here, we are taking aerospace principles and applying them to buildings. Why don’t we make buildings as efficiently as we make airplanes?” says Gershenfeld, whose lab has previously worked on voxel assembly with NASA, Airbus, and Boeing.

To explore the feasibility of voxel-based assembly strategies for buildings, the researchers first evaluated the mechanical performance and sustainability of eight existing voxel designs, including a cuboctahedron made from glass-reinforced nylon and a Kelvin lattice made from steel.

Based on those evaluations, they developed a set of three voxels using a new geometry that could be more easily assembled robotically into a larger structure. The new design, based on a high-strength and high-stiffness octet lattice, mechanically self-aligns into rigid structures.

“The interlocking nature of these voxels means we can get nice mechanical properties without needing to have a lot of connectors in the system, so the construction process can run a lot faster,” Smith says.

To accelerate construction, they designed a robotic assembly system based on inchworm-like robots that crawl across a voxel structure by anchoring and extending their bodies. These Modular Inchworm Lattice Assembler robots, or MILAbots, use grippers on each end to place voxel building blocks and engage the snap-fit connections.

“The robots can assemble the voxels by dropping them into place and then stepping on them to have the pieces interlock. We can do precise maneuvers based on the mechanical relationship between the robots and the voxels,” Smith explains.

The team studied the embodied carbon needed to fabricate their new voxel designs using three materials: plastic, plywood, and steel. Then they evaluated the throughput and cost of using the robotic assembly system to build a simple, one-story building. The researchers compared these estimates with the performance of other construction methods.

Potential environmental benefits

They found that most existing voxels, and especially those made from plastics, performed poorly compared to existing methods in terms of sustainability, but the steel and wood voxels they designed offered significant environmental benefits.

For instance, utilizing their steel voxels would generate only 36 percent of the embodied carbon required for 3D concrete printing and 52 percent of the embodied carbon of precast concrete. The plywood voxels had the lowest carbon footprint, requiring about 17 percent and 24 percent, respectively, of the embodied carbon of those two methods.

“There is still a potentially viable option for a plastics-based voxel approach; we just have to be a bit more strategic about which types of plastics, infills, and geometries we use,” Smith says.

In addition, projected on-site assembly time for the steel and wood voxel approaches averaged 99 hours, whereas existing construction methods averaged 155 hours.

These speed benefits rely on the distributed nature of voxel-based assembly. While one MILAbot working alone is far slower than existing techniques, with a team of 20 robots working in parallel, the system catches up to or surpasses existing automation methods at a lower cost.
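The parallelism argument reduces to a simple idealized throughput model. The voxel count and placement time below are made-up numbers for illustration, not figures from the study:

```python
# Back-of-the-envelope model: with no contention, build time scales
# inversely with the number of robots placing voxels in parallel.

def assembly_hours(num_voxels, hours_per_voxel, num_robots):
    """Idealized on-site assembly time for a voxel structure."""
    return num_voxels * hours_per_voxel / num_robots

# One robot working alone is far slower than conventional methods...
solo = assembly_hours(num_voxels=4000, hours_per_voxel=0.5, num_robots=1)
# ...but a team of 20 divides the same work among its members.
team = assembly_hours(num_voxels=4000, hours_per_voxel=0.5, num_robots=20)
print(solo, team)  # -> 2000.0 100.0
```

In practice, contention between robots and travel time across the lattice would erode this ideal speedup, which is one reason the study's throughput estimates matter.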

“One benefit of this method is how incremental it is. You can start building, and if it turns out you need a new room, you can just add onto the structure. It is also reversible, so if your use changes, you can disassemble the voxels and change the structure,” Gershenfeld says.

The researchers also developed an interface that enables users to input or hand-design a voxelized structure. The automatic system determines the paths the MILAbots should follow for construction and sends commands to the assemblers.

The next step in this project will be a larger testbed in Bhutan, using the “super fab lab” that CBA helped set up there to replicate the robots to test construction for a planned sustainable city, Gershenfeld says.

Additional areas of future work include studying the stability of voxel structures under lateral loads, improving the design tool to account for the physics of the system, enhancing the MILAbots, and evaluating voxels that have integrated sheeting, insulation, or electrical and plumbing routing.

“Our work helps support why doing this type of distributed robot assembly might be a practical way to bring digital fabrication into building construction,” Smith says.

“This is yet another visionary example from Neil Gershenfeld and his team, of finding ways for buildings to build themselves with the help of tiny robotic machines. I’m now fascinated by how we can harness an idea like this to make it more affordable to make the outsides of buildings more engaging and joyful,” says Thomas Heatherwick, founder of the design and architecture firm Heatherwick Studio, who was not involved with this research.

This work was funded, in part, by the MIT Center for Bits and Atoms Consortia.

Mapping molecular markers of physical fitness

Tue, 04/28/2026 - 12:00am

Patterns of molecular activity in the blood may hold clues not only to how fit someone is, but also to the biological processes that support physical performance. Researchers at MIT, GE HealthCare, and the U.S. Military Academy at West Point have developed a computational model that links thousands of these molecular signals to fitness levels, revealing pathways that could inform future studies to improve fitness training and speed injury or disease recovery.

To develop their model, the researchers analyzed more than 50,000 biomarkers in 86 cadets at the U.S. Military Academy who were training for a military competition. Using these data, the researchers were able to identify molecular pathways that appear to contribute to higher levels of physical fitness.

“We had 50,000 measurements, and we wanted to get it down to about 100 where there’s some likelihood that the markers that we’re measuring are mechanistically linked to physical fitness. So, not just a statistical correlation, of which there will be many, but markers where there’s a likelihood that there is a causal relationship,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering.

These biomarkers can be measured by analyzing blood samples, which could offer a simple way to give an athlete, or perhaps someone with a chronic illness or long-term injury, additional information about where to focus their efforts to reduce injury risk, accelerate recovery, or raise their performance ceiling beyond what conventional measures show.

Azar Alizadeh, a principal scientist with GE HealthCare’s Healthcare Technology and Innovation Center, is the paper’s lead author. Fraenkel and Luca Marinelli, a senior principal scientist with GE HealthCare, are the senior authors of the new study, which appears in the journal Communications Biology.

Mapping fitness

To find the genetic basis of a simple trait such as height, scientists can perform large-scale studies known as genome-wide association studies (GWAS), in which genetic markers from thousands of people can be linked with height. However, the picture becomes much more complicated for traits such as physical fitness, which is determined by the interplay of many different genetic, physiological, and environmental factors.

The researchers set out to try to identify some of those factors, working with a group of 86 volunteers at the U.S. Military Academy at West Point who were training for the Sandhurst Military Skills Competition. Alizadeh led the experimental study design and execution, in collaboration with GE HealthCare, GE Research, West Point, and MIT scientists. During the three-month study period, volunteers participated in up to five sessions. At each session, blood samples were taken before and after intense exercise. The researchers also measured other traits such as lean muscle mass and VO2 max (the maximum rate of oxygen consumption during exercise).

From the blood samples, the researchers were able to measure more than 50,000 biomarkers, which they obtained by analyzing DNA methylation patterns, sequencing messenger RNA transcripts, and analyzing thousands of the proteins and small molecules found in the samples.

From their set of 50,000 biomarkers, the researchers hoped to identify a smaller number that could predict overall physical fitness, as measured by performance on the Army Combat Fitness Test (ACFT). This test includes a 2-mile run; maximum deadlift (the heaviest weight a person can lift for a single repetition, up to 340 pounds); and sprint-drag-carry, a test that involves sprinting, dragging a sled, and carrying kettlebells.

One way to do this would be to simply train a computational model to identify correlations between fitness and biomarkers. However, with only 86 subjects in the study, that approach would likely yield correlations that were random and did not actually contribute to physical fitness, Fraenkel says.

To take a more targeted approach, the researchers first created a network model that represents the interactions between the markers, based on existing databases that catalog those interactions. These connections might represent proteins that interact with each other in a signaling pathway, or a transcription factor that turns on a set of genes.

“We built a network that you can think of as a city map. You want to find the places in the city map that are lighting up — not just one light going on, but a whole bunch of houses or street lamps going on in the same neighborhood,” Fraenkel says. “We can find neighborhoods on this enormous molecular map that are active at the same time, in a way that correlates with the phenotype that we measure.”

“We built upon the network bioinformatics from the Fraenkel lab to create an end-to-end predictive modeling framework to discover biological expression circuits that drive groups of physical characteristics predictive of ACFT scores, for example, body composition or exercise physiology metrics like VO2 max,” Marinelli says.
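The "lit-up neighborhood" idea can be illustrated with a toy greedy module search on a small interaction graph. Everything here, including the node names, the correlation scores, and the expansion rule, is a simplified assumption for illustration; it is not the PhenoMol algorithm itself:

```python
# Toy illustration of finding a "neighborhood" of interacting markers whose
# phenotype correlations are jointly high. The greedy expansion rule is assumed.

def grow_module(graph, score, seed, slack=0.2):
    """Expand a module outward from `seed`, adding the best-scoring neighbor
    as long as its score stays within `slack` of the module's average.

    graph: dict of node -> list of interacting nodes
    score: dict of node -> correlation with the measured phenotype
    """
    module = {seed}
    while True:
        # Frontier: nodes that interact with the module but aren't in it yet.
        frontier = {n for m in module for n in graph[m]} - module
        if not frontier:
            return module
        average = sum(score[n] for n in module) / len(module)
        best = max(frontier, key=lambda n: score[n])
        if score[best] < average - slack:
            return module  # no neighbor keeps the neighborhood "lit up"
        module.add(best)

# Four markers in a chain, where A and B light up together but C does not,
# which cuts D off from the module even though D's own score is high.
interactions = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
correlations = {"A": 0.9, "B": 0.8, "C": 0.3, "D": 0.7}
print(sorted(grow_module(interactions, correlations, seed="A")))  # -> ['A', 'B']
```

The point of requiring connected, jointly active groups rather than individual hits is the same as in the quote above: a single bright node is more likely to be a chance correlation than a whole active neighborhood.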

After feeding the measurements from the study participants into this predictive model, known as PhenoMol, the researchers were able to identify more than 100 biomarkers linked to performance on the ACFT. Fitness predictions based on these biomarkers were much more accurate than those of a model that correlated biomarkers with performance on the ACFT without taking network connections into account. Additionally, PhenoMol performed similarly to a model that predicted participants’ fitness based on measurements of their VO2 max and lean muscle mass.

Cellular pathways

The researchers found that the biomarkers identified by PhenoMol clustered into several different cellular pathways. Those include pathways involved in blood coagulation and the complement cascade — a part of the immune system involved in clearing damaged cells. Those systems likely help with recovery from tissue injury and stress response during exercise, Fraenkel says.

Another prominent cluster involves molecules related to the urea cycle, which is responsible for eliminating the ammonia that results from the breakdown of proteins. The model also identified biomarkers that are linked with the function of mitochondria (the organelles that generate energy within cells).

Fraenkel now hopes to dig deeper into which markers show someone’s current fitness, and which might reveal what their potential fitness levels could be. This could help to reveal potential strengths that might not show up in traditional fitness tests, he says.

That kind of prediction could be useful not only for athletic training, but also for other people who are recovering from an injury or disease, or people experiencing the effects of aging. For example, using this approach in different populations might provide useful information for an elderly person after a stroke, since such events often require months of therapy to regain significant mobility.

“This has relevance for the military and for sports teams, but also in a lot of normal life situations where maybe someone is going through rehabilitation for some injury or disease and they’ve hit a wall,” Fraenkel says. “Or during aging, you may be able to see when somebody’s losing capacity or when they have more capacity than they’ve been able to actualize.”

Molecular markers of fitness could also be used in clinical trials to rigorously test the potential benefits of popular food supplements and fitness programs, he adds.

To make the testing process simpler, the researchers would like to narrow down their pool of biomarkers to a handful that could be easily measured from a blood sample using a single method suitable for widespread use.

The research was developed with funding from the Defense Advanced Research Projects Agency (DARPA), which states that the views, opinions, or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the U.S. government.

Six from MIT awarded 2026 Paul and Daisy Soros Fellowships for New Americans

Tue, 04/28/2026 - 12:00am

Six MIT affiliates — Denisse Córdova Carrizales SM ’26; Ria Das ’21, MNG ’22; Ronak Desai; Stacy Godfreey-Igwe ’22; Arya Rao; and Ananthan Sadagopan ’24 — have been named 2026 P.D. Soros Fellows. In addition, P.D. Soros Fellow Avinash Vadali will begin a PhD in condensed-matter physics at MIT this fall.

The fellowship provides immigrants and the children of immigrants up to $90,000 in tuition and stipend support for up to two years of graduate studies. Interested students should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.

Denisse Córdova Carrizales

Córdova Carrizales SM '26 is a PhD student in nuclear science and engineering in the lab of Professor Mingda Li, where she completed her master's work earlier this year. She is working on synthesizing and characterizing quantum materials with the goal of bridging fundamental science and industry to make our technology more energy-efficient and sustainable.

Córdova Carrizales, who is of Mexican descent, grew up in Houston, Texas, before attending Harvard University, where she graduated in 2023 with a BA in physics. At Harvard, she dove into experimental condensed-matter research. She also conducted research with the Princeton Plasma Physics Laboratory, Commonwealth Fusion Systems, and VEIR, spanning computational plasma physics and high-temperature superconducting magnet and cable engineering.

Her work includes coauthored papers in Nature Physics, Nature Materials, and Advanced Materials, as well as lead-author publications in Nano Letters and Physical Review Materials. In 2023, she received the LeRoy Apker Award from the American Physical Society.

Beyond research, Córdova Carrizales has advocated in Congress for nuclear disarmament and risk reduction and has written a piece on the nuclear stockpile stewardship program. At Harvard, she founded an organization to support first-generation college students studying physics. In a completely different arena, she performed as the lead in an off-Broadway show in New York.

Ria Das

Das ’21, MNG ’22 is a PhD student in the MIT Department of Electrical Engineering and Computer Science. She graduated from MIT in 2021 with a dual BS in mathematics and in electrical engineering and computer science, and received her master of engineering degree in 2022.

The daughter of Indian immigrant parents, Das grew up in Nashua, New Hampshire, where she struggled with issues of belonging and identity. These questions came to the forefront during her PhD studies at Stanford University. Das decided to step off the academic treadmill by taking a leave from her PhD to think more deeply about these topics.

During her leave, she traveled around the country before moving to New York to work at Basis Research Institute, an AI research nonprofit. As a research associate, Das developed an urban data team that worked with federal and municipal government agencies on issues of economic and housing equity, blending her interests in science and social problems. She then returned to MIT to complete her doctoral studies.

Today, Das works with Professor Joshua Tenenbaum in the Department of Brain and Cognitive Sciences to study how people undergo conceptual change to build more robust, accessible systems for automated (social) science and improved educational design. Looking ahead, she hopes to become a professor, collaborating closely with policy practitioners.

Ronak Desai

Desai is currently a student in the Harvard/MIT MD-PhD program, where his PhD focuses on chemistry. The son of immigrants from Gujarat, India, Desai was born in Tyler, Texas, and grew up in nearby Lindale. He earned his undergraduate degree at the University of Texas at Austin.

Desai spent a semester interning at the U.S. House of Representatives as a Bill Archer Fellow. He also completed biomedical research focused on studying and engineering novel polyketide synthases, aspiring to produce next-generation antibiotics by harnessing such newly engineered synthases.

Desai graduated with degrees in chemistry and biochemistry as a first-generation college student, Health Science Scholar, and Dean’s Honored Graduate, receiving nine scholarships throughout college. His research has resulted in publications in journals such as Cell and Nature Communications.

Desai hopes to combine his passions for medicine, science, and public policy in his career to advance the treatment of infectious diseases. He is conducting his doctoral research under Professor James J. Collins in the MIT Department of Biological Engineering and the Harvard-MIT Program in Health Sciences and Technology. Desai’s research centers on using artificial intelligence to discover and design novel antibiotics, an opportunity to advance treatments for patients worldwide.

Stacy Godfreey-Igwe

Godfreey-Igwe ’22 attended MIT as a QuestBridge and Gates Scholar, graduating in 2022 with a BS in mechanical engineering and a concentration in sustainable design. A Burchard Scholar, she also became the first student at MIT to complete a major in African and African diaspora studies. After graduating, she pursued a science policy fellowship in Washington and interned at the U.S. Department of Energy’s Building Technologies Office, where she worked to broaden adoption of heat pump technologies across diverse stakeholders.

Growing up in Richardson, Texas, as the daughter of Nigerian immigrants, Godfreey-Igwe developed an early awareness of structural inequality, particularly in how families like hers managed the burden of the severe Texas heat and high electricity costs. These experiences formed the basis of her lifelong journey seeking to address systemic inequities embedded in everyday systems.

Godfreey-Igwe is currently a doctoral student in the joint engineering and public policy - civil and environmental engineering program at Carnegie Mellon University (CMU), where she was selected for the inaugural CMU Rales Fellowship cohort. At CMU, she studies the impact of extreme heat on household energy use, particularly in vulnerable communities.

Beyond her research, Godfreey-Igwe organizes outreach and programming for local underrepresented students in STEM and participates in institutional efforts to expand access and belonging among graduate students. She aims to be a scholar and advocate whose work, drawing on her personal experiences, informs equitable energy solutions in a warming world.

Arya Rao

Rao is a student in the Harvard/MIT MD-PhD program. She completed her undergraduate degrees in biochemistry and computer science at Columbia University. Working with professors Pardis Sabeti (Harvard University) and Sangeeta Bhatia (MIT), Rao uses evolution as a lens for therapeutic design, developing artificial intelligence methods that read the genetic record and guide new intervention strategies.

Leveraging her dual training in medicine and computer science, Rao also leads the MESH AI Research Group at Mass General Brigham, where she develops simulation-based tools that test clinical AI systems in realistic educational settings before they reach patients.

Rao has been recognized for her work with a Forbes 30 Under 30 honor, the Massachusetts Medical Society Information Technology Award, the Harvard Presidential Public Service Fellowship, a Harvard Medical School Dean’s Innovation Award, and a Ladders to Cures Accelerator Award. She has published more than 30 manuscripts in publications including JAMA, Nature, and NEJM AI.

Growing up in rural northern Michigan, Rao was inspired by her parents, Konkani immigrants from India, who served as two of the area’s only physicians. She has always imagined a career that could leverage scientific innovation to improve patient care, especially for communities like her own that lack access to it. Going forward, she envisions a career as a surgeon-scientist that keeps her close to patients while taking on leadership that shapes how new technologies are evaluated, implemented, and made usable in the places that need them most.

Ananthan Sadagopan 

Sadagopan ’24 grew up in Westborough, Massachusetts, as the child of immigrants from Chennai, India. He participated in chemistry competitions, winning the You Be the Chemist Challenge in middle school and earning a gold medal at the International Chemistry Olympiad for the United States in high school. He attended MIT for college, graduating in three years in 2024 with a bachelor’s degree in chemistry and biology.

At MIT, Sadagopan worked with Srinivas Viswanathan on computational biology projects and with William Gibson, Matthew Meyerson, and Stuart Schreiber on chemical biology projects. He led projects characterizing somatic perturbations of X chromosome inactivation in cancer, developing a machine-learning tool for cancer dependency prediction, using small molecules to relocalize proteins in cells, and creating a generalizable strategy to drug the most mutated gene in cancer, TP53. Sadagopan’s work has been patented and published in journals such as Cell and Nature Chemical Biology.

Sadagopan was president of the chemistry undergraduate association and led the events committee for MIT Science Olympiad. He is currently pursuing a PhD in biological and biomedical science at Harvard University as a Hertz Fellow and Herchel Smith Fellow. He is interested in de-risking new therapeutic strategies and hopes that his work will inspire pharma companies to bring first-in-class therapies to patients.

Self-organizing “pencil beam” laser could help scientists design brain-targeted therapies

Mon, 04/27/2026 - 5:00am

MIT researchers discovered a paradoxical phenomenon in optical physics that could enable a new bioimaging method that’s faster and higher-resolution than existing technology.

They discovered that, under the right conditions, a chaotic mess of laser light can spontaneously self-organize into a highly focused “pencil beam.”

Using this self-organized pencil beam, the researchers captured 3D images of the human blood-brain barrier 25 times faster than the gold-standard method, while maintaining comparable resolution.

By showing individual cells absorbing drugs in real time, this technology could help scientists test whether new drugs for neurodegenerative diseases like Alzheimer’s or ALS reach their targets in the brain, with greater speed and resolution.

“The common belief in the field is that if you crank up the power in this type of laser, the light will inevitably become chaotic. But we proved that this is not the case. We followed the evidence, embraced the uncertainty, and found a way to let the light organize itself into a novel solution for bioimaging,” says Sixian You, assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Research Laboratory of Electronics, and senior author of a paper on this imaging technique.

She is joined on the paper by lead author Honghao Cao, an EECS graduate student; EECS graduate students Li-Yu Yu and Kunzan Liu; postdocs Sarah Spitz, Francesca Michela Pramotton, and Federico Presutti; Zhengyu Zhang PhD ’24; Subhash Kulkarni, an assistant professor at Harvard University and the Beth Israel Deaconess Medical Center; and Roger Kamm, the Cecil and Ida Green Distinguished Professor of Biological and Mechanical Engineering at MIT. The paper appears today in Nature Methods.

A surprising finding

The discovery began with an observation that initially puzzled the researchers.

The team previously developed a precise fiber shaper, a device that enables them to carefully tune the laser light shining through a multimode optical fiber. This type of optical fiber can carry a significant amount of power.

Cao was pushing the multimode fiber toward its limit to see how much power it could take.

Typically, the more power one pumps into the laser, the more disordered and scattered the beam of light becomes due to imperfections in the fiber.

But Cao observed that, as he increased the power almost to the point where it would burn the fiber, the light did the opposite of what was expected: It collapsed into a single, needle-sharp beam.

“Disorder is intrinsic to these fibers. The light engineering you typically need to do to overcome that disorder, especially at high power, is a longstanding hassle. But with this self-organization, you can get a stable, ultrafast pencil beam without the need for custom beam-shaping components,” You says.

To replicate this phenomenon, the researchers found they had to satisfy two simple, but precise conditions.

First, the laser must enter the fiber at a perfect, zero-degree angle. This is a stricter requirement than is typical for these types of fibers. Second, the power must be dialed up until the light begins to interact with the glass of the fiber itself.

“At this critical power, the nonlinearity can counter the intrinsic disorder, creating a balance that transforms the input beam into a self-organized pencil beam,” Cao explains.

Typically, researchers conduct these experiments at much lower power levels for fear of destroying the fiber, in which case they wouldn’t see this self-organization. In addition, such precise on-axis alignment isn’t typically necessary since a multimode fiber can carry so much power.

But taken together, these two techniques can generate a stable pencil beam without any complicated light engineering methods.

“That is the charm of this method — you could do this with a normal optical setup and without much domain expertise,” You says.

A better beam

When the researchers performed characterization experiments on this pencil beam, they found it was more stable and higher-resolution than many similar beams. Other beams often suffer from “sidelobes” — blurry halos of light that can distort images.

Their beam was more pristine and tightly focused.

Building on those experiments, the researchers demonstrated the use of this pencil beam in biomedical imaging of the human blood-brain barrier.

This barrier is a tightly packed layer of cells that protects the brain from toxins, but it also blocks many medicines. Scientists and clinicians often want to see how drugs flow inside the vasculature of the blood-brain barrier and whether they reach their targets within the brain.

But with standard optical settings, the best one can do is capture one 2D section of the vasculature at a time, and then repeat the process multiple times to generate a fuller image, You explains.

Using this new technique, the researchers created an ultrafast, high-precision pencil beam that enabled them to dynamically track how cells absorb proteins in real time.

“The pharmaceutical industry is especially interested in using human-based models to screen for drugs that effectively cross the barrier, as animal models often fail to predict what happens in humans. That this new method doesn’t require the cells to have a fluorescent tag is a game-changer. For the first time, we can now visualize the time-dependent entry of drugs into the brain and even identify the rate at which specific cell types internalize the drug,” says Kamm.

“Importantly, however, this approach is not limited to the blood-brain barrier but enables time-resolved tracking of diverse compounds and molecular targets across engineered tissue models, providing a powerful tool for biological engineering,” Spitz adds.

The team captured cellular-level 3D images that were higher quality than with other methods, and generated these images about 25 times faster.

“Usually, you have a tradeoff between image resolution and depth of focus — you can only probe so far at a time. But with our method, we can overcome this tradeoff by creating a pencil beam with both high resolution and a large depth of focus,” You says.

In the future, the researchers want to better understand the fundamental physics of the pencil-beam and the mechanisms behind its self-organization. They also plan to apply the technique to other scenarios, such as imaging neurons in the brain, and work toward commercializing the technology.

“You’s group realized this beam that concentrates energy in time and space could be valuable for microscopy techniques that depend on the intensity of the light that illuminates the sample. They demonstrated just that and found advantages over ordinary laser beams for imaging. It will be scientifically interesting to fully understand the creation of the new pencil beams, which could find use in a variety of imaging applications,” says Frank Wise, the Samuel B. Eckert Professor of Engineering Emeritus at Cornell University, who was not involved with this work.

This work was funded, in part, by MIT startup funds, the National Science Foundation (NSF), the Silicon Valley Community Foundation, Diacomp Foundation, the Harvard Digestive Disease Core, a MathWorks Fellowship, and the Claude E. Shannon Award.

A faster way to estimate AI power consumption

Mon, 04/27/2026 - 12:00am

Due to the explosive growth of artificial intelligence, it is estimated that data centers will consume up to 12 percent of total U.S. electricity by 2028, according to the Lawrence Berkeley National Laboratory. Improving data center energy efficiency is one way scientists are striving to make AI more sustainable.

Toward that goal, researchers from MIT and the MIT-IBM Watson AI Lab developed a rapid prediction tool that tells data center operators how much power will be consumed by running a particular AI workload on a certain processor or AI accelerator chip.

Their method produces reliable power estimates in a few seconds, unlike traditional modeling techniques that can take hours or even days to yield results. Moreover, their prediction tool can be applied to a wide range of hardware configurations — even emerging designs that haven’t been deployed yet.

Data center operators could use these estimates to effectively allocate limited resources across multiple AI models and processors, improving energy efficiency. In addition, this tool could allow algorithm developers and model providers to assess potential energy consumption of a new model before they deploy it.

“The AI sustainability challenge is a pressing question we have to answer. Because our estimation method is fast, convenient, and provides direct feedback, we hope it makes algorithm developers and data center operators more likely to think about reducing energy consumption,” says Kyungmi Lee, an MIT postdoc and lead author of a paper on this technique.

She is joined on the paper by Zhiye Song, an electrical engineering and computer science (EECS) graduate student; Eun Kyung Lee and Xin Zhang, research managers at IBM Research and the MIT-IBM Watson AI Lab; Tamar Eilam, IBM Fellow, chief scientist of sustainable computing at IBM Research, and a member of the MIT-IBM Watson AI Lab; and senior author Anantha P. Chandrakasan, MIT provost, Vannevar Bush Professor of Electrical Engineering and Computer Science, and a member of the MIT-IBM Watson AI Lab. The research is being presented this week at the IEEE International Symposium on Performance Analysis of Systems and Software.

Expediting energy estimation

Inside a data center, thousands of powerful graphics processing units (GPUs) perform operations to train and deploy AI models. The power consumption of a particular GPU will vary based on its configuration and the workload it is handling.

Many traditional methods used to predict energy consumption involve breaking a workload into individual steps and emulating how each module inside the GPU is being utilized one step at a time. But AI workloads like model training and data preprocessing are extremely large and can take hours or even days to simulate in this manner.

“As an operator, if I want to compare different algorithms or configurations to find the most energy-efficient manner to proceed, if a single emulation is going to take days, that is going to become very impractical,” Lee says.

To speed up the prediction process, the MIT researchers sought to use less-detailed information that could be estimated faster. They found that AI workloads often have many repeatable patterns. They could use these patterns to generate the information needed for reliable but quick power estimation.

In many cases, algorithm developers write programs to run as efficiently as possible on a GPU. For instance, they use well-structured optimizations to distribute the work across parallel processing cores and move chunks of data around in the most efficient manner.

“These optimizations that software developers use create a regular structure, and that is what we are trying to leverage,” explains Lee.

The researchers developed a lightweight estimation model, called EnergAIzer, that captures the power usage pattern of a GPU from those optimizations.

An accurate assessment

But while their estimation was fast, the researchers found that it didn’t take all energy costs into account. For instance, every time a GPU runs a program, there is a fixed energy cost required for setting up and configuring that program. Then each time the GPU runs an operation on a chunk of data, an additional energy cost must be paid.

Due to fluctuations in the hardware or conflicts in accessing or moving data, a GPU might not be able to use all available bandwidth, slowing operations down and drawing more energy over time.

To include these additional costs and variances, the researchers gathered real measurements from GPUs to generate correction terms they applied to their estimation model.
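The cost structure described here (a fixed setup cost, a per-operation cost, and a correction for imperfect bandwidth utilization) can be sketched roughly as follows. This is an illustrative toy model with hypothetical names and constants, not the published EnergAIzer implementation.

```python
# Toy sketch of a correction-term energy estimate in the spirit of the
# approach described above. All function names, parameters, and constants
# are illustrative assumptions, not the actual EnergAIzer model.

def estimate_energy_joules(
    n_ops: int,            # repeated operations in the workload's regular pattern
    energy_per_op: float,  # joules per operation, from the base estimation model
    setup_cost: float,     # fixed cost to set up and configure the program
    utilization: float,    # measured fraction of peak bandwidth actually achieved
) -> float:
    """Base estimate plus an empirical correction.

    When the GPU cannot use all available bandwidth (utilization < 1),
    operations run longer and draw more energy over time, so the
    per-operation term is scaled up accordingly.
    """
    base = n_ops * energy_per_op
    corrected = base / max(utilization, 1e-9)  # slower ops -> more energy
    return setup_cost + corrected

# Example: 1e9 ops at 2 nJ each, 0.5 J setup cost, 80% effective bandwidth
print(estimate_energy_joules(1_000_000_000, 2e-9, 0.5, 0.8))  # about 3.0 J
```

In this sketch, the correction term plays the role of the measured adjustments the researchers derived from real GPU data; a production tool would fit such terms per hardware configuration rather than assume them.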

“This way, we can get a fast estimation that is also very accurate,” she says.

In the end, a user can provide their workload information, like the AI model they want to run and the number and length of user inputs to process, and EnergAIzer will output an energy consumption estimation in a matter of seconds.

The user can also change the GPU configuration or adjust the operating speed to see how such design choices impact the overall power consumption.

When the researchers tested EnergAIzer using real AI workload information from actual GPUs, it could estimate the power consumption with only about 8 percent error, which is comparable to traditional methods that can take hours to produce results.

Their method could also be used to predict the power consumption of future GPUs and emerging device configurations, as long as the hardware doesn’t change drastically in a short amount of time.

In the future, the researchers want to test EnergAIzer on the newest GPU configurations and scale the model up so it can be applied to many GPUs that are collaborating to run a workload.

“To really make an impact on sustainability, we need a tool that can provide a fast energy estimation solution across the stack, for hardware designers, data center operators, and algorithm developers, so they can all be more aware of power consumption. With this tool, we’ve taken one step toward that goal,” Lee says.

This research was funded, in part, by the MIT-IBM Watson AI Lab.

The power of “and” in energy and climate entrepreneurship

Fri, 04/24/2026 - 1:27pm

A supportive ecosystem is a cornerstone in entrepreneurship, according to Georgina Campbell Flatter, the CEO of Greentown Labs. “If we really want to be driving the most transformational technologies to scale at a speed in which we need them to happen for our planet, we need to be thinking about the ecosystem that we build around it.” During a seminar titled MITEI Presents: Advancing the Energy Transition, Campbell Flatter spoke of “the power of ‘and’” — the importance of multiple people, companies, and solutions collaborating to advance energy and climate solutions — and how that underpins Greentown Labs’ mission. “Innovation is a team sport. No one can go alone,” she said.

Creating these ecosystems is paramount at Greentown Labs, the world’s largest energy and climate incubator. “Through the lens of Greentown, we think about the power of ‘and’ through how we can work together better in the ecosystems where we have physical presence, but also how we can connect better across ecosystems,” said Campbell Flatter. The concept of “and” also exists in energy and climate, innovation and deployment, science and entrepreneurship, and competitiveness and collaboration, she said. Campbell Flatter feels this expansive lens is especially important in our increasingly polarized world.

At its core, Greentown Labs is a place to cluster innovators together. “We have to be very intentional about how we support and accelerate and help those entrepreneurs,” said Campbell Flatter. There is a science behind this “innovation infrastructure” that involves not only bringing creative minds together, but also removing friction so startups can move faster. Most of this friction exists in the gaps between innovation and deployment, often referred to as the “valleys of death.” The first valley of death falls between idea and prototype; the second between prototype and the first commercial pilot. Greentown often asks where its ecosystems can be most helpful, which has led it to focus on helping entrepreneurs bridge that second valley, according to Campbell Flatter.

“Entrepreneurs at the stage where they can’t quite afford space on their own, and maybe it takes six to 12 months to figure out the permitting anyway, come to Greentown,” said Campbell Flatter. “We’re actively thinking about the customers, the capital, the infrastructure needs that you have in order for you to move your way through this second valley.”

Part of Greentown’s decision to focus on the second valley came from MIT’s unique ability to bring innovators across the first valley of death — an ability that Campbell Flatter deemed “truly world class.” Referencing startups born from universities like MIT and Harvard, Campbell Flatter said, “They’re far more likely to be successful and scale because of the ecosystem they’re surrounded in. You’re getting feedback constantly from your peers, you’re getting support and mentorship — that all matters for the ecosystem.”

MIT also helps build this ecosystem by attracting innovators to the area. “Thirty percent of our entrepreneurs at Greentown are coming from out of state and moving to Massachusetts,” she said. “One, because Greentown’s a great home for them, but two, because of MIT and the talent that they can source from the ecosystem, which they are well aware of, and the knowledge, IP [intellectual property], and credibility.”

Not only is the symbiotic relationship between MIT and Greentown a powerful entrepreneurial ecosystem, but MIT has also been instrumental in Campbell Flatter’s own journey toward her current body of work. After completing her master’s degree in materials science at Oxford University, she graduated from the MIT Technology and Policy Program. Campbell Flatter credited her time as a graduate student at MIT with giving her an appreciation for how hard it is to commercialize technology, an understanding of the importance of ecosystems, and an early sense of how energy and climate would define this century. “I think it is really important to recognize the intentionality behind MIT’s commitment to energy and climate,” said Campbell Flatter.

While at MIT, she ran the third iteration of the MIT Clean Energy Prize, advocating for the inclusion of a non-renewables chapter of the prize because she saw “how important it was to continue to decarbonize and bring efficiencies to the traditional energy sectors while we work on all these amazing new energy initiatives.” Greentown has put this into practice through their wide network of industry partners. 

“I guess this early lesson I took from MIT was this idea that we must embrace the power of ‘and,’” said Campbell Flatter. “It slows innovation down when we don’t embrace and work together.”

This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit the MIT Energy Initiative's events page for more information on this and additional events.

MIT scientists build the world’s largest collection of Olympiad-level math problems, and open it to everyone

Fri, 04/24/2026 - 1:00pm

Every year, the countries competing in the International Mathematical Olympiad (IMO) arrive with a booklet of their best, most original problems. Those booklets get shared among delegations, then quietly disappear. No one had ever collected them systematically, cleaned them, and made them available, not for AI researchers testing the limits of mathematical reasoning, and not for the students around the world training for these competitions largely on their own.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), King Abdullah University of Science and Technology (KAUST), and the company HUMAIN have now done exactly that.

MathNet is the largest high-quality dataset of proof-based math problems ever created. Comprising more than 30,000 expert-authored problems and solutions spanning 47 countries, 17 languages, and 143 competitions, it is five times larger than the next-biggest dataset of its kind. The work will be presented at the International Conference on Learning Representations (ICLR) in Brazil later this month.

What makes MathNet different is not only its size, but its breadth. Previous Olympiad-level datasets draw almost exclusively from competitions in the United States and China. MathNet spans dozens of countries across six continents, covers 17 languages, includes both text- and image-based problems and solutions, and spans four decades of competition mathematics. The goal is to capture the full range of mathematical perspectives and problem-solving traditions that exist across the global math community, not just the most visible ones.

“Every country brings a booklet of its most novel and most creative problems,” says Shaden Alshammari, an MIT PhD student and lead author on the paper. “They share the booklets with each other, but no one had made the effort to collect them, clean them, and upload them online.”

Building MathNet required tracking down 1,595 PDF volumes totaling more than 25,000 pages, spanning digital documents and decades-old scans in more than a dozen languages. A significant portion of that archive came from an unlikely source: Navid Safaei, a longtime IMO community figure and co-author who had been collecting and scanning those booklets by hand since 2006. His personal archive formed much of the backbone of the dataset.

The sourcing matters as much as the scale. Where most existing math datasets pull problems from community forums like Art of Problem Solving (AoPS), MathNet draws exclusively from official national competition booklets. The solutions in those booklets are expert-written and peer-reviewed, and they often run to multiple pages, with authors walking through several approaches to the same problem. That depth gives AI models a far richer signal for learning mathematical reasoning than the shorter, informal solutions typical of community-sourced datasets. It also means the dataset is genuinely useful for students: Anyone preparing for the IMO or a national competition now has access to a centralized, searchable collection of high-quality problems and worked solutions from traditions around the world.

“I remember so many students for whom it was an individual effort. No one in their country was training them for this kind of competition,” says Alshammari, who competed in the IMO as a student herself. “We hope this gives them a centralized place with high-quality problems and solutions to learn from.”

The team has deep roots in the IMO community. Sultan Albarakati, a co-author, currently serves on the IMO board, and the researchers are working to share the dataset with the IMO foundation directly. To validate the dataset, they assembled a grading group of more than 30 human evaluators from countries including Armenia, Russia, Ukraine, Vietnam, and Poland, who coordinated together to verify thousands of solutions.

“The MathNet database has the potential to be an excellent resource for both students and leaders seeking new problems to work on or looking for the solution to a difficult question,” says Tanish Patil, deputy leader of Switzerland’s IMO. “Whilst other archives of Olympiad problems do exist (notably, the Contest Collections forums on AoPS), these resources lack a standardized formatting system, verified solutions, and important problem metadata such as the topics and theory required. It will also be interesting to see how this dataset is used to improve the performance of reasoning models, and if we will soon be able to reliably answer an important question when creating novel Olympiad problems: determining if a problem is truly original.”

MathNet also functions as a rigorous benchmark for AI performance, and the results reveal a more complicated picture than recent headlines about AI math prowess might suggest. Frontier models have made extraordinary progress: Some have reportedly achieved gold-medal performance at the IMO, and on standard benchmarks they now solve problems that would stump most humans. But MathNet shows that progress is uneven. Even GPT-5, the top-performing model tested, averaged around 69.3 percent on MathNet’s main benchmark of 6,400 problems, failing nearly one in three Olympiad-level problems. And when problems include figures, performance drops significantly across the board, exposing visual reasoning as a consistent weak point for even the most capable models.

Several open-source models scored 0 percent on Mongolian-language problems, highlighting another dimension where current AI systems fall short despite their overall strength.

“GPT models are equally good in English and other languages,” Alshammari says. “But many of the open-source models fail completely at less-common languages, such as Mongolian.”

The diversity of MathNet is also designed to address a deeper limitation in how AI models learn mathematics. When training data skews toward English and Chinese problems, models absorb a narrow slice of mathematical culture. A Romanian combinatorics problem or a Brazilian number theory problem may approach the same underlying concept from a completely different angle. Exposure to that range, the researchers argue, makes both humans and AI systems better mathematical thinkers.

Beyond problem-solving, MathNet introduces a retrieval benchmark that asks whether models can recognize when two problems share the same underlying mathematical structure, a capability that matters both for AI development and for the math community itself. Near-duplicate problems have appeared in real IMO exams over the years because finding mathematical equivalences across different notations, languages, and formats is genuinely hard, even for expert human committees. Testing eight state-of-the-art embedding models, the researchers found that even the strongest identified the correct match only about 5 percent of the time on the first try, with models frequently ranking structurally unrelated problems as more similar than equivalent ones.
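The retrieval task can be illustrated with a minimal sketch: embed each problem as a vector and rank candidates by cosine similarity to a query problem. The vectors below are toy values standing in for a real embedding model, and the function names are hypothetical, not from the MathNet benchmark code.

```python
# Minimal sketch of embedding-based retrieval of structurally similar
# problems. Vectors are toy stand-ins for real embedding-model outputs;
# all names here are illustrative assumptions.
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def rank_candidates(query_vec: list[float],
                    candidates: dict[str, list[float]]) -> list[str]:
    """Return candidate problem ids sorted by similarity, best match first."""
    return sorted(candidates,
                  key=lambda cid: cosine(query_vec, candidates[cid]),
                  reverse=True)


# Toy example: problem "B" shares structure with the query; "C" does not.
query = [1.0, 0.2, 0.0]
pool = {"B": [0.9, 0.3, 0.1], "C": [0.0, 0.1, 1.0]}
print(rank_candidates(query, pool))  # "B" should rank first
```

The benchmark's finding that strong models rank unrelated problems above equivalent ones suggests the hard part is not this ranking machinery but producing embeddings that capture mathematical structure across notations and languages.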

The dataset also includes a retrieval-augmented generation benchmark, testing whether giving a model a structurally related problem before asking it to solve a new one improves performance. It does, but only when the retrieved problem is genuinely relevant. DeepSeek-V3.2-Speciale gained up to 12 percentage points with well-matched retrieval, while irrelevant retrieval degraded performance in roughly 22 percent of cases.

Alshammari wrote the paper with Safaei, HUMAIN AI engineer Abrar Zainal, KAUST Academy Director Sultan Albarakati, and MIT CSAIL colleagues: master’s student Kevin Wen SB ’25; Microsoft Principal Engineering Manager Mark Hamilton SM ’22, PhD ’25; and professors William Freeman and Antonio Torralba. Their work was funded, in part, by the Schwarzman College of Computing Fellowship and the National Science Foundation.

MathNet is publicly available at mathnet.csail.mit.edu.

Faces of MIT: Gabi Hott Soares

Fri, 04/24/2026 - 12:00pm

Gabi Hott Soares, associate director of student organizations and programming for the Student Organizations, Leadership, and Engagement Office (SOLE) in the Division of Student Life (DSL), empowers and equips students to lead and serve not only during their time at MIT, but also as they venture into their professional lives. With enthusiasm and a global mindset, she is dedicated to helping students thrive and reach their goals. 

Hott Soares was working in Brazil in corporate communication and social responsibility for heavy‑industry companies, including metals, mining, steel, and oil and gas, when she moved to the United States in 2017 to attend the Hult International Business School in Cambridge. After graduating, she hoped to fulfill her dream of working in the United States, and initially planned to continue in the same industry. Once she arrived in Boston, however, she saw the potential of working in higher education and identified it as a field she wanted to pursue. The challenge, Hott Soares noted, was that as an international professional, she did not have anyone stateside who could recommend her. 

Taking matters into her own hands, Hott Soares began attending meetups of Brazilian students and researchers in the Boston area to make connections. At one, she met an MIT student who invited her to volunteer as a marketing chair for his startup. Hott Soares had been working with the startup for three months when she met another member of the team — the girlfriend of an MIT student — who mentioned that she was leaving a part‑time position within the MIT Spouses and Partners Connect (MS&PC) program. She asked Hott Soares if she would be interested in the role, and Hott Soares jumped at the opportunity to work at the Institute.

In her first position at MIT, Hott Soares worked directly with Aaron Donaghey, manager of event scheduling and special projects in the Campus Activities Complex (CAC), in a temporary office assistant position supporting CAC and SOLE. Located on the fifth floor of the Stratton Student Center, she greeted students and provided resources related to both offices. Intent on learning as much as she could about how both offices operated, she dedicated time to familiarizing herself with their functions, which was no small task. CAC, for example, manages several event spaces, including Kresge Auditorium and the MIT Chapel, and oversees thousands of events each year. Meanwhile, SOLE advises hundreds of student organizations recognized by the Association of Student Activities.  

Six months later, when Hott Soares told Donaghey about her background and hope for a career at MIT, he encouraged her to apply to be the event support assistant within CAC. She was selected for the role, marking her first permanent role at MIT. On her path to continued growth at the Institute, and confident that new opportunities would come, she took advantage of the Institute’s career planning and development resources offered to employees. She worked one-on-one with Michele King Harrington, career development program administrator in human resources, and attended her workshops. King Harrington encouraged her to stay open to emerging opportunities, and in turn, Hott Soares immersed herself in learning everything she could about the Institute.  

In 2021, she was promoted to senior administrative assistant for what is now known as Student Engagement and Campus Activities within DSL. A year later, she became assistant director of student organizations and programming in SOLE. In 2023, she was again promoted to associate director of student organizations and programming and received a DSL Infinite Mile Award in the category “Here for the Students.” 

In her current role, Hott Soares leads the student events and programming boards area, which includes the Class Councils, Ring Committee, Senior Ball and Week Committees, and the Student Events Board. She interfaces daily with the student groups, helping them build community and plan activities and programs both on and off campus. While the skills she teaches students are applicable to the task at hand, they are also life skills that students will carry with them long after their time at MIT.

Serving people and nurturing the MIT community are what Hott Soares enjoys most. She reminds students that amid a rigorous course load and demanding commitments, it’s important to have fun — especially when they are celebrating an event they worked hard to plan. “Their time at MIT is one of the most beautiful times of their lives,” she says. “I want them to remember that.” 

Soundbytes 

Q: What part of your work makes you feel most proud?

Hott Soares: I am proud of being able to work with the most brilliant minds in the world and still be myself. When I am interacting with students, we want to help each other, and we can create a relationship that is based on empathy, respect, trust, and humility. I am grateful that I get to work with so many wonderful people. 

Q: What advice would you give to a new staff member at MIT?

Hott Soares: Introduce yourself to people and take time to build relationships. Let others know what you do, what you want to do, and how you want to collaborate. Be humble, stay curious, and be open to learning. MIT can feel fast-paced, but it is also a community full of people who genuinely care. You will thrive by being your true self!

Q: How would you describe the community at MIT?

Hott Soares: The people at MIT are amazing. Because I don’t have my family here, MIT is like home. The community is made up of people from different backgrounds and cultures, and I’ve always felt respected and like I belong. It is welcoming, safe, and compassionate. A shared sense of purpose, collaboration, creativity, and drive make MIT an inspiring place to work. 

Three from MIT named 2026 Goldwater Scholars

Thu, 04/23/2026 - 3:00pm

Three MIT rising seniors have been selected to receive a 2026 Barry Goldwater Scholarship, including Deeksha Kumaresh in the School of Engineering and Anna Liu and Charlotte Myers in the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 454 were selected.

The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.

Deeksha Kumaresh, a third-year biological engineering major, is an undergraduate researcher at the Hammond Lab. The Hammond Research Group at the MIT Koch Institute for Integrative Cancer Research focuses on the self-assembly of polymeric nanomaterials, with a major emphasis on the use of electrostatics and other complementary interactions to generate multifunctional materials with highly controlled architecture.

“Hands down, the mentors I’ve encountered have been the most significant part of my MIT journey,” Kumaresh says. “I’m also extremely grateful to the Hammond Lab, which has provided a supportive environment where I can make mistakes, learn, and grow as a researcher. I treasure the spontaneous conversations with lab members (about science or life) and their willingness to treat me seriously as an independent researcher, even as an undergraduate.”

Kumaresh is mentored by Paula Hammond, dean of the School of Engineering, Institute Professor, and professor of chemical engineering. Kumaresh plans to pursue an MD/PhD. In the long term, she seeks to lead a bioengineering research lab to predict the efficacy and side effects of cancer therapies by developing systems-level computational and biological preclinical models.

“Receiving this scholarship has been incredibly meaningful, because it offered me the chance to reflect critically on my post-graduate goals and receive recognition for my journey toward them,” Kumaresh says. “Earning this scholarship has welcomed me into a tight-knit community where I’ve already found so much guidance. Everyone is genuinely curious about everyone else’s interests and is eager to lend a hand however they can.”

Anna Liu, a third-year chemistry major, is an undergraduate researcher in the Radosevich Group. The overarching objective of the group’s research is to develop new catalysts, strategies, and reagents for synthetic chemistry. By designing and synthesizing new molecular compounds with unknown structure and function, the group hopes to learn more about the general principles enabling new chemical transformations.

Liu is mentored by professor of chemistry Alexander Radosevich. She plans to pursue a PhD in organic or inorganic chemistry and eventually lead research developing sustainable synthetic transformations informed by fundamental mechanistic and reactivity studies, and teach at the university level.

“Going through the Goldwater application process gave me a deeper understanding of my research project and helped me reflect on my intrinsic motivations to pursue research. I’m excited to use what I’ve learned to keep growing as a researcher,” Liu says. “I am so grateful for the countless mentors, teachers, labmates, classmates, friends, and family in my life who have believed in me, fostered my passion for chemistry, and taught me so much. Receiving this scholarship is truly a testament to their outstanding support!"

Charlotte Myers, a third-year physics and astronomy major, conducts research at the Kavli Institute for Astrophysics and Space Research, where she applies machine learning to model galactic structure, and at the Center for Theoretical Physics, where she studies theoretical models of dark matter. Her research interests center on the physics of dark matter, which she approaches from multiple perspectives — from its distribution on galactic scales to particle-level models.

Myers is mentored by Lina Necib, an assistant professor in the Department of Physics. She plans to pursue a PhD in theoretical physics and conduct research in cosmology and astroparticle physics, with a focus on the fundamental physics of dark matter, and teach at the university level.

“I am very grateful to my research advisors, Professor Necib, Dr. Starkman, and Professor Slatyer, for their guidance and support in helping me develop as a researcher,” Myers says. “I find it deeply rewarding to engage with open questions in physics, and I am excited to continue pursuing this work in graduate school and beyond. Receiving this scholarship has given me both the resources and the confidence to continue on that path, even when progress is not always linear.”

The scholarship program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.

MIT takes top team honors in 86th Putnam Math Competition

Thu, 04/23/2026 - 2:00pm

In an outstanding performance at the 86th William Lowell Putnam Mathematical Competition, MIT’s team took the top spot for the sixth consecutive year. MIT secured four of the five Putnam Fellows, who are the five highest-ranking students, and the Elizabeth Lowell Putnam Prize, which is given to a woman whose “performance in the competition is particularly meritorious.”

The members of the winning team, consisting of junior Cheng Jiang, senior Luke Robitaille, and first-year Chunji Wang, were all named Putnam Fellows alongside senior Zixiang Zhou, each receiving a $2,500 award for their performance. Notably, Robitaille is a four-time Putnam Fellow, having received the award for each year of his studies. For a second consecutive year, sophomore Jessica Wan was awarded the Elizabeth Lowell Putnam Prize and received $1,000.

Wan was also among the top 25 scorers, along with 16 others from MIT: Warren Bei, Reagan Choi, Pico Gilman, Henry Jiang, Zhicheng Jiang, Papon Lapate, Gyudong Lee, Derek Liu, Maximus Lu, Krishna Pothapragada, Pitchayut Saengrungkongka, Qiao Sun, Allen Wang, Kevin Wang, and Yichen Xiao.

A legacy of success

“I was delighted to see how well the MIT students did on the Putnam exam this year, which reflects their hard work, talent, and enthusiasm,” says Professor Henry Cohn, who led class 18.A34 (Mathematical Problem Solving) this year, also informally known as the Putnam seminar.

MIT’s continued success in the Putnam competition stems from a variety of sources, including the problem-solving seminar, where students sharpen their skills by diving deep into tough problems and discussing solutions together.

Cohn, a former participant in the Putnam, comments on the joy of teaching the seminar and seeing students’ progress. “When you spend a semester watching students present solutions to difficult problems, you start to understand how they think,” says Cohn. “It’s exciting to see them apply their abilities to new, difficult problems."

Professor Bjorn Poonen, who also led the seminar in previous years (and is a four-time Putnam Fellow), describes it as an opportunity to hone a spectrum of skills in competition preparation. “Knowing how to explain things well is really important for doing well on the Putnam and for everything else, and for this it really helps to have experience communicating with others, which is what the problem-solving seminar is all about.”

A shared passion for problem-solving

The students who take the Putnam thrive on all aspects of the competition, from the social to the exam itself.

“It’s not a school day, and we still get to do math,” says Jiang, describing his excitement for the competition. Indeed, getting to “do math” extends beyond formally sitting for the exam, to breaks and opportunities for discussion that are interspersed throughout the day. The students take each opportunity to come together as seriously as they do the competition, and it is this collective passion for problem-solving that builds a strong sense of community and brings students back year after year.

“The competition brings together hundreds of students from across campus representing many majors, years of graduation, and degrees of math contest experience, but what brings everyone together is a shared love of solving problems,” Cohn says. “You can see this in the clusters of students who stay to discuss the problems long after the exam has ended. Mathematics can sometimes feel like a solitary pursuit, but at this level, collaboration is key.”

Community complements the passion these math enthusiasts share for problems and puzzles. “You get a kind of satisfaction similar to when you get unstuck while doing a crossword puzzle and everything falls into place,” says Poonen, describing his own experience solving Putnam problems.

Consistency in certainty

The competition is also an opportunity to see familiar faces. Robitaille recalls his experiences in high school math olympiads, and highlights the friendly atmosphere at the Putnam. “Throughout college, I have stayed close with people I met at competitions,” Robitaille says. “There’s the whole background of times spent together, not just on contest day.”

An event of both community and challenge, the competition offers the consistency and certainty that brought Robitaille and Zhou back year after year. “Each time, you have a set amount of time to sit in the room and work on the problems,” Robitaille says. “If you were the type of person for whom that would be a fun thing, like me, it’s nice to have an opportunity to do it again occasionally.”

“It’s more fun than the real world, where everything is complicated,” Zhou adds with a smile.

The full list of 2025 winners can be found on the Putnam website.

New chip can protect wireless biomedical devices from quantum attacks

Thu, 04/23/2026 - 12:00am

As quantum computers advance, they are expected to be able to break tried-and-true security schemes that currently keep most sensitive data secure from attackers. Scientists and policymakers are working to design and implement post-quantum cryptography to defend against these future attacks.

MIT researchers have developed an ultra-efficient microchip that can bring post-quantum cryptography techniques to wireless biomedical devices, like pacemakers and insulin pumps. Such wearable, ingestible, or implantable devices are usually too power-constrained to implement these computationally demanding security protocols.

Their tiny chip, which is about the size of a very fine needle tip, also includes built-in protections against physical hacking attempts that can bypass encryption to steal user data, such as a patient’s social security number or device credentials. Compared to prior designs, the new technology is more than an order of magnitude more energy-efficient.

In the long run, the new chip could enable next-generation wireless medical devices to maintain strong security even as quantum computing becomes more prevalent. In addition, it could be applied to many types of resource-constrained edge devices, like industrial sensors and smart inventory tags.

“Tiny edge devices are everywhere, and biomedical devices are often the most vulnerable attack targets because power constraints prevent them from having the most advanced levels of security. We’ve demonstrated a very practical hardware solution to secure the privacy of patients,” says Seoyoon Jang, an MIT electrical engineering and computer science (EECS) graduate student and lead author of a paper on the chip.

Jang is joined on the paper by Saurav Maji PhD ’23; visiting scholar Rashmi Agrawal; EECS graduate students Hyemin Stella Lee and Eunseok Lee; Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and an associate member of the Broad Institute of MIT and Harvard; and senior author Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science. The research was recently presented at the IEEE Custom Integrated Circuits Conference.

Stronger security

A large percentage of wireless biomedical devices, like ingestible biosensors for health monitoring, currently lack strong protection due to the computational demands of existing security protocols, Jang says.

But the complexity of post-quantum cryptography (PQC) can increase power consumption by two or three orders of magnitude.

Implementing PQC is of paramount importance, since regulatory bodies like the National Institute of Standards and Technology (NIST) will soon begin phasing out traditional cryptography protocols in favor of stronger PQC algorithms. In addition, some industry leaders believe rapid advances in quantum hardware make PQC implementation even more urgent.

To bring these power-hungry PQC protocols to wireless biomedical devices, the MIT researchers designed a customized microchip, known as an application-specific integrated circuit (ASIC), that greatly reduces energy overhead while guaranteeing the highest level of security.

“PQC is very secure algorithmically, but making a device resilient against physical attacks usually requires additional countermeasures that pump up the energy consumption at least two or three times. We want our chip to be robust to both security threats in a very lightweight manner,” Jang says.

A multi-pronged approach

To accomplish these goals, the researchers incorporated several design features into the chip.

First, they implemented two different PQC schemes to enhance robustness and “future-proof” their device in case one scheme is later proven to be insecure. To boost energy efficiency, they applied techniques that enable the PQC algorithms to share as much of the chip’s computational resources as possible.

Second, the researchers designed a highly efficient, on-chip true random number generator. This device continually generates random numbers to use for secret keys, which is essential to implement PQC.

Their on-chip design improves energy efficiency and security over standard approaches that usually receive random numbers from an external chip.

Third, they implemented countermeasures that prevent a type of physical hacking attempt, called a power side-channel attack, but only on the most vulnerable parts of the PQC protocols.

In power side-channel attacks, hackers steal secret information by analyzing the power consumption of a device while it processes data. The MIT researchers added just enough redundancy to the PQC operations to ensure the chip is protected from these types of attacks.

Fourth, they designed an early fault-detection mechanism so the chip will abort operations early if it detects a voltage glitch.

Wireless biomedical devices often have erratic power supplies, so they are susceptible to glitches that can cause an entire security procedure to fail. The MIT approach saves energy by stopping the chip from running a doomed procedure to completion.

“At the end of the day, because of the techniques we utilized, we can apply these post-quantum cryptography primitives while adding nothing to the overhead, with the added benefit of robustness to side-channel attacks,” Jang says.

Their device achieved 20 to 60 times higher energy efficiency than all other PQC security techniques they compared it to, with a more compact area than many existing chips.

“As we transition into post-quantum approaches, providing strong security for even the most resource-limited devices is essential. This work shows that robust cryptographic protection for biomedical and edge devices can be achieved alongside energy efficiency and programmability,” says Chandrakasan.

In the future, the researchers want to apply these techniques to other vulnerable applications and energy-constrained devices.

This research was funded, in part, by the U.S. Advanced Research Projects Agency for Health.

MIT affiliates elected to the American Academy of Arts and Sciences for 2026

Wed, 04/22/2026 - 4:00pm

Four MIT faculty members are among the roughly 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 22. Thirteen additional MIT alumni were also honored.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

The MIT faculty members elected in 2026 are:

  • Isaiah Andrews PhD ’14, Charles E. and Susan T. Harris Professor of Economics;
  • David Atkin, Barton L. Weller (1940) Professor of Economics;
  • Pablo Jarillo-Herrero, Cecil and Ida Green Professor of Physics; and
  • Benjamin Paul Weiss, Robert R. Shrock Professor of Earth and Planetary Sciences.

MIT alumni elected this year include Mark Aguiar PhD ’99 (Economics); Mark G. Allen SM ’86, PhD ’89 (Chemical Engineering); Magdalena Balazinska PhD ’06 (EECS); Keren Bergman SM ’91, PhD ’94 (EECS); Sara Cherry PhD ’00 (Biology); Cynthia J. Ebinger SM ’86, PhD ’88 (EAPS); Charles L. Epstein ’78 (Mathematics); Shanhui Fan PhD ’97 (Physics); Atif Mian ’96, PhD ’01 (Mathematics with Computer Science and Economics); Sarah E. O'Connor PhD ’01 (Chemistry); Darryll J. Pines SM ’88, PhD ’92 (Mechanical Engineering); Phillip (Terry) Ragon ’72 (Physics); and Mansour Shayegan ’79, EE ’81, SM ’81, PhD ’83 (Electrical Engineering).

“We celebrate the achievement of each new member and the collective breadth and depth of their excellence – this is a fitting commemoration of the nation’s 250th anniversary,” said Academy President Laurie Patton.

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.

Teaching AI models to say “I’m not sure”

Wed, 04/22/2026 - 3:15pm

Confidence is persuasive. In artificial intelligence systems, it is often misleading.

Today's most capable reasoning models share a trait with the loudest voice in the room: They deliver every answer with the same unshakable certainty, whether they're right or guessing. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have now traced that overconfidence to a specific flaw in how these models are trained, and developed a method that fixes it without giving up any accuracy.

The technique, called RLCR (Reinforcement Learning with Calibration Rewards), trains language models to produce calibrated confidence estimates alongside their answers. In addition to coming up with an answer, the model thinks about its uncertainty in that answer, and outputs a confidence score. In experiments across multiple benchmarks, RLCR reduced calibration error by up to 90 percent while maintaining or improving accuracy, both on the tasks the model was trained on and on entirely new ones it had never seen. The work will be presented at the International Conference on Learning Representations later this month.

The problem traces to a surprisingly simple source. The reinforcement learning (RL) methods behind recent breakthroughs in AI reasoning, including the training approach used in systems like OpenAI's o1, reward models for getting the right answer, and penalize them for getting it wrong. Nothing in between. A model that arrives at the correct answer through careful reasoning receives the same reward as one that guesses correctly by chance. Over time, this trains models to confidently answer every question they are asked, whether they have strong evidence or are effectively flipping a coin.

That overconfidence has consequences. When models are deployed in medicine, law, finance, or any setting where users make decisions based on AI outputs, a system that expresses high confidence regardless of its actual certainty becomes unreliable in ways that are difficult to detect from the outside. A model that says "I'm 95 percent sure" when it is right only half the time is more dangerous than one that simply gets the answer wrong, because users have no signal to seek a second opinion.

“The standard training approach is simple and powerful, but it gives the model no incentive to express uncertainty or say ‘I don’t know,’” says Mehul Damani, an MIT PhD student and co-lead author on the paper. “So the model naturally learns to guess when it is unsure.”

RLCR addresses this by adding a single term to the reward function: a Brier score, a well-established measure that penalizes the gap between a model's stated confidence and its actual accuracy. During training, models learn to reason about both the problem and their own uncertainty, producing an answer and a confidence estimate together. Confidently wrong answers are penalized. So are unnecessarily uncertain correct ones.
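As a rough sketch of the idea — not the paper’s exact formulation, and with an illustrative function name and no scaling terms — the combined reward can be written as a binary correctness term minus a Brier penalty on the model’s stated confidence:

```python
def rlcr_reward(correct: bool, confidence: float) -> float:
    """Illustrative RLCR-style reward: binary correctness minus a
    Brier-score penalty on the model's self-reported probability
    (0..1) that its answer is correct."""
    y = 1.0 if correct else 0.0
    brier_penalty = (confidence - y) ** 2  # zero when perfectly calibrated
    return y - brier_penalty

# A confidently wrong answer is punished hardest...
assert rlcr_reward(False, 0.95) < rlcr_reward(False, 0.2)
# ...while a correct answer earns more when stated with matching confidence.
assert rlcr_reward(True, 0.9) > rlcr_reward(True, 0.5)
```

Under this shape of reward, guessing with high confidence is no longer free: a wrong answer delivered at 95 percent confidence scores far worse than the same wrong answer hedged at 20 percent.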

The math backs it up: the team proved formally that this type of reward structure guarantees models that are both accurate and well-calibrated. They then tested the approach on a 7-billion-parameter model across a range of question-answering and math benchmarks, including six datasets the model had never been trained on.

The results showed a consistent pattern. Standard RL training actively degraded calibration compared to the base model, making models worse at estimating their own uncertainty. RLCR reversed that effect, substantially improving calibration with no loss in accuracy. The method also outperformed post-hoc approaches, in which a separate classifier is trained to assign confidence scores after the fact. "What’s striking is that ordinary RL training doesn't just fail to help calibration. It actively hurts it," says Isha Puri, an MIT PhD student and co-lead author. "The models become more capable and more overconfident at the same time."

The team also demonstrated that the confidence estimates produced by RLCR are practically useful at inference time. When models generate multiple candidate answers, selecting the one with the highest self-reported confidence, or weighting votes by confidence in a majority-voting scheme, improves both accuracy and calibration as compute scales.
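A minimal sketch of such a confidence-weighted vote — the helper name and data shape here are hypothetical, and the paper’s actual selection procedure may differ:

```python
from collections import defaultdict

def confidence_weighted_vote(candidates):
    """Pick an answer from (answer, confidence) pairs by summing
    each distinct answer's self-reported confidence and returning
    the answer with the largest total."""
    totals = defaultdict(float)
    for answer, confidence in candidates:
        totals[answer] += confidence
    return max(totals, key=totals.get)

# Three sampled answers: "42" appears twice but with low confidence
# (total 0.7), "41" once with high confidence (total 0.9).
print(confidence_weighted_vote([("42", 0.3), ("41", 0.9), ("42", 0.4)]))  # prints 41
```

Compared with a plain majority vote, this lets one high-confidence sample outweigh several low-confidence ones — which only helps if the confidence estimates are well calibrated, as RLCR training aims to make them.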

An additional finding suggests that the act of reasoning about uncertainty itself has value. The researchers trained classifiers on model outputs and found that including the model's explicit uncertainty reasoning in the input improved the classifier's performance, particularly for smaller models. The model's self-reflective reasoning about what it does and doesn’t know contains real information, not just decoration.

In addition to Damani and Puri, other authors on the paper are Stewart Slocum, Idan Shenfeld, Leshem Choshen, and senior authors Jacob Andreas and Yoon Kim.
