MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
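
One commonly cited form of this limit is the thermionic, or “Boltzmann,” bound on the subthreshold swing; the expression below is offered as general context rather than as a statement from the paper:

\[ SS \;\ge\; \frac{k_B T}{q}\,\ln 10 \;\approx\; 60 \text{ mV per decade of current at } T \approx 300 \text{ K}. \]

In practical terms, the gate voltage of a conventional silicon transistor must change by at least about 60 millivolts to change the current tenfold at room temperature, which sets a floor on the operating voltage and on the energy spent per switching event.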

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Sixteen new START.nano companies are developing hard-tech solutions with the support of MIT.nano

Tue, 04/07/2026 - 4:40pm

MIT.nano has announced that 16 startups became active participants in its START.nano program in 2025, more than doubling the number of new companies from the previous year. Aimed at speeding the transition of hard-tech innovation to market, START.nano supports new ventures through the discounted use of MIT.nano shared facilities and guided access to the MIT innovation ecosystem. The newly engaged startups are developing solutions for some of the world’s greatest challenges in health, climate, energy, semiconductors, novel materials, and quantum computing.

“The unique resources of MIT.nano enable not just the foundational research of academia, but the translation of that research into commercial innovations through startups,” says START.nano Program Manager Joyce Wu SM ’00, PhD ’07. “The START.nano accelerator supports early-stage companies from MIT and beyond with the tools and network they need for success.”

Launched in 2021, START.nano aims to increase the survival rate of hard-tech startups by easing their journey from the lab to the real world. In addition to receiving access to MIT.nano’s laboratories, program participants are invited to present at startup exhibits at MIT conferences and at exclusive events, including the newly launched PITCH.nano competition.

“For an early-stage startup working at the frontier of superconductor discovery, the combination of infrastructure and community has been irreplaceable,” says Jason Gibson, CEO and co-founder of Quantum Formatics. “START.nano isn’t just a resource,” adds Cynthia Liao MBA ’24, CEO and co-founder of Vertical Semiconductor. “It’s a strategic advantage that accelerates our roadmap, allowing us to iterate quickly to meet customer needs and strengthen our competitive edge.”

Although an MIT affiliation is not required, five of the 16 companies in the new cohort are led by MIT alumni, and an additional three have MIT affiliation. In total, 49 percent of the startups in START.nano were founded by MIT graduates.

Here are the intended impacts of the 16 new START.nano companies:

Acorn Genetics is developing a "smartphone of sequencing," launching the power of genetic analysis out of slow, centralized labs and into the hands of consumers for fast, portable, and affordable sequencing.

Addis Energy leverages oil, gas, and geothermal drilling technologies to unlock the chemical potential of iron-rich rocks. By injecting engineered fluids, they harness the earth’s natural energy to produce ammonia that is both abundant and cost-effective.

Augmend Health uses virtual reality and AI to deliver clinical data intelligence services for specialty care that turn incomplete documentation into revenue, compliance, and better treatment decisions.

Brightlight Photonics is building high-performance laser infrastructure at chip scale, integrating Titanium:Sapphire gain to deliver broadband, high-power, low-noise optical sources for advanced photonic systems.

Cahira Technologies is creating the new paradigm of brain-computer symbiosis for treating intractable diseases and human augmentation through autonomous, nonsurgical neural implants.

Copernic Catalysts is leveraging computational modeling to develop and commercialize transformational catalysts for low-cost and sustainable production of bulk chemicals and e-fuels.

Daqus Energy is unlocking high-energy lithium-ion batteries using critical metal-free organic cathodes.

Electrified Thermal Solutions is reinventing the firebrick to electrify industrial heat.

Guardion is making analytical instruments, chemical detectors, and radiation detectors more sensitive, portable, and easier to scale with nanomaterial-based ion detectors.

Mantel Capture is designing carbon capture materials to operate at the high temperatures found inside boilers, kilns, and furnaces — enabling highly efficient carbon capture that has not been possible until now.

nOhm Devices is developing highly-efficient cryogenic electronics for quantum computers and sensors.

Quantum Formatics is speeding discovery of the world’s next superconductors using proprietary AI.

Qunett is building the foundational hardware stack for deployable quantum networks to power the next era of global connectivity.

Rheyo is developing new ways to make dental care more effective, efficient, and easy through advanced materials and technology.

Vertical Semiconductor is commercializing high-voltage, high-density, high-efficiency vertical GaN (gallium nitride) to power the next era of compute.

VioNano Innovations is developing specialty material solutions that reduce variability and improve precision in semiconductor manufacturing, allowing chipmakers to build even smaller, faster, and more cost-effective chips.

START.nano now comprises over 32 companies and 11 graduates — ventures that have moved beyond the prototyping stages, and some into commercialization. See the full list here.

Researchers develop molecular editing tool to relocate alcohol groups

Tue, 04/07/2026 - 12:35pm

A significant challenge for researchers in materials science and drug discovery is that even the most minor change to a molecule’s structure can completely alter its function. Historically, making these adjustments meant researchers had to re-synthesize the target molecule from scratch — a time-consuming and expensive bottleneck akin to tearing down a house just to move a lamp.

In an exciting discovery recently published in Nature, MIT chemists led by Professor Alison Wendlandt have developed a precision technique that allows scientists to seamlessly relocate alcohol functional groups from one spot on a molecule to a neighboring site. This process bypasses the need to rebuild the entire structure and is the result of a multi-year collaboration with Bristol Myers Squibb.

Functional group repositioning

Using a special light-sensitive molecule called decatungstate as a catalyst, the reaction triggers a highly controlled “migration” of the alcohol group. The process is remarkably predictable, ensuring the molecule retains its precise 3D shape and orientation throughout the move.

The ability to implement subtle structural tweaks without the waste of “from-scratch” synthesis eliminates a primary hurdle that has long plagued the field. Furthermore, because the reaction is gentle enough to work on complex, nearly finished structures, it serves as a powerful fine-tuning tool for late-stage drug candidates.

Precision editing to unlock new chemical designs

When combined with existing chemical methods, this tool provides new pathways to create challenging molecular architectures and oxygenation patterns that were previously out of reach.

“This alcohol migration strategy allows for precise, molecular-level tuning of oxygen atom positions,” says Qian Xu, the co-first author of the paper and a postdoc in the Wendlandt Group. “With predictable stereo- and regioselectivity and late-stage operability, it presents an enticing chance to modify natural products and drug molecules through ‘editing.’”

Ultimately, this precision editing tool holds the potential to dramatically improve the efficiency of molecular design campaigns, accelerating the development of new pharmaceuticals, materials, and agrochemicals.

In addition to Wendlandt and Xu, MIT contributors include co-lead author and graduate student Yichen Nie, recent postdoc Ronghua Zhang, and professor of chemistry Jeremiah A. Johnson. Other authors include Jacob-Jan Haaksma of the University of Groningen in The Netherlands; Natalie Holmberg-Douglas, Farid van der Mei, and Chloe Williams of Bristol Myers Squibb; and Paul M. Scola of Actithera.

Study reveals “two-factor authentication” system that controls microRNA destruction

Tue, 04/07/2026 - 12:10pm

Cells rely on tiny molecules called microRNAs to tune which genes are active and when. Cells must carefully control the lifespan of microRNAs to prevent widespread disruption to gene regulation.

A new study led by researchers at MIT’s Whitehead Institute for Biomedical Research and Germany’s Max Planck Institute of Biochemistry reveals how cells selectively eliminate certain microRNAs through an unexpectedly intricate molecular recognition system. The open-access work, published on March 18 in Nature, shows that the process requires two separate RNA signals, similar to how many digital systems require two forms of identity verification before granting access.

The findings explain how cells use this “two-factor authentication” system to ensure that only intended microRNAs are destroyed, leaving the rest of the gene regulation machinery in operation.

MicroRNAs are short strands of RNA that help control gene expression. Working together with a protein called Argonaute, they bind to specific messenger RNAs — the molecules that carry genetic instructions from DNA to the cell’s protein-making machinery — and trigger their destruction. In this way, microRNAs can reduce the production of specific proteins.

While scientists recognized that microRNAs could be destroyed through a pathway known as target-directed microRNA degradation, or TDMD, the details of how cells recognized which microRNAs to eliminate remained unclear.

“We knew there was a pathway that could target microRNAs for degradation, but the biochemical mechanism behind it wasn’t understood,” says MIT Professor David Bartel, a Whitehead Institute member and co-senior author of the study.

Earlier work from Bartel’s lab and others had identified a key player in this pathway: the ZSWIM8 E3 ubiquitin ligase. E3 ubiquitin ligases are involved in the cell’s recycling system and attach a small molecular tag called ubiquitin to other proteins, marking them for destruction.

The researchers first showed that the ZSWIM8 E3 ligase specifically binds and tags Argonaute, the protein that holds microRNAs and helps regulate genes. The researchers’ next challenge was to understand how this machinery recognized only Argonaute complexes carrying specific microRNAs that should be degraded.

The answer turned out to be surprisingly sophisticated.

Using a combination of biochemistry and cryo-electron microscopy — an imaging technique that reveals molecular structures at near-atomic resolution — the researchers discovered that the degradation system relies on a dual-RNA recognition process. First, Argonaute must carry a specific microRNA. Second, another RNA molecule called a “trigger RNA” must bind to that microRNA in a particular way.

The degradation machinery activates only when both signals are present.
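
To make the “two-factor” logic concrete, here is a loose, purely illustrative Python sketch; the function, its inputs, and the set of target microRNAs are invented for the example and are not taken from the study.

# Hypothetical set of microRNAs subject to target-directed degradation (TDMD);
# the entries are placeholders for illustration only.
TDMD_TARGET_MIRNAS = {"miR-7", "miR-30b"}

def should_degrade(complex_state: dict) -> bool:
    """Mimic the dual-signal requirement described above.

    Factor 1: the Argonaute complex carries a microRNA that can be targeted.
    Factor 2: a trigger RNA is paired with that microRNA in the required way.
    Only when both are true would the ZSWIM8-mediated tagging step proceed.
    """
    carries_target_mirna = complex_state.get("microRNA") in TDMD_TARGET_MIRNAS
    trigger_paired = complex_state.get("trigger_rna_paired", False)
    return carries_target_mirna and trigger_paired

# A complex with the right microRNA but no properly bound trigger RNA is spared.
print(should_degrade({"microRNA": "miR-7", "trigger_rna_paired": False}))  # False
print(should_degrade({"microRNA": "miR-7", "trigger_rna_paired": True}))   # True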

This dual requirement ensures exquisite specificity. Each cell contains over a hundred thousand Argonaute–microRNA complexes regulating many genes, and destroying them indiscriminately would disrupt essential biological processes.

“The vast majority of Argonaute molecules in the cell are doing useful work regulating gene expression,” says Bartel, who is a professor of biology at MIT and also a Howard Hughes Medical Institute investigator. “You only want to degrade the ones carrying a particular microRNA and bound to the right trigger RNA. Without that specificity, the cell would lose its microRNAs and the essential regulation that they provide.”

The structural images revealed complex molecular interactions. The ZSWIM8 ligase detects multiple structural changes that occur when the two RNAs bind together within the Argonaute protein.

“When we saw the structure, everything clicked,” says Elena Slobodyanyuk, a graduate student in Bartel’s lab and co-first author of the study. “You could see how the pairing of the trigger RNA with the microRNA reshapes the Argonaute complex in a way that the ligase can recognize.”

Beyond explaining how TDMD works, the findings may impact how scientists think about the regulation of RNA molecules more broadly.

“A lot of E3 ligases recognize their targets through simpler signals,” says Jakob Farnung, co-first author and researcher in the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “It was like opening a treasure chest where every detail revealed something new and mesmerizing.”

MicroRNAs typically persist in cells for much longer time periods than most messenger RNAs, but some degrade far more quickly, and the TDMD pathway appears to account for many of these unusually short-lived microRNAs.

The researchers are now investigating whether other RNAs can trigger similar degradation pathways and whether additional microRNAs are regulated through variations of the mechanism shown in this study.

“This opens up a whole new way of thinking about how RNA molecules can control protein degradation,” says Brenda Schulman, study co-senior author and director of the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “Here, the recognition was far more elaborate than expected. There’s likely much more left to discover.”

Uncovering the details of this intricate regulatory system required interdisciplinary collaboration, combining expertise in RNA biochemistry, structural biology, and ubiquitin enzymology to solve this long-standing molecular puzzle.

“This was a project that required the strengths of two labs working at the forefront of their fields,” says Schulman, who is also an alum of Whitehead Institute. “It was an incredible team effort.”

How bacteria suppress immune defenses in stubborn wound infections

Tue, 04/07/2026 - 11:40am

Chronic wound infections are notoriously difficult to manage because some bacteria can actively interfere with the body’s immune defenses. In wounds, Enterococcus faecalis (E. faecalis) is particularly resilient — it can survive inside tissues, alter the wound environment, and weaken immune signals at the injury site. This disruption creates conditions where other microbes can easily establish themselves, resulting in multi-species infections that are complex and slow to resolve. Such persistent wounds, including diabetic foot ulcers and post-surgical infections, place a heavy burden on patients and health care systems, and sometimes lead to serious complications such as amputations.

Now, researchers have discovered how E. faecalis releases lactic acid that acidifies its surroundings and suppresses the immune-cell signal needed to start a proper response to infection. By silencing the body’s defenses, the bacterium can cause persistent and hard-to-treat wound infections. This explains why some wounds struggle to heal, even with treatment, and why infections involving multiple bacteria are especially difficult to eradicate.

The work was led by researchers from the Singapore-MIT Alliance for Research and Technology (SMART) Antimicrobial Resistance (AMR) interdisciplinary research group, alongside collaborators from the Singapore Centre for Environmental Life Sciences Engineering at Nanyang Technological University (NTU Singapore), MIT, and the University of Geneva in Switzerland.

In a paper titled “Enterococcus faecalis-derived lactic acid suppresses macrophage activation to facilitate persistent and polymicrobial wound infections,” recently published in Cell Host & Microbe, the researchers documented how E. faecalis releases large amounts of lactic acid during infection. This acidity suppresses the activation of macrophages — immune cells that normally help to clear infections — and interferes with several important internal processes that help the cell recognize and respond to infection. As a result, the mechanisms that cells rely on to send out “danger” signals are suppressed, leaving the macrophages unable to fully activate.

Researchers found that E. faecalis uses a two‑step mechanism to achieve this. Lactic acid enters the macrophages through a lactate transporter called MCT‑1 and also binds to a lactate-sensing receptor, GPR81, on the cell surface. By engaging both pathways, the bacterium effectively shuts down downstream immune signalling and blocks the macrophage’s inflammatory response, allowing E. faecalis to persist in the wound much longer than it should. Specifically, the lactic acid prevents a key immune alarm signal, known as NF-κB, from switching on inside these cells.

This was proven in a mouse wound model, where strains of E. faecalis that could not make lactic acid were cleared much more quickly, and the wounds also showed stronger immune activity. In wounds infected with both E. faecalis and Escherichia coli, the weakened immune response caused by lactic acid also allowed E. coli to grow better. This explains why wound infections often involve multiple species of bacteria and become harder to treat over time, particularly since E. faecalis is among the most common bacteria found in chronic wounds.

“Chronic wound infections often fail not because antibiotics are powerless, but because the immune system has effectively been ‘switched off’ at the infection site. We found that E. faecalis floods the wound with lactic acid, lowering pH and muting the NF‑κB alarm inside macrophages — the very cells that should be calling for help. By pinpointing how acidity rewires immune signalling, we now have clear targets to reactivate the immune response,” says first author Ronni da Silva, research scientist at SMART AMR, former postdoc in the lab of co-author and MIT professor of biology Jianzhu Chen, and SCELSE-NTU visiting researcher.

“This discovery strengthens our understanding of host-pathogen interactions and offers new directions for developing treatments and wound care that target the bacteria’s immunosuppressive strategies. By revealing how the immune response is shut down, this research may help improve infection management and support better recovery outcomes for patients, especially those with chronic wounds or weakened immunity,” says Kimberly Kline, principal investigator at SMART AMR, SCELSE-NTU visiting academic, professor at the University of Geneva, and corresponding author of the paper.

By identifying lactic‑acid‑driven immune suppression as a root cause of persistent wound infections, this work highlights the potential of treatment approaches that support the immune system, rather than rely on antibiotics alone. This could lead to therapies that help wounds heal more reliably and reduce the risk of complications. Potential directions include reducing acidity in the wound or blocking the signals that lactic acid uses to switch off immune cells.

Building on their study, the researchers plan to explore validation in additional pathogens and human wound samples, followed by assessments in advanced preclinical models ahead of any potential clinical trials.

The research was partially supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.

MIT graduate engineering and business programs ranked highly by U.S. News for 2026-27

Tue, 04/07/2026 - 12:01am

U.S. News and World Report has again placed MIT’s graduate program in engineering at the top of its annual rankings, released today. The Institute has held the No. 1 spot since 1990, when the magazine first ranked such programs.

The MIT Sloan School of Management also placed highly, occupying the No. 6 spot for the best graduate business programs.

Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering, chemical engineering, computer engineering (tied with the University of California at Berkeley), electrical/electronic/communications engineering (tied with Stanford University and Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering.

In the rankings of individual MBA specialties, MIT placed first in four areas: business analytics, entrepreneurship (with Stanford), production/operations, and supply chain/logistics. It placed second in executive MBA programs (with the University of Chicago).

U.S. News bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of graduate programs in the sciences, social sciences, and humanities are based solely on reputational surveys.

In the sciences, ranked by U.S. News for the first time in four years, MIT’s doctoral programs placed first in four areas: biology (with Scripps Research Institute), chemistry (with Berkeley and Caltech), computer science (with Carnegie Mellon University and Stanford), and physics (with Caltech, Princeton University, and Stanford). The Institute placed second in mathematics (with Harvard University, Stanford, and Berkeley).

Helping data centers deliver higher performance with less hardware

Tue, 04/07/2026 - 12:00am

To improve data center efficiency, multiple storage devices are often pooled together over a network so many applications can share them. But even with pooling, significant device capacity remains underutilized due to performance variability across the devices.

MIT researchers have now developed a system that boosts the performance of storage devices by handling three major sources of variability simultaneously. Their approach delivers significant speed improvements over traditional methods that tackle only one source of variability at a time.

The system uses a two-tier architecture, with a central controller that makes big-picture decisions about which tasks each storage device performs, and local controllers for each machine that rapidly reroute data if that device is struggling.

The method, which can adapt in real time to shifting workloads, does not require specialized hardware. When the researchers tested this system on realistic tasks like AI model training and image compression, it nearly doubled the performance delivered by traditional approaches. By intelligently balancing the workloads of multiple storage devices, the system can increase overall data center efficiency.

“There is a tendency to want to throw more resources at a problem to solve it, but that is not sustainable in many ways. We want to be able to maximize the longevity of these very expensive and carbon-intensive resources,” says Gohar Chaudhry, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique. “With our adaptive software solution, you can still squeeze a lot of performance out of your existing devices before you need to throw them away and buy new ones.”

Chaudhry is joined on the paper by Ankit Bhardwaj, an assistant professor at Tufts University; Zhenyuan Ruan PhD ’24; and senior author Adam Belay, an associate professor of EECS and a member of the MIT Computer Science and Artificial Intelligence Laboratory. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation.

Leveraging untapped performance

Solid-state drives (SSDs) are high-performance digital storage devices that allow applications to read and write data. For instance, an SSD can store vast datasets and rapidly send data to a processor for machine-learning model training.   

Pooling multiple SSDs together so many applications can share them improves efficiency, since not every application needs to use the entire capacity of an SSD at a given time. But not all SSDs perform equally, and the slowest device can limit the overall performance of the pool.

These inefficiencies arise from variability in SSD hardware and the tasks they perform.

To utilize this untapped SSD performance, the researchers developed Sandook, a software-based system that tackles three major forms of performance-hampering variability simultaneously. “Sandook” is an Urdu word that means “box,” to signify “storage.”

One type of variability is caused by differences in the age, amount of wear, and capacity of SSDs that may have been purchased at different times from multiple vendors.

The second type of variability is due to the mismatch between read and write operations occurring on the same SSD. To write new data to the device, the SSD must erase some existing data. This process can slow down data reads, or retrievals, happening at the same time.

The third source of variability is garbage collection, a process of gathering and removing outdated data to free up space. This process, which slows SSD operations, is triggered at random intervals that a data center operator cannot control.

“I can’t assume all SSDs will behave identically through my entire deployment cycle. Even if I give them all the same workload, some of them will be stragglers, which hurts the net throughput I can achieve,” Chaudhry explains.

Plan globally, react locally

To handle all three sources of variability, Sandook utilizes a two-tier structure. A global scheduler optimizes the distribution of tasks for the overall pool, while faster schedulers on each SSD react to urgent events and shift operations away from congested devices.

The system overcomes delays from read-write interference by rotating which SSDs an application can use for reads and writes. This reduces the chance reads and writes happen simultaneously on the same machine.

Sandook also profiles the typical performance of each SSD. It uses this information to detect when garbage collection is likely slowing operations down. Once detected, Sandook reduces the workload on that SSD by diverting some tasks until garbage collection is finished.

“If that SSD is doing garbage collection and can’t handle the same workload anymore, I want to give it a smaller workload and slowly ramp things back up. We want to find the sweet spot where it is still doing some work, and tap into that performance,” Chaudhry says.

The SSD profiles also allow Sandook’s global controller to assign workloads in a weighted fashion that considers the characteristics and capacity of each device.

Because the global controller sees the overall picture and the local controllers react on the fly, Sandook can simultaneously manage forms of variability that happen over different time scales. For instance, delays from garbage collection occur suddenly, while latency caused by wear and tear builds up over many months.
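
The article describes Sandook only at a high level, so the Python below is a minimal sketch of the general two-tier idea rather than Sandook’s actual implementation or API; every class, field, and threshold in it is hypothetical. A global scheduler splits incoming work in proportion to each SSD’s profiled baseline throughput, and a per-device controller sheds part of its share when its recent throughput suggests garbage collection is under way.

from dataclasses import dataclass

@dataclass
class SSDProfile:
    name: str
    baseline_iops: float   # profiled "healthy" throughput for this device
    recent_iops: float     # most recently measured throughput

@dataclass
class LocalController:
    """Per-device tier: reacts quickly to transient slowdowns such as garbage collection."""
    profile: SSDProfile
    gc_threshold: float = 0.6   # suspect garbage collection below 60 percent of baseline

    def in_garbage_collection(self) -> bool:
        return self.profile.recent_iops < self.gc_threshold * self.profile.baseline_iops

    def admit(self, requests: int) -> int:
        # During suspected garbage collection, accept only half the assigned load
        # and let the global tier reroute the remainder.
        return requests // 2 if self.in_garbage_collection() else requests

class GlobalScheduler:
    """Pool-wide tier: plans the big-picture split of work across devices."""

    def __init__(self, controllers: list):
        self.controllers = controllers

    def dispatch(self, total_requests: int) -> dict:
        capacity = sum(c.profile.baseline_iops for c in self.controllers)
        plan, rerouted = {}, 0
        for c in self.controllers:
            share = int(total_requests * c.profile.baseline_iops / capacity)
            accepted = c.admit(share)   # the local tier may push back
            plan[c.profile.name] = accepted
            rerouted += share - accepted
        healthy = [c for c in self.controllers if not c.in_garbage_collection()]
        for c in healthy:   # spread the rejected work over devices that look healthy
            plan[c.profile.name] += rerouted // len(healthy)
        return plan

if __name__ == "__main__":
    ssds = [SSDProfile("ssd0", 500_000, 480_000),
            SSDProfile("ssd1", 400_000, 150_000),   # slow right now, likely garbage collection
            SSDProfile("ssd2", 450_000, 440_000)]
    scheduler = GlobalScheduler([LocalController(p) for p in ssds])
    print(scheduler.dispatch(100_000))

In the real system the local tier would also rotate which devices serve reads versus writes, as described above, to keep the two from colliding on the same SSD.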

The researchers tested Sandook on a pool of 10 SSDs and evaluated the system on four tasks: running a database, training a machine-learning model, compressing images, and storing user data. Sandook boosted the throughput of each application between 12 and 94 percent when compared to static methods, and improved the overall utilization of SSD capacity by 23 percent.

The system enabled SSDs to achieve 95 percent of their theoretical maximum performance, without the need for specialized hardware or application-specific updates.

“Our dynamic solution can unlock more performance for all the SSDs and really push them to the limit. Every bit of capacity you can save really counts at this scale,” Chaudhry says.

In the future, the researchers want to incorporate new protocols available on the latest SSDs that give operators more control over data placement. They also want to leverage the predictability in AI workloads to increase the efficiency of SSD operations.

“Flash storage is a powerful technology that underpins modern datacenter applications, but sharing this resource across workloads with widely varying performance demands remains an outstanding challenge. This work moves the needle meaningfully forward with an elegant and practical solution ready for deployment, bringing flash storage closer to its full potential in production clouds,” says Josh Fried, a software engineer at Google and incoming assistant professor at the University of Pennsylvania, who was not involved with this work.

This research was funded, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the Semiconductor Research Corporation.

Electrons in moiré crystals explore higher-dimensional quantum worlds

Fri, 04/03/2026 - 5:30pm

The electrons that power our society flow left and right through the circuitry in our electronics, back and forth along the transmission lines that make up our power grid, and up and down to light up every floor of every building. But the electrons in newly discovered “moiré crystals” move in much stranger ways. They can move left and right, back and forth, or up and down in our three-dimensional world, but these electrons also act as if they can teleport in and out of a mysterious fourth dimension of space that is perpendicular to our perceivable reality. Physicists have found that this strange, newly discovered quantum behavior has nothing to do with the electrons themselves and everything to do with the strange material environment in which they live.

The electrons in moiré crystals leap into a fourth dimension through a process called “quantum tunneling.” While a soccer ball sitting at the bottom of a hill will stay put until someone retrieves it, a quantum particle in a valley can jump out all on its own. Quantum tunneling may seem magical to us, but it is quite commonplace in the microscopic quantum world, on the length scales of atoms. Quantum tunneling is also important on larger length scales, particularly in large superconducting circuits that underlie an emerging landscape of quantum technology, as recognized by the 2025 Nobel Prize in Physics. 

However, quantum tunneling in moiré crystals is different: physicists have now measured that once an electron tunnels, it acts as if it had tunneled into a completely different world and come back again, as if it had been transported through a fourth “synthetic” dimension.

In a paper published recently in the journal Nature, a team of MIT researchers realizes a long-anticipated scalable technique for producing high-quality moiré materials as moiré crystals, overcoming a materials bottleneck for next-generation electronic applications. In addition, the electrons in these crystals act as if they can teleport through a fourth dimension of space, unlocking a realistic materials approach for realizing numerous theoretical predictions of higher-dimensional superconductivity and higher-dimensional topological properties in the laboratory.

The study’s co-lead authors are Kevin Nuckolls, a Pappalardo postdoc in physics at MIT, and Nisarga Paul PhD ’25, and the study’s corresponding author is Joe Checkelsky, professor of physics at MIT. In addition, the study’s MIT co-authors include Alan Chen, Filippo Gaggioli, Joshua Wakefield, and Liang Fu, along with collaborators at Harvard University, Toho University, and the National High Magnetic Field Laboratory.

Crystal perfection

To make a moiré material, physicists first start with atomically thin two-dimensional (2D) materials, like the thinnest sheets of carbon known as graphene. Moiré materials can be created by combining individual sheets of the same 2D material and twisting them back and forth with respect to one another. Moiré materials can also be created by combining two different 2D materials that are very similar, but not quite the same, which ensures that they can never perfectly match one another even when carefully aligned. Both of these methods create intricate interference patterns where the individual layers of moiré materials are nearly aligned in some areas and visibly misaligned in others. Physicists call these patterns “moiré superlattices,” named after historical French fabrics that show similarly beautiful patterns generated by overlaying two different threading patterns.
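
A rough rule of thumb, standard for twisted bilayers though not quoted in the article, shows why these patterns dwarf the underlying lattices: two identical lattices with spacing a, twisted by a small angle θ, produce a moiré pattern with period

\[ L_{\text{moiré}} \;\approx\; \frac{a}{2\sin(\theta/2)} \;\approx\; \frac{a}{\theta} \quad (\theta \text{ small, in radians}), \]

so a twist of about 1 degree stretches graphene’s roughly 0.25-nanometer lattice into a superlattice with a period of roughly 14 nanometers, dozens of times larger than the atomic spacing.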

For more than a decade, moiré materials have completely reshaped how physicists design and control quantum material properties, and the physics labs at MIT have been the hotbed of transformative discoveries in this ever-growing research field. Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT, and Raymond Ashoori, professor of physics at MIT, were early adopters of new techniques for fabricating moiré materials. Together in 2014, their labs discovered that electrons in moiré materials made from graphene and the 2D material boron nitride live in an intricate quantum fractal known as “Hofstadter’s butterfly.” In 2018, Jarillo-Herrero’s lab discovered that moiré materials made from twisting two sheets of graphene were fertile grounds for unconventional superconductivity that, by some metrics, is one of the strongest superconductors ever discovered. Long Ju, the Lawrence C. and Sarah W. Biedenharn Associate Professor of Physics, and his lab discovered in 2024 that moiré materials made from multilayer graphene and boron nitride cause electrons to split apart into fractional pieces, a quantum phenomenon previously thought to be exclusively confined to extremely high magnetic fields, but now realized without the need for a magnetic field.

Common across all of these experiments, and those performed around the world, were the tireless efforts of students and postdocs in carefully assembling moiré material devices by hand, one at a time. To make a moiré material device, 2D materials like graphene are peeled using Scotch tape from rock-like crystals, such as graphite. Then, sticky polymer films and microscopes enable researchers to pick up different 2D materials one by one with a precise sequence of twist angles. Finally, these stacks of 2D materials are etched into individual devices that allow researchers to investigate their properties in the lab.

In their new study, Joe Checkelsky and his lab have discovered a new technique for generating moiré materials that skips over all of these laborious steps. Their new method takes an entirely different approach, and it’s one that can assemble moiré materials by the tens of thousands. Instead of assembling samples one by one and layer by layer, Checkelsky and his lab have found new chemical synthesis routes that enlist Mother Nature’s help to grow “moiré crystals” with high-quality moiré superlattices built into each of their layers. By analogy, if one were to think of previous generations of moiré materials like two stacked sheets of paper with different line spacings, Checkelsky has figured out how to generate entire libraries of encyclopedias whose odd-numbered pages and even-numbered pages have two different line spacings.

“It feels incredible for our team to have made this materials discovery, particularly at MIT,” says Nuckolls, co-lead author on the work. “Moiré materials have become a central focus of quantum materials research today in large part because of the work of our colleagues just down the hallway.”

In the end, it turns out that nature is by far the best at assembling moiré materials when given the right tools. The MIT team discovered that naturally grown moiré materials are nearly perfect and highly reproducible. This offers a long-anticipated proof-of-concept demonstration of a potentially scalable route to using moiré materials in next-generation electronics. Although there are many more obstacles to be overcome to transform these fundamental science results into usable technology, the team has demonstrated a crucial first step in the right direction.

4D in 4K

After discovering how to grow and manipulate moiré superlattices in moiré crystals, the team began to investigate their properties. Initially, the team found that the metallic properties of these materials were surprisingly complicated, but they soon shifted their perspective to think from a higher-dimensional point of view, an idea inspired by theoretical proposals made roughly half a century ago. To peer into this prospective four-dimensional quantum world, the team performed detailed studies of the electronic and magnetic properties of moiré crystals at very large magnetic fields. The electrons in common metals move in tight circular orbits when placed in a magnetic field. However, something very special happens when they move in moiré crystals with two different interfering lattices. This interference generates a moiré superlattice that is mathematically equivalent to an emergent four-dimensional “superspace” lattice. Guided by this new 4D superspace lattice, the team discovered that these electrons could now move through this fourth dimension when their motion aligns to the direction where the two competing lattices interfere the most.

“Metaphorically, our measurements uncover ‘shadows’ of an emergent 4D landscape upon which the electrons live,” says Nuckolls. “By carefully analyzing these 3D silhouettes from different angles and perspectives, our measurements reconstruct the 4D landscape that guides electrons in moiré crystals.”

Although this extra synthetic dimension is fictitious and the electrons in moiré crystals are actually still stuck in our 3D reality, they simulate a four-dimensional quantum world so closely that the measured properties of moiré crystals appear as if the researchers had actually performed their experiments in 4D. It seems like moiré crystals aren’t particularly bothered by whether the fourth dimension is fictitious and synthetic or if it’s real. It’s all the same to them.

“Mathematically, the equations describing the electron dynamics in these crystals are four-dimensional,” says co-lead author Nisarga Paul. “The electrons propagate in the synthetic dimension just as they do in our world’s three physical dimensions. It’s hard to detect this motion, but one of the striking realizations was that a magnetic field can reveal fingerprints of this synthetic dimension in experimentally measurable electronic properties known as quantum oscillations.”

Going forward, the team will explore how a wide variety of material properties might benefit from extra synthetic dimensions, which now could be within reach of realization.

“It’s fascinating to consider what may be possible next,” Checkelsky says. “There are long-standing theoretical predictions for higher-dimensional conductors and superconductors, for example — materials of this type may offer a new platform to examine these experimentally in the laboratory.”

This research was supported, in part, by the Gordon and Betty Moore Foundation, the U.S. Department of Energy Office of Science, the U.S. Office of Naval Research, the U.S. Army Research Office, U.S. Air Force Office of Scientific Research, MIT Pappalardo Fellowships in Physics, the Swiss National Science Foundation, and the U.S. National Science Foundation. 

Urban planning students engage with communities through the Freedom Summer Fellowship

Fri, 04/03/2026 - 5:15pm

For the past three summers, MIT master’s students and recently graduated planners have collaborated with cities and community organizations to advance climate, infrastructure, and economic development initiatives. They’re known as the Freedom Summer Fellows, participants in an impact-driven program launched in 2023 by the MIT Department of Urban Studies and Planning (DUSP), an expression of the department’s commitment to equal opportunity and experiential learning. 

Over the course of eight to 10 weeks, fellows are immersed in the real stakes and challenges of projects that involve navigating a network of interconnected causes, competing agendas, a range of stakeholders, and rapidly changing circumstances. Host organizations define discrete tasks and provide ongoing supervision, while fellows develop actionable tools and materials designed to empower organizations in the long term — from policy research and grant-application strategies for navigating funding to analytical tools and implementation frameworks for informed, streamlined project management.

“You can’t teach planning today without grappling with how policy actually unfolds within communities: under pressure, with limited resources, and with multiple conflicting interests,” says Phillip Thompson, professor of urban planning at MIT and former New York City deputy mayor for strategic policy initiatives under Mayor Bill de Blasio. “The Freedom Summer Fellowship is about capacity building through cooperative learning — a knowledge exchange intended to have lasting positive results for communities, while equipping planners with critical experience as they embark on their careers.”

From classroom to communities

The fellowship emerged from Bills and Billions, a DUSP Independent Activities Period course taught by Thompson and Elisabeth Reynolds, professor of the practice at MIT and former special assistant to President Joe Biden for manufacturing and economic development. The course examines U.S. federal policy and its intersection with local economic development, labor markets, and the infrastructure of industry, energy, and the built environment more broadly.  

“We were at an inflection point,” says Reynolds, speaking of her return to MIT in fall 2022 after serving at the National Economic Council. “There was a real sense of urgency about the wave of new legislation and funding around clean energy, infrastructure, and reindustrialization, and much of the investment and work in these areas continues today. It’s a very dynamic time for cities and states, with significant experimentation and innovative strategies — a perfect environment for MIT graduate students and recent grads.”  

Securing federal funding is typically dependent on competitive grants requiring technical, financial, and community planning that many local governments and nonprofits are not equipped for. “While much funding to localities has since been cut, the momentum for change is still there,” says Thompson. “The incentives put forward by the Inflation Reduction Act encouraged localities and communities to initiate their own clean energy projects, and there’s a continued recognition that climate change is going to take a movement from the bottom up.”

At a time when the U.S. is experiencing a paradigm shift in policy — characterized by challenges to a free-market economy and global trade, renewed investment in industrial strategy, and the lifting of environmental and other regulations — the fellowship offers a way to support the planning and implementation of equitable development strategies and to redirect resources where they are needed most.

From placements to professional practice

Since 2023, 31 Freedom Summer Fellows have collaborated with 19 host organizations, and contributed to more than $100 million in state, federal, and philanthropic grant applications, including a successful $3 million EPA Climate Pollution Reduction grant for Hawaii. Fellows have helped convene more than 3,500 community members and have produced dozens of planning tools, including implementation maps, technical tools, and dashboards that support equitable project design and production. Collaborations have inspired the focus of graduate theses produced as client reports for hosts, and in several cases fellows have extended their positions to full-time roles. 

For Sara Jex MCP ’25, her 2024 Freedom Summer Fellowship became a direct pathway from graduate study to professional practice. She was placed with the Site Readiness Fund for Good Jobs in Cleveland, Ohio, an organization working to transform brownfields and disinvested industrial sites into engines of inclusive economic growth.

“Much of my work that summer involved developing an EPA Community Change Grant application for a proposed industrial district spanning over 350 acres — 200 of which we’re looking to reactivate,” says Jex. “So, it’s a transformative project that will bring in new jobs, but there are also major challenges that come with industrial place-making, especially given the proximity to residential neighborhoods. In Rust Belt cities, there’s a history of industrial disinvestment leading to job loss, population decline, and environmental injustices. We don’t want to repeat the harms of the past — we want to create something better.”

To support equitable development strategies for the industrial corridor, Jex helped to prepare technical tools mapping the effects of development on home values, seeking to identify a balance of growth, affordability, and resident benefit. She also evaluated wealth-building strategies such as land trusts and mixed-income neighborhood trusts, offering recommendations for community ownership of land holdings.

“Our vision for the project is not just about bringing in new businesses and creating new jobs,” says Jex, “it’s also about going beyond job creation to create lasting benefit for communities surrounding the sites.”

Jex continued working with Site Readiness Fund for Good Jobs during her second year at MIT and now holds a full-time role at the organization. “The Freedom Summer Fellowship gave me a platform to start building my planning career,” she reflects. “It was eye-opening to be in a cohort of other students doing similar work across the country. The insights from our weekly meetings have stayed with me since graduating — we were able to share perspectives on the challenges we were facing from multiple different contexts, and that brought a new dimension to the learning process.”

Redefining resilience

For Deena Darby, an MIT master’s student with a background in architecture and public art, her 2025 Freedom Summer Fellowship offered a way to bridge creative practice with structural change. Working with the LA84 Foundation and the Ubuntu Climate Initiative in Los Angeles, Darby focused on neighborhood-based resilience in the context of the 2025 wildfires and the upcoming 2028 Olympics.

“My decision to apply to do a master’s in city planning at MIT was informed by the projects I had been working on in Harlem, the Bronx, Brooklyn, and other cities, including Philadelphia and Detroit. Much of that work involved community engagement when producing public art at an architectural scale, but I kept feeling that residents deserved more than an art piece at the end of a project.” 

During the fellowship, Darby contributed to asset mapping across six neighborhoods, developed case studies on resilience hubs, and helped shape strategies that tied climate adaptation to culture, play, and community ownership. Her immersion in the lived experience of those neighborhoods — visiting sites, meeting organizers, and participating in local coalitions — was crucial to her development of strategic recommendations for decentralized infrastructure, cultural arts cohorts, and neighborhood-based resilience festivals.

“Resilience is often narrowly framed around climate,” Darby reflects. “But what we were really redefining was economic resilience, social resilience, and the ability of communities to tell their own stories.” 

Darby’s fellowship experience has led to her thesis project, working with the residents of a historically Black neighborhood in her hometown of Savannah, Georgia, who are experiencing displacement. “Coming from an architecture and planning background, my instinct is to ask, How can we frame these issues in terms of cultural preservation and community-based policy development and implementation?” says Darby. “How can we manage change, with the goal of benefiting present residents as well as honoring those who have lived here in the past?”

For Darby, gaining practical understanding of the inseparability of planning and policy has been key to shaping her approach to navigating the educational opportunities at MIT. “In a higher-education context, you’ll often find policy housed separately from planning. But the moment you’re working in situ, it doesn’t make sense to separate the two. For me, the fellowship was a bridge between two often-siloed disciplines.”

Reassessing expertise

“Impact at MIT is typically associated with technological breakthroughs,” says Reynolds. “But much of MIT’s work can make a huge difference when applied in the near term, on the ground. At DUSP, we’re all about bringing theory and practice together, about the interrelation of communities, infrastructure, policy, and how that maps out in the built environment. We can bring expertise and knowledge into the field tomorrow, into places that can immediately benefit from the collaboration.” 

Initial funding for the fellowship at MIT was provided by the MIT Climate Project, in addition to national foundations. Faculty are exploring ways to expand and increase the number of student placements, further embedding relationships between MIT and cities across the United States. There are also discussions about sharing the model with other institutions, including historically Black colleges and international collaborators. 

“We’re just starting these conversations with other institutions, but it’s the model of engaged, experiential, cooperative learning that matters,” says Thompson. “It’s clear that the experts aren’t necessarily those who have read a lot of books about planning or design, but those who are embedded within communities, trying to figure out these challenges from the inside.”

The planner might not be the primary expert — but they are the ones who guide decisions that shape the futures of communities. The Freedom Summer Fellowship is about fostering a culture of urban planning in which those decisions are centered upon the lived experience of stakeholders. It is an approach to practice in which, as Jex put it while reflecting on her experience in Cleveland, “Planners are the people who make decisions about how cities shape access to opportunity.”

Applications for the 2026 Freedom Summer Fellowships are being accepted now through April 7. 

Why does wealth inequality matter?

Fri, 04/03/2026 - 5:00pm

The MIT James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work recently hosted a half-day symposium at the Institute on “Why Wealth Inequality Matters.”

Three panel discussions convened experts from economics, philosophy, sociology, and political science to explore the origins, mechanisms, and political consequences of wealth inequality.

Richard Locke, John C Head III Dean of the MIT Sloan School of Management, welcomed attendees to the symposium, emphasizing how the event reflects MIT’s commitments to interdisciplinary collaboration and to addressing “society's most pressing issues.”

Here are three key takeaways from the afternoon’s panels.

When wealth buys political influence and legal immunity, democracy is threatened

Hélène Landemore of Yale University argued that wealth inequality isn’t inherently problematic, but becomes dangerous when wealth offers disproportionate influence in other spheres, including political power.

Wojciech Kopczuk of Columbia University echoed this, emphasizing that wealth is a complicated and often ambiguous measure of inequality. Wealth reflects institutional contexts — for example, weak safety nets drive precautionary saving. Still, he agreed that wealth is a relevant metric at the very top, where it correlates with political capture and corporate power.

Landemore explained that when the wealthy dominate policy discussions, “some groups are systematically disbelieved or ignored, and the result is policy failure.” For example, French carbon taxes disproportionately burdened working-class people who were more dependent on cars, which led to the yellow vests protests.

Elizabeth Anderson of the University of Michigan extended this point to corporate power, warning that extreme concentration gives powerful firms de facto immunity from the rule of law — the wealthiest companies can hire hundreds of lawyers to swamp the legal system.

To counteract these negative consequences of high inequality, Oren Cass of American Compass argued that strengthening worker power is key. Redistribution, he said, is a way to improve living standards, but “it is not a solution to the kinds of problems that actually plague democratic capitalism.”

The roots of the racial wealth gap are so deep that equal opportunity alone won’t close it

Ellora Derenoncourt of Princeton University explained that in the United States today, the wealth gap between Black and white Americans is 6:1. In other words, for every dollar of wealth held by an average white American, the average Black American holds about $0.17. She noted that this racial wealth gap has largely remained unchanged for the past 50 years.

“Even if we were to equalize differences in wealth accumulating opportunities — equal savings rates, equal capital gains rates going forward — we’re still hundreds of years away from convergence,” she explained, due to the magnitude of the original gap.

Alexandra Killewald of the University of Michigan added that the racial wealth gap is actively rebuilt each generation through unequal schools, unequal pay, and unequal access to homeownership.

“The past matters, but it’s not just about the past,” she explained. Even if a massive reparations plan were implemented, “if we just let things go on as they are, we will start to recreate inequality from Day 1.”

High inequality and authoritarianism reinforce each other

Daron Acemoglu of MIT described how increasing inequality goes hand-in-hand with the weakening of democracy: “Once inequality starts building up, it also naturally erodes democracies’ claim for legitimacy.”

High inequality, he argued, is both a cause and an effect of liberal democracy failing to deliver on its promise of shared prosperity. This failure, in turn, weakens public support for democracy.

Building on this argument, Sheri Berman of Barnard College examined why economically disadvantaged voters in the United States and Europe have increasingly voted for right-wing populist parties, despite holding economically progressive views.

She described how center-left parties have transformed since the late 20th century, converging with the right on economic policy (embracing free trade and market deregulation) while moving left on social and cultural issues. As a result, she argued, working-class and rural voters no longer saw center-left parties as champions of their economic interests, or as reflecting their social and cultural preferences.

David Yang of Harvard University explained that once authoritarianism takes hold, regimes continue to produce inequality. For example, non-democratic regimes are most responsive not to the average citizen, but to whoever poses the greatest threat to regime survival. In China, this tends to be the wealthier urban population capable of organizing large-scale collective action.

Working to advance the nuclear renaissance

Fri, 04/03/2026 - 4:55pm

Today, there are 94 nuclear reactors operating in the United States, more than in any other country in the world, and these units collectively provide nearly 20 percent of the nation’s electricity. That is a major accomplishment, according to Dean Price, but he believes that our country needs much more out of nuclear energy, especially at a moment when alternatives to fossil fuel-based power plants are desperately being sought. He became a nuclear engineer for this very reason — to make sure that nuclear technology is up to the task of delivering in this time of considerable need.

“Nuclear energy has been a tremendous part of our nation’s energy infrastructure for the past 60 years, and the number of people who maintain that infrastructure is incredibly small,” says Price, an MIT assistant professor in the Department of Nuclear Science and Engineering (NSE), as well as the Atlantic Richfield Career Development Professor in Energy Studies. “By becoming a nuclear engineer, you become one of a select number of people responsible for carbon-free energy generation in the United States.” 

That was a mission he was eager to take part in, and the goals he set for himself were far from modest: He wanted to help design and usher in a new class of nuclear reactors, building on the safety, economics, and reliability of the existing nuclear fleet.

Price has never wavered from this objective, and he’s only found encouragement along the way. The nuclear engineering community, he says, “is small, close-knit, and very welcoming. Once you get into it, most people are not inclined to do anything else.”

Illuminating the relationships between physical processes

In his first research project as an undergraduate at the University of Illinois Urbana-Champaign, Price studied the safety of the steel and concrete casks used to store spent reactor fuel rods after they’ve cooled off in tanks of water, typically for several years. His analysis indicated that this storage method was quite safe, although the question as to what should ultimately be done with these fuel casks, in terms of long-term disposal, remains open in this country.

After starting graduate studies at the University of Michigan in 2020, Price took up a different line of research that he’s still engaged in today. That area of study, called multiphysics modeling, involves looking at various physical processes going on in the core of a nuclear reactor to see how they interact — an alternative to studying these processes one at a time.

One key process, neutronics, concerns how neutrons buzz around in the reactor core causing nuclear fission, which is what generates the power. A second process, called thermal hydraulics, involves cooling the reactor to extract the heat generated by neutrons. A multiphysics simulation, analyzing how these two processes interact, could show how the heat carried away as the reactor produces power affects the behavior of neutrons, because the hotter the fuel is, the less likely it is to cause fission.

“If you ever want to change your power level, or do anything with the reactor, the temperature of the fuel is a critical input that you need to know,” says Price. “Multiphysics modeling allows us to correlate the fission neutronics processes with a thermal property, temperature. That, in turn, can help us predict how the reactor will behave under different conditions.”
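To see why these two calculations must be solved together, consider a toy version of that feedback loop. The Python sketch below is purely illustrative and not drawn from Price’s models; the feedback law and every number in it are assumptions, chosen only to show two single-physics solvers being passed back and forth until they agree.

# Toy Picard iteration coupling a "neutronics" step to a "thermal" step.
# All constants and the linear feedback law are illustrative assumptions.

def neutronics_power(fuel_temp_K, nominal_power_MW=3000.0,
                     ref_temp_K=900.0, feedback_per_K=2e-4):
    """Hotter fuel -> fewer fissions (negative temperature feedback)."""
    return nominal_power_MW * (1.0 - feedback_per_K * (fuel_temp_K - ref_temp_K))

def fuel_temperature(power_MW, coolant_temp_K=600.0, K_per_MW=0.12):
    """More power -> hotter fuel, for a fixed coolant temperature."""
    return coolant_temp_K + K_per_MW * power_MW

temp = 900.0
for _ in range(50):
    power = neutronics_power(temp)
    new_temp = fuel_temperature(power)
    if abs(new_temp - temp) < 1e-6:
        break
    temp = new_temp

print(f"converged: power is about {power:.0f} MW at a fuel temperature of {temp:.0f} K")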

Multiphysics modeling for light water reactors, which are the ones operating today with capacities on the order of 1,000 megawatts, is pretty well established, Price says. But methods for modeling advanced reactors — small modular reactors (SMRs), with capacities ranging from around 20 to 300 megawatts, and microreactors, rated at 1 to 20 megawatts — are far less mature. Only a very small number of these reactors are operating today, but Price is focusing his efforts on them because of their potential to produce power more cheaply and more safely, along with their greater flexibility in power and size.

Although multiphysics simulations have supplied the nuclear community with a wealth of information, they can require supercomputers to solve, or find approximate solutions to, coupled and extremely difficult nonlinear equations. In the hopes of greatly reducing the computational burden, Price is actively exploring artificial intelligence approaches that could provide similar answers while bypassing those burdensome equations altogether. That has been a central theme of his research agenda since he joined the MIT faculty in September 2025.

A crucial role for artificial intelligence

What artificial intelligence and machine-learning methods, in particular, are good at is finding patterns concealed within data, such as correlations between variables critical to the functioning of a nuclear plant. For example, Price says, “if you tell me the power level of your reactor, it [AI] could tell you what the fuel temperature is and even tell you the 3-dimensional temperature distribution in your core.” And if this can be done without solving any complicated differential equations, computational costs could be greatly reduced.
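As a rough illustration of that idea, the sketch below trains a small regression model on precomputed samples so that later temperature queries skip the physics solve entirely. It is not Price’s actual workflow; the data, the made-up power-to-temperature relationship, and the scikit-learn model choice are all assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: power level (MW) -> peak fuel temperature (K),
# generated here from a made-up smooth relationship standing in for the
# output of an expensive multiphysics code.
rng = np.random.default_rng(0)
power = rng.uniform(100, 1000, size=(500, 1))
fuel_temp = 600 + 0.4 * power[:, 0] + 1e-4 * power[:, 0] ** 2

# A small neural-network surrogate; once trained, evaluating it is nearly free.
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
surrogate.fit(power, fuel_temp)

print(surrogate.predict([[750.0]]))  # predicted fuel temperature at 750 MW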

Price is investigating several applications where AI may be especially useful, such as helping with the design of novel kinds of reactors. “We could then rely on the safety frameworks developed over the past 50 years to carry out a safety analysis of the proposed design,” he says. “In this way, AI will not be directly interfacing with anything that is safety-critical.” As he sees it, AI’s role would be to augment established procedures, rather than replacing them, helping to fill in existing gaps in knowledge.

When a machine-learning model is given a sufficient amount of data to learn from, it can help us better understand the relationship between key physical processes — again without having to solve nonlinear differential equations. 

“By really pinning down those relationships, we can make better design decisions in the early stages,” Price says. “And when that technology is developed and deployed, AI can help us make more intelligent control decisions that will enable us to operate our reactors in a safer and more economical way.”

Giving back to the community that nurtured him

Simply put, one of his chief goals is to bring the benefits of AI to the nuclear industry, and he views the possibilities as vast and largely untapped. Price also believes that he is well-positioned as a professor at MIT to bring us closer to the nuclear future that he envisions. As he sees it, he’s working not only to develop the next generation of reactors, but also to help prepare the next generation of leaders in the field.

Price became acquainted with some prospective members of that “next generation” in a design course he co-taught last fall with Curtis Smith, the KEPCO Professor of the Practice of Nuclear Science and Engineering. For Price, that introduction lasted just a few months, but it was long enough for him to discover that MIT students are exceptionally motivated, hard-working, and capable. Not surprisingly, those happen to be the same qualities he’s hoping to find in the students that join his research team.

Price vividly recalls the support he received when taking his first, tentative steps in this field. Now that he’s moved up the ranks from undergraduate to professor, and acquired a substantial body of knowledge along the way, he wants his students “to experience that same feeling that I had upon entering the field.” Beyond his specific goals for improving the design and operation of nuclear reactors, Price says, “I hope to perpetuate the same fun and healthy environment that made me love nuclear engineering in the first place.”

Toward cheaper, cleaner hydrogen production

Fri, 04/03/2026 - 12:00am

Hydrogen sits at the center of some of the world’s most important industrial processes, but its production still comes with a heavy environmental cost. Today, most hydrogen is produced through high-emissions processes like steam methane reforming and coal gasification.

But hydrogen can also be made by splitting water molecules using renewable electricity, eliminating fossil fuel emissions and other toxic byproducts. Such “green hydrogen” is made by running an electric current through water in an electrolyzer.

Green hydrogen won’t scale on its decarbonization benefits alone, though. It also has to be cost-competitive with the traditional methods of production.

1s1 Energy thinks it has the technology to finally make green hydrogen go mainstream. The company says its boron-based membrane material unlocks previously unachievable performance and durability in electrolyzers.

In tests with partners, 1s1 says, electrolyzers with its membranes needed just 70 percent of the energy to produce each kilogram of hydrogen, compared to incumbent devices.

“Green hydrogen has been a hard industry to have success in so far,” acknowledges 1s1 co-founder Dan Sobek ’88, SM ’92, PhD ’97. “The difference with us is we’ve done very targeted customer discovery. We have a very strong value proposition that’s not just about decarbonization. We have a pipeline of potential customers that see around a 60 percent reduction in operating costs with our technology. That’s a nice point of entry.”

Although 1s1 is focused on hydrogen production now, its technology could also be used in fuel cells and solid-state batteries, and to extract critical metals from mining waste. The company is beginning trials in some of those applications, and it is working with a large materials company to scale up production of its membranes for hydrogen production.

“We’re at an inflection point for the company,” Sobek says. “The plan is, by 2030, to have a solid business in several segments: electrolyzers, mineral extraction, and in collaborations with several large companies. But right now, we have to be judicious and focused.”

Improving electrolyzers

Sobek was born and raised in Argentina, but he also grew up at MIT over the course of three degrees and more than a decade. He first studied aeronautics and astronautics at MIT, then jumped to mechanical engineering as a graduate student, then moved to the Department of Electrical Engineering and Computer Science, where he worked under PhD advisors and MIT professors Martha Gray and Stephen Senturia. His thesis focused on a technique for quickly measuring optical properties of large numbers of biological cells.

“A lot of my learnings around microfabrication and materials chemistry ended up being really relevant for 1s1,” Sobek says. “A class that was very important to me was taught by Professor Amar Bose. I was a teaching assistant for him for a couple of semesters, and that had an incredible influence on my thinking.”

Following graduation, Sobek worked in microelectronics and microfluidics before founding his own company, Zymera, in 2004. The company developed deep-tissue imaging technology for detecting cancer and other serious diseases.

Around 2013, Sobek started talking to his Zymera co-founder, Sukanta Bhattacharyya, about making electrolysis more efficient, focusing on “proton exchange membrane” electrolyzers. Such electrolyzers use a large amount of electricity to split water into hydrogen and oxygen. At their center is a membrane whose electrical resistance can sap efficiency.

On top of the efficiency challenge, electricity is often more expensive than fossil fuels in many parts of the world. Traditional hydrogen production also has the benefit of existing infrastructure, making it that much more difficult for green hydrogen production to scale.

Sobek and Bhattacharyya knew the most important part of such electrolyzers is their proton-conducting membrane, which shuttles hydrogen ions from the anode to the cathode in the electrolyzer’s electrochemical cell.

“I asked Sukanta how we could improve the efficiency and durability of that element,” Sobek recalls. “He gave me a one-word answer: boron.”

Boron can be given a negative charge, which makes hydrogen ions, or protons, bond to it more quickly. The hydrogen ions can then be filtered through the membrane and released as they move through the cell. Boron-based materials are also more stable and resistant to corrosion, further improving the long-term performance of electrolyzers.

The company was officially founded in late 2019. After years of development, today 1s1 attaches a chemically tailored version of boron onto polymer materials to create its membranes for exchanging protons.

“These are first-of-a-kind membranes with stable and durable, super-acid proton exchange groups that do not poison catalysts,” Sobek says.

Tiny membranes with big impact

In 2021, the U.S. Department of Energy set a goal for proton exchange membrane electrolysis to achieve 77 percent electrical efficiency by 2031. Sobek says 1s1 is already reaching that milestone in tests.

“It’s not just the technology, but the way we’re applying it,” Sobek says. “We’re making hydrogen viable for use in the production of different industrial chemicals.”

1s1 is currently conducting pilots with partners, including an electrical utility owned by a large steel company in Brazil. The company is also actively exploring other applications for its technology. Last year, 1s1 announced a project to produce green ammonia with the company Nitrofix through joint funding from the U.S. Department of Energy and the Israeli Ministry of Energy and Infrastructure. It’s also working with a large mine in Brazil to extract a material called niobium, which is useful for high-strength steel as well as fast-charging batteries. A similar process could even be used to extract gold.

“We can do that without using harsh chemicals, because the standard processes used to extract niobium and gold use extremely strong acids at high temperatures or extremely toxic chemicals,” Sobek says. “It’s gratifying for me because my home country of Argentina has had a lot of problems with the use of toxic chemicals to extract gold. We’re trying to enable low-cost, responsible mining.”

As 1s1 scales its membrane technology, Sobek says the goal is to deploy wherever the technology can improve processes.

“We have a large number of potential customers because this technology is really foundational,” Sobek says. “Creating high-impact technologies is always fun.”

Lincoln Laboratory laser communications terminal launches on historic Artemis II moon mission

Thu, 04/02/2026 - 9:00am

In 1969, Apollo 11 astronaut Neil Armstrong stepped onto the moon’s surface — a momentous engineering and science feat marked by his iconic words: "That’s one small step for man, one giant leap for mankind." Now, NASA is making history again.

With the successful launch of NASA’s Artemis II mission yesterday, four astronauts are set to become the first humans to travel to the moon in more than 50 years. In 2022, the uncrewed Artemis I mission demonstrated that NASA’s new Orion spacecraft could travel farther into space than ever before and return safely to Earth. Building on that success, the 10-day Artemis II mission will pave the way for future Artemis missions, which aim to land astronauts on the moon to prepare for a lasting lunar presence, and eventually human missions to Mars.

As it orbits the moon, the Orion spacecraft will carry an optical (laser) communications system developed at MIT Lincoln Laboratory in collaboration with NASA Goddard Space Flight Center. Called the Orion Artemis II Optical Communications System (O2O), the system is capable of higher-bandwidth data transmissions from space compared to traditional radio-frequency (RF) systems. During the Artemis II mission, O2O will use laser beams to send high-resolution video and images of the lunar surface down to Earth.

"Space-based communications has always been a big challenge," says lead systems engineer Farzana Khatri, a senior staff member in the laboratory’s Optical and Quantum Communications Group. "RF communications have served their purpose well. However, the RF spectrum is highly congested now, and RF does not scale well to longer distances across space. Laser communication [lasercom] is a solution that could solve this problem, and the laboratory is an expert in the field, which was really pioneered here."

Artemis II is historic not only for renewing human exploration beyond Earth, but also for being the first crewed lunar flight to demonstrate lasercom technologies, which are poised to revolutionize how spacecraft communicate. Lincoln Laboratory has been developing such technologies for more than two decades, and NASA has been infusing them into its missions to meet the growing demands of long-distance and data-intensive space exploration.

"The Orion spacecraft collects a huge amount of data during the first day of a mission, and typically these data sit on the spacecraft until it splashes down and can take months to be offloaded," Khatri says. "With an optical link running at the highest rate, we should be able to get all the data down to Earth within a few hours for immediate analysis. Furthermore, astronauts will be able to communicate in real-time over the optical link to stay in touch with Earth during their journey, inspiring the public and the next generation of deep-space explorers, much like the Apollo 11 astronauts who first landed on the moon 57 years ago."

At the heart of O2O is the laboratory-developed Modular, Agile, Scalable Optical Terminal (MAScOT). About the size of a house cat, MAScOT features a 4-inch telescope mounted on a two-axis pivoted support (gimbal) with fixed backend optics. The gimbal precisely points the telescope, aiming the laser beam that carries communications signals toward the intended recipient and tracking the incoming beam from the sender. Underneath the gimbal, in a separate assembly, are the backend optics, which contain light-focusing lenses, tracking sensors, fast-steering mirrors, and other components to finely point the laser beam.

MAScOT made its debut in space as part of the laboratory’s Integrated Laser Communications Relay Demonstration (LCRD) LEO User Modem and Amplifier Terminal (ILLUMA-T), which launched to the International Space Station in November 2023. Over the following six months, the laboratory team performed experiments to test and characterize the system's basic functionality, performance, and utility for human crews and user applications. Initially, the team checked whether the ILLUMA-T-to-LCRD optical link was operating at the intended data rates in both directions: 622 Mbps down and 51 Mbps up. In fact, even higher data rates were achieved: 1.2 Gbps down and 155 Mbps up. MAScOT’s lasercom terminal architecture, which was recognized with a 2025 R&D 100 Award, is now being used for Artemis II and will support future space missions.

"Our success with ILLUMA-T laid the foundation for streaming HD [high-definition] video to and from the moon," says co-principal investigator Jade Wang, an assistant leader of the Optical and Quantum Communications Group. "You can imagine the Artemis astronauts using videoconferencing to connect with physicians, coordinate mission activities, and livestream their lunar trips."

A dedicated operations team from Lincoln Laboratory is following the 10-day Artemis II mission from ground stations in Houston, Texas, and White Sands, New Mexico, and even as far as an experimental ground station in Australia, which allows for a better view of the spacecraft from the Southern Hemisphere. Leading up to the launch, the operations team had been making monthly trips to the Houston and White Sands ground stations to perform maintenance and simulations of various stages of the Artemis mission — from prelaunch to launch to the journey to the moon and back to the splashdown at the end of the mission. 

"Doing these monthly simulations is important so we all stay fresh and engaged, especially when there is a launch delay," says Khatri, who adds that team members have had the opportunity to meet and speak with the four astronauts several times during these trips.

Lessons learned throughout the Artemis II mission will pave the way for humans to return to the lunar surface and beyond, eventually to Mars. Through the Artemis program, NASA will travel farther into space and explore more of the moon while creating an enduring presence in deep space and a legacy for future generations.

O2O is funded by the Space Communication and Navigation (SCaN) program at NASA Headquarters in Washington. O2O was developed by a team of engineers from NASA’s Goddard Space Flight Center and Lincoln Laboratory. This partnership has led to multiple lasercom missions, such as the 2013 Lunar Laser Communication Demonstration (LLCD), the 2021 LCRD, the 2022 TeraByte Infrared Delivery (TBIRD), and the 2023 ILLUMA-T.

MIT researchers measure traffic emissions, to the block, in real-time

Thu, 04/02/2026 - 5:00am

In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.

The new method produces much more detailed data than some other common approaches, which use intermittent samples of vehicle emissions. The researchers say it is also more practical and scales up better than some studies that have aimed for very granular emissions data from a small number of automobiles at once. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.

“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”

Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”

The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.

“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.

The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.

The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.

Manhattan measurements

To conduct the study, the researchers used images from 331 cameras already in use in Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could correctly place 93 percent of vehicles in the right category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major reason for stop-and-go driving patterns, which strongly affect urban emissions but are often omitted in conventional inventories.

The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.

“We just need to input all emission-related information based on existing urban data sources, and we can estimate the traffic emissions,” Hu says.

Moreover, the researchers evaluated how emissions might change under scenarios in which traffic patterns or vehicle types shift.

For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hours were spread out over a longer period, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages, finding that the rougher estimates could deviate from the fine-grained results by anywhere from −49 percent to +25 percent. That underscores how seemingly small simplifications can introduce large errors into emission estimates.
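The arithmetic behind that sensitivity is simple to sketch. The example below is not the paper’s pipeline; the vehicle classes, counts, and gram-per-kilometer emission factors are invented, but it shows how applying one citywide average factor to a link with an unusual vehicle mix can badly miss the class-specific estimate.

# Hypothetical hourly counts on one truck-heavy road link, by vehicle class,
# and hypothetical CO2 emission factors in grams per vehicle-kilometer.
counts = {"car": 300, "bus": 20, "truck": 400}
factors_g_per_km = {"car": 180, "bus": 1200, "truck": 900}
link_length_km = 0.5

fine_grained = sum(counts[c] * factors_g_per_km[c] * link_length_km for c in counts)

# Replacing class-specific factors with one citywide average ignores the mix.
citywide_avg_factor = 250  # hypothetical average, grams per vehicle-km
averaged = sum(counts.values()) * citywide_avg_factor * link_length_km

print(f"class-specific estimate: {fine_grained / 1000:.0f} kg CO2 per hour")
print(f"citywide-average estimate: {averaged / 1000:.0f} kg CO2 per hour")
print(f"relative difference: {(averaged - fine_grained) / fine_grained:+.0%}")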

Major emissions drop

On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.

To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent — and emissions fell even further, by 16 to 22 percent.

This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.

“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know if a new policy converts into real-world impact.”

There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.

“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.

The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.

It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.

Evaluating the ethics of autonomous systems

Thu, 04/02/2026 - 12:00am

Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.

But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?

To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.   

The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences. 

The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.

“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.

Evaluating ethics

In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.

Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.

Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.

Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.   

“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.

Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.

For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.

These ethical criteria may not be well-specified, so they can’t be measured analytically.

The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.

SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.

“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.

Encoding subjectivity

To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.

The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.

“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.
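The structure the team describes can be sketched in a few lines. The code below is a conceptual stand-in rather than the SEED-SET implementation: the scenario fields, the objective scoring rule, and the llm_prefers placeholder (which would query a language model in a real system) are all hypothetical.

from dataclasses import dataclass

@dataclass
class Scenario:
    cost: float                 # objective metric: dollars
    outage_minutes: dict        # objective metric: reliability per user group
    description: str            # text shown to the subjective (LLM-proxy) evaluator

def objective_score(s: Scenario) -> float:
    """Tangible, directly computable metrics (lower is better)."""
    return s.cost + 10.0 * sum(s.outage_minutes.values())

def llm_prefers(a: Scenario, b: Scenario, stakeholder_values: str) -> Scenario:
    """Stand-in for the LLM proxy: builds a pairwise-comparison prompt from a
    natural-language statement of a group's priorities. A real system would
    send this prompt to a language model and parse its answer; here we simply
    return the first scenario as a placeholder."""
    prompt = (
        f"Stakeholder priorities: {stakeholder_values}\n"
        f"Scenario A: {a.description}\n"
        f"Scenario B: {b.description}\n"
        "Which scenario better respects these priorities? Answer A or B."
    )
    return a  # placeholder choice; `prompt` would be sent to the model here

def pick_candidate(scenarios, stakeholder_values):
    # Hierarchical split: narrow with the objective model first, then let the
    # subjective model adjudicate among the objectively reasonable options.
    shortlist = sorted(scenarios, key=objective_score)[:2]
    return llm_prefers(shortlist[0], shortlist[1], stakeholder_values)

scenarios = [
    Scenario(1.0e6, {"rural": 30, "data_center": 5}, "keeps the data center up, rural outages longer"),
    Scenario(1.2e6, {"rural": 10, "data_center": 10}, "spreads outages evenly across groups"),
]
print(pick_candidate(scenarios, "Rural households should not bear most outages.").description)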

SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.

In the end, SEED-SET intelligently selects the most representative scenarios, those that either satisfy or fall short of the objective metrics and ethical criteria. In this way, users can analyze the performance of the AI system and adjust its strategy.

For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.

To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.

The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.

“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.

To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.

In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.

This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.

Preview tool helps makers visualize 3D-printed objects

Wed, 04/01/2026 - 12:00am

Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.

But previews generated by most 3D-printing software focus on function rather than aesthetics. A printed object may end up with a different color, texture, or shading than the user expected, resulting in multiple reprints that waste time, effort, and material.

To help users envision how a fabricated object will look, researchers from MIT and elsewhere developed an easy-to-use preview tool that puts appearance first.

Users upload a screenshot of the object from their 3D-printing software, along with a single image of the print material. From these inputs, the system automatically generates a rendering of how the fabricated object is likely to look.

The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It considers not only the color of the material, but also gloss, translucency, and how nuances of the fabrication process affect the object’s appearance.

Such aesthetics-focused previews could be especially useful in areas like dentistry, by helping clinicians ensure temporary crowns and bridges match the appearance of a patient’s teeth, or in architecture, to aid designers in assessing the visual impact of models.

“3D printing can be a very wasteful process. Some studies estimate that as much as a third of the material used goes straight to the landfill, often from prototypes the user ends up discarding. To make 3D printing more sustainable, we want to reduce the number of tries it takes to get the prototype you want. The user shouldn’t have to try out every printing material they have before they settle on a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.

She is joined on the paper by Faraz Faruqi, a fellow EECS graduate student; Raul Hernandez, an MIT undergraduate; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and Mechanical Engineering at MIT, and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

Accurate aesthetics

The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, print material filament is melted and then squirted through a nozzle to fabricate an object one layer at a time.

Generating accurate aesthetic previews is challenging because the melting and extrusion process can change the appearance of a material, as can the height of each deposited layer and the path the nozzle follows during fabrication.

VisiPrint uses two AI models that work together to overcome those challenges.

The VisiPrint preview is based on two inputs: a screenshot of the digital design from a user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.

From these inputs, a computer vision model extracts features from the material sample that are important for the object’s appearance.

It feeds those features to a generative AI model that computes the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.

The key to the researchers’ approach is a special conditioning method. This involves carefully adjusting the inner workings of the model to guide it, so it follows the slicing pattern and obeys the constraints of the 3D-printing process.

Their conditioning method utilizes a depth map that preserves the shape and shading of the object, along with a map of the edges that reflects the internal contours and structural boundaries.

“If you don’t have the right balance of these two things, you could end up with bad geometry or an incorrect slicing pattern. We had to be careful to combine them in the right way,” Perroni-Scharf says.
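A rough sketch of that balancing act, under stated assumptions: the array shapes, the weights, and the generate_preview placeholder below are invented, and the real VisiPrint model is a far more involved generative network, but the sketch shows the two conditioning channels being normalized, weighted, and handed over together.

import numpy as np

def build_conditioning(depth_map, edge_map, depth_weight=0.6, edge_weight=0.4):
    """Stack the two conditioning signals as separate channels. The weights
    set the balance between shape/shading (depth) and the slicing pattern
    (edges); the specific values here are arbitrary."""
    depth = depth_weight * depth_map / max(depth_map.max(), 1e-8)
    edges = edge_weight * edge_map / max(edge_map.max(), 1e-8)
    return np.stack([depth, edges], axis=0)   # shape: (2, height, width)

def generate_preview(material_features, conditioning):
    """Placeholder for the conditioned generative model; a real system would
    run an image-generation network here."""
    return None

# Hypothetical inputs: a depth map and an edge map rendered from the slicer
# screenshot, plus features extracted from the material photo (omitted here).
depth = np.random.rand(256, 256)
edges = (np.random.rand(256, 256) > 0.95).astype(float)
preview = generate_preview(material_features=None, conditioning=build_conditioning(depth, edges))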

A user-focused system

The team also produced an easy-to-use interface where one can upload the required images and evaluate the preview.

The VisiPrint interface enables more advanced makers to adjust multiple settings, such as the influence of certain colors on the final appearance.

In the end, the aesthetic preview is intended to complement the functional preview generated by slicer software, since VisiPrint does not estimate printability, mechanical feasibility, or likelihood of failure.

To evaluate VisiPrint, the researchers conducted a user study that asked participants to compare the system to other approaches. Nearly all participants said its previews better captured the overall appearance of the printed objects and showed greater textural similarity to them.

In addition, the VisiPrint preview process took about a minute on average, which was more than twice as fast as any competing method.

“VisiPrint really shined when compared to other AI interfaces. If you give a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it had no direct conditioning,” she says.

In the future, the researchers want to address artifacts that can occur when model previews have extremely fine details. They also want to add features that allow users to optimize parts of the printing process beyond color of the material.

“It is important to think about the way that we fabricate objects. We need to continue striving to develop methods that reduce waste. To that end, this marriage of AI with the physical making process is an exciting area of future work,” Perroni-Scharf says.

“‘What you see is what you get’ has been the main thing that made desktop publishing ‘happen’ in the 1980s, as it allowed users to get what they wanted at first try. It is time to get WYSIWYG for 3D printing as well. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved with this work.

This research was funded, in part, by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.

Two physicists and a curious host walk into a studio…

Tue, 03/31/2026 - 7:00pm

This March on The Curiosity Desk, GBH’s daily science show with host Edgar B. Herwick III, MIT scientists dropped by to address the questions: “How close are we to observing the dark universe?” (Thursday, March 12 episode) and “Is Earth prepared for asteroids?” (Thursday, March 26 episode).

Up first, Prof. Nergis Mavalvala, dean of the MIT School of Science, and Prof. Salvatore Vitale joined the host live in studio to talk about the science behind the Laser Interferometer Gravitational-wave Observatory (LIGO) and how LIGO has provided the ability to observe the universe in ways that have never been done before.

In addition to learning something new, Mavalvala explained how experimenting delivers an added piece of excitement: “pushing the technology, the precision of the instrument, requires you to be very inventive. There’s almost nothing in these experiments that you can go buy off a shelf. Everything you’re designing, everything is from scratch. You’re meeting very stringent requirements.”

Herwick likened how they might tweak or tinker with the experiment to souping up a car engine, and the LIGO scientists nodded – adding that in the most complex experiments, each bite-sized part on its own works well, and it’s the interfaces between them that scientists must get right.

While there, the two long-time colleagues also took a detour to explain how in physics experimentalists benefit from the work of theorists and vice versa. Mavalvala, whose work focuses on building the world’s most precise instruments to study physical phenomena, described the synergy between ideas that come from theory (work that Vitale does) and how you measure. (No, they assure Herwick, they don’t get into a lot of fights.)

In fact, it’s fantastic to have people from both worlds at MIT, said Vitale.  Mavalvala agreed. “One of the things that’s really important about theory in science is that ultimately, in physics especially, it’s a bunch of math. And the important thing that you have to ask is, ‘does nature really behave that way?’ And how do you answer that question? You have to go out and measure. You have to go observe nature,” said Mavalvala.

As scientists fine-tune the gravitational wave detectors, they will inform what data are collected, what astrophysical objects they might find or hope to find – and the search for certain fainter, farther away, or more exotic objects can inform what enhancements they prioritize.

But what if I’m not interested in any of that? asked Herwick. Why should I care?

“To me, it falls in the category of for the betterment of humankind. You never know what is going to be useful. A lot of fundamental research was very far at the beginning from what turned out to be fundamental applications,” said Vitale, adding, “What they do on the instrument side has already now very important applications.”

Mavalvala was unequivocal, underscoring how pursuing curiosity is put to good use:

“When you’re making instruments that achieve that kind of precision, you’re inventing new technologies. [With LIGO] We’ve invented vibration isolation technologies to keep our mirrors really still. We’ve invented lasers that are quieter than any that were ever made before. We’ve invented photonic techniques that are allowing us to make applications even to far off things like quantum computing. 

“So, this is one of the beauties of fundamental discovery science. A, you’ll discover something. But B you’ll be doing two things: you’ll be inventing the technologies of the future, and you’ll be training the generations of scientists who may go off to do completely different things, but this is what inspires them.”

Watch the full conversation below and on YouTube:

 

Planetary defense

Turning to objects beyond Earth – specifically, asteroids – Associate Professor Julien de Wit, along with research scientists Artem Burdanov and Saverio Cambioni, joined Herwick at the Curiosity Desk later in the month. They talked about their ongoing research to identify smaller asteroids (about the size of a school bus) using the James Webb Space Telescope and why planetary defense goes beyond thinking about the massive asteroids featured in movies like Armageddon. Notably, a lot of technology on Earth depends on satellites, and asteroids pose the biggest threat to those satellites.

“Dinosaurs didn’t need to care about an asteroid hitting the moon. Humanity a century ago didn’t care. Now, if [an asteroid] hits the moon, a lot of debris will be expelled and all those particles – big and small – they will affect the fleet of satellites around Earth. That’s a big potential problem, so we need to take that into account in our future,” said Burdanov.

There’s also a potential upside to being better able to detect and potentially “capture” asteroids, explained de Wit, all of it aided by new instruments. “It’s really an asteroid revolution going on… Our situational awareness of what’s out there is really about to change dramatically.”

He explains that one dream is to mine asteroids themselves for material to build or power next generation technologies or stations in space. “The way to reliably move into space is to use resources from space. We can’t just move stuff to build a full city. We use stuff from space.”

Echoing the sentiments expressed earlier in the month by MIT’s dean of science, the trio of asteroid explorers also described how the pursuits of planetary scientists can lead to unexpected rewards along the way. “We are swimming in an era that is data rich, and so what we do in our group and at MIT is mine that data to reveal the universe like never before,” says de Wit. “Revealing new populations of asteroids, new populations of planets, and making sense of our universe like we have never done.”

Watch the full conversation below and on the GBH YouTube channel: 

Tune in to the Curiosity Desk some Thursdays to hear from MIT researchers as they visit Herwick and the production team. 

Building the blocks of life

Tue, 03/31/2026 - 4:50pm

Billions of years ago, simple organic molecules drifted across Earth's primordial landscape — nothing more than basic chemical compounds. But as natural forces shaped the planet over hundreds of millions of years, these molecules began to interact and bond in increasingly complex ways. Along the way, something spectacular emerged: life.

“Life is, to some degree, magical,” says computational biologist Sergei Kotelnikov. Simple organic compounds congregate into polymers, which assemble into living cells and ultimately organisms — the whole being greater than the sum of its parts.

“You can write formulas on how a molecule behaves,” he says, referring to the world of quantum mechanics. “But yet somehow, a few orders of magnitude above, on a bigger scale, it gives rise to such a mystery.”

Kotelnikov builds models to analyze and predict the structure of these biomolecules, particularly proteins, the fundamental building blocks of every organism. This year, he joined MIT as part of the School of Science Dean’s Postdoctoral Fellowship to work with the Keating Lab, where researchers focus on protein structure, function, and interaction. Using machine learning, his goal is to develop new methods in protein modeling with potential applications that span from medicine to agriculture.

A hunger for problems to solve

Kotelnikov grew up in Abakan, Russia, a small city sitting right in the center of Eurasia. As a child, one of his favorite pastimes was playing with Lego bricks.

“It encouraged me to build new things, rather than just following instructions,” he says. “You can do anything.”

Kotelnikov’s father, whose background lies in engineering and economics, would often challenge him with math problems.

“Your brain — you can feel some kind of expansion of understanding how things work, and that’s a very satisfactory feeling,” Kotelnikov says.

This itch to solve problems led him to join science Olympiad competitions, and later, a science-focused public boarding school located near the Russian Academy of Sciences, where he often encountered scientists.

“It was like a candy shop,” he recalls, describing the period as a life-changing experience.

In 2012, Kotelnikov began his bachelor of science in physics and applied mathematics at the Moscow Institute of Physics and Technology — considered one of the leading STEM universities in Russia, and globally — and continued there for his master’s degree. It was there that biology came into the picture.

During a course on statistical physics, Kotelnikov was first introduced to the idea of the “emergence of complexity.” He became fascinated by this “mysterious and attractive manifestation of biology … this evolution that sharpens the physical phenomenon” to create, drive, and shape life as we know it today. By the time he completed his master’s degree, he realized he had only scratched the surface of the field of computational biology.

In 2018, he began his PhD at Stony Brook University in New York and began working with Dima Kozakov, who is recognized as one of the world’s leaders in predicting protein interactions and complex structures.

Studying the architecture of life  

Proteins act like the bricks that construct an organism, underpinning almost every cellular process from tissue repair to hormone production. Like pieces of a Lego tower, their structures and interactions determine the functions that they carry out in a body.

However, diseases arise when proteins are folded, curled, twisted, or connected in unusual ways. To develop medical interventions, scientists break down the tower and examine each individual piece to find the culprit and correct its shape and pairing. With limited experimental data on protein structures and interactions currently available, simulations developed by computational biologists like Kotelnikov provide crucial insight that informs fundamental understanding and applications like drug discovery.

With the guidance of Kozakov at Stony Brook’s Laufer Center for Physical and Quantitative Biology, Kotelnikov carried over his understanding of physics to create modeling methods that are more effective, efficient, reliable, and generalizable. Among them, he developed a new way of predicting the protein complex structures mediated by proteolysis-targeting chimeras, or PROTACs, a new class of molecules that can trigger the breakdown of specific proteins previously considered undruggable, such as those found in cancer.

PROTACs have been challenging to model, in part because they are composed of proteins that don’t naturally interact with each other, and because the linker that connects them is flexible. Imagine trying to guess the overall shape of a bendy Lego piece attached to two other pieces of different irregular, unmatched shapes. To efficiently find all possible configurations, Kotelnikov’s method conceptually cuts the linker into two halves and models each separately, then reformulates the problem and calculates it using a powerful algorithm called Fast Fourier Transform.

“It’s kind of like applied math judo that you sometimes need to do in order to make certain intractable computations tractable,” he says.
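One flavor of that “judo” is the classic fast Fourier transform trick used in grid-based docking, where scoring every relative placement of two fragments collapses into a few transforms. The toy example below is not Kotelnikov’s PROTAC pipeline; the random occupancy grids and the simple overlap score are stand-ins meant only to show the speedup idea.

import numpy as np

def overlap_scores(grid_a, grid_b):
    """Circular cross-correlation of two 3D occupancy grids: the entry at
    (dx, dy, dz) scores the overlap when one grid is shifted by that amount.
    FFTs give every shift at once instead of looping over them."""
    fa = np.fft.fftn(grid_a)
    fb = np.fft.fftn(grid_b)
    return np.real(np.fft.ifftn(fa * np.conj(fb)))

# Hypothetical occupancy grids standing in for two molecular fragments.
rng = np.random.default_rng(1)
frag_a = (rng.random((32, 32, 32)) > 0.9).astype(float)
frag_b = (rng.random((32, 32, 32)) > 0.9).astype(float)

scores = overlap_scores(frag_a, frag_b)
best_shift = np.unravel_index(np.argmax(scores), scores.shape)
print("highest-overlap relative shift (voxels):", best_shift)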

Kotelnikov’s state-of-the-art methods have been instrumental to his team’s top performance in numerous international challenges including the Critical Assessment of protein Structure Prediction (CASP) competition — the same contest in which the Nobel Prize-winning AlphaFold system for protein 3D structure prediction was presented.

Physics and machine learning

At MIT, Kotelnikov is working with Amy Keating, the Jay A. Stein (1968) Professor of Biology, biology department head, and professor of biological engineering, to study protein structure, function, and interactions.  

A recognized leader in the field, Keating employs both computational and experimental methods to study proteins and their interactions, as well as how these can impact disease. By combining physics with machine learning, Kotelnikov aims to advance modeling methods that can vastly inform applications such as cancer immunology and crop protection.

“Kotelnikov stands to gain a lot from working closely with wet lab researchers who are doing the experiments that will complement and test his predictions, and my lab will benefit from his experience developing and applying advanced computational analyses,” says Keating.

Kotelnikov is also planning to work with professors Tommi Jaakkola and Tess Smidt in MIT’s Department of Electrical Engineering and Computer Science to explore a field called geometric deep learning. In particular, he aims to integrate physical and geometric knowledge about biomolecules into neural network architectures and learning procedures. This approach can significantly reduce the amount of data needed for learning, and improve the generalizability of resulting models.
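
As a loose illustration of why building geometric knowledge into a model can cut its data requirements, the hypothetical snippet below derives features from pairwise atomic distances, which do not change when a molecule is rotated or translated; a model trained on such features never has to spend training data relearning those symmetries. The coordinates, feature choice, and invariance check are toy assumptions, not the architectures actually under development.

```python
import numpy as np

def invariant_features(coords):
    """Toy E(3)-invariant featurization: the sorted list of pairwise
    distances between atoms. Rotating or translating the input leaves
    the output unchanged."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(coords), k=1)
    return np.sort(dists[iu])

# Check invariance under a random rotation about z plus a translation.
rng = np.random.default_rng(1)
atoms = rng.random((5, 3))
theta = rng.random() * 2 * np.pi
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = atoms @ Rz.T + rng.random(3)
print(np.allclose(invariant_features(atoms), invariant_features(moved)))  # True
```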

Beyond the two departments, Kotelnikov is also excited to see how the diversity and interdisciplinary mix of MIT’s community will help him come up with ideas.

“When you’re building a model, you’re entering this imaginary world of assumptions and simplifications and it might feel challenging because of this disconnect with reality,” Kotelnikov says. “Being able to efficiently communicate with experimentalists is of high value.”

Tomás Palacios named director of the Institute for Soldier Nanotechnologies

Tue, 03/31/2026 - 4:15pm

Tomás Palacios, the Clarence J. LeBel Professor of Electrical Engineering at MIT, has been appointed director of the MIT Institute for Soldier Nanotechnologies (ISN). Palacios assumed the role on Feb. 4, and will continue to serve as the director of the MIT Microsystems Technology Laboratories (MTL).

Founded in 2002, ISN is a U.S. Army-sponsored University Affiliated Research Center focused on advancing fundamental science and engineering to enable next-generation capabilities for protection, survivability, sensing, and system performance. ISN brings together researchers from across MIT to address challenges at the intersection of materials, devices, and systems. In collaboration with industry, MIT Lincoln Laboratory, the U.S. Army, and other U.S. military services, ISN works to transition promising technologies for both commercial and defense applications.

As director, Palacios will oversee ISN’s research portfolio, facilities, and strategic partnerships, working closely with the ISN leadership team, MIT administration, U.S. Army, and other research sponsors to guide the institute’s next phase of research and collaboration.

“Tomás Palacios brings exceptional energy, vision, and leadership to the Institute for Soldier Nanotechnologies,” says Ian A. Waitz, MIT’s vice president for research, who announced the appointment in a recent letter. “As director of Microsystems Technology Laboratories, he has demonstrated a rare ability to build strong research communities and partnerships across academia, industry, and government. I am confident he will guide ISN’s next phase with momentum, scientific excellence, and a deep sense of service to MIT and the nation.”

Palacios brings deep leadership experience within MIT and across national research collaborations. As director of MTL, he leads one of MIT’s flagship interdisciplinary research laboratories supporting work in micro- and nano-scale materials, devices, and systems. He is a member of the MIT.nano Leadership Council and, since 2023, has served as associate director of the multi-university SUPeRior Energy-efficient Materials and dEvices (SUPREME) Center, a Semiconductor Research Corp. JUMP 2.0 program focused on next-generation energy-efficient semiconductor technologies. Palacios is also the co-founder of several technology companies, including Vertical Semiconductor, Finwave Semiconductor, and CDimension, Inc.

“MIT’s motto, ‘mens et manus’ — ‘mind and hand’ — reminds us that fundamental research and real-world impact must go hand-in-hand,” says Palacios. “At ISN, our mission is to help protect and empower those who defend our nation. That responsibility demands urgency, creativity, and deep collaboration. I look forward to building on ISN’s strong partnership with the U.S. Army, industry, and colleagues across MIT to push the frontiers of nanotechnology and translate discovery into meaningful impact at the speed of relevance.”

Palacios is internationally recognized for his work on wide-bandgap semiconductors, nanoelectronics, and advanced electronic materials. An IEEE Fellow, he conducts research spanning fundamental device physics through system-level integration, with applications in high-power and high-frequency electronics, sensing, and energy systems. In addition to his research contributions, he is widely recognized for his leadership in education and mentoring.

Palacios succeeds John Joannopoulos, who served as ISN director from 2006 until his death in August 2025. During his nearly two decades of ISN leadership, Joannopoulos strengthened ISN’s interdisciplinary culture, devoting significant effort to fostering collaborations among ISN-funded principal investigators, building partnerships that extend across MIT and beyond to the Army research community. Joannopoulos, an extraordinary researcher and a generous mentor, was also a co-founder of companies such as WiTricity and OmniGuide, helping to translate many of ISN’s foundational scientific discoveries into commercial technologies. Raúl Radovitzky, ISN’s associate director, served as interim director during the search for a new director, providing continuity to ISN’s research programs, facilities, and partnerships.

“It is an honor to serve as director of the Institute for Soldier Nanotechnologies at such an important moment in time,” says Palacios. “ISN has built an extraordinary foundation of interdisciplinary excellence under Professor John Joannopoulos’ leadership and, more recently, Prof. Radovitzky’s. I look forward to working with the ISN community to advance breakthrough research at the intersection of materials, devices, and systems — research that not only strengthens national security, but also translates into technologies that benefit society more broadly.” 

Turning muscles into motors gives static organs new life

Tue, 03/31/2026 - 2:30pm

What if a technology could reanimate parts of the body that have lost their connection to the brain — like a bladder that can no longer empty due to a spinal cord injury, or intestines that can’t push food forward due to Crohn’s disease? What if this technology could also send sensations such as hunger or touch back to the brain?

New MIT research offers a glimpse into this future. In an open-access study published today in Nature Communications, the researchers introduce a novel myoneural actuator (MNA) that reprograms living muscles into fatigue-resistant, computer-controlled motors that can be implanted inside the body to restore movement in organs.

“We’ve built an interface that leverages natural pathways used by the nervous system so that we can seamlessly control organs in the body, while also enabling the transmission of sensory feedback to the brain,” says Hugh Herr, senior author of the study, a professor of media arts and sciences at the MIT Media Lab, co-director of the K. Lisa Yang Center for Bionics, and an associate member of the McGovern Institute for Brain Research at MIT. The study was co-led by Herr’s postdoc Guillermo Herrera-Arcos and former postdoc Hyungeun Song.

By repurposing existing muscle in the body, the researchers have developed the first “living” implant that uses rewired sensory nerves to revive paralyzed organs — which may represent a new genre of medicine, where a person’s own tissue becomes the hardware.

Rewiring the brain-body interface

Many scientists have toiled to restore function in paralyzed organs, but it’s extremely challenging to design a technology that both communicates with the nervous system and doesn't fatigue over time. Some have tried to insert miniaturized actuators — small machines that can power bionic limbs — into the body. However, Herrera-Arcos says, “it’s hard to make actuators at the centimeter level, and they aren’t very efficient.” Others have focused on creating muscle tissue in the lab, but building muscles cell by cell is time-intensive and far from ready for human use.

Herr’s team tried something different.

“We engineered existing muscles to become an actuator, or motor, that reinstates motion in organs,” says Song.

To do this, the researchers had to navigate the delicate dynamics within the nervous system. The actuator would have to interface with the nervous system to work properly, but it must also somehow evade the brain’s control. “You don’t want the brain to consciously control the muscle actuator because you want the actuator to automatically control an organ, like the heart,” explains Herrera-Arcos. Establishing a computer-controlled muscle to move organs could ensure automatic function and also bypass damaged brain pathways.

Incorporating motor neurons into the actuator may help generate movement, but these neurons are directly controlled by the brain. “Sensory neurons, however, are wired to receive, not to command,” explains Song. “We thought we could leverage this dynamic and reroute motor signals through sensory fibers, making a computer — rather than the brain — the muscle’s new command center.”

To achieve this, sensory nerves would need to fuse fluidly with muscle, and scientists had not yet determined if this was possible. Remarkably, when the team replaced motor nerves in rodent muscle with sensory ones, “the sensory nerves re-innervated the muscles and formed functional synapses. It’s a tremendous discovery,” says Herrera-Arcos.

Sensory neurons not only enabled the use of a digital controller, but also helped curb muscle fatigue — increasing fatigue resistance in rodent muscle by 260 percent compared to native muscles. That’s because muscle fatigue depends largely on the diameter of the axons, or cable-like projections that innervate muscles. Motor neuron axons vary greatly in size, and when a motor nerve is electrically stimulated, the largest axons fire first — exhausting the muscle quickly. However, sensory axons are all nearly the same size, so the signal is broadcast more evenly across muscle fibers, avoiding fatigue, explains Herrera-Arcos.

Designing a biohybrid system

The researchers combined all of these elements into the fatigue-resistant, biohybrid MNA. By wrapping the actuator around a paralyzed intestine in a rodent, they reinstated the organ’s squeezing motion. They also successfully controlled rodent calf muscles in an experiment designed to mimic residual muscle in human lower-limb amputations. Importantly, the MNA system transmitted sensory signals to the brain. “This suggests that our technology could seamlessly link organs to the brain. For example, we might be able to make a paralyzed stomach relay hunger,” explains Song.

Bringing the MNA to the clinic will require further testing in larger animal models and, eventually, humans. But if it passes the regulatory gauntlet, the system could pave a smoother and safer path toward reviving static organs. Implanting MNAs would require a surgery that is already commonplace in the clinic, the researchers say, and their system might be simpler and safer to implement than mechanical devices or organ transplants that introduce foreign material into the body.

The team is hopeful that their new technology could improve the lives of millions living with organ dysfunctions. “Today’s solutions are mostly synthetic: pacemakers and other mechanical assist devices. A living muscle actuator implanted alongside a weakened organ would be part of the body itself. That is a category of medicine different from anything seen in clinic,” explains Herrera-Arcos.

Song says that skin is of special interest. “Hypothetically, we could wrap MNAs around skin grafts to relay tactile feedback, such as strain or tension, which is currently missing for users of prostheses.” The technology could even augment virtual reality systems. “The idea is that, if we couple the MNA system to skin and muscles, a person could feel what their virtual avatar is touching even though their real body isn’t moving,” says Song.

“Our research is on the brink of giving new life to various parts and extensions of the body,” adds Herrera-Arcos. “It’s exciting to think that our system could enhance human potential in ways that once only belonged to the realm of science fiction.”

This research was funded, in part, by the Yang Tan Collective at MIT, K. Lisa Yang Center for Bionics at MIT, Nakos Family Bionics Research Fund at MIT, and the Carl and Ruth Shapiro Foundation.
