MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Lincoln Laboratory technologies win seven R&D 100 Awards for 2025

Tue, 09/09/2025 - 4:35pm

Seven technologies developed at MIT Lincoln Laboratory, either wholly or with collaborators, have earned 2025 R&D 100 Awards. This annual awards competition recognizes the year's most significant new technologies, products, and materials available on the marketplace or transitioned to use. An independent panel of technology experts and industry professionals selects the winners.

"Winning an R&D 100 Award is a recognition of the exceptional creativity and effort of our scientists and engineers. The awarded technologies reflect Lincoln Laboratory's mission to transform innovative ideas into real-world solutions for U.S. national security, industry, and society," says Melissa Choi, director of Lincoln Laboratory.

Lincoln Laboratory's winning technologies enhance national security in a range of ways, from securing satellite communication links and identifying nearby emitting devices to providing a layer of defense for U.S. Army vehicles and protecting service members from chemical threats. Other technologies are pushing frontiers in computing, enabling the 3D integration of chips and the close inspection of superconducting electronics. Industry is also benefiting from these developments — for example, by adopting an architecture that streamlines the development of laser communications terminals.

The online publication R&D World manages the awards program. Recipients span Fortune 500 companies, federally funded research institutions, academic and government labs, and small companies. Since 2010, Lincoln Laboratory has received 108 R&D 100 Awards.

Protecting lives 

Tactical Optical Spherical Sensor for Interrogating Threats (TOSSIT) is a throwable, baseball-sized sensor that remotely detects hazardous vapors and aerosols. It is designed to alert soldiers, first responders, and law enforcement to the presence of chemical threats such as nerve and blister agents, chemicals released in industrial accidents, or fentanyl dust. Users can simply toss, drone-drop, or launch TOSSIT into an area of concern. To detect specific chemicals, the sensor samples the air with a built-in fan and uses an internal camera to observe color changes on a removable dye card. If chemicals are present, TOSSIT alerts users wirelessly on an app or via audible, light-up, or vibrational alarms in the sensor.
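For readers who want a concrete sense of how a colorimetric point sensor can turn a dye-card image into an alert, here is a minimal, purely illustrative Python sketch. The function names, threshold, and frame format are assumptions for illustration, not Lincoln Laboratory's implementation.

```python
# Minimal sketch (hypothetical, not Lincoln Laboratory code): a colorimetric
# alert loop of the kind TOSSIT's internal camera might implement.
# `read_dye_card_rgb` stands in for the camera; real hardware, calibration,
# and chemistry-specific dye responses are far more involved.

from statistics import mean

ALERT_THRESHOLD = 0.15  # assumed fractional color shift that signals a detection

def read_dye_card_rgb(frame):
    """Average RGB of the dye-card region in one camera frame (placeholder)."""
    return tuple(mean(channel) for channel in zip(*frame))

def color_shift(baseline, current):
    """Normalized color change between the clean card and the current reading."""
    return sum(abs(c - b) for b, c in zip(baseline, current)) / (3 * 255)

def monitor(frames, baseline):
    """Yield an alert as soon as the dye card's color change crosses the threshold."""
    for i, frame in enumerate(frames):
        if color_shift(baseline, read_dye_card_rgb(frame)) > ALERT_THRESHOLD:
            yield f"ALERT: possible chemical detected at frame {i}"
            return

# Toy usage: a clean baseline card, then frames in which the card darkens.
baseline = (200.0, 200.0, 200.0)
frames = [[(200, 200, 200)] * 4, [(190, 195, 198)] * 4, [(120, 140, 160)] * 4]
print(list(monitor(frames, baseline)))
```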

"TOSSIT fills an unmet need for a chemical-vapor point sensor, one that senses the immediate environment around it, that can be kinetically deployed ahead of service personnel. It provides a low-cost sensing option for vapors and solid aerosol threats — think toxic dust particles — that would otherwise not be detectable by small deployed sensor systems,” says principal investigator Richard Kingsborough. TOSSIT has been tested extensively in the field and is currently being transferred to the military. 

Wideband Selective Propagation Radar (WiSPR) is an advanced radar and communications system developed to protect U.S. Army armored vehicles. The system's active electronically scanned antenna array extends signal range at millimeter-wave frequencies, steering thousands of beams per second to detect incoming kinetic threats while enabling covert communications between vehicles. WiSPR is engineered to have a low probability of detection, helping U.S. Army units evade adversaries seeking to detect radio-frequency (RF) energy emitting from radars. The system is currently in production.

"Current global conflicts are highlighting the susceptibility of armored vehicles to adversary anti-tank weapons. By combining custom technologies and commercial off-the-shelf hardware, the Lincoln Laboratory team produced a WiSPR prototype as quickly and efficiently as possible," says program manager Christopher Serino, who oversaw WiSPR development with principal investigator David Conway.

Advancing computing

Bumpless Integration of Chiplets to AI-Optimized Fabric is an approach that enables the fabrication of next-generation 2D, 2.5D, and 3D integrated circuits. As data-processing demands increase, designers are exploring 3D stacked assemblies of small specialized chips (chiplets) to pack more power into devices. Tiny bumps of conductive material are used to electrically connect these stacks, but these microbumps cannot accommodate the extremely dense, massively interconnected components needed for future microcomputers. To address this issue, Lincoln Laboratory developed a technique that eliminates microbumps. Key to this technique is a lithographically produced fabric allowing electrical bonding of chiplet stack layers. Researchers used an AI-driven decision-tree approach to optimize the design of this fabric. This bumpless feature can integrate hundreds of chiplets that perform like a single chip, improving data-processing speed and power efficiency, especially for high-performance AI applications.

"Our novel, bumpless, heterogeneous chiplet integration is a transformative approach addressing two semiconductor industry challenges: expanding chip yield and reducing cost and time to develop systems," says principal investigator Rabindra Das.

Quantum Diamond Magnetic Cryomicroscope is a breakthrough in magnetic field imaging for characterizing superconducting electronics, a promising frontier in high-performance computing. Unlike traditional techniques, this system delivers fast, wide-field, high-resolution imaging at the cryogenic temperatures required for superconducting devices. The instrument combines an optical microscopy system with a cryogenic sensor head containing a diamond engineered with nitrogen-vacancy centers — atomic-scale defects highly sensitive to magnetic fields. The cryomicroscope enables researchers to directly visualize trapped magnetic vortices that interfere with critical circuit components, helping to overcome a major obstacle to scaling superconducting electronics.

“The cryomicroscope gives us an unprecedented window into magnetic behavior in superconducting devices, accelerating progress toward next-generation computing technologies,” says Pauli Kehayias, joint principal investigator with Jennifer Schloss. The instrument is currently advancing superconducting electronics development at Lincoln Laboratory and is poised to impact materials science and quantum technology more broadly.

Enhancing communications 

Lincoln Laboratory Radio Frequency Situational Awareness Model (LL RF-SAM) utilizes advances in AI to enhance U.S. service members' vigilance over the electromagnetic spectrum. The modern spectrum can be described as a swamp of mixed signals originating from civilian, military, or enemy sources. In near-real time, LL RF-SAM inspects these signals to disentangle and identify nearby waveforms and their originating devices. For example, LL RF-SAM can help a user identify a particular packet of energy as a drone transmission protocol and then classify whether that drone is part of a corpus of friendly or enemy drones.
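As a rough illustration of the two-step reasoning described here (classify the waveform, then decide friend or foe), the following toy Python sketch uses made-up feature values, protocol classes, and emitter names; it is not LL RF-SAM's actual pipeline or model.

```python
# Illustrative sketch only (hypothetical names, toy features): first classify a
# detected emission's protocol, then check whether the emitter is on a
# known-friendly list. The real system's signal features and AI models are far
# more sophisticated.

import math

# Toy "fingerprints": (bandwidth_MHz, hop_rate_per_s) centroids per protocol class.
PROTOCOL_CENTROIDS = {
    "drone_control_link": (10.0, 100.0),
    "push_to_talk_radio": (0.0125, 0.0),
    "wifi_like_burst": (20.0, 0.0),
}

FRIENDLY_EMITTERS = {"drone_alpha_07", "drone_bravo_12"}  # assumed allow-list

def classify_protocol(features):
    """Nearest-centroid guess at the waveform's protocol class."""
    return min(PROTOCOL_CENTROIDS,
               key=lambda name: math.dist(features, PROTOCOL_CENTROIDS[name]))

def assess(emitter_id, features):
    """Combine the protocol guess with a friend/foe lookup into one report line."""
    protocol = classify_protocol(features)
    status = "friendly" if emitter_id in FRIENDLY_EMITTERS else "unknown/hostile"
    return f"{emitter_id}: {protocol}, {status}"

# Toy usage: a wideband, fast-hopping burst from an emitter not on the allow-list.
print(assess("drone_zulu_99", (9.5, 95.0)))
```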

"This type of enhanced context helps military operators make data-driven decisions. The future adoption of this technology will have profound impact across communications, signals intelligence, spectrum management, and wireless infrastructure security," says principal investigator Joey Botero. 

Modular, Agile, Scalable Optical Terminal (MAScOT) is a laser communications (lasercom) terminal architecture that facilitates mission-enabling lasercom solutions adaptable to various space platforms and operating environments. Lasercom is rapidly becoming the go-to technology for space-to-space links in low Earth orbit because of its ability to support significantly higher data rates compared to radio frequency terminals. However, it has yet to be used operationally or commercially for longer-range space-to-ground links, as such systems often require custom designs for specific missions. MAScOT's modular, agile, and scalable design streamlines the process for building lasercom terminals suitable for a range of missions, from near Earth to deep space. MAScOT made its debut on the International Space Station in 2023 to demonstrate NASA's first two-way lasercom relay system, and is now being prepared to serve in an operational capacity on Artemis II, NASA's moon flyby mission scheduled for 2026. Two industry-built terminals have adopted the MAScOT architecture, and technology transfer to additional industry partners is ongoing.

"MAScOT is the latest lasercom terminal designed by Lincoln Laboratory engineers following decades of pioneering lasercom work with NASA, and it is poised to support lasercom for decades to come," says Bryan Robinson, who co-led MAScOT development with Tina Shih. 

Protected Anti-jam Tactical SATCOM (PATS) Key Management System (KMS) Prototype addresses the critical challenge of securely distributing cryptographic keys for military satellite communications (SATCOM) during terminal jamming, compromise, or disconnection. Realizing the U.S. Space Systems Command's vision for resilient, protected tactical SATCOM, the PATS KMS Prototype leverages innovative, bandwidth-efficient protocols and algorithms to enable real-time, scalable key distribution over wireless links, even under attack, so that warfighters can communicate securely in contested environments. PATS KMS is now being adopted as the core of the Department of Defense's next-generation SATCOM architecture.

"PATS KMS is not just a technology — it's a linchpin enabler of resilient, modern SATCOM, built for the realities of today's contested battlefield. We worked hand-in-hand with government stakeholders, operational users, and industry partners across a multiyear, multiphase journey to bring this capability to life," says Joseph Sobchuk, co-principal investigator with Nancy List. The R&D 100 Award is shared with the U.S. Space Force Space Systems Command, whose “visionary leadership has been instrumental in shaping the future of protected tactical SATCOM,” Sobchuk adds.

Study finds cell memory can be more like a dimmer dial than an on/off switch

Tue, 09/09/2025 - 11:00am

When cells are healthy, we don’t expect them to suddenly change cell types. A skin cell on your hand won’t naturally morph into a brain cell, and vice versa. That’s thanks to epigenetic memory, which enables the expression of various genes to “lock in” throughout a cell’s lifetime. Failure of this memory can lead to diseases, such as cancer.

Traditionally, scientists have thought that epigenetic memory locks genes either “on” or “off” — either fully activated or fully repressed, like a permanent Lite-Brite pattern. But MIT engineers have found that the picture has many more shades.

In a new study appearing today in Cell Genomics, the team reports that a cell’s memory is set not by on/off switching but through a more graded, dimmer-like dial of gene expression.

The researchers carried out experiments in which they set the expression of a single gene at different levels in different cells. While conventional wisdom would assume the gene should eventually switch on or off, the researchers found that the gene’s original expression persisted: Cells whose gene expression was set along a spectrum between on and off remained in this in-between state.

The results suggest that epigenetic memory — the process by which cells retain gene expression and “remember” their identity — is not binary but instead analog, which allows for a spectrum of gene expression and associated cell identities.

“Our finding opens the possibility that cells commit to their final identity by locking genes at specific levels of gene expression instead of just on and off,” says study author Domitilla Del Vecchio, professor of mechanical and biological engineering at MIT. “The consequence is that there may be many more cell types in our body than we know and recognize today, that may have important functions and could underlie healthy or diseased states.”

The study’s MIT lead authors are Sebastian Palacios and Simone Bruno, with additional co-authors.

Beyond binary

Every cell shares the same genome, which can be thought of as the starting ingredient for life. As a cell takes shape, it differentiates into one type or another, through the expression of genes in its genome. Some genes are activated, while others are repressed. The combination steers a cell toward one identity versus another.

A process called DNA methylation, in which certain molecules attach to a gene’s DNA, helps lock gene expression in place. This methylation helps a cell “remember” its unique pattern of gene expression, which ultimately establishes the cell’s identity.

Del Vecchio’s group at MIT applies mathematics and genetic engineering to understand cellular molecular processes and to engineer cells with new capabilities. In previous work, her group was experimenting with DNA methylation and ways to lock the expression of certain genes in ovarian cells.

“The textbook understanding was that DNA methylation had a role to lock genes in either an on or off state,” Del Vecchio says. “We thought this was the dogma. But then we started seeing results that were not consistent with that.”

While many of the cells in their experiment exhibited an all-or-nothing expression of genes, a significant number of cells appeared to freeze genes in an in-between state — neither entirely on nor off.

“We found there was a spectrum of cells that expressed any level between on and off,” Palacios says. “And we thought, how is this possible?”

Shades of blue

In their new study, the team aimed to see whether the in-between gene expression they observed was a fluke or a more established property of cells that until now has gone unnoticed.

“It could be that scientists disregarded cells that don’t have a clear commitment, because they assumed this was a transient state,” Del Vecchio says. “But actually these in-between cell types may be permanent states that could have important functions.”

To test their idea, the researchers ran experiments with hamster ovarian cells — a line of cells commonly used in the laboratory. In each cell, an engineered gene was initially set to a different level of expression. The gene was turned fully on in some cells, completely off in others, and set somewhere in between on and off for the remaining cells.

The team paired the engineered gene with a fluorescent marker that lights up with a brightness corresponding to the gene’s level of expression. The researchers introduced, for a short time, an enzyme that triggers the gene’s DNA methylation, a natural gene-locking mechanism. They then monitored the cells over five months to see whether the modification would lock the genes in place at their in-between expression levels, or whether the genes would migrate toward fully on or off states before locking in.

“Our fluorescent marker is blue, and we see cells glow across the entire spectrum, from really shiny blue, to dimmer and dimmer, to no blue at all,” Del Vecchio says. “Every intensity level is maintained over time, which means gene expression is graded, or analog, and not binary. We were very surprised, because we thought after such a long time, the gene would veer off, to be either fully on or off, but it did not.”

The findings open new avenues for engineering more complex artificial tissues and organs by tuning the expression of certain genes in a cell’s genome like a dial on a radio, rather than flipping a switch. The results also complicate the picture of how a cell’s epigenetic memory works to establish its identity, and they raise the possibility that cell states such as those seen in therapy-resistant tumors could be targeted more precisely.

“Del Vecchio and colleagues have beautifully shown how analog memory arises through chemical modifications to the DNA itself,” says Michael Elowitz, professor of biology and biological engineering at the California Institute of Technology, who was not involved in the study. “As a result, we can now imagine repurposing this natural analog memory mechanism, invented by evolution, in the field of synthetic biology, where it could help allow us to program permanent and precise multicellular behaviors.”

“One of the things that enables the complexity in humans is epigenetic memory,” Palacios says. “And we find that it is not what we thought. For me, that’s actually mind-blowing. And I think we’re going to find that this analog memory is relevant for many different processes across biology.”

This research was supported, in part, by the National Science Foundation, MODULUS, and a Vannevar Bush Faculty Fellowship through the U.S. Office of Naval Research.

“Bottlebrush” particles deliver big chemotherapy payloads directly to cancer cells

Tue, 09/09/2025 - 5:00am

Using tiny particles shaped like bottlebrushes, MIT chemists have found a way to deliver a large range of chemotherapy drugs directly to tumor cells.

To guide them to the right location, each particle contains an antibody that targets a specific tumor protein. This antibody is tethered to bottlebrush-shaped polymer chains carrying dozens or hundreds of drug molecules — a much larger payload than can be delivered by any existing antibody-drug conjugates.

In mouse models of breast and ovarian cancer, the researchers found that treatment with these conjugated particles could eliminate most tumors. In the future, the particles could be modified to target other types of cancer, by swapping in different antibodies.

“We are excited about the potential to open up a new landscape of payloads and payload combinations with this technology, that could ultimately provide more effective therapies for cancer patients,” says Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the new study.

MIT postdoc Bin Liu is the lead author of the paper, which appears today in Nature Biotechnology.

A bigger drug payload

Antibody-drug conjugates (ADCs) are a promising type of cancer treatment that consist of a cancer-targeting antibody attached to a chemotherapy drug. At least 15 ADCs have been approved by the FDA to treat several different types of cancer.

This approach allows specific targeting of a cancer drug to a tumor, which helps to prevent some of the side effects that occur when chemotherapy drugs are given intravenously. However, one drawback to currently approved ADCs is that only a handful of drug molecules can be attached to each antibody. That means they can only be used with very potent drugs — usually DNA-damaging agents or drugs that interfere with cell division.

To try to use a broader range of drugs, which are often less potent, Johnson and his colleagues decided to adapt bottlebrush particles that they had previously invented. These particles consist of a polymer backbone to which tens to hundreds of “prodrug” molecules are attached — inactive drug molecules that are activated upon release within the body. This structure allows the particles to deliver a wide range of drug molecules, and the particles can be designed to carry multiple drugs in specific ratios.

Using a technique called click chemistry, the researchers showed that they could attach one, two, or three of their bottlebrush polymers to a single tumor-targeting antibody, creating an antibody-bottlebrush conjugate (ABC). This means that just one antibody can carry hundreds of prodrug molecules. The currently approved ADCs can carry a maximum of about eight drug molecules.
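As a rough back-of-the-envelope comparison (taking 100 prodrugs per bottlebrush as a representative figure from the "tens to hundreds" range above, an illustrative assumption rather than a number from the study), the payload advantage looks like this:

\[
3~\text{bottlebrushes} \times \sim\!100~\text{prodrugs each} \approx 300~\text{drug molecules per antibody},
\qquad \text{versus} \ \sim\!8~\text{for a conventional ADC},
\]

roughly a 40-fold higher drug-to-antibody ratio, which is what makes less potent payloads practical.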

The huge number of payloads in the ABC particles allows the researchers to incorporate less potent cancer drugs such as doxorubicin or paclitaxel, which enhances the customizability of the particles and the variety of drug combinations that can be used.

“We can use antibody-bottlebrush conjugates to increase the drug loading, and in that case, we can use less potent drugs,” Liu says. “In the future, we can very easily copolymerize with multiple drugs together to achieve combination therapy.”

The prodrug molecules are attached to the polymer backbone by cleavable linkers. After the particles reach a tumor site, some of these linkers are broken right away, allowing the drugs to kill nearby cancer cells even if they don’t express the target protein. Other particles are absorbed into cells that do express the target protein before releasing their toxic payload.

Effective treatment

For this study, the researchers created ABC particles carrying a few different types of drugs: microtubule inhibitors called MMAE and paclitaxel, and two DNA-damaging agents, doxorubicin and SN-38. They also designed ABC particles carrying an experimental type of drug known as PROTAC (proteolysis-targeting chimera), which can selectively degrade disease-causing proteins inside cells.

Each bottlebrush was tethered to an antibody targeting either HER2, a protein often overexpressed in breast cancer, or MUC1, which is commonly found in ovarian, lung, and other types of cancer.

The researchers tested each of the ABCs in mouse models of breast or ovarian cancer and found that in most cases, the ABC particles were able to eradicate the tumors. This treatment was significantly more effective than giving the same bottlebrush prodrugs by injection, without being conjugated to a targeting antibody.

“We used a very low dose, almost 100 times lower compared to the traditional small-molecule drug, and the ABC still can achieve much better efficacy compared to the small-molecule drug given on its own,” Liu says.

These ABCs also performed better than two FDA-approved ADCs, T-DXd and TDM-1, which both use HER2 to target cells. T-DXd carries deruxtecan, which interferes with DNA replication, and TDM-1 carries emtansine, a microtubule inhibitor.

In future work, the MIT team plans to try delivering combinations of drugs that work by different mechanisms, which could enhance their overall effectiveness. Among these could be immunotherapy drugs such as STING activators.

The researchers are also working on swapping in different antibodies, such as antibodies targeting EGFR, which is widely expressed in many tumors. More than 100 antibodies have been approved to treat cancer and other diseases, and in theory any of those could be conjugated to cancer drugs to create a targeted therapy.

The research was funded in part by the National Institutes of Health, the Ludwig Center at MIT, and the Koch Institute Frontier Research Program. 

Remembering David Baltimore, influential biologist and founding director of the Whitehead Institute

Mon, 09/08/2025 - 8:00pm

The Whitehead Institute for Biomedical Research fondly remembers its founding director, David Baltimore, a former MIT Institute Professor and Nobel laureate who died Sept. 6 at age 87.

With discovery after discovery, Baltimore brought to light key features of biology with direct implications for human health. His work at MIT earned him a share of the 1975 Nobel Prize in Physiology or Medicine (along with Howard Temin and Renato Dulbecco) for discovering reverse transcriptase and identifying retroviruses, which use RNA to synthesize viral DNA.

Following the award, Baltimore reoriented his laboratory’s focus to pursue a mix of immunology and virology. Among the lab’s most significant subsequent discoveries were the identification of a pair of proteins that play an essential role in enabling the immune system to create antibodies for so many different molecules, and investigations into how certain viruses can cause cell transformation and cancer. Work from Baltimore’s lab also helped lead to the development of the important cancer drug Gleevec — the first small molecule to target an oncoprotein inside of cells.

In 1982, Baltimore partnered with philanthropist Edwin C. “Jack” Whitehead to conceive and launch the Whitehead Institute and then served as its founding director until 1990. Within a decade of its founding, the Baltimore-led Whitehead Institute was named the world’s top research institution in molecular biology and genetics.

“More than 40 years later, Whitehead Institute is thriving, still guided by the strategic vision that David Baltimore and Jack Whitehead articulated,” says Phillip Sharp, MIT Institute Professor Emeritus, former Whitehead board member, and fellow Nobel laureate. “Of all David’s myriad and significant contributions to science, his role in building the first independent biomedical research institute associated with MIT and guiding it to extraordinary success may well prove to have had the broadest and longest-term impact.” 

Ruth Lehmann, director and president of the Whitehead Institute, and professor of biology at MIT, says: “I, like many others, owe my career to David Baltimore. He recruited me to Whitehead Institute and MIT in 1988 as a faculty member, taking a risk on an unproven, freshly-minted PhD graduate from Germany. As director, David was incredibly skilled at bringing together talented scientists at different stages of their careers and facilitating their collaboration so that the whole would be greater than the sum of its parts. This approach remains a core strength of Whitehead Institute.”

As part of the Whitehead Institute’s mission to cultivate the next generation of scientific leaders, Baltimore founded the Whitehead Fellows program, which provides extraordinarily talented recent PhD and MD graduates with the opportunity to launch their own labs, rather than to go into traditional postdoctoral positions. The program has been a huge success, with former fellows going on to excel as leaders in research, education, and industry.

David Page, MIT professor of biology, Whitehead Institute member, and former director who was the Whitehead's first fellow, recalls, “David was both an amazing scientist and a peerless leader of aspiring scientists. The launching of the Whitehead Fellows program reflected his recipe for institutional success: gather up the resources to allow young scientists to realize their dreams, recruit with an eye toward potential for outsized impact, and quietly mentor and support without taking credit for others’ successes — all while treating junior colleagues as equals. It is a beautiful strategy that David designed and executed magnificently.”

Sally Kornbluth, president of MIT and a member of the Whitehead Institute Board of Directors, says that “David was a scientific hero for so many. He was one of those remarkable individuals who could make stellar scientific breakthroughs and lead major institutions with extreme thoughtfulness and grace. He will be missed by the whole scientific community.”

“David was a wise giant. He was brilliant. He was an extraordinarily effective, ethical leader and institution builder who influenced and inspired generations of scientists and premier institutions,” says Susan Whitehead, member of the board of directors and daughter of Jack Whitehead.

Gerald R. Fink, the Margaret and Herman Sokol Professor Emeritus at MIT who was recruited by Baltimore from Cornell University as one of four founding members of the Whitehead Institute, and who succeeded him as director in 1990, observes: “David became my hero and friend. He upheld the highest scientific ideals and instilled trust and admiration in all around him.”

Video: David Baltimore - Infinite History (2010) | MIT

Baltimore was born in New York City in 1938. His scientific career began at Swarthmore College, where he earned a bachelor’s degree with high honors in chemistry in 1960. He then began doctoral studies in biophysics at MIT, but in 1961 shifted his focus to animal viruses and moved to what is now the Rockefeller University, where he did his thesis work in the lab of Richard Franklin.

After completing postdoctoral fellowships with James Darnell at MIT and Jerard Hurwitz at the Albert Einstein College of Medicine, Baltimore launched his own lab at the Salk Institute for Biological Studies from 1965 to 1968. Then, in 1968, he returned to MIT as a member of its biology faculty, where he remained until 1990. (Whitehead Institute’s members hold parallel appointments as faculty in the MIT Department of Biology.)

In 1990, Baltimore left the Whitehead Institute and MIT to become the president of Rockefeller University. He returned to MIT from 1994 to 1997, serving as an Institute Professor, after which he was named president of Caltech. Baltimore held that position until 2006, when he was elected to a three-year term as president of the American Association for the Advancement of Science.

For decades, Baltimore was viewed not just as a brilliant scientist and talented academic leader, but also as a wise counsel to the scientific community. For example, he helped organize the 1975 Asilomar Conference on Recombinant DNA, which created stringent safety guidelines for the study and use of recombinant DNA technology. He played a leadership role in the development of policies on AIDS research and treatment, and on genomic editing. Serving as an advisor to both organizations and individual scientists, he helped to shape the strategic direction of dozens of institutions and to advance the careers of generations of researchers. As founding member Robert Weinberg summarizes it, “He had no tolerance for nonsense and weak science.”

In 2023, the Whitehead Institute established the endowed David Baltimore Chair in Biomedical Research, honoring Baltimore’s six decades of scientific, academic, and policy leadership and his impact on advancing innovative basic biomedical research.

“David was a visionary leader in science and the institutions that sustain it. He devoted his career to advancing scientific knowledge and strengthening the communities that make discovery possible, and his leadership of Whitehead Institute exemplified this,” says Richard Young, MIT professor of biology and Whitehead Institute member. “David approached life with keen observation, boundless curiosity, and a gift for insight that made him both a brilliant scientist and a delightful companion. His commitment to mentoring and supporting young scientists left a lasting legacy, inspiring the next generation to pursue impactful contributions to biomedical research. Many of us found in him not only a mentor and role model, but also a steadfast friend whose presence enriched our lives and whose absence will be profoundly felt.”

Alzheimer’s erodes brain cells’ control of gene expression, undermining function, cognition

Mon, 09/08/2025 - 4:25pm

Most people recognize Alzheimer’s disease by its devastating symptoms, such as memory loss, while new drugs target pathological hallmarks of the disease, such as amyloid protein plaques. Now, a sweeping new open-access study in the Sept. 4 edition of Cell by MIT researchers shows the importance of understanding the disease as a battle over how well brain cells control the expression of their genes. The study paints a high-resolution picture of a desperate struggle to maintain healthy gene expression and gene regulation, where the consequences of failure or success are nothing less than the loss or preservation of cell function and cognition.

The study presents a first-of-its-kind, multimodal atlas of combined gene expression and gene regulation spanning 3.5 million cells from six brain regions, obtained by profiling 384 post-mortem brain samples across 111 donors. The researchers profiled both the “transcriptome,” showing which genes are transcribed into RNA, and the “epigenome,” the set of chromosomal modifications that establish which DNA regions are accessible, and therefore usable, in different cell types.

The resulting atlas revealed many insights showing that the progression of Alzheimer’s is characterized by two major epigenomic trends. The first is that vulnerable cells in key brain regions suffer a breakdown of the rigorous nuclear “compartments” they normally maintain to ensure some parts of the genome are open for expression but others remain locked away. The second major finding is that susceptible cells experience a loss of “epigenomic information,” meaning they lose their grip on the unique pattern of gene regulation and expression that gives them their specific identity and enables their healthy function.

Accompanying the evidence of compromised compartmentalization and the erosion of epigenomic information are many specific findings pinpointing molecular circuitry that breaks down by cell type, brain region, and gene network. The researchers found, for instance, that when epigenomic conditions deteriorate, the door opens to the expression of many disease-associated genes, whereas cells that manage to keep their epigenomic house in order can keep those genes in check. Moreover, the researchers clearly saw that where the epigenomic breakdowns occurred, people lost cognitive ability, but where epigenomic stability remained, so did cognition.

“To understand the circuitry, the logic responsible for gene expression changes in Alzheimer’s disease [AD], we needed to understand the regulation and upstream control of all the changes that are happening, and that’s where the epigenome comes in,” says senior author Manolis Kellis, a professor in the Computer Science and Artificial Intelligence Lab and head of MIT’s Computational Biology Group. “This is the first large-scale, single-cell, multi-region gene-regulatory atlas of AD, systematically dissecting the dynamics of epigenomic and transcriptomic programs across disease progression and resilience.”

By providing that detailed examination of the epigenomic mechanisms of Alzheimer’s progression, the study provides a blueprint for devising new Alzheimer’s treatments that can target factors underlying the broad erosion of epigenomic control or the specific manifestations that affect key cell types such as neurons and supporting glial cells.

“The key to developing new and more effective treatments for Alzheimer’s disease depends on deepening our understanding of the mechanisms that contribute to the breakdowns of cellular and network function in the brain,” says Picower Professor and co-corresponding author Li-Huei Tsai, director of The Picower Institute for Learning and Memory and a founding member of MIT’s Aging Brain Initiative, along with Kellis. “This new data advances our understanding of how epigenomic factors drive disease.”

Kellis Lab members Zunpeng Liu and Shanshan Zhang are the study’s co-lead authors.

Compromised compartments and eroded information

Among the post-mortem brain samples in the study, 57 came from donors to the Religious Orders Study or the Rush Memory and Aging Project (collectively known as “ROSMAP”) who did not have AD pathology or symptoms, while 33 came from donors with early-stage pathology and 21 came from donors at a late stage. The samples therefore provided rich information about the symptoms and pathology each donor was experiencing before death.

In the new study, Liu and Zhang combined analyses of single-cell RNA sequencing of the samples, which measures which genes are being expressed in each cell, and ATAC-seq, which measures whether chromosomal regions are accessible for gene expression. Considered together, these transcriptomic and epigenomic measures enabled the researchers to understand the molecular details of how gene expression is regulated across seven broad classes of brain cells (e.g., neurons or glial cell types) and 67 cell subtypes (e.g., 17 kinds of excitatory neurons and six kinds of inhibitory ones).

The researchers annotated more than 1 million gene-regulatory control regions that different cells employ, via epigenomic marking, to establish their specific identities and functions. Then, by comparing cells from Alzheimer’s brains with cells from unaffected brains, and accounting for stage of pathology and cognitive symptoms, they could draw rigorous associations between the erosion of these epigenomic markings and the ultimate loss of function.

For instance, they saw that among people who advanced to late-stage AD, normally repressive compartments opened up for more expression, and compartments that were normally more open in health became more repressed. Worryingly, when the normally repressive compartments of brain cells opened up, those cells became more afflicted with disease.

“For Alzheimer’s patients, repressive compartments opened up, and gene expression levels increased, which was associated with decreased cognitive function,” explains Liu.

But when cells managed to keep their compartments in order such that they expressed the genes they were supposed to, people remained cognitively intact.

Meanwhile, based on the cells’ expression of their regulatory elements, the researchers created an epigenomic information score for each cell. Generally, information declined as pathology progressed, but that was particularly notable among cells in the two brain regions affected earliest in Alzheimer’s: the entorhinal cortex and the hippocampus. The analyses also highlighted specific cell types that were especially vulnerable including microglia that play immune and other roles, oligodendrocytes that produce myelin insulation for neurons, and particular kinds of excitatory neurons.
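The paper's exact scoring method is not spelled out in this article, so the sketch below is a purely hypothetical illustration of what a per-cell "epigenomic information" score could look like: each cell is scored by how closely its pattern of accessible regulatory regions matches a reference profile for its cell type, so that lower scores correspond to more erosion.

```python
# Purely hypothetical sketch of a per-cell "epigenomic information" score.
# This is NOT the formulation used in the study; it only illustrates the idea of
# scoring how far a cell's accessibility profile has drifted from its cell
# type's reference profile.

def information_score(cell_accessibility, reference_accessibility):
    """
    cell_accessibility / reference_accessibility: dicts mapping a regulatory
    region ID to 1 (accessible) or 0 (closed). Score = fraction of regions where
    the cell matches its cell-type reference (1.0 = identity fully preserved).
    """
    regions = reference_accessibility.keys()
    matches = sum(cell_accessibility.get(r, 0) == reference_accessibility[r]
                  for r in regions)
    return matches / len(regions) if regions else 0.0

# Toy example: an excitatory-neuron reference and two cells, one intact, one eroded.
reference    = {"enhancer_A": 1, "enhancer_B": 1, "polycomb_region_C": 0, "promoter_D": 1}
healthy_cell = {"enhancer_A": 1, "enhancer_B": 1, "polycomb_region_C": 0, "promoter_D": 1}
eroded_cell  = {"enhancer_A": 0, "enhancer_B": 1, "polycomb_region_C": 1, "promoter_D": 0}

print(information_score(healthy_cell, reference))  # 1.0
print(information_score(eroded_cell, reference))   # 0.25
```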

Risk genes and “chromatin guardians”

Detailed analyses in the paper highlighted how epigenomic regulation tracked with disease-related problems, Liu notes. The e4 variant of the APOE gene, for instance, is widely understood to be the single biggest genetic risk factor for Alzheimer’s. In APOE4 brains, microglia initially responded to the emerging disease pathology with an increase in their epigenomic information, suggesting that they were stepping up to their unique responsibility to fight off disease. But as the disease progressed, the cells exhibited a sharp drop off in information, a sign of deterioration and degeneration. This turnabout was strongest in people who had two copies of APOE4, rather than just one. The findings, Kellis said, suggest that APOE4 might destabilize the genome of microglia, causing them to burn out.

Another example is the fate of neurons expressing the gene RELN and its protein Reelin. Prior studies, including by Kellis and Tsai, have shown that RELN-expressing neurons in the entorhinal cortex and hippocampus are especially vulnerable in Alzheimer’s, but promote resilience if they survive. The new study sheds light on their fate by demonstrating that they exhibit early and severe epigenomic information loss as disease advances, but that in people who remained cognitively resilient, the neurons maintained their epigenomic information.

In yet another example, the researchers tracked genes they colloquially call “chromatin guardians,” because their expression sustains and regulates cells’ epigenomic programs. For instance, cells with greater epigenomic erosion and more advanced AD progression displayed increased chromatin accessibility in regions that were supposed to be locked down by Polycomb repression genes or other gene-expression silencers. While resilient cells expressed genes promoting neural connectivity, epigenomically eroded cells expressed genes linked to inflammation and oxidative stress.

“The message is clear: Alzheimer’s is not only about plaques and tangles, but about the erosion of nuclear order itself,” Kellis says. “Cognitive decline emerges when chromatin guardians lose ground to the forces of erosion, switching from resilience to vulnerability at the most fundamental level of genome regulation.

“And when our brain cells lose their epigenomic memory marks and epigenomic information at the lowest level deep inside our neurons and microglia, it seems that Alzheimer’s patients also lose their memory and cognition at the highest level.”

Other authors of the paper are Benjamin T. James, Kyriaki Galani, Riley J. Mangan, Stuart Benjamin Fass, Chuqian Liang, Manoj M. Wagle, Carles A. Boix, Yosuke Tanigawa, Sukwon Yun, Yena Sung, Xushen Xiong, Na Sun, Lei Hou, Martin Wohlwend, Mufan Qiu, Xikun Han, Lei Xiong, Efthalia Preka, Lei Huang, William F. Li, Li-Lun Ho, Amy Grayson, Julio Mantero, Alexey Kozlenkov, Hansruedi Mathys, Tianlong Chen, Stella Dracheva, and David A. Bennett.

Funding for the research came from the National Institutes of Health, the National Science Foundation, the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, Eduardo Eurnekian, and Joseph P. DiSabato.

Physicists devise an idea for lasers that shoot beams of neutrinos

Mon, 09/08/2025 - 11:30am

At any given moment, trillions of particles called neutrinos are streaming through our bodies and every material in our surroundings, without noticeable effect. Far lighter than electrons, these ghostly entities are the most abundant particles with mass in the universe.

The exact mass of a neutrino is a big unknown. The particle is so small, and interacts so rarely with matter, that it is incredibly difficult to measure. Scientists attempt to do so by harnessing nuclear reactors and massive particle accelerators to generate unstable atoms, which then decay into various byproducts including neutrinos. In this way, physicists can manufacture beams of neutrinos that they can probe for properties including the particle’s mass.

Now MIT physicists propose a much more compact and efficient way to generate neutrinos that could be realized in a tabletop experiment.

In a paper appearing in Physical Review Letters, the physicists introduce the concept for a “neutrino laser” — a burst of neutrinos that could be produced by laser-cooling a gas of radioactive atoms down to temperatures colder than interstellar space. At such frigid temperatures, the team predicts the atoms should behave as one quantum entity, and radioactively decay in sync.

The decay of radioactive atoms naturally releases neutrinos, and the physicists say that in a coherent, quantum state this decay should accelerate, along with the production of neutrinos. This quantum effect should produce an amplified beam of neutrinos, broadly similar to how photons are amplified to produce conventional laser light.

“In our concept for a neutrino laser, the neutrinos would be emitted at a much faster rate than they normally would, sort of like a laser emits photons very fast,” says study co-author Ben Jones PhD ’15, an associate professor of physics at the University of Texas at Arlington.

As an example, the team calculated that such a neutrino laser could be realized by trapping 1 million atoms of rubidium-83. Normally, the radioactive atoms have a half-life of about 82 days, meaning that half the atoms decay, shedding an equivalent number of neutrinos, every 82 days. The physicists show that, by cooling rubidium-83 to a coherent, quantum state, the atoms should undergo radioactive decay in mere minutes.
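As a rough, illustrative calculation (using only the figures quoted above, not numbers from the paper): for ordinary exponential decay, a sample of $N$ atoms emits neutrinos at a rate $N\lambda$, where $\lambda = \ln 2 / t_{1/2}$, so

\[
\lambda = \frac{\ln 2}{t_{1/2}} \approx \frac{0.693}{82 \times 86{,}400~\mathrm{s}} \approx 1\times10^{-7}~\mathrm{s^{-1}},
\qquad
N\lambda \approx 10^{6}\times 1\times10^{-7}~\mathrm{s^{-1}} \approx 0.1~\text{neutrinos per second}.
\]

If the same million atoms instead decay over a few minutes, as predicted for the coherent state, the average emission rate climbs to thousands of neutrinos per second, a speed-up of roughly four orders of magnitude.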

“This is a novel way to accelerate radioactive decay and the production of neutrinos, which to my knowledge, has never been done,” says co-author Joseph Formaggio, professor of physics at MIT.

The team hopes to build a small tabletop demonstration to test their idea. If it works, they envision a neutrino laser could be used as a new form of communication, by which the particles could be sent directly through the Earth to underground stations and habitats. The neutrino laser could also be an efficient source of radioisotopes, which, along with neutrinos, are byproducts of radioactive decay. Such radioisotopes could be used to enhance medical imaging and cancer diagnostics.

Coherent condensate

For every atom in the universe, there are about a billion neutrinos. A large fraction of these invisible particles may have formed in the first moments following the Big Bang, and they persist in what physicists call the “cosmic neutrino background.” Neutrinos are also produced whenever atomic nuclei fuse together or break apart, such as in the fusion reactions in the sun’s core, and in the normal decay of radioactive materials.

Several years ago, Formaggio and Jones separately considered a novel possibility: What if a natural process of neutrino production could be enhanced through quantum coherence? Initial explorations revealed fundamental roadblocks to realizing this. Years later, while discussing the properties of ultracold tritium (an unstable isotope of hydrogen that undergoes radioactive decay), they asked: Could the production of neutrinos be enhanced if radioactive atoms such as tritium could be made so cold that they could be brought into a quantum state known as a Bose-Einstein condensate?

A Bose-Einstein condensate, or BEC, is a state of matter that forms when a gas of certain particles is cooled down to near absolute zero. At this point, the particles are brought down to their lowest energy level and stop moving as individuals. In this deep freeze, the particles can start to “feel” each other’s quantum effects, and can act as one coherent entity — a unique phase that can result in exotic physics.

BECs have been realized in a number of atomic species. (One of the first instances was with sodium atoms, by MIT’s Wolfgang Ketterle, who shared the 2001 Nobel Prize in Physics for the result.) However, no one has made a BEC from radioactive atoms. To do so would be exceptionally challenging, as most radioisotopes have short half-lives and would decay entirely before they could be sufficiently cooled to form a BEC.

Nevertheless, Formaggio wondered, if radioactive atoms could be made into a BEC, would this enhance the production of neutrinos in some way? In trying to work out the quantum mechanical calculations, he found initially that no such effect was likely.

“It turned out to be a red herring — we can’t accelerate the process of radioactive decay, and neutrino production, just by making a Bose-Einstein condensate,” Formaggio says.

In sync with optics

Several years later, Jones revisited the idea, with an added ingredient: superradiance — a phenomenon of quantum optics that occurs when a collection of light-emitting atoms is stimulated to behave in sync. In this coherent phase, it’s predicted that the atoms should emit a burst of photons that is “superradiant,” or more radiant than when the atoms are normally out of sync.

Jones proposed to Formaggio that perhaps a similar superradiant effect is possible in a radioactive Bose-Einstein condensate, which could then result in a similar burst of neutrinos. The physicists went to the drawing board to work out the equations of quantum mechanics governing how light-emitting atoms morph from a coherent starting state into a superradiant state. They used the same equations to work out what radioactive atoms in a coherent BEC state would do.

“The outcome is: You get a lot more photons more quickly, and when you apply the same rules to something that gives you neutrinos, it will give you a whole bunch more neutrinos more quickly,” Formaggio explains. “That’s when the pieces clicked together, that superradiance in a radioactive condensate could enable this accelerated, laser-like neutrino emission.”
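For orientation, in conventional Dicke superradiance (the optical effect being borrowed here), $N$ synchronized emitters produce a burst whose peak intensity and duration scale roughly as

\[
I_{\text{peak}} \propto N^{2}, \qquad \tau_{\text{burst}} \propto \frac{1}{N},
\]

that is, the burst grows quadratically brighter and correspondingly shorter as more atoms are added. These scalings are standard quantum-optics results quoted here for context; the paper works out the analogous behavior for neutrino emission from a radioactive condensate.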

To test their concept in theory, the team calculated how neutrinos would be produced from a cloud of 1 million super-cooled rubidium-83 atoms. They found that, in the coherent BEC state, the atoms radioactively decayed at an accelerating rate, releasing a laser-like beam of neutrinos within minutes.

Now that the physicists have shown in theory that a neutrino laser is possible, they plan to test the idea with a small tabletop setup.

“It should be enough to take this radioactive material, vaporize it, trap it with lasers, cool it down, and then turn it into a Bose-Einstein condensate,” Jones says. “Then it should start doing this superradiance spontaneously.”

The pair acknowledge that such an experiment will require a number of precautions and careful manipulation.

“If it turns out that we can show it in the lab, then people can think about: Can we use this as a neutrino detector? Or a new form of communication?” Formaggio says. “That’s when the fun really starts.”

Study finds exoplanet TRAPPIST-1e is unlikely to have a Venus- or Mars-like atmosphere

Mon, 09/08/2025 - 10:50am

In the search for habitable exoplanets, atmospheric conditions play a key role in determining if a planet can sustain liquid water. Suitable candidates often sit in the “Goldilocks zone,” a distance that is neither too close nor too far from their host star to allow liquid water. With the launch of the James Webb Space Telescope (JWST), astronomers are collecting improved observations of exoplanet atmospheres that will help determine which exoplanets are good candidates for further study.

In an open-access paper published today in The Astrophysical Journal Letters, astronomers used JWST to take a closer look at the atmosphere of the exoplanet TRAPPIST-1e, located in the TRAPPIST-1 system. While they haven’t found definitive proof of what it is made of — or if it even has an atmosphere — they were able to rule out several possibilities.

“The idea is: If we assume that the planet is not airless, can we constrain different atmospheric scenarios? Do those scenarios still allow for liquid water at the surface?” says Ana Glidden, a postdoc in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the MIT Kavli Institute for Astrophysics and Space Research, and the first author on the paper. The answers they found were yes.

The new data rule out a hydrogen-dominated atmosphere and place tighter constraints on other atmospheric conditions that are commonly created through secondary processes, such as volcanic eruptions and outgassing from the planet’s interior. The data remain consistent with the possibility of a surface ocean.

“TRAPPIST-1e remains one of our most compelling habitable-zone planets, and these new results take us a step closer to knowing what kind of world it is,” says Sara Seager, Class of 1941 Professor of Planetary Science at MIT and co-author on the study. “The evidence pointing away from Venus- and Mars-like atmospheres sharpens our focus on the scenarios still in play.”

The study’s co-authors also include collaborators from the University of Arizona, Johns Hopkins University, University of Michigan, the Space Telescope Science Institute, and members of the JWST-TST DREAMS Team.

Improved observations

Exoplanet atmospheres are studied using a technique called transmission spectroscopy. When a planet passes in front of its host star, the starlight is filtered through the planet’s atmosphere. Astronomers can determine which molecules are present in the atmosphere by seeing how the light changes at different wavelengths.

“Each molecule has a spectral fingerprint. You can compare your observations with those fingerprints to suss out which molecules may be present,” says Glidden.
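In quantitative terms (these are standard textbook relations, not results from this study), the transit blocks a fraction of starlight set by the planet-to-star size ratio, and an atmosphere adds a small, wavelength-dependent extra dimming of roughly one scale height:

\[
\delta = \left(\frac{R_p}{R_\star}\right)^{2},
\qquad
\Delta\delta \sim \frac{2\,R_p H}{R_\star^{2}},
\qquad
H = \frac{k_B T}{\mu g},
\]

where $\delta$ is the transit depth, $H$ the atmospheric scale height, $T$ the temperature, $\mu$ the mean molecular weight, and $g$ the surface gravity. A heavy, carbon dioxide-dominated atmosphere like Venus's has a small $H$ and therefore a weak spectral fingerprint, which is part of what makes these measurements so demanding.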

JWST has a larger wavelength coverage and higher spectral resolution than its predecessor, the Hubble Space Telescope, which makes it possible to observe molecules like carbon dioxide and methane that are more commonly found in our own solar system. However, the improved observations have also highlighted the problem of stellar contamination, where changes in the host star’s temperature due to features like starspots and flares make it difficult to interpret data.

“Stellar activity strongly interferes with the planetary interpretation of the data because we can only observe a potential atmosphere through starlight,” says Glidden. “It is challenging to separate out which signals come from the star versus from the planet itself.”

Ruling out atmospheric conditions

The researchers used a novel approach to mitigate the effects of stellar activity and, as a result, “any signal you can see varying visit-to-visit is most likely from the star, while anything that’s consistent between the visits is most likely the planet,” says Glidden.
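As a simplified illustration of that visit-to-visit logic (toy numbers and a deliberately bare-bones calculation, not the study's actual analysis), the sketch below averages repeated transit spectra and flags wavelength bins whose depth varies strongly between visits as likely stellar contamination.

```python
# Illustrative sketch only: spectral features that change between transits are
# attributed to the variable star, while the repeatable component is the
# candidate planetary signal. The real analysis involves detailed stellar and
# noise modeling.

from statistics import mean, pstdev

def split_signal(visits):
    """
    visits: list of transit spectra, each a list of transit depths per wavelength bin.
    Returns (consistent_component, variability) per bin: the mean depth, and the
    visit-to-visit scatter used to flag likely stellar contamination.
    """
    per_bin = list(zip(*visits))  # regroup the measurements by wavelength bin
    consistent = [mean(bin_vals) for bin_vals in per_bin]
    variability = [pstdev(bin_vals) for bin_vals in per_bin]
    return consistent, variability

# Toy usage: three visits, four wavelength bins; bin 2 varies strongly between visits.
visits = [
    [0.700, 0.702, 0.720, 0.701],
    [0.701, 0.703, 0.690, 0.700],
    [0.699, 0.701, 0.745, 0.702],
]
consistent, variability = split_signal(visits)
likely_stellar = [v > 0.005 for v in variability]  # assumed scatter threshold
print(consistent, likely_stellar)
```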

The researchers were then able to compare the results to several different possible atmospheric scenarios. They found that carbon dioxide-rich atmospheres, like those of Mars and Venus, are unlikely, while a warm, nitrogen-rich atmosphere similar to Saturn’s moon Titan remains possible. The evidence, however, is too weak to determine if any atmosphere was present, let alone detecting a specific type of gas. Additional, ongoing observations that are already in the works will help to narrow down the possibilities.

“With our initial observations, we have showcased the gains made with JWST. Our follow-up program will help us to further refine our understanding of one of our best habitable-zone planets,” says Glidden.

AI and machine learning for engineering design

Sun, 09/07/2025 - 12:00am

Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.

“When people think about mechanical engineering, they're thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”

In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.

“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.

First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from departments across the Institute, including mechanical and civil and environmental engineering, aeronautics and astronautics, the MIT Sloan School of Management, and nuclear and computer science, along with cross-registered students from Harvard University and other schools.

The course, which is open to both undergraduate and graduate students, focuses on the implementation of advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From designing bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.

Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.
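To give a concrete flavor of that workflow, here is a minimal, hypothetical sketch in the spirit of the course contests: a toy cantilever-sizing problem with a feasible but heavy "starter" design, improved by plain random search. The problem, constraint values, and material constants are illustrative assumptions, not the course's actual assignment or starter code.

```python
# Minimal sketch (hypothetical problem, not the course's assignment): start from
# a feasible baseline design and try to "do better," here with plain random
# search on a toy cantilever-sizing objective.

import random

def deflection(width, height, load=500.0, length=1.0, E=69e9):
    """Tip deflection of a rectangular cantilever: delta = P L^3 / (3 E I)."""
    I = width * height**3 / 12.0  # second moment of area of the cross-section
    return load * length**3 / (3.0 * E * I)

def mass(width, height, length=1.0, density=2700.0):
    """Beam mass for an assumed aluminum-like density."""
    return density * width * height * length

MAX_DEFLECTION = 0.005  # 5 mm limit (assumed constraint)

def objective(design):
    """Mass to minimize; infeasible designs get an infinite score."""
    w, h = design
    return mass(w, h) if deflection(w, h) <= MAX_DEFLECTION else float("inf")

random.seed(0)
baseline = (0.05, 0.05)  # "starter code" design: feasible but heavy
best, best_score = baseline, objective(baseline)
for _ in range(5000):  # leaderboard-style loop: keep any design that beats the best so far
    candidate = (random.uniform(0.005, 0.08), random.uniform(0.005, 0.08))
    score = objective(candidate)
    if score < best_score:
        best, best_score = candidate, score

print(f"baseline mass: {objective(baseline):.2f} kg, best found: {best_score:.2f} kg")
```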

Em Lauber, a system design and management graduate student, says the process gave space to explore the application of what students were learning and to practice the skill of “literally how to code it.”

The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.

“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.

“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. For her project, she chose “markered motion captured data” and looked at predicting ground force for runners, an effort she called “really gratifying” because it worked so much better than expected.

Lauber took the framework of a “cat tree” design with different modules of poles, platforms, and ramps to create customized solutions for individual cat households, while Moyer created software that designs a new type of 3D printer architecture.

“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.” 

A human-centered approach to data visualization

Fri, 09/05/2025 - 12:00am

The world is awash in data visualizations, from charts accompanying news stories on the economy to graphs tracking the weekly temperature to scatterplots showing relationships between baseball statistics.

At their core, data visualizations convey information, and everyone consumes that information differently. One person might scan the axes, while another may focus on an outlying data point or examine the magnitude of each colored bar.

But how do you consume that information if you can’t see it?

Making a data visualization accessible for blind and low-vision readers often involves writing a descriptive caption that captures some key points in a succinct paragraph.

“But that means blind and low-vision readers don’t get the ability to interpret the data for themselves. What if they had a different question about the data? Suddenly a simple caption doesn’t give them that. The core idea behind our group’s work in accessibility has been to maintain agency for blind and low-vision people,” says Arvind Satyanarayan, a newly tenured associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Satyanarayan’s group has explored making data visualizations accessible for screen readers, which narrate content on a computer screen. His team created a hierarchical platform that allows screen reader users to explore various levels of detail in a visualization with their keyboard, drilling down from high-level information to individual data points.
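
As a rough sketch of how such a hierarchy might be structured, the snippet below (illustrative only, not the group’s actual platform) describes a small bar chart as a tree whose levels run from a one-sentence summary down to individual data points, which a screen reader could narrate one level at a time; the labels and data are invented.

```python
# Illustrative hierarchy for a chart, from a high-level summary down to data points.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

points = [("Jan", 12), ("Feb", 15), ("Mar", 9)]

chart = Node("Bar chart: monthly sales, Jan through Mar", [
    Node("X axis: month, 3 categories"),
    Node("Y axis: sales, range 9 to 15"),
    Node("Data", [Node(f"{month}: {value}") for month, value in points]),
])

def read(node, depth=0):
    # A screen reader would narrate one level at a time; here we simply print the tree.
    print("  " * depth + node.label)
    for child in node.children:
        read(child, depth + 1)

read(chart)
```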

Under the umbrella of human-computer interaction (HCI) research, Satyanarayan’s Visualization Group also develops programming languages and authoring tools for visualizations, studies the sociocultural elements of visualization design, and uses visualizations to analyze machine-learning models.

For Satyanarayan, HCI is about promoting human agency, whether that means enabling a blind reader to interpret data trends or ensuring designers still feel in control of AI-driven visualization systems.

“We really take a human-centered approach to data visualization,” he says.

An eye for technology

Satyanarayan found the field of data visualization almost by accident.

As a child growing up in India, Bahrain, and Abu Dhabi, his initial interest in science sprouted from his love for tinkering.

Satyanarayan recalls his father bringing home a laptop, which he loaded with simple games. The internet grew up along with him, and as a teenager he became heavily involved with the popular blogging platform Movable Type.

A teacher at heart even as a teenager, Satyanarayan offered tutorials on how to use the platform and ran a contest for people to style their blog. Along the way, he taught himself the skills to develop plugins and extensions.

He enjoyed designing eye-catching and user-friendly blogs, laying the foundation for his studies in human-computer interaction.

When he arrived at the University of California at San Diego for college, he was interested enough in the HCI field to take an introductory class.

“I’d always been a student of history, and this intro class really appealed to me because it was more about the history of user interfaces, and tracing the provenance and development of the ideas behind them,” he says.

Almost as an afterthought, he spoke with the professor, Jim Hollan — a pioneer of the field. Even though he hadn’t thought much about research beforehand, Satyanarayan ended up spending the summer in Hollan’s lab, studying how people interact with wall-sized displays.

As he prepared to pursue graduate studies (Satyanarayan split his PhD between Stanford University and the University of Washington), he was unsure whether to focus on programming languages or HCI. When it came time to choose, the human-centered focus of HCI and the interdisciplinarity of data visualization drew him in.

“Data visualization is deeply technical, but it also draws from cognitive science, perceptual psychology, and visual arts and aesthetics, and then it also has a big stake in civic and social responsibility,” he says.

He saw how visualization plays a role in civic and social responsibility through his first project with his PhD advisor, Jeffrey Heer. Satyanarayan and his collaborators built a data visualization interface for journalists at newsrooms that couldn’t afford to hire data departments. That drag-and-drop tool allowed journalists to design the visualization and all the data storytelling they wanted to do around it.

That project seeded many elements that became his thesis, for which he studied new programming languages for visualization and developed interactive graphical systems on top of them.

After earning his PhD, Satyanarayan sought a faculty job and spent an exhausting interview season crisscrossing the country, participating in 15 interviews in only two months.

MIT was his very last stop.

“I remember being exhausted and on autopilot, thinking that this is not going well. But then, the first day of my interview at MIT was filled with some of the best conversations I had. People were so eager and interested in understanding my research and how it connected to theirs,” he says.

Charting a collaborative course

The collaborative nature of MIT remained important as he built his research group; one of the group’s first graduate students was pursuing a PhD in MIT’s program in History, Anthropology, and Science, Technology, and Society. They continue to work closely with faculty who study anthropology, topics in the humanities, and clinical machine learning.

With interdisciplinary collaborators, the Visualization Group has explored the sociotechnical implications of data visualizations. For instance, charts are frequently shared, disseminated, and discussed on social media, where they are stripped of their context.

“What happens as a result is they can become vectors for misinformation or misunderstanding. But that is not because they are poorly designed to begin with. We spent a lot of time unpacking those details,” Satyanarayan says.

His group is also studying tactile graphics, which are common in museums to help blind and low-vision individuals interact with exhibits. Often, making a tactile graphic boils down to 3D-printing a chart.

“But a chart was designed to be read with our eyes, and our eyes work very differently than our fingers. We are now drilling into what it means to design tactile-first visualizations,” he says.

Co-design is a driving principle behind all his group’s accessibility work. On many projects, they work closely with Daniel Hajas, a researcher at University College London who has been blind since the age of 16.

“That has been really important for us, to make sure as people who are not blind, that we are developing tools and platforms that are actually useful for blind and low-vision people,” he says.

His group is also studying the sociocultural implications of data visualization. For instance, during the height of the Covid-19 pandemic, data visualizations were often turned into memes and social artifacts that were used to support or contest data from experts.

“In reality, neither data nor visualizations are neutral. We’ve been thinking about the data you use to visualize, and the design choices behind specific visualizations, and what that is communicating besides insights about the data,” he says.

Visualizing a real-world impact

Interdisciplinarity is also a theme of Satyanarayan’s interactive data visualization class, which he co-teaches with faculty members Sarah Williams and Catherine D'Ignazio in the Department of Urban Studies and Planning; and Crystal Lee in Comparative Media Studies/Writing, with shared appointments in the School of Humanities, Arts, and Social Sciences and the MIT Schwarzman College of Computing.

In the popular course, students not only learn the technical skills to make data visualizations, but they also build final projects centered on an area of social importance. For the past two years, students have focused on the housing affordability crisis in the Boston area, in partnership with the Metropolitan Area Planning Council. The students enjoy the opportunity to make a real-world impact with their work, Satyanarayan says.

And he enjoys the course as much as they do.

“I love teaching. I really enjoy getting to interact with the students. Our students are so intellectually curious and committed. It reassures me that our future is in good hands,” he says.

One of Satyanarayan’s personal interests is running along the Charles River Esplanade in Boston, which he does almost every day. He also enjoys cooking, especially with ingredients he has never used before.

Satyanarayan and his wife, who met while they were graduate students at Stanford (her PhD is in microbiology), also delight in tending their plot in the Fenway Victory Gardens, which is overflowing with lilies, lavender, lilacs, peonies, and roses.

Their newest addition is a miniature poodle puppy named Fen, which they got when Satyanarayan earned tenure earlier this year.

Thinking toward the future of his research, Satyanarayan is keen to further explore how generative AI might effectively assist people in building visualizations, and its implications for human creativity.

“In the world of generative AI, this question of agency applies to all of us,” he says. “How do we make sure, for these AI-driven systems, that we haven’t lost the parts of the work we find most interesting?”

J-WAFS welcomes Daniela Giardina as new executive director

Thu, 09/04/2025 - 10:00pm

The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) announced that Daniela Giardina has been named the new J-WAFS executive director. Giardina stepped into the role at the start of the fall semester, replacing founding executive director Renee J. Robins ’83, who is retiring after leading the program since its launch in 2014.

“Daniela brings a deep background in water and food security, along with excellent management and leadership skills,” says Robins. “Since I first met her nearly 10 years ago, I have been impressed with her commitment to working on global water and food challenges through research and innovation. I am so happy to know that I will be leaving J-WAFS in her experienced and capable hands.”

A decade of impact

J-WAFS fuels research, innovation, and collaboration to solve global water and food systems challenges. The mission of J-WAFS is to ensure safe and resilient supplies of water and food to meet the local and global needs of a dramatically growing population on a rapidly changing planet. J-WAFS funding opportunities are open to researchers in every MIT department, lab, and center, spanning all disciplines. Supported research projects include those involving engineering, science, technology, business, social science, economics, architecture, urban planning, and more. J-WAFS research and related activities include early-stage projects, sponsored research, commercialization efforts, student activities and mentorship, events that convene local and global experts, and international-scale collaborations.

The global water, food, and climate emergency makes J-WAFS’ work both timely and urgent. J-WAFS-funded researchers are achieving tangible, real-time solutions and results. Since its inception, J-WAFS has distributed nearly $26 million in grants, fellowships, and awards to the MIT community, supporting roughly 10 percent of MIT’s faculty and 300 students, postdocs, and research staff from 40 MIT departments, labs, and centers. J-WAFS grants have also helped researchers launch 13 startups and receive over $25 million in follow-on funding.

Giardina joins J-WAFS at an exciting time in the program’s history; in the spring, J-WAFS celebrated 10 years of supporting water and food research at MIT. The milestone was commemorated at a special event attended by MIT leadership, researchers, students, staff, donors, and others in the J-WAFS community. As J-WAFS enters its second decade, interest and opportunities for water and food research continue to grow. “I am truly honored to join J-WAFS at such a pivotal moment,” Giardina says.

Putting research into real-world practice

Giardina has nearly two decades of experience working with nongovernmental organizations and research institutions on humanitarian and development projects. Her work has taken her to Africa, Latin America, the Caribbean, and Central and Southeast Asia, where she has focused on water and food security projects. She has conducted technical trainings and assessments, and managed projects from design to implementation, including monitoring and evaluation.

Giardina comes to MIT from Oxfam America, where she directed disaster risk reduction and climate resilience initiatives, working on approaches to strengthen local leadership, community-based disaster risk reduction, and anticipatory action. Her role at Oxfam required her to oversee multimillion-dollar initiatives, supervising international teams, managing complex donor portfolios, and ensuring rigorous monitoring across programs. She connected hands-on research with community-oriented implementation, for example, by partnering with MIT’s D-Lab to launch an innovation lab in rural El Salvador. Her experience will help guide J-WAFS as it pursues impactful research that will make a difference on the ground.

Beyond program delivery, Giardina has played a strategic leadership role in shaping Oxfam’s global disaster risk reduction strategy and representing the organization at high-level U.N. and academic forums. She is multilingual and adept at building partnerships across cultures, having worked with governments, funders, and community-based organizations to strengthen resilience and advance equitable access to water and food.

Giardina holds a PhD in sustainable development from the University of Brescia in Italy. She also holds a master’s degree in environmental engineering from the Politecnico di Milano in Italy and has been a chartered engineer since 2005 (equivalent to holding a professional engineering license in the United States). She also serves as vice chair of the Boston Network for International Development, a nonprofit that connects and strengthens Boston’s global development community.

“I have seen first-hand how climate change, misuse of resources, and inequality are undermining water and food security around the globe,” says Giardina. “What particularly excites me about J-WAFS is its interdisciplinary approach in facilitating meaningful partnerships to solve many of these problems through research and innovation. I am eager to help expand J-WAFS’ impact by strengthening existing programs, developing new initiatives, and building strategic partnerships that translate MIT's groundbreaking research into real-world solutions,” she adds.

A legacy of leadership

Renee Robins will retire with over 23 years of service to MIT. Years before joining the staff, she graduated from MIT with dual bachelor’s degrees in biology and in humanities/anthropology. She then went on to earn a master’s degree in public policy from Carnegie Mellon University. In 1998, she came back to MIT to serve in various roles across campus, including with the Cambridge-MIT Institute, the MIT Portugal Program, the Mexico City Program, the Program on Emerging Technologies, and the Technology and Policy Program. She also worked at the Harvard Graduate School of Education, where she managed a $15 million research program as it scaled from implementation in one public school district to 59 schools in seven districts across North Carolina.

In late 2014, Robins joined J-WAFS as its founding executive director, playing a pivotal role in building it from the ground up and expanding the team to six full-time professionals. She worked closely with J-WAFS founding director Professor John H. Lienhard V to develop and implement funding initiatives, develop and shepherd corporate-sponsored research partnerships, and mentor students in the Water Club and the Food and Agriculture Club, as well as numerous other students. Throughout the years, Robins has inspired a diverse range of researchers to consider how their capabilities and expertise can be applied to water and food challenges. Perhaps most importantly, her leadership has helped cultivate a vibrant community, bringing together faculty, students, and research staff to be exposed to unfamiliar problems and new methodologies, to explore how their expertise might be applied, to learn from one another, and to collaborate.

At the J-WAFS 10th anniversary event in May, Robins noted, “it has been a true privilege to work alongside John Lienhard, our dedicated staff, and so many others. It’s been particularly rewarding to see the growth of an MIT network of water and food researchers that J-WAFS has nurtured, which grew out of those few individuals who saw themselves to be working in solitude on these critical challenges.”

Lienhard also spoke, thanking Robins by saying she “was my primary partner in building J-WAFS and [she is] a strong leader and strategic thinker.”

Not only is Robins a respected leader, but she is also a dear friend to so many at MIT and beyond. In 2021, she was recognized for her outstanding leadership and commitment to J-WAFS and the Institute with an MIT Infinite Mile Award in the area of the Offices of the Provost and Vice President for Research.

Outside of MIT, Robins has served on the Board of Trustees for the International Honors Program — a comparative multi-site study abroad program, where she previously studied comparative culture and anthropology in seven countries around the world. Robins has also acted as an independent consultant, including work on program design and strategy around the launch of the Université Mohammed VI Polytechnique in Morocco.

Continuing the tradition of excellence

Giardina will report to J-WAFS director Rohit Karnik, the Abdul Latif Jameel Professor of Water and Food in the MIT Department of Mechanical Engineering. Karnik was named the director of J-WAFS in January, succeeding John Lienhard, who retired earlier this year.

As executive director, Giardina will be instrumental in driving J-WAFS’ mission and impact. She will work with Karnik to help shape J-WAFS’ programs, long-term strategy, and goals. She will also be responsible for supervising J-WAFS staff, managing grant administration, and overseeing and advising on financial decisions.

“I am very grateful to John and Renee, who have helped to establish J-WAFS as the Institute’s preeminent program for water and food research and significantly expanded MIT’s research efforts and impact in the water and food space,” says Karnik. “I am confident that with Daniela as executive director, J-WAFS will continue in the tradition of excellence that Renee and John put into place, as we move into the program’s second decade,” he notes.

Giardina adds, “I am inspired by the lab’s legacy of Renee Robins and Professor Lienhard, and I look forward to working with Professor Karnik and the J-WAFS staff.”

A comprehensive cellular-resolution map of brain activity

Thu, 09/04/2025 - 4:50pm

The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists. 

Researchers from the International Brain Laboratory (IBL), including MIT neuroscientist Ila Fiete, published their open-access findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.

“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains IBL co-founder Alexandre Pouget. “The scale is unprecedented, as we recorded from over half a million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95 percent of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a group leader at the University of Geneva in Switzerland.

Modeling decision-making

The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, to make simultaneous neural recordings of brain activity while mice carried out a decision-making task.

“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences, an associate investigator at the McGovern Institute for Brain Research, and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brain-wide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now, we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision-making,” says Fiete.

The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.

In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
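
The effect can be illustrated with a toy simulation (invented numbers, not the study’s parameters): an observer that combines the block’s 80/20 statistics with a noisy measurement of a faint stimulus guesses correctly far more often than one that relies on the stimulus alone.

```python
# Toy simulation of a prior-guided decision on faint trials; all numbers are invented.
import math
import random

random.seed(1)
p_right = 0.8        # assumed block prior (the real task uses switching 80/20 blocks)
contrast = 0.05      # faint stimulus strength
sigma = 0.2          # sensory noise
n_trials = 100_000

def accuracy(use_prior):
    correct = 0
    for _ in range(n_trials):
        side = 1 if random.random() < p_right else -1      # +1 = light on right, -1 = left
        evidence = side * contrast + random.gauss(0, sigma)
        llr = 2 * evidence * contrast / sigma ** 2          # log-likelihood ratio, right vs. left
        log_prior_odds = math.log(p_right / (1 - p_right)) if use_prior else 0.0
        choice = 1 if llr + log_prior_odds > 0 else -1
        correct += (choice == side)
    return correct / n_trials

print(f"ignoring the prior: {accuracy(False):.1%} correct")
print(f"using the prior:    {accuracy(True):.1%} correct")
```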

Brain-wide results

The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies that challenge the traditional hierarchical model of brain function, and emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.

“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”

The second paper, “Brain-wide representations of prior information,” showed that prior expectations — our beliefs about what is likely to happen based on our recent experience — are encoded throughout the brain. Surprisingly, these expectations are found not only in cognitive areas, but also in brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory areas such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, with expectations encoded across multiple brain structures playing a central role in guiding behavioral responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to involve differences in the way expectations are updated in the brain.

“Much remains to be unpacked: If it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase experiments to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.

Looking ahead, the team at IBL plan to expand beyond their initial focus on decision-making to explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to expand its research scope and continue to support large-scale, standardized experiments.

New model of collaborative neuroscience

Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.

All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.

This research was supported by grants from Wellcome, the Simons Foundation, the National Institutes of Health, the National Science Foundation, the Gatsby Charitable Foundation, and by the Max Planck Society and the Humboldt Foundation.

A greener way to 3D print stronger stuff

Thu, 09/04/2025 - 4:30pm

3D printing has come a long way since its invention in 1983 by Chuck Hull, who pioneered stereolithography, a technique that solidifies liquid resin into solid objects using ultraviolet lasers. Over the decades, 3D printers have evolved from experimental curiosities into tools capable of producing everything from custom prosthetics to complex food designs, architectural models, and even functioning human organs. 

But as the technology matures, its environmental footprint has become increasingly difficult to set aside. The vast majority of consumer and industrial 3D printing still relies on petroleum-based plastic filament. And while “greener” alternatives made from biodegradable or recycled materials exist, they come with a serious trade-off: they’re often not as strong. These eco-friendly filaments tend to become brittle under stress, making them ill-suited for structural applications or load-bearing parts — exactly where strength matters most.

This trade-off between sustainability and mechanical performance prompted researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Hasso Plattner Institute to ask: Is it possible to build objects that are mostly eco-friendly, but still strong where it counts?

Their answer is SustainaPrint, a new software and hardware toolkit designed to help users strategically combine strong and weak filaments to get the best of both worlds. Instead of printing an entire object with high-performance plastic, the system analyzes a model through finite element analysis simulations, predicts where the object is most likely to experience stress, and then reinforces just those zones with stronger material. The rest of the part can be printed using greener, weaker filament, reducing plastic use while preserving structural integrity.

“Our hope is that SustainaPrint can be used in industrial and distributed manufacturing settings one day, where local material stocks may vary in quality and composition,” says MIT PhD student and CSAIL researcher Maxine Perroni-Scharf, who is a lead author on a paper presenting the project. “In these contexts, the testing toolkit could help ensure the reliability of available filaments, while the software’s reinforcement strategy could reduce overall material consumption without sacrificing function.” 

For their experiments, the team used Polymaker’s PolyTerra PLA as the eco-friendly filament, and standard or Tough PLA from Ultimaker for reinforcement. They used a 20 percent reinforcement threshold to show that even a small amount of strong plastic goes a long way. Using this ratio, SustainaPrint was able to recover up to 70 percent of the strength of an object printed entirely with high-performance plastic.

They printed dozens of objects, from simple mechanical shapes like rings and beams to more functional household items such as headphone stands, wall hooks, and plant pots. Each object was printed three ways: once using only eco-friendly filament, once using only strong PLA, and once with the hybrid SustainaPrint configuration. The printed parts were then mechanically tested by pulling, bending, or otherwise breaking them to measure how much force each configuration could withstand. 

In many cases, the hybrid prints held up nearly as well as the full-strength versions. For example, in one test involving a dome-like shape, the hybrid version outperformed the version printed entirely in Tough PLA. The team believes this may be due to the reinforced version’s ability to distribute stress more evenly, avoiding the brittle failure sometimes caused by excessive stiffness.

“This indicates that in certain geometries and loading conditions, mixing materials strategically may actually outperform a single homogenous material,” says Perroni-Scharf. “It’s a reminder that real-world mechanical behavior is full of complexity, especially in 3D printing, where interlayer adhesion and tool path decisions can affect performance in unexpected ways.”

A lean, green, eco-friendly printing machine

SustainaPrint starts by letting a user upload their 3D model into a custom interface and select fixed regions and the areas where forces will be applied. The software then uses finite element analysis to simulate how the object will deform under stress. It then creates a map of the stress distribution inside the structure, highlighting areas under compression or tension, and applies heuristics to segment the object into two categories: regions that need reinforcement and regions that don’t.
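
A minimal sketch of that segmentation step, assuming a simple quantile threshold rather than SustainaPrint’s actual heuristics, might look like the following; the stress values are randomly generated stand-ins for a simulation result, and the 20 percent budget mirrors the threshold described above.

```python
# Sketch: split a part's elements into "reinforce" and "eco" sets by stress ranking.
import numpy as np

rng = np.random.default_rng(0)
element_stress = rng.gamma(shape=2.0, scale=1.0, size=1000)   # stand-in for per-element FEA stress

REINFORCE_FRACTION = 0.20
cutoff = np.quantile(element_stress, 1.0 - REINFORCE_FRACTION)
use_strong = element_stress >= cutoff                          # True -> strong PLA, False -> eco PLA

print(f"stress cutoff: {cutoff:.2f}")
print(f"elements reinforced: {use_strong.sum()} of {use_strong.size} "
      f"({use_strong.mean():.0%} of the part)")
```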

Recognizing the need for accessible and low-cost testing, the team also developed a DIY testing toolkit to help users assess filament strength before printing. The kit includes a 3D-printable device with modules for measuring both tensile and flexural strength. Users can pair the device with common items like pull-up bars or digital scales to get rough but reliable performance metrics. The team benchmarked their results against manufacturer data and found that their measurements consistently fell within one standard deviation, even for filaments that had undergone multiple recycling cycles.
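
For a sense of the arithmetic, the sketch below assumes a standard three-point bend geometry, which the article does not confirm is what the kit uses, and converts a peak force reading from a scale into a flexural strength estimate; the force and coupon dimensions are illustrative.

```python
# Back-of-the-envelope flexural strength from a three-point bend test (assumed setup).
def flexural_strength(force_n, span_mm, width_mm, thickness_mm):
    # sigma = 3 F L / (2 b d^2) for a rectangular beam in three-point bending
    return 3 * force_n * span_mm / (2 * width_mm * thickness_mm ** 2)  # MPa (N/mm^2)

peak_force = 48.0                       # N, e.g., read from a digital scale at failure
span, width, depth = 64.0, 10.0, 4.0    # mm, an illustrative coupon geometry

print(f"flexural strength approx. {flexural_strength(peak_force, span, width, depth):.1f} MPa")
```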

Although the current system is designed for dual-extrusion printers, the researchers believe that with some manual filament swapping and calibration, it could be adapted for single-extruder setups, too. In its current form, the system simplifies the modeling process by allowing just one force and one fixed boundary per simulation. While this covers a wide range of common use cases, the team sees future work expanding the software to support more complex and dynamic loading conditions. The team also sees potential in using AI to infer an object’s intended use based on its geometry, which could allow for fully automated stress modeling without manual input of forces or boundaries.

3D for free

The researchers plan to release SustainaPrint open-source, making both the software and testing toolkit available for public use and modification. Another initiative they aspire to bring to life in the future: education. “In a classroom, SustainaPrint isn’t just a tool, it’s a way to teach students about material science, structural engineering, and sustainable design, all in one project,” says Perroni-Scharf. “It turns these abstract concepts into something tangible.”

As 3D printing becomes more embedded in how we manufacture and prototype everything from consumer goods to emergency equipment, sustainability concerns will only grow. With tools like SustainaPrint, those concerns no longer need to come at the expense of performance. Instead, they can become part of the design process: built into the very geometry of the things we make.

Co-author Patrick Baudisch, who is a professor at the Hasso Plattner Institute, adds that “the project addresses a key question: What is the point of collecting material for the purpose of recycling, when there is no plan to actually ever use that material? Maxine presents the missing link between the theoretical/abstract idea of 3D printing material recycling and what it actually takes to make this idea relevant.”

Perroni-Scharf and Baudisch wrote the paper with CSAIL research assistant Jennifer Xiao; MIT Department of Electrical Engineering and Computer Science master’s student Cole Paulin ’24; master’s student Ray Wang SM ’25 and PhD student Ticha Sethapakdi SM ’19 (both CSAIL members); Hasso Plattner Institute PhD student Muhammad Abdullah; and Associate Professor Stefanie Mueller, lead of the Human-Computer Interaction Engineering Group at CSAIL.

The researchers’ work was supported by a Designing for Sustainability Grant from the Designing for Sustainability MIT-HPI Research Program. Their work will be presented at the ACM Symposium on User Interface Software and Technology in September.

A new generative AI approach to predicting chemical reactions

Wed, 09/03/2025 - 3:55pm

Many attempts have been made to harness the power of new artificial intelligence tools and large language models (LLMs) to predict the outcomes of new chemical reactions. These have had limited success, in part because until now they have not been grounded in fundamental physical principles, such as the law of conservation of mass. Now, a team of researchers at MIT has come up with a way of incorporating these physical constraints into a reaction prediction model, greatly improving the accuracy and reliability of its outputs.

The new work was reported Aug. 20 in the journal Nature, in a paper by recent postdoc Joonyoung Joung (now an assistant professor at Kookmin University, South Korea); former software engineer Mun Hong Fong (now at Duke University); chemical engineering graduate student Nicholas Casetti; postdoc Jordan Liles; physics undergraduate student Ne Dassanayake; and senior author Connor Coley, who is the Class of 1957 Career Development Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science.

“The prediction of reaction outcomes is a very important task,” Joung explains. For example, if you want to make a new drug, “you need to know how to make it. So, this requires us to know what product is likely” to result from a given set of chemical inputs to a reaction. But most previous efforts to carry out such predictions look only at a set of inputs and a set of outputs, without examining the intermediate steps or enforcing the constraint that no mass is gained or lost in the process, something that cannot happen in actual reactions.

Joung points out that while large language models such as ChatGPT have been very successful in many areas of research, these models do not provide a way to limit their outputs to physically realistic possibilities, such as by requiring them to adhere to conservation of mass. These models use computational “tokens,” which in this case represent individual atoms, but “if you don’t conserve the tokens, the LLM model starts to make new atoms, or deletes atoms in the reaction.” Instead of being grounded in real scientific understanding, “this is kind of like alchemy,” he says. While many attempts at reaction prediction only look at the final products, “we want to track all the chemicals, and how the chemicals are transformed” throughout the reaction process from start to end, he says.

In order to address the problem, the team made use of a method developed back in the 1970s by chemist Ivar Ugi, which uses a bond-electron matrix to represent the electrons in a reaction. They used this system as the basis for their new program, called FlowER (Flow matching for Electron Redistribution), which allows them to explicitly keep track of all the electrons in the reaction to ensure that none are spuriously added or deleted in the process.

The system uses a matrix to represent the electrons in a reaction, and uses nonzero values to represent bonds or lone electron pairs and zeros to represent a lack thereof. “That helps us to conserve both atoms and electrons at the same time,” says Fong. This representation, he says, was one of the key elements to including mass conservation in their prediction system.
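
A minimal sketch of that bookkeeping, using a textbook proton-transfer example rather than anything from FlowER itself: in a bond-electron matrix, off-diagonal entries hold bond orders, diagonal entries hold lone (non-bonding) electrons, and the sum of all entries, the valence electron count, must be identical before and after the reaction.

```python
# Bond-electron (BE) matrix sketch for NH3 + HCl -> NH4+ + Cl- (illustrative only).
import numpy as np

atoms = ["N", "H1", "H2", "H3", "H4", "Cl"]

def be_matrix(bonds, lone_electrons):
    m = np.zeros((len(atoms), len(atoms)), dtype=int)
    for a, b, order in bonds:
        i, j = atoms.index(a), atoms.index(b)
        m[i, j] = m[j, i] = order          # off-diagonal: bond order between atoms
    for a, n in lone_electrons.items():
        m[atoms.index(a), atoms.index(a)] = n   # diagonal: lone electrons on the atom
    return m

reactants = be_matrix(
    bonds=[("N", "H1", 1), ("N", "H2", 1), ("N", "H3", 1), ("H4", "Cl", 1)],
    lone_electrons={"N": 2, "Cl": 6},
)
products = be_matrix(
    bonds=[("N", "H1", 1), ("N", "H2", 1), ("N", "H3", 1), ("N", "H4", 1)],
    lone_electrons={"N": 0, "Cl": 8},
)

# Conservation check: total valence electrons must be unchanged by the reaction.
assert reactants.sum() == products.sum() == 16
print("electrons conserved:", int(reactants.sum()))
print("reaction matrix (products - reactants):\n", products - reactants)
```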

The system they developed is still at an early stage, Coley says. “The system as it stands is a demonstration — a proof of concept that this generative approach of flow matching is very well suited to the task of chemical reaction prediction.” While the team is excited about this promising approach, he says, “we’re aware that it does have specific limitations as far as the breadth of different chemistries that it’s seen.” Although the model was trained using data on more than a million chemical reactions, obtained from a U.S. Patent Office database, those data do not include certain metals and some kinds of catalytic reactions, he says.

“We’re incredibly excited about the fact that we can get such reliable predictions of chemical mechanisms” from the existing system, he says. “It conserves mass, it conserves electrons, but we certainly acknowledge that there’s a lot more expansion and robustness to work on in the coming years as well.”

But even in its present form, which is being made freely available through the online platform GitHub, “we think it will make accurate predictions and be helpful as a tool for assessing reactivity and mapping out reaction pathways,” Coley says. “If we’re looking toward the future of really advancing the state of the art of mechanistic understanding and helping to invent new reactions, we’re not quite there. But we hope this will be a steppingstone toward that.”

“It’s all open source,” says Fong. “The models, the data, all of them are up there,” including a previous dataset developed by Joung that exhaustively lists the mechanistic steps of known reactions. “I think we are one of the pioneering groups making this dataset, and making it available open-source, and making this usable for everyone,” he says.

The FlowER model matches or outperforms existing approaches in finding standard mechanistic pathways, the team says, and makes it possible to generalize to previously unseen reaction types. They say the model could potentially be relevant for predicting reactions for medicinal chemistry, materials discovery, combustion, atmospheric chemistry, and electrochemical systems.

In their comparisons with existing reaction prediction systems, Coley says, “using the architecture choices that we’ve made, we get this massive increase in validity and conservation, and we get a matching or a little bit better accuracy in terms of performance.”

He adds that “what’s unique about our approach is that while we are using these textbook understandings of mechanisms to generate this dataset, we’re anchoring the reactants and products of the overall reaction in experimentally validated data from the patent literature.” They are inferring the underlying mechanisms, he says, rather than just making them up. “We’re imputing them from experimental data, and that’s not something that has been done and shared at this kind of scale before.”

The next step, he says, is “we are quite interested in expanding the model’s understanding of metals and catalytic cycles. We’ve just scratched the surface in this first paper,” and most of the reactions included so far don’t include metals or catalysts, “so that’s a direction we’re quite interested in.”

In the long term, he says, “a lot of the excitement is in using this kind of system to help discover new complex reactions and help elucidate new mechanisms. I think that the long-term potential impact is big, but this is of course just a first step.”

The work was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis consortium and the National Science Foundation.

3 Questions: The pros and cons of synthetic data in AI

Wed, 09/03/2025 - 12:00am

Synthetic data are artificially generated by algorithms to mimic the statistical properties of actual data, without containing any information from real-world sources. While concrete numbers are hard to pin down, some estimates suggest that more than 60 percent of data used for AI applications in 2024 was synthetic, and this figure is expected to grow across industries.

Because synthetic data don’t contain real-world information, they hold the promise of safeguarding privacy while reducing the cost and increasing the speed at which new AI models are developed. But using synthetic data requires careful evaluation, planning, and checks and balances to prevent loss of performance when AI models are deployed.       

To unpack some pros and cons of using synthetic data, MIT News spoke with Kalyan Veeramachaneni, a principal research scientist in the Laboratory for Information and Decision Systems and a co-founder of DataCebo, whose open-core platform, the Synthetic Data Vault, helps users generate and test synthetic data.

Q: How are synthetic data created?

A: Synthetic data are algorithmically generated but do not come from a real situation. Their value lies in their statistical similarity to real data. If we’re talking about language, for instance, synthetic data look very much as if a human had written those sentences. While researchers have created synthetic data for a long time, what has changed in the past few years is our ability to build generative models out of data and use them to create realistic synthetic data. We can take a little bit of real data and build a generative model from that, which we can use to create as much synthetic data as we want. Plus, the model creates synthetic data in a way that captures all the underlying rules and infinite patterns that exist in the real data.

There are essentially four different data modalities: language, video or images, audio, and tabular data. All four of them have slightly different ways of building the generative models to create synthetic data. An LLM, for instance, is nothing but a generative model from which you are sampling synthetic data when you ask it a question.      

A lot of language and image data are publicly available on the internet. But tabular data, which is the data collected when we interact with physical and social systems, is often locked up behind enterprise firewalls. Much of it is sensitive or private, such as customer transactions stored by a bank. For this type of data, platforms like the Synthetic Data Vault provide software that can be used to build generative models. Those models then create synthetic data that preserve customer privacy and can be shared more widely.      
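
As a bare-bones illustration of the general idea, and not the Synthetic Data Vault’s API, the sketch below fits a deliberately simple generative model, a multivariate Gaussian over two numeric columns, to a small stand-in for a sensitive table and then samples as many synthetic rows as desired; real synthesizers handle mixed data types, constraints, and privacy far more carefully.

```python
# Fit a simple generative model to a "real" numeric table, then sample synthetic rows.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for sensitive real data: columns might be account balance and monthly spend.
real = rng.multivariate_normal(mean=[5000, 1200],
                               cov=[[250_000, 60_000],
                                    [60_000, 90_000]], size=500)

# "Train" the generative model: estimate the mean and covariance from the real table.
mu, sigma = real.mean(axis=0), np.cov(real, rowvar=False)

# Sample synthetic rows: statistically similar, but no row comes from a real customer.
synthetic = rng.multivariate_normal(mu, sigma, size=10_000)

print("real means:     ", np.round(real.mean(axis=0)))
print("synthetic means:", np.round(synthetic.mean(axis=0)))
```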

One powerful thing about this generative modeling approach for synthesizing data is that enterprises can now build a customized, local model for their own data. Generative AI automates what used to be a manual process.

Q: What are some benefits of using synthetic data, and which use-cases and applications are they particularly well-suited for?

A: One fundamental application which has grown tremendously over the past decade is using synthetic data to test software applications. There is data-driven logic behind many software applications, so you need data to test that software and its functionality. In the past, people have resorted to manually generating data, but now we can use generative models to create as much data as we need.

Users can also create specific data for application testing. Say I work for an e-commerce company. I can generate synthetic data that mimics real customers who live in Ohio and made transactions pertaining to one particular product in February or March.

Because synthetic data aren’t drawn from real situations, they are also privacy-preserving. One of the biggest problems in software testing has been getting access to sensitive real data for testing software in non-production environments, due to privacy concerns. Another immediate benefit is in performance testing. You can create a billion transactions from a generative model and test how fast your system can process them.

Another application where synthetic data hold a lot of promise is in training machine-learning models. Sometimes, we want an AI model to help us predict an event that is less frequent. A bank may want to use an AI model to predict fraudulent transactions, but there may be too few real examples to train a model that can identify fraud accurately. Synthetic data provide data augmentation — additional data examples that are similar to the real data. These can significantly improve the accuracy of AI models.

Also, sometimes users don’t have time or the financial resources to collect all the data. For instance, collecting data about customer intent would require conducting many surveys. If you end up with limited data and then try to train a model, it won’t perform well. You can augment by adding synthetic data to train those models better.
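
A hedged sketch of that augmentation idea, again with invented numbers and a deliberately simple generative model: fit the model to the handful of rare-class examples and add sampled rows so the training set a classifier sees is less lopsided.

```python
# Augment a rare class (e.g., fraud) with synthetic rows before training a classifier.
import numpy as np

rng = np.random.default_rng(0)
legit = rng.normal(loc=[50, 1.0], scale=[20, 0.3], size=(9_950, 2))   # amount, risk score
fraud = rng.normal(loc=[400, 2.5], scale=[150, 0.5], size=(50, 2))    # rare class

# Fit a simple Gaussian model to the few fraud rows and sample additional examples.
mu, sigma = fraud.mean(axis=0), np.cov(fraud, rowvar=False)
synthetic_fraud = rng.multivariate_normal(mu, sigma, size=2_000)

X = np.vstack([legit, fraud, synthetic_fraud])
y = np.concatenate([np.zeros(len(legit)), np.ones(len(fraud) + len(synthetic_fraud))])
print(f"fraud share before augmentation: {len(fraud) / (len(legit) + len(fraud)):.1%}")
print(f"fraud share after augmentation:  {y.mean():.1%}")
```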

Q: What are some of the risks or potential pitfalls of using synthetic data, and are there steps users can take to prevent or mitigate those problems?

A: One of the biggest questions people often have in their mind is, if the data are synthetically created, why should I trust them? Determining whether you can trust the data often comes down to evaluating the overall system where you are using them.

There are a lot of aspects of synthetic data we have been able to evaluate for a long time. For instance, there are existing methods to measure how close synthetic data are to real data, and we can measure their quality and whether they preserve privacy. But there are other important considerations if you are using those synthetic data to train a machine-learning model for a new use case. How would you know the data are going to lead to models that still make valid conclusions?

New efficacy metrics are emerging, and the emphasis is now on efficacy for a particular task. You must really dig into your workflow to ensure the synthetic data you add to the system still allow you to draw valid conclusions. That is something that must be done carefully on an application-by-application basis.

Bias can also be an issue. Since synthetic data are created from a small amount of real data, the same bias that exists in the real data can carry over into the synthetic data. Just like with real data, you would need to purposefully make sure the bias is removed through different sampling techniques, which can create balanced datasets. It takes some careful planning, but you can calibrate the data generation to prevent the proliferation of bias.

To help with the evaluation process, our group created the Synthetic Data Metrics Library. We worried that people would use synthetic data in their environment and it would give different conclusions in the real world. We created a metrics and evaluation library to ensure checks and balances. The machine learning community has faced a lot of challenges in ensuring models can generalize to new situations. The use of synthetic data adds a whole new dimension to that problem.

I expect that the old systems of working with data, whether to build software applications, answer analytical questions, or train models, will dramatically change as we get more sophisticated at building these generative models. A lot of things we have never been able to do before will now be possible.

Soft materials hold onto “memories” of their past, for longer than previously thought

Wed, 09/03/2025 - 12:00am

If your hand lotion is a bit runnier than usual coming out of the bottle, it might have something to do with the goop’s “mechanical memory.”

Soft gels and lotions are made by mixing ingredients until they form a stable and uniform substance. But even after a gel has set, it can hold onto “memories,” or residual stress, from the mixing process. Over time, the material can give in to these embedded stresses and slide back into its former, premixed state. Mechanical memory is, in part, why hand lotion separates and gets runny over time. 

Now, an MIT engineer has devised a simple way to measure the degree of residual stress in soft materials after they have been mixed, and found that common products like hair gel and shaving cream have longer mechanical memories, holding onto residual stresses for longer periods of time than manufacturers might have assumed.

In a study appearing today in Physical Review Letters, Crystal Owens, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), presents a new protocol for measuring residual stress in soft, gel-like materials, using a standard benchtop rheometer.

Applying this protocol to everyday soft materials, Owens found that if a gel is made by mixing it in one direction, once it settles into a stable and uniform state, it effectively holds onto the memory of the direction in which it is mixed. Even after several days, the gel will hold some internal stress that, if released, will cause the gel to shift in the direction opposite to how it was initially mixed, reverting back to its earlier state.

“This is one reason different batches of cosmetics or food behave differently even if they underwent ‘identical’ manufacturing,” Owens says. “Understanding and measuring these hidden stresses during processing could help manufacturers design better products that last longer and perform more predictably.”

A soft glass

Hand lotion, hair gel, and shaving cream all fall under the category of “soft glassy materials” — materials that exhibit properties of both solids and liquids.

“Anything you can pour into your hand and it forms a soft mound is going to be considered a soft glass,” Owens explains. “In materials science, it’s considered a soft version of something that has the same amorphous structure as glass.”

In other words, a soft glassy material is a strange amalgam of a solid and a liquid. It can be poured out like a liquid, and it can hold its shape like a solid. Once they are made, these materials exist in a delicate balance between solid and liquid. And Owens wondered: For how long?

“What happens to these materials after very long times? Do they finally relax or do they never relax?” Owens says. “From a physics perspective, that’s a very interesting concept: What is the essential state of these materials?”

Twist and hold

In the manufacturing of soft glassy materials such as hair gel and shampoo, ingredients are first mixed into a uniform product. Quality control engineers then let a sample sit for about a minute — a period of time that they assume is enough to allow any residual stresses from the mixing process to dissipate. In that time, the material should settle into a steady, stable state, ready for use.

But Owens suspected that the materials may hold some degree of stress from the production process long after they’ve appeared to settle.

“Residual stress is a low level of stress that’s trapped inside a material after it’s come to a steady state,” Owens says. “This sort of stress has not been measured in these sorts of materials.”

To test her hypothesis, she carried out experiments with two common soft glassy materials: hair gel and shaving cream. She made measurements of each material in a rheometer — an instrument consisting of two rotating plates that can twist and press a material together at precisely controlled pressures and forces that relate directly to the material’s internal stresses and strains.

In her experiments, she placed each material in the rheometer and spun the instrument’s top plate around to mix the material. Then she let the material settle, and then settle some more — much longer than one minute. During this time, she observed the amount of force it took the rheometer to hold the material in place. She reasoned that the greater the rheometer’s force, the more it must be counteracting any stress within the material that would otherwise cause it to shift out of its current state.

Over multiple experiments using this new protocol, Owens found that different types of soft glassy materials held a significant amount of residual stress, long after most researchers would assume the stress had dissipated. What’s more, she found that the degree of stress that a material retained was a reflection of the direction in which it was initially mixed, and when it was mixed.

“The material can effectively ‘remember’ which direction it was mixed, and how long ago,” Owens says. “And it turns out they hold this memory of their past, a lot longer than we used to think.”

In addition to the protocol she has developed to measure residual stress, Owens has developed a model to estimate how a material will change over time, given the degree of residual stress that it holds. Using this model, she says scientists might design materials with “short-term memory,” or very little residual stress, such that they remain stable over longer periods.
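
The article does not give the form of that model, so the sketch below assumes a generic stretched-exponential decay, a common description of slow relaxation in soft glassy materials, purely to illustrate how residual stress could linger well past a one-minute rest; the parameters are invented.

```python
# Illustrative stretched-exponential relaxation of residual stress (assumed form).
import math

def residual_stress(t_seconds, sigma0=10.0, tau=3600.0, beta=0.5):
    # sigma(t) = sigma0 * exp(-(t / tau)^beta); sigma0, tau, beta are made-up parameters
    return sigma0 * math.exp(-((t_seconds / tau) ** beta))

for label, t in [("1 minute", 60), ("1 hour", 3600), ("1 day", 86400), ("1 week", 604800)]:
    print(f"{label:>8}: {residual_stress(t):5.2f} Pa remaining")
```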

One material where she sees room for such improvement is asphalt — a substance that is first mixed, then poured in molten form over a surface where it then cools and settles over time. She suspects that residual stresses from the mixing of asphalt may contribute to cracks forming in pavement over time. Reducing these stresses at the start of the process could lead to longer-lasting, more resilient roads.

“People are inventing new types of asphalt all the time to be more eco-friendly, and all of these will have different levels of residual stress that will need some control,” she says. “There’s plenty of room to explore.”

This research was supported, in part, by MIT’s Postdoctoral Fellowship for Engineering Excellence and an MIT Mathworks Fellowship.

3 Questions: On biology and medicine’s “data revolution”

Tue, 09/02/2025 - 5:45pm

Caroline Uhler is an Andrew (1956) and Erna Viterbi Professor of Engineering at MIT; a professor of electrical engineering and computer science in the Institute for Data, Systems, and Society (IDSS); and director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, where she is also a core institute member and a member of the scientific leadership team.

Uhler is interested in all the methods by which scientists can uncover causality in biological systems, ranging from causal discovery on observed variables to causal feature learning and representation learning. In this interview, she discusses machine learning in biology, areas that are ripe for problem-solving, and cutting-edge research coming out of the Schmidt Center.

Q: The Eric and Wendy Schmidt Center has four distinct areas of focus structured around four natural levels of biological organization: proteins, cells, tissues, and organisms. What, within the current landscape of machine learning, makes now the right time to work on these specific problem classes?

A: Biology and medicine are currently undergoing a “data revolution.” The availability of large-scale, diverse datasets — ranging from genomics and multi-omics to high-resolution imaging and electronic health records — makes this an opportune time. Inexpensive and accurate DNA sequencing is a reality, advanced molecular imaging has become routine, and single-cell genomics is allowing the profiling of millions of cells. These innovations — and the massive datasets they produce — have brought us to the threshold of a new era in biology, one where we will be able to move beyond characterizing the units of life (such as all proteins, genes, and cell types) to understanding the “programs of life,” such as the logic of gene circuits and cell-cell communication that underlies tissue patterning and the molecular mechanisms that underlie the genotype-phenotype map.

At the same time, in the past decade, machine learning has seen remarkable progress, with models like BERT, GPT-3, and ChatGPT demonstrating advanced capabilities in text understanding and generation, while vision transformers and multimodal models like CLIP have achieved human-level performance in image-related tasks. These breakthroughs provide powerful architectural blueprints and training strategies that can be adapted to biological data. For instance, transformers can model genomic sequences much as they model language, and vision models can analyze medical and microscopy images.
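
As a minimal illustration of the “genomic sequences as language” idea (not any particular published model), the sketch below tokenizes a DNA string into overlapping 3-mers and passes it through a small transformer encoder; the toy sequence and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Tokenize a DNA string into overlapping 3-mers and run the token embeddings
# through a small transformer encoder, treating the genome like a "language."
VOCAB = {kmer: i for i, kmer in enumerate(
    a + b + c for a in "ACGT" for b in "ACGT" for c in "ACGT")}

def tokenize(seq, k=3):
    return torch.tensor([VOCAB[seq[i:i + k]] for i in range(len(seq) - k + 1)])

embed = nn.Embedding(len(VOCAB), 32)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2)

tokens = tokenize("ACGTACGGTTCAGGA").unsqueeze(0)   # (batch=1, seq_len=13)
contextual = encoder(embed(tokens))                 # (1, 13, 32) contextual embeddings
print(contextual.shape)
```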

Importantly, biology is poised to be not just a beneficiary of machine learning, but also a significant source of inspiration for new ML research. Much like agriculture and breeding spurred modern statistics, biology has the potential to inspire new and perhaps even more profound avenues of ML research. Unlike fields such as recommender systems and internet advertising, where there are no natural laws to discover and predictive accuracy is the ultimate measure of value, in biology, phenomena are physically interpretable, and causal mechanisms are the ultimate goal. Additionally, biology boasts genetic and chemical tools that enable perturbational screens on an unparalleled scale compared to other fields. These combined features make biology uniquely suited to both benefit greatly from ML and serve as a profound wellspring of inspiration for it.

Q: Taking a somewhat different tack, what problems in biology are still really resistant to our current tool set? Are there areas, perhaps specific challenges in disease or in wellness, which you feel are ripe for problem-solving?

A: Machine learning has demonstrated remarkable success in predictive tasks across domains such as image classification, natural language processing, and clinical risk modeling. However, in the biological sciences, predictive accuracy is often insufficient. The fundamental questions in these fields are inherently causal: How does a perturbation to a specific gene or pathway affect downstream cellular processes? What is the mechanism by which an intervention leads to a phenotypic change? Traditional machine learning models, which are primarily optimized for capturing statistical associations in observational data, often fail to answer such interventional queries. There is therefore a strong need for biology and medicine to also inspire new foundational developments in machine learning.

The field is now equipped with high-throughput perturbation technologies — such as pooled CRISPR screens, single-cell transcriptomics, and spatial profiling — that generate rich datasets under systematic interventions. These data modalities naturally call for the development of models that go beyond pattern recognition to support causal inference, active experimental design, and representation learning in settings with complex, structured latent variables. From a mathematical perspective, this requires tackling core questions of identifiability, sample efficiency, and the integration of combinatorial, geometric, and probabilistic tools. I believe that addressing these challenges will not only unlock new insights into the mechanisms of cellular systems, but also push the theoretical boundaries of machine learning.

With respect to foundation models, a consensus in the field is that we are still far from creating a holistic foundation model for biology across scales, similar to what ChatGPT represents in the language domain — a sort of digital organism capable of simulating all biological phenomena. While new foundation models emerge almost weekly, these models have thus far been specialized for a specific scale and question, and focus on one or a few modalities.

Significant progress has been made in predicting protein structures from their sequences. This success has highlighted the importance of iterative machine learning challenges, such as CASP (the Critical Assessment of Structure Prediction), which have been instrumental in benchmarking state-of-the-art algorithms for protein structure prediction and driving their improvement.

The Schmidt Center is organizing challenges to increase awareness in the ML field and make progress in the development of methods to solve causal prediction problems that are so critical for the biomedical sciences. With the increasing availability of single-gene perturbation data at the single-cell level, I believe predicting the effect of single or combinatorial perturbations, and which perturbations could drive a desired phenotype, are solvable problems. With our Cell Perturbation Prediction Challenge (CPPC), we aim to provide the means to objectively test and benchmark algorithms for predicting the effect of new perturbations.
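
The CPPC’s actual benchmark design and metrics are not described here. Purely as an illustration of what objectively scoring perturbation-effect predictions might look like, the sketch below compares hypothetical predictions for held-out perturbations against observed expression changes using a simple per-perturbation correlation; the perturbation labels and numbers are invented.

```python
import numpy as np

def score_predictions(predicted, observed):
    """Toy benchmark metric: Pearson correlation between predicted and
    observed per-gene expression changes for each held-out perturbation,
    averaged over perturbations."""
    scores = []
    for pert in observed:                      # perturbation -> vector of expression changes
        r = np.corrcoef(predicted[pert], observed[pert])[0, 1]
        scores.append(r)
    return float(np.mean(scores))

# Two hypothetical held-out single-gene perturbations, five measured genes each.
observed  = {"pert_A": np.array([0.9, -0.2, 0.1, 0.4, -0.8]),
             "pert_B": np.array([0.1,  1.2, -0.3, 0.0,  0.5])}
predicted = {"pert_A": np.array([0.8, -0.1, 0.0, 0.5, -0.7]),
             "pert_B": np.array([0.2,  1.0, -0.2, 0.1,  0.4])}
print(score_predictions(predicted, observed))
```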

Another area where the field has made remarkable strides is disease diagnostics and patient triage. Machine learning algorithms can integrate different sources of patient information (data modalities), generate missing modalities, identify patterns that may be difficult for us to detect, and help stratify patients based on their disease risk. While we must remain cautious about potential biases in model predictions, the danger of models learning shortcuts instead of true correlations, and the risk of automation bias in clinical decision-making, I believe this is an area where machine learning is already having a significant impact.

Q: Let’s talk about some of the headlines coming out of the Schmidt Center recently. What current research do you think people should be particularly excited about, and why? 

A: In collaboration with Dr. Fei Chen at the Broad Institute, we have recently developed PUPS, a method for predicting the subcellular localization of unseen proteins. Many existing methods can only make predictions based on the specific protein and cell data on which they were trained. PUPS, however, combines a protein language model with an image in-painting model to utilize both protein sequences and cellular images. We demonstrate that the protein sequence input enables generalization to unseen proteins, and the cellular image input captures single-cell variability, enabling cell-type-specific predictions. The model learns how relevant each amino acid residue is for the predicted subcellular localization, and it can predict changes in localization due to mutations in the protein sequences. Since a protein’s function is closely tied to its subcellular localization, our predictions could provide insights into potential mechanisms of disease. In the future, we aim to extend this method to predict the localization of multiple proteins in a cell and possibly understand protein-protein interactions.
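
The published PUPS architecture is more involved than can be shown here; the following is only a schematic two-branch sketch of the general idea — a pooled sequence embedding standing in for a protein language model, fused with an image encoding to produce a per-pixel localization map. Every layer, size, and name is illustrative.

```python
import torch
import torch.nn as nn

class TwoBranchLocalizer(nn.Module):
    """Schematic two-branch model in the spirit of PUPS (not the actual
    architecture): a sequence branch stands in for a protein language model,
    an image branch encodes a cell image, and a decoder 'in-paints' a
    per-pixel localization map for the queried protein."""
    def __init__(self, vocab_size=21, d=64):          # 20 amino acids + 1 padding token
        super().__init__()
        self.seq_embed = nn.Embedding(vocab_size, d)
        self.img_encoder = nn.Conv2d(1, d, kernel_size=3, padding=1)
        self.decoder = nn.Conv2d(2 * d, 1, kernel_size=1)   # fuse branches and predict

    def forward(self, seq_tokens, cell_image):
        seq_feat = self.seq_embed(seq_tokens).mean(dim=1)    # (B, d) pooled sequence features
        img_feat = self.img_encoder(cell_image)              # (B, d, H, W)
        B, d, H, W = img_feat.shape
        fused = torch.cat([img_feat,
                           seq_feat[:, :, None, None].expand(B, d, H, W)], dim=1)
        return torch.sigmoid(self.decoder(fused))            # predicted localization map

model = TwoBranchLocalizer()
seq = torch.randint(0, 21, (1, 300))        # toy 300-residue protein
img = torch.rand(1, 1, 64, 64)              # toy single-channel cell image
print(model(seq, img).shape)                # (1, 1, 64, 64)
```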

Together with Professor G.V. Shivashankar, a long-time collaborator at ETH Zürich, we have previously shown that, when combined with machine learning algorithms, simple images of cells stained with fluorescent DNA-intercalating dyes to label the chromatin can yield a great deal of information about the state and fate of a cell in health and disease. Recently, we built on this observation and demonstrated the deep link between chromatin organization and gene regulation by developing Image2Reg, a method that predicts which gene was genetically or chemically perturbed — including genes unseen during training — from chromatin images. Image2Reg uses convolutional neural networks to learn an informative representation of the chromatin images of perturbed cells. It also employs a graph convolutional network to create a gene embedding that captures the regulatory effects of genes based on protein-protein interaction data, integrated with cell-type-specific transcriptomic data. Finally, it learns a map between the resulting physical and biochemical representations of cells, allowing us to predict the perturbed gene modules based on chromatin images.
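
Again as a schematic rather than the published implementation, the sketch below mirrors the three ingredients described above: a CNN embedding of chromatin images, a simple graph convolution over a toy protein-protein interaction network to embed genes, and a learned map that aligns the two spaces so an image can be matched to candidate perturbed genes. All data here is random and purely illustrative.

```python
import torch
import torch.nn as nn

# (1) CNN embeds chromatin images, (2) a graph convolution over a toy PPI
# network embeds genes, (3) a learned linear map aligns the two spaces.

class SimpleGCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, x, adj_norm):            # adj_norm: row-normalized adjacency
        return torch.relu(self.lin(adj_norm @ x))

n_genes, d = 100, 32
adj = (torch.rand(n_genes, n_genes) < 0.05).float()       # toy PPI graph
adj_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)

gene_features = torch.rand(n_genes, d)                     # e.g., transcriptomic features
gene_embed = SimpleGCNLayer(d, d)(gene_features, adj_norm)

cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, d))
image_embed = cnn(torch.rand(4, 1, 64, 64))                # 4 toy chromatin images

mapper = nn.Linear(d, d)                                   # learned image -> gene-space map
similarity = mapper(image_embed) @ gene_embed.T            # (4, n_genes) match scores
print(similarity.shape)
```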

We also recently finalized the development of MORPH, a method for predicting the outcomes of unseen combinatorial gene perturbations and identifying the types of interactions occurring between the perturbed genes. MORPH can guide the design of the most informative perturbations for lab-in-a-loop experiments. In addition, its attention-based framework provably enables the method to identify causal relations among the genes, providing insights into the underlying gene regulatory programs. Finally, thanks to its modular structure, we can apply MORPH to perturbation data measured in various modalities, including not only transcriptomics but also imaging. We are very excited about the potential of this method to enable efficient exploration of the perturbation space and advance our understanding of cellular programs by bridging causal theory and important applications, with implications for both basic research and therapeutic development.
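
MORPH’s actual architecture is not detailed here. As a loose sketch of an attention-based combinatorial-perturbation predictor, the snippet below embeds each perturbed gene, lets the perturbations interact through self-attention, and predicts a resulting expression shift; the dimensions, gene IDs, and pooling choice are all illustrative.

```python
import torch
import torch.nn as nn

class CombiPerturbPredictor(nn.Module):
    """Schematic attention-based predictor for combinatorial perturbations,
    loosely in the spirit of MORPH (not the published model): each perturbed
    gene is an embedding, self-attention lets the perturbations interact,
    and a head predicts the resulting expression change per measured gene."""
    def __init__(self, n_genes=2000, d=64, n_measured=2000):
        super().__init__()
        self.gene_embed = nn.Embedding(n_genes, d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.head = nn.Linear(d, n_measured)

    def forward(self, perturbed_gene_ids):                 # (B, k) ids of perturbed genes
        x = self.gene_embed(perturbed_gene_ids)            # (B, k, d)
        x, attn_weights = self.attn(x, x, x)               # interactions among perturbations
        return self.head(x.mean(dim=1)), attn_weights      # predicted expression shift

model = CombiPerturbPredictor()
pred, w = model(torch.tensor([[12, 857]]))                 # a toy two-gene perturbation
print(pred.shape, w.shape)                                 # (1, 2000), (1, 2, 2)
```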

New gift expands mental illness studies at Poitras Center for Psychiatric Disorders Research

Tue, 09/02/2025 - 5:20pm

One in every eight people — 970 million globally — lives with mental illness, according to the World Health Organization, with depression and anxiety being the most common mental health conditions worldwide. Existing therapies for complex psychiatric disorders like depression, anxiety, and schizophrenia have limitations, and federal funding to address these shortcomings is growing increasingly uncertain.

Patricia and James Poitras ’63 have committed $8 million to the Poitras Center for Psychiatric Disorders Research to launch pioneering research initiatives aimed at uncovering the brain basis of major mental illness and accelerating the development of novel treatments.

“Federal funding rarely supports the kind of bold, early-stage research that has the potential to transform our understanding of psychiatric illness. Pat and I want to help fill that gap — giving researchers the freedom to follow their most promising leads, even when the path forward isn’t guaranteed,” says James Poitras, who is chair of the McGovern Institute for Brain Research board.

Their latest gift builds upon their legacy of philanthropic support for psychiatric disorders research at MIT, which now exceeds $46 million.

“With deep gratitude for Jim and Pat’s visionary support, we are eager to launch a bold set of studies aimed at unraveling the neural and cognitive underpinnings of major mental illnesses,” says Professor Robert Desimone, director of the McGovern Institute, home to the Poitras Center. “Together, these projects represent a powerful step toward transforming how we understand and treat mental illness.”

A legacy of support

Soon after joining the McGovern Institute Leadership Board in 2006, the Poitrases made a $20 million commitment to establish the Poitras Center for Psychiatric Disorders Research at MIT. The center’s goal, to improve human health by addressing the root causes of complex psychiatric disorders, is deeply personal to them both.

“We had decided many years ago that our philanthropic efforts would be directed towards psychiatric research. We could not have imagined then that this perfect synergy between research at MIT’s McGovern Institute and our own philanthropic goals would develop,” recalls Patricia. 

The center supports research at the McGovern Institute and collaborative projects with institutions such as the Broad Institute of MIT and Harvard, McLean Hospital, Mass General Brigham, and other clinical research centers. Since its establishment in 2007, the center has enabled advances in psychiatric research including the development of a machine learning “risk calculator” for bipolar disorder, the use of brain imaging to predict treatment outcomes for anxiety, and studies demonstrating that mindfulness can improve mental health in adolescents.

For the past decade, the Poitrases have also fueled breakthroughs in the lab of McGovern investigator and MIT Professor Feng Zhang, backing the invention of powerful CRISPR systems and other molecular tools that are transforming biology and medicine. Their support has enabled the Zhang team to engineer new delivery vehicles for gene therapy, including vehicles capable of carrying genetic payloads that were once out of reach. The lab has also advanced innovative RNA-guided gene engineering tools such as NovaIscB, published in Nature Biotechnology in May 2025. These revolutionary genome editing and delivery technologies hold promise for the next generation of therapies needed for serious psychiatric illness.

In addition to fueling research in the center, the Poitras family has gifted two endowed professorships — the James and Patricia Poitras Professor of Neuroscience at MIT, currently held by Feng Zhang, and the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT, held by Guoping Feng — and an annual postdoctoral fellowship at the McGovern Institute.

New initiatives at the Poitras Center

The Poitras family’s latest commitment to the Poitras Center will launch an ambitious set of new projects that bring together neuroscientists, clinicians, and computational experts to probe the underpinnings of complex psychiatric disorders, including schizophrenia, anxiety, and depression. These efforts reflect the center’s core mission: to speed scientific discovery and therapeutic innovation in the field of psychiatric brain disorders research.

McGovern cognitive neuroscientists Evelina Fedorenko PhD ’07, an associate professor, and Nancy Kanwisher ’80, PhD ’86, the Walter A. Rosenblith Professor of Cognitive Neuroscience — in collaboration with psychiatrist Ann Shinn of McLean Hospital — will explore how altered inner speech and reasoning contribute to the symptoms of schizophrenia. They will collect functional MRI data from individuals diagnosed with schizophrenia and matched controls as they perform reasoning tasks. The goal is to identify the brain activity patterns that underlie impaired reasoning in schizophrenia, a core cognitive disruption in the disorder.

A complementary line of investigation will focus on the role of inner speech — the “voice in our head” that shapes thought and self-awareness. The team will conduct a large-scale online behavioral study of neurotypical individuals to analyze how inner speech characteristics correlate with schizophrenia-spectrum traits. This will be followed by neuroimaging work comparing brain architecture among individuals with strong or weak inner voices and people with schizophrenia, with the aim of discovering neural markers linked to self-talk and disrupted cognition.

A different project led by McGovern neuroscientist and MIT Associate Professor Mark Harnett and 2024–2026 Poitras Center Postdoctoral Fellow Cynthia Rais focuses on how ketamine — an increasingly used antidepressant — alters brain circuits to produce rapid and sustained improvements in mood. Despite its clinical success, ketamine’s mechanisms of action remain poorly understood. The Harnett lab is using sophisticated tools to track how ketamine affects synaptic communication and large-scale brain network dynamics, particularly in models of treatment-resistant depression. By mapping these changes at both the cellular and systems levels, the team hopes to reveal how ketamine lifts mood so quickly — and inform the development of safer, longer-lasting antidepressants.

Guoping Feng is leveraging a new animal model of depression to uncover the brain circuits that drive major depressive disorder. The model provides a powerful system for studying the intricacies of mood regulation, and Feng’s team is using state-of-the-art molecular tools to identify the specific genes and cell types involved in this circuit, with the goal of developing targeted treatments that can fine-tune these emotional pathways.

“This is one of the most promising models we have for understanding depression at a mechanistic level,” says Feng, who is also associate director of the McGovern Institute. “It gives us a clear target for future therapies.”

Another novel approach to treating mood disorders comes from the lab of James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT, who is exploring the brain’s visual-emotional interface as a therapeutic tool for anxiety. The amygdala, a key emotional center in the brain, is heavily influenced by visual input. DiCarlo’s lab is using advanced computational models to design visual scenes that may subtly shift emotional processing in the brain — essentially using sight to regulate mood. Unlike traditional therapies, this strategy could offer a noninvasive, drug-free option for individuals suffering from anxiety.

Together, these projects exemplify the kind of interdisciplinary, high-impact research that the Poitras Center was established to support.

“Mental illness affects not just individuals, but entire families who often struggle in silence and uncertainty,” adds Patricia Poitras. “Our hope is that Poitras Center scientists will continue to make important advancements and spark novel treatments for complex mental health disorders and, most of all, give families living with these conditions a renewed sense of hope for the future.”

New particle detector passes the “standard candle” test

Tue, 09/02/2025 - 1:00pm

A new and powerful particle detector just passed a critical test in its goal to decipher the ingredients of the early universe.

The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. From the aftermath, scientists hope to reconstruct the properties of quark-gluon plasma (QGP) — a white-hot soup of subatomic particles known as quarks and gluons that is thought to have sprung into existence in the few microseconds following the Big Bang. Just as quickly, the mysterious plasma disappeared, cooling and combining to form the protons and neutrons that make up today’s ordinary matter.

Now, the sPHENIX detector has made a key measurement that proves it has the precision to help piece together the primordial properties of quark-gluon plasma.

In a paper in the Journal of High Energy Physics, scientists including physicists at MIT report that sPHENIX precisely measured the number and energy of particles that streamed out from gold ions that collided at close to the speed of light.

Straight ahead

This test is considered in physics to be a “standard candle,” meaning the measurement is a well-established reference that can be used to gauge a detector’s precision.

In particular, sPHENIX successfully measured the number of charged particles that are produced when two gold ions collide, and determined how this number changes when the ions collide head-on, versus just glancing by. The detector’s measurements revealed that head-on collisions produced 10 times more charged particles, which were also 10 times more energetic, compared to less straight-on collisions.
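
To make the comparison concrete, here is a toy sketch (with invented numbers, not sPHENIX data) of the kind of analysis described: events are grouped by how head-on the collision was, and the average charged-particle count and total energy are compared between the central and peripheral classes.

```python
import numpy as np

# Toy illustration only: simulate events in two centrality classes and compare
# the mean number of charged particles and mean total energy in each class.
rng = np.random.default_rng(0)
events = [{"centrality": c,
           "n_charged": rng.poisson(2000 if c == "central" else 200),
           "energy_gev": rng.normal(5000 if c == "central" else 500, 50)}
          for c in ["central"] * 100 + ["peripheral"] * 100]

for label in ("central", "peripheral"):
    sel = [e for e in events if e["centrality"] == label]
    print(label,
          round(np.mean([e["n_charged"] for e in sel])),
          round(np.mean([e["energy_gev"] for e in sel])))
```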

“This indicates the detector works as it should,” says Gunther Roland, professor of physics at MIT, who is a member and former spokesperson for the sPHENIX Collaboration. “It’s as if you sent a new telescope up in space after you’ve spent 10 years building it, and it snaps the first picture. It’s not necessarily a picture of something completely new, but it proves that it’s now ready to start doing new science.”

“With this strong foundation, sPHENIX is well-positioned to advance the study of the quark-gluon plasma with greater precision and improved resolution,” adds Hao-Ren Jheng, a graduate student in physics at MIT and a lead co-author of the new paper. “Probing the evolution, structure, and properties of the QGP will help us reconstruct the conditions of the early universe.”

The paper’s co-authors are all members of the sPHENIX Collaboration, which comprises over 300 scientists from multiple institutions around the world, including Roland, Jheng, and physicists at MIT’s Bates Research and Engineering Center.

“Gone in an instant”

Particle colliders such as Brookhaven’s RHIC are designed to accelerate particles at “relativistic” speeds, meaning close to the speed of light. When these particles are flung around in opposite, circulating beams and brought back together, any smash-ups that occur can release an enormous amount of energy. In the right conditions, this energy can very briefly exist in the form of quark-gluon plasma — the same stuff that sprung out of the Big Bang.

Just as in the early universe, quark-gluon plasma doesn’t hang around for very long in particle colliders. If and when QGP is produced, it exists for only about 10 to the minus 22 seconds — roughly a tenth of a sextillionth of a second. In this moment, quark-gluon plasma is incredibly hot, up to several trillion degrees Celsius, and behaves as a “perfect fluid,” moving as one entity rather than as a collection of random particles. Almost immediately, this exotic behavior disappears, and the plasma cools and transitions into more ordinary particles such as protons and neutrons, which stream out from the main collision.

“You never see the QGP itself — you just see its ashes, so to speak, in the form of the particles that come from its decay,” Roland says. “With sPHENIX, we want to measure these particles to reconstruct the properties of the QGP, which is essentially gone in an instant.”

“One in a billion”

The sPHENIX detector is the next generation of Brookhaven’s original Pioneering High Energy Nuclear Interaction eXperiment, or PHENIX, which measured collisions of heavy ions generated by RHIC. In 2021, sPHENIX was installed in place of its predecessor, as a faster and more powerful version, designed to detect quark-gluon plasma’s more subtle and ephemeral signatures.

The detector itself is about the size of a two-story house and weighs around 1,000 tons. It sits at the intersection of RHIC’s two main collider beams, where relativistic particles, accelerated from opposite directions, meet and collide, producing particles that fly out into the detector. The sPHENIX detector is able to catch and measure 15,000 particle collisions per second, thanks to its novel, layered components, including the MVTX, a micro-vertex subdetector that was designed, built, and installed by scientists at MIT’s Bates Research and Engineering Center.

Together, the detector’s systems enable sPHENIX to act as a giant 3D camera that can track the number, energy, and paths of individual particles during an explosion of particles generated by a single collision.

“sPHENIX takes advantage of developments in detector technology since RHIC switched on 25 years ago to collect data at the fastest possible rate,” says MIT postdoc Cameron Dean, who was a main contributor to the new study’s analysis. “This allows us to probe incredibly rare processes for the first time.”

In the fall of 2024, scientists ran the detector through the “standard candle” test to gauge its speed and precision. Over three weeks, they gathered data from sPHENIX as the main collider accelerated and smashed together beams of gold ions traveling at nearly the speed of light. Their analysis of the data showed that sPHENIX accurately measured the number of charged particles produced in individual gold ion collisions, as well as the particles’ energies. What’s more, the detector was sensitive to a collision’s “head-on-ness,” and could observe that head-on collisions produced more particles with greater energy, compared to less direct collisions.

“This measurement provides clear evidence that the detector is functioning as intended,” Jheng says.

“The fun for sPHENIX is just beginning,” Dean adds. “We are currently back colliding particles and expect to do so for several more months. With all our data, we can look for the one-in-a-billion rare process that could give us insights on things like the density of QGP, the diffusion of particles through ultra-dense matter, and how much energy it takes to bind different particles together.”

This work was supported, in part, by the U.S. Department of Energy Office of Science, and the National Science Foundation.

Advancing career and academic ambitions with MITx MicroMasters Program in Finance

Fri, 08/29/2025 - 1:35pm

For a long time, Satik Movsesyan has envisioned working in finance and eventually pursuing a full-time master’s degree at the MIT Sloan School of Management. She says the MITx MicroMasters Program in Finance provides her with the ideal opportunity to directly enhance her career with courses developed and delivered by MIT Sloan faculty.

Movsesyan first began actively pursuing ways to connect with the MIT community as a first-year student in her undergraduate program at the American University of Armenia, where she majored in business with a concentration in accounting and finance. That’s when she discovered the MicroMasters Program in Finance. Led by MIT Open Learning and MIT Sloan, the program offers learners an opportunity to advance in the finance field through a rigorous, comprehensive online curriculum comprising foundational courses, mathematical methods, and advanced modeling. During her senior year, she started taking courses in the program, beginning with 15.516x (Financial Accounting).

“I saw completing the MicroMasters program as a way to accelerate my time at MIT offline, as well as to prepare me for the academic rigor,” says Movsesyan. “The program provides a way for me to streamline my studies, while also working toward transforming capital markets here in Armenia — in a way, also helping me to streamline my career.”

Movsesyan started as an intern at C-Quadrat Ampega Asset Management Armenia and was promoted to her current role of financial analyst. The firm is one of two pension asset managers in Armenia. She credits the MicroMasters program with helping her draw deeper inferences in her analytical work and empowering her to build more sophisticated dynamic models to support the efficient allocation of assets. Her learning has enabled her to build different valuation models for financial instruments, and she is currently developing a portfolio management tool for her company.

“Although the courses are grounded deeply in theory, they never lack a perfect applicability component, which makes them very useful,” says Movsesyan. “Having MIT’s MicroMasters on a CV adds credibility as a professional, and your input becomes more valued by the employer.”

Movsesyan says that the program has helped her to develop resilience, as well as critical and analytical thinking. Her long-term goal is to become a portfolio manager and ultimately establish an asset management company, targeted at offering an extensive range of funds based on diverse risk-return preferences of investors, while promoting transparent and sustainable investment practices. 

“The knowledge I’ve gained from the variety of courses is a perfect blend which supports me day-to-day in building solutions to existing problems in asset management,” says Movsesyan.

In addition to being a learner in the program, Movsesyan serves as a community teaching assistant (CTA). After taking 15.516x, she became a CTA for that course, working with learners around the world. She says that this role of helping and supporting others requires constantly immersing herself in the course content, which also results in challenging herself and mastering the material.

“I think my story with the MITx MicroMasters Program is proof that no matter where you are — even if you’re in a small, developing country with limited resources — if you truly want to do something, you can achieve what you want,” says Movsesyan. “It’s an example for students around the world who also have transformative ideas and determination to take action. They can be a part of the MIT community.”

Understanding shocks to welfare systems

Thu, 08/28/2025 - 4:00pm

In an unhappy coincidence, the Covid-19 pandemic and Angie Jo’s doctoral studies in political science both began in 2019. Paradoxically, this global catastrophe helped define her primary research thrust.

As countries reacted with unprecedented fiscal measures to protect their citizens from economic collapse, Jo MCP ’19 discerned striking patterns among these interventions: Nations typically seen as the least generous on social welfare were suddenly deploying the most dramatic emergency responses.

“I wanted to understand why countries like the U.S., which famously offer minimal state support, suddenly mobilize an enormous emergency response to a crisis — only to let it vanish after the crisis passes,” says Jo.

Driven by this interest, Jo launched into a comparative exploration of welfare states that forms the backbone of her doctoral research. Her work examines how different types of welfare regimes respond to collective crises, and whether these responses lead to lasting institutional reforms or merely temporary patches.

A mismatch in investments

Jo’s research focuses on a particular subset of advanced industrialized democracies — countries like the United States, United Kingdom, Canada, and Australia — that political economists classify as “liberal welfare regimes.” These nations stand in contrast to the “social democratic welfare regimes” exemplified by Scandinavian countries.

“In everyday times, citizens in countries like Denmark or Sweden are already well-protected by a deep and comprehensive welfare state,” Jo explains. “When something like Covid hits, these countries were largely able to use the social policy tools and administrative infrastructure they already had, such as subsidized childcare and short-time work schemes that prevent mass layoffs.”

Liberal welfare regimes, however, exhibit a different pattern. During normal periods, “government assistance is viewed by many as the last resort,” Jo observes. “It’s means-tested and minimal, and the responsibility to manage risk is put on the individual.”

Yet when Covid struck, these same governments “spent historically unprecedented amounts on emergency aid to citizens, including stimulus checks, expanded unemployment insurance, child tax credits, grants, and debt forbearance that might normally have faced backlash from many Americans as government ‘handouts.’”

This stark contrast — minimal investment in social safety nets during normal times followed by massive crisis spending — lies at the heart of Jo’s inquiry. “What struck me was the mismatch: The U.S. invests so little in social welfare at baseline, but when crisis hits, it can suddenly unleash massive aid — just not in ways that stick. So what happens when the next crisis comes?”

From architecture to political economy

Jo took a winding path to studying welfare states in crisis. Born in South Korea, she moved with her family to California at age 3 as her parents sought an American education for their children. After moving back to Korea for high school, she attended Harvard University, where she initially focused on art and architecture.

“I thought I’d be an artist,” Jo recalls, “but I always had many interests, and I was very aware of different countries and different political systems, because we were moving around a lot.”

While studying architecture at Harvard, Jo’s academic focus pivoted.

“I realized that most of the decisions around how things get built, whether it’s a building or a city or infrastructure, are made by the government or by powerful private actors,” she explains. “The architect is the artist’s hand that is commissioned to execute, but the decisions behind it, I realized, were what interested me more.”

After a year working in macroeconomics research at a hedge fund, Jo found herself drawn to questions in political economy. “While I didn’t find the zero-sum game of finance compelling, I really wanted to understand the interactions between markets and governments that lay behind the trades,” she says.

Jo decided to pursue a master’s degree in city planning at MIT, where she studied the political economy of master-planning new cities as a form of industrial policy in China and South Korea, before transitioning to the political science PhD program. Her research focus shifted dramatically when the Covid-19 pandemic struck.

“It was the first time I realized, wow, these wealthy Western democracies have serious problems, too,” Jo says. “They are not dealing well with this pandemic and the structural inequalities and the deep tensions that have always been part of some of these societies, but are being tested even further by the enormity of this shock.”

The costs of crisis response

One of Jo’s key insights challenges conventional wisdom about fiscal conservatism. The assumption that keeping government small saves money in the long run may be fundamentally flawed when considering crisis response.

“What I’m exploring in my research is the irony that the less you invest in a capable, effective and well-resourced government, the more that backfires when a crisis inevitably hits and you have to patch up the holes,” Jo argues. “You’re not saving money; you’re deferring the cost.”

This inefficiency becomes particularly apparent when examining how different countries deployed aid during Covid. Countries like Denmark, with robust data systems connecting health records, employment information, and family data, could target assistance with precision. The United States, by contrast, relied on blunter instruments.

“If your system isn’t built to deliver aid in normal times, it won’t suddenly work well under pressure,” Jo explains. “The U.S. had to invent entire programs from scratch overnight — and many were clumsy, inefficient, or regressive.”

There is also a political aspect to this constraint. “Not only do liberal welfare countries lack the infrastructure to address crises, they are often governed by powerful constituencies that do not want to build it — they deliberately choose to enact temporary benefits that are precisely designed to fade,” Jo argues. “This perpetuates a cycle where short-term compensations are employed from crisis to crisis, constraining the permanent expansion of the welfare state.”

Missed opportunities

Jo’s dissertation also examines whether crises provide opportunities for institutional reform. Her second paper focuses on the 2008 financial crisis in the United States, and the Hardest Hit Fund, a program that allocated federal money to state housing finance agencies to prevent foreclosures.

“I ask why, with hundreds of millions in federal aid and few strings attached, state agencies ultimately helped so few underwater homeowners shed unmanageable debt burdens,” Jo says. “The money and the mandate were there — the transformative capacity wasn’t.”

Some states used the funds to pursue ambitious policy interventions, such as restructuring mortgage debt to permanently reduce homeowners’ principal and interest burdens. However, most opted for temporary solutions like helping borrowers make up missed payments, while preserving their original contract. Partisan politics, financial interests, and status quo bias are most likely responsible for these varying state strategies, Jo believes.

She sees this as “another case of the choice that governments have between throwing money at the problem as a temporary Band-Aid solution, or using a crisis as an opportunity to pursue more ambitious, deeper reforms that help people more sustainably in the long run.”

The significance of crisis response research

For Jo, understanding how welfare states respond to crises is not just an academic exercise, but a matter of profound human consequence.

“When there’s an event like the financial crisis or Covid, the scale of suffering and the welfare gap that emerges is devastating,” Jo emphasizes. “I believe political science should be actively studying these rare episodes, rather than disregarding them as once-in-a-century anomalies.”

Her research carries implications for how we think about welfare state design and crisis preparedness. As Jo notes, the most vulnerable members of society — “people who are unbanked, undocumented, people who have low or no tax liability because they don’t make enough, immigrants or those who don’t speak English or don’t have access to the internet or are unhoused” — are often invisible to relief systems.

As Jo prepares for her career in academia, she is motivated to apply her political science training to address such failures. “We’re going to have more crises, whether pandemics, AI, climate disasters, or financial shocks,” Jo warns. “Finding better ways to cover those people is essential, and is not something that our current welfare state — or our politics — are designed to handle.”
