MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

A new patch could help to heal the heart


MIT engineers have developed a flexible drug-delivery patch that can be placed on the heart after a heart attack to help promote healing and regeneration of cardiac tissue.

The new patch is designed to carry several different drugs that can be released at different times, on a pre-programmed schedule. In a study of rats, the researchers showed that this treatment reduced the amount of damaged heart tissue by 50 percent and significantly improved cardiac function.

If approved for use in humans, this type of patch could help heart attack victims recover more of their cardiac function than is now possible, the researchers say.

“When someone suffers a major heart attack, the damaged cardiac tissue doesn’t regenerate effectively, leading to a permanent loss of heart function. The tissue that was damaged doesn’t recover,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “Our goal is to restore that function and help people regain a stronger, more resilient heart after a myocardial infarction.”

Jaklenec and Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, are the senior authors of the new study, which appears today in Cell Biomaterials. Former MIT postdoc Erika Wang is the lead author of the paper.

Programmed drug delivery

After a heart attack, many patients end up having bypass surgery, which improves blood flow to the heart but doesn’t repair the cardiac tissue that was damaged. In the new study, the MIT team wanted to create a patch that could be applied to the heart at the same time that the surgery is performed.

This patch, they hoped, could deliver drugs over an extended time period to promote tissue healing. Many diseases, including heart conditions, require phase-specific treatment, but most systems release drugs all at once. Timed delivery better synchronizes therapy with recovery.

“We wanted to see if it’s possible to deliver a precisely orchestrated therapeutic intervention to help heal the heart, right at the site of damage, while the surgeon is already performing open-heart surgery,” Jaklenec says.

To achieve this, the researchers set out to adapt drug-delivery microparticles they had previously developed, which consist of capsules similar to tiny coffee cups with lids. These capsules are made from a polymer called PLGA and can be sealed with a drug inside.

By changing the molecular weight of the polymers used to form the lids, the researchers can control how quickly they degrade, which enables them to program the particles to release their contents at specific times. For this application, the researchers designed particles that break down during days 1-3, days 7-9, and days 12-14 after implantation.

This allowed them to devise a regimen of three drugs that promote heart healing in different ways. The first set of particles release neuregulin-1, a growth factor that helps to prevent cell death. At the next time point, particles release VEGF, a growth factor that promotes formation of blood vessels surrounding the heart. The last batch of particles releases a small molecule drug called GW788388, which inhibits the formation of scar tissue that can occur following a heart attack.
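
To make the staggered timing concrete, the regimen can be written down as a simple data structure. The short Python sketch below is purely illustrative and is not the team’s code; the drug names and release windows come from the study described above, while the data structure and function names are invented for the example.

```python
# Illustrative only: the three-phase release regimen described in the article,
# written as data so the pre-programmed timing is explicit.
release_schedule = [
    {"drug": "neuregulin-1", "role": "helps prevent cell death",        "window_days": (1, 3)},
    {"drug": "VEGF",         "role": "promotes blood vessel formation", "window_days": (7, 9)},
    {"drug": "GW788388",     "role": "inhibits scar-tissue formation",  "window_days": (12, 14)},
]

def drugs_releasing_on(day: int) -> list[str]:
    """Return which particle populations are breaking down on a given day after implantation."""
    return [entry["drug"] for entry in release_schedule
            if entry["window_days"][0] <= day <= entry["window_days"][1]]

for day in (2, 8, 13):
    print(f"Day {day}: {drugs_releasing_on(day)}")
```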

“When tissue regenerates, it follows a carefully timed series of steps,” Jaklenec says. “Dr. Wang created a system that delivers key components at just the right time, in the sequence that the body naturally uses to heal.”

The researchers embedded rows of these particles into thin sheets of a tough but flexible hydrogel, similar to a contact lens. This hydrogel is made from alginate and PEGDA, two biocompatible polymers that eventually break down in the body. For this study, the researchers created compact, miniature patches only a few millimeters across.

“We encapsulate arrays of these particles in a hydrogel patch, and then we can surgically implant this patch into the heart. In this way, we’re really programming the treatment into this material,” Wang says.

Better heart function

Once they created these patches, the researchers tested them on spheres of heart tissue that included cardiomyocytes generated from induced pluripotent stem cells. These spheres also included endothelial cells and human ventricular cardiac fibroblasts, which are also important components of the heart.

The researchers exposed those spheres to low-oxygen conditions, mimicking the effects of a heart attack, then placed the patches over them. They found that the patches promoted blood vessel growth, helped more cells to survive, and reduced the amount of fibrosis that developed.

In tests in a rat model of heart attack, the researchers also saw significant improvements following treatment with the patch. Compared to no treatment or IV injection of the same drugs, animals treated with the patch showed 33 percent higher survival rates, a 50 percent reduction in the amount of damaged tissue, and significantly increased cardiac output.

The researchers showed that the patches would eventually dissolve over time, becoming a very thin layer over the course of a year without disrupting the heart’s mechanical function.

“This is an important way to combine drug delivery and biomaterials to potentially create new treatments for patients,” Langer says.

Of the drugs tested in this study, neuregulin-1 and VEGF have been tested in clinical trials to treat heart conditions, but GW788388 has only been explored in animal models. The researchers now hope to test their patches in additional animal models in hopes of running a clinical trial in the future.

The current version of the patch needs to be implanted surgically, but the researchers are exploring the possibility of incorporating these microparticles into stents that could be inserted into arteries to deliver drugs on a programmed schedule.

Other authors of the paper include Elizabeth Calle, Binbin Ying, Behnaz Eshaghi, Linzixuan Zhang, Xin Yang, Stacey Qiaohui Lin, Jooli Han, Alanna Backx, Yuting Huang, Sevinj Mursalova, Chuhan Joyce Qi, and Yi Liu.

The researchers were supported by the Natural Sciences and Engineering Research Council of Canada and the U.S. National Heart, Lung, and Blood Institute.

Lightning-prediction tool could help protect the planes of the future


More than 70 aircraft are struck by lightning every day. If you happen to be flying when a strike occurs, chances are you won’t feel a thing, thanks to lightning protection measures that are embedded in key zones throughout the aircraft.

Lightning protection systems work well, largely because they are designed for planes with a “tube-and-wing” structure, a simple geometry common to most aircraft today. But future airplanes may not look and fly the same way. The aviation industry is exploring new designs, including blended-wing bodies and truss-braced wings, partly to reduce fuel and weight costs. But researchers don’t yet know how these unconventional designs might respond to lightning strikes.

MIT aerospace engineers are hoping to change that with a new physics-based approach that predicts how lightning would sweep across a plane with any design. The tool then generates a zoning map highlighting sections of an aircraft that would require various degrees of lightning protection, given how they are likely to experience a strike.

“People are starting to conceive aircraft that look very different from what we’re used to, and we can’t apply exactly what we know from historical data to these new configurations because they’re just too different,” says Carmen Guerra-Garcia, associate professor of aeronautics and astronautics (AeroAstro) at MIT. “Physics-based methods are universal. They’re agnostic to the type of geometry or vehicle. This is the path forward to be able to do this lightning zoning and protect future aircraft.”

She and her colleagues report their results in a study appearing this week in IEEE Access. The study’s first author is AeroAstro graduate student Nathanael Jenkins. Other co-authors include Louisa Michael and Benjamin Westin of Boeing Research and Technology.

First strike

When lightning strikes, it first attaches to a part of a plane — typically a sharp edge or extremity — and hangs on for up to a second. During this brief flash, the plane continues speeding through the air, causing the lightning current to “sweep” across parts of its surface. As it sweeps, the current can change in intensity and re-attach at points where the intense flow could damage vulnerable sections of the aircraft.

In previous work, Guerra-Garcia’s group developed a model to predict the parts of a plane where lightning is most likely to first connect. That work, led by graduate student Sam Austin, established a starting point for the team’s new work, which aims to predict how and where the lightning will then sweep over the plane’s surface. The team next converted their lightning sweep predictions into zoning maps to identify vulnerable regions requiring certain levels of protection.

A typical tube-and-wing plane is divided into three main zones, as classified by the aviation industry. Each zone has a clear description of the level of current it must withstand in order to be certified for flight. Parts of a plane that are more likely to be hit by lightning are generally classified as zone 1 and require more protection, which can include embedded metal foil in the skin of the airplane that conducts away a lightning current.

To date, an airplane’s lightning zones have been determined over many years of flight inspections after lightning strikes and fine-tuning of protection measures. Guerra-Garcia and her colleagues looked to develop a zoning approach based on physics, rather than historical flight data. Such a physics-based mapping could be applied to any shape of aircraft, such as unconventional and largely untested designs, to identify regions that really require reinforcement.

“Protecting aircraft from lightning is heavy,” Jenkins says. “Embedding copper mesh or foil throughout an aircraft is an added weight penalty. And if we had the greatest level of protection for every part of the plane’s surface, the plane would weigh far too much. So zoning is about trying to optimize the weight of the system while also having it be as safe as possible.”

In the zone

For their new approach, the team developed a model to predict the pattern of lightning sweep and the corresponding lightning protection zones, for a given airplane geometry. Starting with a specific airplane shape — in their case, a typical tube-and-wing structure — the researchers simulated the fluid dynamics, or how air would flow around a plane, given a certain speed, altitude, and pitch angle. They also incorporated their previous model that predicts the places where lightning is more likely to initially attach.

For each initial attachment point, the team simulated tens of thousands of potential lightning arcs, or angles from which the current strikes the plane. They then ran the model forward to predict how the tens of thousands of potential strikes would follow the air flow across the plane’s surface. These runs produced a statistical representation of where lightning, striking a specific point on a plane, is likely to flow and potentially cause damage. The team converted this statistical representation into a map of zones of varying vulnerability.
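
The short Python sketch below illustrates, in highly simplified form, the kind of Monte Carlo bookkeeping this description implies; it is not the team’s solver. The surface panels, attachment probabilities, dwell times, and zoning thresholds are all invented placeholders standing in for the fluid-dynamics simulation and the group’s earlier attachment model.

```python
# Toy Monte Carlo sketch (not the researchers' code): sample an initial attachment
# point, sweep the arc aft over a handful of surface panels, accumulate dwell time,
# and convert the statistics into coarse protection zones.
import random
from collections import defaultdict

NUM_ARCS = 10_000                 # the team simulates tens of thousands of arcs
SURFACE_PANELS = ["nose", "wing_tip", "wing_root", "fuselage_mid", "tail"]

# Hypothetical prior: probability that a strike first attaches to each panel.
# In the real tool this comes from the group's earlier attachment model.
attachment_prior = {"nose": 0.35, "wing_tip": 0.30, "tail": 0.25,
                    "wing_root": 0.05, "fuselage_mid": 0.05}

def sweep_path(first_panel: str) -> list[tuple[str, float]]:
    """Toy stand-in for the flow simulation: the arc lingers at its attachment
    point, then re-attaches at panels farther aft, each with a random dwell time."""
    downstream = {"nose": ["fuselage_mid", "tail"], "wing_tip": ["wing_root", "tail"],
                  "wing_root": ["tail"], "fuselage_mid": ["tail"], "tail": []}
    visited = [first_panel] + downstream[first_panel]
    return [(panel, random.uniform(0.01, 0.2)) for panel in visited]

dwell = defaultdict(float)
for _ in range(NUM_ARCS):
    start = random.choices(SURFACE_PANELS,
                           weights=[attachment_prior[p] for p in SURFACE_PANELS])[0]
    for panel, seconds in sweep_path(start):
        dwell[panel] += seconds

# Convert accumulated dwell time into coarse protection zones (thresholds invented).
total = sum(dwell.values())
for panel, t in sorted(dwell.items(), key=lambda kv: -kv[1]):
    share = t / total
    zone = 1 if share > 0.30 else 2 if share > 0.10 else 3
    print(f"{panel:>13}: {share:5.1%} of total dwell time -> zone {zone}")
```

Accumulating dwell time over many simulated arcs is what turns individual strike scenarios into the statistical zoning map described above.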

They validated the method on a conventional tube-and-wing structure, showing that the zoning maps generated by the physics-based approach were consistent with what the aviation industry has determined over decades of fine-tuning.

“We now have a physics-based tool that provides some metrics like the probability of lightning attachment and dwell time, which is how long an arc will linger at a specific point,” Guerra-Garcia explains. “We convert those physics metrics into zoning maps to show, if I’m in this red region, the lightning arc will stay for a long time, so that region needs to be heavily protected.”

The team is starting to apply the approach to new geometries, such as blended-wing designs and truss-braced structures. The researchers envision that the tool can help designers incorporate safe and efficient lightning-protection systems early on in the design process.

“Lightning is incredible and terrifying at the same time, and I have full confidence in flying on planes at the moment,” Jenkins says. “I want to have that same confidence in 20 years’ time. So, we need a new way to zone aircraft.”

“With physics-based methods like the ones developed with Professor Guerra-Garcia’s group, we have the opportunity to shape industry standards and, as an industry, rely on the underlying physics to develop guidelines for aircraft certification through simulation,” says co-author Louisa Michael of Boeing Technology Innovation. “Currently, we are engaging with industrial committees to propose these methods to be included in Aerospace Recommended Practices.”

“Zoning unconventional aircraft is not an easy task,” adds co-author Ben Westin of Boeing Technology Innovation. “But these methods will allow us to confidently identify which threat levels each part of the aircraft needs to be protected against and certified for, and they give our design engineers a platform to do their best work to optimize aircraft design.”

Beyond airplanes, Guerra-Garcia is looking at ways to adapt the lightning protection model to other technologies, including wind turbines.

“About 60 percent of blade losses are due to lightning and will become worse as we move offshore because wind turbines will be even bigger and more susceptible to upward lightning,” she says. “They have many of the same challenges of a flowing gas environment. It’s more complex, and we will apply this same sort of methodology to this space.”

This research was funded, in part, by the Boeing Company.

Startup provides a nontechnical gateway to coding on quantum computers


Quantum computers have the potential to model new molecules and weather patterns better than any computer today. They may also one day accelerate artificial intelligence algorithms at a much lower energy footprint. But anyone interested in using quantum computers faces a steep learning curve that starts with getting access to quantum devices and then figuring out one of the many quantum software programs on the market.

Now qBraid, founded by Kanav Setia and Jason Necaise ’20, is providing a gateway to quantum computing with a platform that gives users access to the leading quantum devices and software. Users can log on to qBraid’s cloud-based interface and connect with quantum devices and other computing resources from leading companies like Nvidia, Microsoft, and IBM. In a few clicks, they can start coding or deploy cutting-edge software that works across devices.

“The mission is to take you from not knowing anything about quantum computing to running your first program on these amazing machines in less than 10 minutes,” Setia says. “We’re a one-stop platform that gives access to everything the quantum ecosystem has to offer. Our goal is to enable anyone — whether they’re enterprise customers, academics, or individual users — to build and ultimately deploy applications.”

Since its founding in June of 2020, qBraid has helped more than 20,000 people in more than 120 countries deploy code on quantum devices. That traction is ultimately helping to drive innovation in a nascent industry that’s expected to play a key role in our future.

“This lowers the barrier to entry for a lot of newcomers,” Setia says. “They can be up and running in a few minutes instead of a few weeks. That’s why we’ve gotten so much adoption around the world. We’re one of the most popular platforms for accessing quantum software and hardware.”

A quantum “software sandbox”

Setia met Necaise while the two interned at IBM. At the time, Necaise was an undergraduate at MIT majoring in physics, while Setia was at Dartmouth College. The two enjoyed working together, and Necaise said if Setia ever started a company, he’d be interested in joining.

A few months later, Setia decided to take him up on the offer. At Dartmouth, Setia had taken one of the first applied quantum computing classes, but students spent weeks struggling to install all the necessary software programs before they could even start coding.

“We hadn’t even gotten close to developing any useful algorithms,” Setia says. “The idea for qBraid was, ‘Why don’t we build a software sandbox in the cloud and give people an easy programming setup out of the box?’ Connection with the hardware would already be done.”

The founders received early support from the MIT Sandbox Innovation Fund and took part in the delta v summer startup accelerator run by the Martin Trust Center for MIT Entrepreneurship.

“Both programs provided us with very strong mentorship,” Setia says. “They give you frameworks on what a startup should look like, and they bring in some of the smartest people in the world to mentor you — people you’d never have access to otherwise.”

Necaise left the company in 2021. Setia, meanwhile, continued to find problems with quantum software outside of the classroom.

“This is a massive bottleneck,” Setia says. “I’d worked on several quantum software programs that pushed out updates or changes, and suddenly all hell broke loose on my codebase. I’d spend two to four weeks jostling with these updates that had almost nothing to do with the quantum algorithms I was working on.”

QBraid started as a platform with pre-installed software that let developers start writing code immediately. The company also added support for version-controlled quantum software so developers could build applications on top without worrying about changes. Over time, qBraid added connections to quantum computers and tools that let quantum programs run across different devices.

“The pitch was you don’t need to manage a bunch of software or a whole bunch of cloud accounts,” Setia says. “We’re a single platform: the quantum cloud.”

QBraid also launched qBook, a learning platform that offers interactive courses in quantum computing.

“If you see a piece of code you like, you just click play and the code runs,” Setia says. “You can run a whole bunch of code, modify it on the fly, and you can understand how it works. It runs on laptops, iPads, and phones. A significant portion of our users are from developing countries, and they’re developing applications from their phones.”

Democratizing quantum computing

Today qBraid’s 20,000 users come from over 400 universities and 100 companies around the world. As its user base has grown, the company has moved from integrating outside quantum computers into its platform to building its own quantum operating system, qBraid-OS, which four leading quantum companies currently use.

“We are productizing these quantum computers,” Setia explains. “Many quantum companies are realizing they want to focus their energy completely on the hardware, with us productizing their infrastructure. We’re like the operating system for quantum computers.”

People are using qBraid to build quantum applications in AI and machine learning, to discover new molecules or develop new drugs, and to develop applications in finance and cybersecurity. With every new use case, Setia says qBraid is democratizing quantum computing to create the quantum workforce that will continue to advance the field.

“[In 2018], an article in The New York Times said there were possibly less than 1,000 people in the world that could be called experts in quantum programming,” Setia says. “A lot of people want to access these cutting-edge machines, but they don’t have the right software backgrounds. They are just getting started and want to play with algorithms. QBraid gives those people an easy programming setup out of the box.”

Helping K-12 schools navigate the complex world of AI

Mon, 11/03/2025 - 4:45pm

With the rapid advancement of generative artificial intelligence, teachers and school leaders are looking for answers to complicated questions about successfully integrating technology into lessons, while also ensuring students actually learn what they’re trying to teach. 

Justin Reich, an associate professor in MIT’s Comparative Media Studies/Writing program, hopes a new guidebook published by the MIT Teaching Systems Lab can support K-12 educators as they determine what AI policies or guidelines to craft.

“Throughout my career, I’ve tried to be a person who researches education and technology and translates findings for people who work in the field,” says Reich. “When tricky things come along I try to jump in and be helpful.” 

“A Guide to AI in Schools: Perspectives for the Perplexed,” published this fall, was developed with the support of an expert advisory panel and other researchers. The project includes input from more than 100 students and teachers from around the United States, sharing their experiences teaching and learning with new generative AI tools.

“We’re trying to advocate for an ethos of humility as we examine AI in schools,” Reich says. “We’re sharing some examples from educators about how they’re using AI in interesting ways, some of which might prove sturdy and some of which might prove faulty. And we won’t know which is which for a long time.”

Finding answers to AI and education questions

The guidebook attempts to help K-12 educators, students, school leaders, policymakers, and others collect and share information, experiences, and resources. AI’s arrival has left schools scrambling to respond to multiple challenges, like how to ensure academic integrity and maintain data privacy. 

Reich cautions that the guidebook is not meant to be prescriptive or definitive, but something that will help spark thought and discussion. 

“Writing a guidebook on generative AI in schools in 2025 is a little bit like writing a guidebook of aviation in 1905,” the guidebook’s authors note. “No one in 2025 can say how best to manage AI in schools.”

Schools are also struggling to understand what student learning loss looks like in the age of AI. “How does bypassing productive thinking with AI look in practice?” Reich asks. “If we think teachers provide content and context to support learning and students no longer perform the exercises housing the content and providing the context, that’s a serious problem.”

Reich invites people directly impacted by AI to help develop solutions to the challenges its ubiquity presents. “It’s like observing a conversation in the teacher’s lounge and inviting students, parents, and other people to participate about how teachers think about AI,” he says, “what they are seeing in their classrooms, and what they’ve tried and how it went.”

The guidebook, in Reich’s view, is ultimately a collection of hypotheses expressed in interviews with teachers: well-informed, initial guesses about the paths that schools could follow in the years ahead. 

Producing educator resources in a podcast

In addition to the guidebook, the Teaching Systems Lab also recently produced “The Homework Machine,” a seven-part series from the Teachlab podcast that explores how AI is reshaping K-12 education. 

Reich produced the podcast in collaboration with journalist Jesse Dukes. Each episode tackles a specific area, asking important questions about challenges related to issues like AI adoption, poetry as a tool for student engagement, post-Covid learning loss, pedagogy, and book bans. The podcast allows Reich to share timely information about education-related updates and collaborate with people interested in helping further the work.

“The academic publishing cycle doesn’t lend itself to helping people with near-term challenges like those AI presents,” Reich says. “Peer review takes a long time, and the research produced isn’t always in a form that’s helpful to educators.” Schools and districts are grappling with AI in real time, bypassing time-tested quality control measures. 

The podcast can help reduce the time it takes to share, test, and evaluate AI-related solutions to new challenges, which could prove useful in creating training and resources.  

“We hope the podcast will spark thought and discussion, allowing people to draw from others’ experiences,” Reich says.

The podcast was also produced into an hour-long radio special, which was broadcast by public radio stations across the country.

“We’re fumbling around in the dark”

Reich is direct in his assessment of where we are with understanding AI and its impacts on education. “We’re fumbling around in the dark,” he says, recalling past attempts to quickly integrate new tech into classrooms. These failures, Reich suggests, highlight the importance of patience and humility as AI research continues. “AI bypassed normal procurement processes in education; it just showed up on kids’ phones,” he notes. 

“We’ve been really wrong about tech in the past,” Reich says. Districts have spent heavily on tools like smartboards, for example, yet research has found no evidence that they improve learning or outcomes. In a new article for The Conversation, he argues that early teacher guidance in areas like web literacy produced bad advice that still persists in our educational system. “We taught students and educators not to trust Wikipedia,” he recalls, “and to search for website credibility markers, both of which turned out to be incorrect.” Reich wants to avoid a similar rush to judgment on AI, recommending that we avoid guessing at AI-enabled instructional strategies.

These challenges, coupled with potential and observed student impacts, significantly raise the stakes for schools and students’ families in the AI race. “Education technology always provokes teacher anxiety,” Reich notes, “but the breadth of AI-related concerns is much greater than in other tech-related areas.” 

The dawn of the AI age is different from how we’ve previously received tech into our classrooms, Reich says. AI wasn’t adopted like other tech. It simply arrived. It’s now upending educational models and, in some cases, complicating efforts to improve student outcomes.

Reich is quick to point out that there are not yet clear, definitive answers on effective AI implementation and use in classrooms. Each of the resources he helped develop invites engagement from its target audience, aggregating valuable responses that others might find useful.

“We can develop long-term solutions to schools’ AI challenges, but it will take time and work,” he says. “AI isn’t like learning to tie knots; we don’t know what AI is, or is going to be, yet.” 

Reich also recommends learning more about AI implementation from a variety of sources. “Decentralized pockets of learning can help us test ideas, search for themes, and collect evidence on what works,” he says. “We need to know if learning is actually better with AI.” 

While teachers don’t get to choose whether AI exists, Reich believes it’s important to solicit their input and to involve students and other stakeholders in developing solutions that improve learning and outcomes.

“Let’s race to answers that are right, not first,” Reich says.

3 Questions: How AI is helping us monitor and support vulnerable ecosystems

Mon, 11/03/2025 - 3:55pm

A recent study from Oregon State University estimated that more than 3,500 animal species are at risk of extinction because of factors including habitat alterations, natural resources being overexploited, and climate change.

To better understand these changes and protect vulnerable wildlife, conservationists like MIT PhD student and Computer Science and Artificial Intelligence Laboratory (CSAIL) researcher Justin Kay are developing computer vision algorithms that carefully monitor animal populations. A member of the lab of MIT Department of Electrical Engineering and Computer Science assistant professor and CSAIL principal investigator Sara Beery, Kay is currently working on tracking salmon in the Pacific Northwest, where they provide crucial nutrients to predators like birds and bears, while managing the population of prey, like bugs.

With all that wildlife data, though, researchers have lots of information to sort through and many AI models to choose from to analyze it all. Kay and his colleagues at CSAIL and the University of Massachusetts Amherst are developing AI methods that make this data-crunching process much more efficient, including a new approach called “consensus-driven active model selection” (or “CODA”) that helps conservationists choose which AI model to use. Their work was named a Highlight Paper at the International Conference on Computer Vision (ICCV) in October.

That research was supported, in part, by the National Science Foundation, Natural Sciences and Engineering Research Council of Canada, and Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Here, Kay discusses this project, among other conservation efforts.

Q: In your paper, you pose the question of which AI models will perform the best on a particular dataset. With as many as 1.9 million pre-trained models available in the HuggingFace Models repository alone, how does CODA help us address that challenge?

A: Until recently, using AI for data analysis has typically meant training your own model. This requires significant effort to collect and annotate a representative training dataset, as well as iteratively train and validate models. You also need a certain technical skill set to run and modify AI training code. The way people interact with AI is changing, though — in particular, there are now millions of publicly available pre-trained models that can perform a variety of predictive tasks very well. This potentially enables people to use AI to analyze their data without developing their own model, simply by downloading an existing model with the capabilities they need. But this poses a new challenge: Which model, of the millions available, should they use to analyze their data? 

Typically, answering this model selection question also requires you to spend a lot of time collecting and annotating a large dataset, albeit for testing models rather than training them. This is especially true for real applications where user needs are specific, data distributions are imbalanced and constantly changing, and model performance may be inconsistent across samples. Our goal with CODA was to substantially reduce this effort. We do this by making the data annotation process “active.” Instead of requiring users to bulk-annotate a large test dataset all at once, in active model selection we make the process interactive, guiding users to annotate the most informative data points in their raw data. This is remarkably effective, often requiring users to annotate as few as 25 examples to identify the best model from their set of candidates. 

We’re very excited about CODA offering a new perspective on how to best utilize human effort in the development and deployment of machine-learning (ML) systems. As AI models become more commonplace, our work emphasizes the value of focusing effort on robust evaluation pipelines, rather than solely on training.

Q: You applied the CODA method to classifying wildlife in images. Why did it perform so well, and what role can systems like this have in monitoring ecosystems in the future?

A: One key insight was that when considering a collection of candidate AI models, the consensus of all of their predictions is more informative than any individual model’s predictions. This can be seen as a sort of “wisdom of the crowd”: On average, pooling the votes of all models gives you a decent prior over what the labels of individual data points in your raw dataset should be. Our approach with CODA is based on estimating a “confusion matrix” for each AI model — given the true label for some data point is class X, what is the probability that an individual model predicts class X, Y, or Z? This creates informative dependencies between all of the candidate models, the categories you want to label, and the unlabeled points in your dataset.

Consider an example application where you are a wildlife ecologist who has just collected a dataset containing potentially hundreds of thousands of images from cameras deployed in the wild. You want to know what species are in these images, a time-consuming task that computer vision classifiers can help automate. You are trying to decide which species classification model to run on your data. If you have labeled 50 images of tigers so far, and some model has performed well on those 50 images, you can be pretty confident it will perform well on the remainder of the (currently unlabeled) images of tigers in your raw dataset as well. You also know that when that model predicts some image contains a tiger, it is likely to be correct, and therefore that any model that predicts a different label for that image is more likely to be wrong. You can use all these interdependencies to construct probabilistic estimates of each model’s confusion matrix, as well as a probability distribution over which model has the highest accuracy on the overall dataset. These design choices allow us to make more informed choices over which data points to label and ultimately are the reason why CODA performs model selection much more efficiently than past work.
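
The Python sketch below illustrates that “wisdom of the crowd” intuition in stripped-down form; it is not the CODA implementation. It tracks a single running accuracy estimate per model rather than a full confusion matrix, and the models, labels, and annotation loop are random stand-ins for real classifiers and a human annotator.

```python
# Simplified illustration of consensus-guided active model selection (not CODA itself).
import numpy as np

rng = np.random.default_rng(0)
NUM_MODELS, NUM_CLASSES, NUM_POINTS = 5, 3, 200

# Random stand-ins: hidden true labels and five candidate "models" of varying accuracy.
true_labels = rng.integers(NUM_CLASSES, size=NUM_POINTS)
preds = np.array([np.where(rng.random(NUM_POINTS) < acc, true_labels,
                           rng.integers(NUM_CLASSES, size=NUM_POINTS))
                  for acc in (0.60, 0.70, 0.80, 0.90, 0.65)])

# Consensus prior: pooled votes give a rough distribution over each point's label.
votes = np.stack([(preds == c).sum(axis=0) for c in range(NUM_CLASSES)], axis=1)
consensus_prior = votes / votes.sum(axis=1, keepdims=True)

correct = np.ones(NUM_MODELS)      # pseudo-counts for each model's running accuracy
total = np.full(NUM_MODELS, 2.0)
labeled = set()

for _ in range(25):                # roughly the annotation budget cited above
    # Ask for a label where the candidate models disagree most (highest vote entropy).
    entropy = -(consensus_prior * np.log(consensus_prior + 1e-9)).sum(axis=1)
    if labeled:
        entropy[list(labeled)] = -np.inf
    i = int(entropy.argmax())
    labeled.add(i)
    y = true_labels[i]             # stands in for a human annotator's answer
    correct += (preds[:, i] == y)  # credit models that got this point right
    total += 1.0

accuracy_estimate = correct / total
print("Estimated best model:", int(accuracy_estimate.argmax()),
      "with estimated accuracy", round(float(accuracy_estimate.max()), 3))
```

Even this simplified loop captures the workflow: pooled predictions suggest which points are most informative to label, and each new label updates the ranking of the candidate models.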

There are also a lot of exciting possibilities for building on top of our work. We think there may be even better ways of constructing informative priors for model selection based on domain expertise — for instance, if it is already known that one model performs exceptionally well on some subset of classes or poorly on others. There are also opportunities to extend the framework to support more complex machine-learning tasks and more sophisticated probabilistic models of performance. We hope our work can provide inspiration and a starting point for other researchers to keep pushing the state of the art.

Q: You work in the Beerylab, led by Sara Beery, where researchers are combining the pattern-recognition capabilities of machine-learning algorithms with computer vision technology to monitor wildlife. What are some other ways your team is tracking and analyzing the natural world, beyond CODA?

A: The lab is a really exciting place to work, and new projects are emerging all the time. We have ongoing projects monitoring coral reefs with drones, re-identifying individual elephants over time, and fusing multi-modal Earth observation data from satellites and in-situ cameras, just to name a few. Broadly, we look at emerging technologies for biodiversity monitoring and try to understand where the data analysis bottlenecks are, and develop new computer vision and machine-learning approaches that address those problems in a widely applicable way. It’s an exciting way of approaching problems that sort of targets the “meta-questions” underlying particular data challenges we face. 

The computer vision algorithms I’ve worked on that count migrating salmon in underwater sonar video are examples of that work. We often deal with shifting data distributions, even as we try to construct the most diverse training datasets we can. We always encounter something new when we deploy a new camera, and this tends to degrade the performance of computer vision algorithms. This is one instance of a general problem in machine learning called domain adaptation, but when we tried to apply existing domain adaptation algorithms to our fisheries data we realized there were serious limitations in how existing algorithms were trained and evaluated. We were able to develop a new domain adaptation framework, published earlier this year in Transactions on Machine Learning Research, that addressed these limitations and led to advancements in fish counting, and even self-driving and spacecraft analysis.

One line of work that I’m particularly excited about is understanding how to better develop and analyze the performance of predictive ML algorithms in the context of what they are actually used for. Usually, the outputs from some computer vision algorithm — say, bounding boxes around animals in images — are not actually the thing that people care about, but rather a means to an end to answer a larger problem — say, what species live here, and how is that changing over time? We have been working on methods to analyze predictive performance in this context and reconsider the ways that we input human expertise into ML systems with this in mind. CODA was one example of this, where we showed that we could actually consider the ML models themselves as fixed and build a statistical framework to understand their performance very efficiently. We have been working recently on similar integrated analyses combining ML predictions with multi-stage prediction pipelines, as well as ecological statistical models. 

The natural world is changing at unprecedented rates and scales, and being able to quickly move from scientific hypotheses or management questions to data-driven answers is more important than ever for protecting ecosystems and the communities that depend on them. Advancements in AI can play an important role, but we need to think critically about the ways that we design, train, and evaluate algorithms in the context of these very real challenges.

Turning on an immune pathway in tumors could lead to their destruction

Mon, 11/03/2025 - 3:00pm

By stimulating cancer cells to produce a molecule that activates a signaling pathway in nearby immune cells, MIT researchers have found a way to force tumors to trigger their own destruction.

Activating this signaling pathway, known as the cGAS-STING pathway, worked even better when combined with existing immunotherapy drugs known as checkpoint blockade inhibitors, in a study of mice. That dual treatment was successfully able to control tumor growth.

The researchers turned on the cGAS-STING pathway in immune cells using messenger RNA delivered to cancer cells. This approach may avoid the side effects of delivering large doses of a STING activator, and takes advantage of a natural process in the body. This could make it easier to develop a treatment for use in patients, the researchers say.

“Our approach harnesses the tumor’s own machinery to produce immune-stimulating molecules, creating a powerful antitumor response,” says Natalie Artzi, a principal research scientist at MIT’s Institute for Medical Engineering and Science, an associate professor of medicine at Harvard Medical School, a core faculty member at the Wyss Institute for Biologically Inspired Engineering at Harvard, and the senior author of the study.

“By increasing cGAS levels inside cancer cells, we can enhance delivery efficiency — compared to targeting the more scarce immune cells in the tumor microenvironment — and stimulate the natural production of cGAMP, which then activates immune cells locally,” she says. “This strategy not only strengthens antitumor immunity but also reduces the toxicity associated with direct STING agonist delivery, bringing us closer to safer and more effective cancer immunotherapies.”

Alexander Cryer, a visiting scholar at IMES, is the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.

Immune activation

STING (short for stimulator of interferon genes) is a protein that helps to trigger immune responses. When STING is activated, it turns on a pathway that initiates production of type one interferons, which are cytokines that stimulate immune cells.

Many research groups, including Artzi’s, have explored the possibility of artificially stimulating this pathway with molecules called STING agonists, which could help immune cells to recognize and attack tumor cells. This approach has worked well in animal models, but it has had limited success in clinical trials, in part because the required doses can cause harmful side effects.

While working on a project exploring new ways to deliver STING agonists, Cryer became intrigued when he learned from previous work that cancer cells can produce a STING activator known as cGAMP. The cells then secrete cGAMP, which can activate nearby immune cells.

“Part of my philosophy of science is that I really enjoy using endogenous processes that the body already has, and trying to utilize them in a slightly different context. Evolution has done all the hard work. We just need to figure out how to push it in a different direction,” Cryer says. “Once I saw that cancer cells produce this molecule, I thought: Maybe there’s a way to take this process and supercharge it.”

Within cells, the production of cGAMP is catalyzed by an enzyme called cGAS. To get tumor cells to activate STING in immune cells, the researchers devised a way to deliver messenger RNA that encodes cGAS. When this enzyme detects double-stranded DNA in the cell body, which can be a sign of either infection or cancer-induced damage, it begins producing cGAMP.

“It just so happens that cancer cells, because they’re dividing so fast and not particularly accurately, tend to have more double-stranded DNA fragments than healthy cells,” Cryer says.

The tumor cells then release cGAMP into the tumor microenvironment, where it can be taken up by neighboring immune cells and activate their STING pathway.

Targeting tumors

Using a mouse model of melanoma, the researchers evaluated their new strategy’s potential to kill cancer cells. They injected mRNA encoding cGAS, encapsulated in lipid nanoparticles, into tumors. One group of mice received this treatment alone, while another received a checkpoint blockade inhibitor, and a third received both treatments.

Given on their own, the cGAS mRNA treatment and the checkpoint inhibitor each significantly slowed tumor growth. However, the best results were seen in the mice that received both treatments. In that group, tumors were completely eradicated in 30 percent of the mice, while none of the tumors were fully eliminated in the groups that received just one treatment.

An analysis of the immune response showed that the mRNA treatment stimulated production of interferon as well as many other immune signaling molecules. A variety of immune cells, including macrophages and dendritic cells, were activated. These cells help to stimulate T cells, which can then destroy cancer cells.

The researchers were able to elicit these responses with just a small dose of cancer-cell-produced cGAMP, which could help to overcome one of the potential obstacles to using cGAMP on its own as therapy: Large doses are required to stimulate an immune response, and these doses can lead to widespread inflammation, tissue damage, and autoimmune reactions. When injected on its own, cGAMP tends to spread through the body and is rapidly cleared from the tumor, while in this study, the mRNA nanoparticles and cGAMP remained at the tumor site.

“The side effects of this class of molecule can be pretty severe, and one of the potential advantages of our approach is that you’re able to potentially subvert some toxicity that you might see if you’re giving the free molecules,” Cryer says.

The researchers now hope to work on adapting the delivery system so that it could be given as a systemic injection, rather than injecting it into the tumor. They also plan to test the mRNA therapy in combination with chemotherapy drugs or radiotherapy that damage DNA, which could make the therapy even more effective because there could be even more double-stranded DNA available to help activate the synthesis of cGAMP.

A faster problem-solving tool that guarantees feasibility

Mon, 11/03/2025 - 12:00am

Managing a power grid is like trying to solve an enormous puzzle.

Grid operators must ensure the proper amount of power is flowing to the right areas at the exact time when it is needed, and they must do this in a way that minimizes costs without overloading physical infrastructure. Even more, they must solve this complicated problem repeatedly, as rapidly as possible, to meet constantly changing demand.

To help crack this consistent conundrum, MIT researchers developed a problem-solving tool that finds the optimal solution much faster than traditional approaches while ensuring the solution doesn’t violate any of the system’s constraints. In a power grid, constraints could be things like generator and line capacity.

This new tool incorporates a feasibility-seeking step into a powerful machine-learning model trained to solve the problem. The feasibility-seeking step uses the model’s prediction as a starting point, iteratively refining the solution until it finds the best achievable answer.

The MIT system can unravel complex problems several times faster than traditional solvers, while providing strong guarantees of success. For some extremely complex problems, it could find better solutions than tried-and-true tools. The technique also outperformed pure machine learning approaches, which are fast but can’t always find feasible solutions.

In addition to helping schedule power production in an electric grid, this new tool could be applied to many types of complicated problems, such as designing new products, managing investment portfolios, or planning production to meet consumer demand.

“Solving these especially thorny problems well requires us to combine tools from machine learning, optimization, and electrical engineering to develop methods that hit the right tradeoffs in terms of providing value to the domain, while also meeting its requirements. You have to look at the needs of the application and design methods in a way that actually fulfills those needs,” says Priya Donti, the Silverman Family Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS).

Donti, senior author of an open-access paper on this new tool, called FSNet, is joined by lead author Hoang Nguyen, an EECS graduate student. The paper will be presented at the Conference on Neural Information Processing Systems.

Combining approaches

Ensuring optimal power flow in an electric grid is an extremely hard problem that is becoming more difficult for operators to solve quickly.

“As we try to integrate more renewables into the grid, operators must deal with the fact that the amount of power generation is going to vary moment to moment. At the same time, there are many more distributed devices to coordinate,” Donti explains.

Grid operators often rely on traditional solvers, which provide mathematical guarantees that the optimal solution doesn’t violate any problem constraints. But these tools can take hours or even days to arrive at that solution if the problem is especially convoluted.

On the other hand, deep-learning models can solve even very hard problems in a fraction of the time, but the solution might ignore some important constraints. For a power grid operator, this could result in issues like unsafe voltage levels or even grid outages.

“Machine-learning models struggle to satisfy all the constraints due to the many errors that occur during the training process,” Nguyen explains.

For FSNet, the researchers combined the best of both approaches into a two-step problem-solving framework.

Focusing on feasibility

In the first step, a neural network predicts a solution to the optimization problem. Very loosely inspired by neurons in the human brain, neural networks are deep learning models that excel at recognizing patterns in data.

Next, a traditional solver that has been incorporated into FSNet performs a feasibility-seeking step. This optimization algorithm iteratively refines the initial prediction while ensuring the solution does not violate any constraints.

Because the feasibility-seeking step is based on a mathematical model of the problem, it can guarantee the solution is deployable.

“This step is very important. In FSNet, we can have the rigorous guarantees that we need in practice,” Nguyen says.

The researchers designed FSNet to address both main types of constraints (equality and inequality) at the same time. This makes it easier to use than other approaches that may require customizing the neural network or solving for each type of constraint separately.
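
The Python sketch below is a schematic of that two-step idea, predict and then seek feasibility, rather than FSNet itself: the neural network is a stub, the feasibility-seeking step is a plain gradient loop on constraint violations instead of the solver used in the paper, and the toy numbers are invented.

```python
# Schematic two-step sketch (not FSNet): a fast learned guess, then iterative
# refinement that drives equality and inequality constraint violations to zero.
import numpy as np

def network_prediction(problem_params: np.ndarray) -> np.ndarray:
    """Stub standing in for the trained neural network's fast initial guess."""
    return 0.1 * problem_params[:2]

def feasibility_seek(x0, A, b, G, h, steps=500, lr=0.05):
    """Iteratively reduce violations of Ax = b (equality) and Gx <= h (inequality),
    starting from the network's prediction x0."""
    x = x0.copy()
    for _ in range(steps):
        eq_violation = A @ x - b                      # equality residual
        ineq_violation = np.maximum(G @ x - h, 0.0)   # only violated inequalities matter
        x -= lr * (A.T @ eq_violation + G.T @ ineq_violation)
    return x

# Toy problem: two "generators" must exactly meet demand (equality constraint)
# while staying within their capacity limits (inequality constraints).
demand = 1.0
A, b = np.array([[1.0, 1.0]]), np.array([demand])               # x1 + x2 = demand
G = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
h = np.array([0.8, 0.8, 0.0, 0.0])                              # 0 <= xi <= 0.8

x_pred = network_prediction(np.array([demand, 0.0]))
x_feas = feasibility_seek(x_pred, A, b, G, h)
print("network guess:", x_pred, "-> refined point:", np.round(x_feas, 3),
      "| equality residual:", round((A @ x_feas - b).item(), 6))
```

In the full system, the trained network’s prediction gives the refinement step a warm start, which is what shortens solving times, while the feasibility-seeking step supplies the guarantees.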

“Here, you can just plug and play with different optimization solvers,” Donti says.

By thinking differently about how the neural network solves complex optimization problems, the researchers were able to unlock a new technique that works better, she adds.

They compared FSNet to traditional solvers and pure machine-learning approaches on a range of challenging problems, including power grid optimization. Their system cut solving times by orders of magnitude compared to the baseline approaches, while respecting all problem constraints.

FSNet also found better solutions to some of the trickiest problems.

“While this was surprising to us, it does make sense. Our neural network can figure out by itself some additional structure in the data that the original optimization solver was not designed to exploit,” Donti explains.

In the future, the researchers want to make FSNet less memory-intensive, incorporate more efficient optimization algorithms, and scale it up to tackle more realistic problems.

“Finding solutions to challenging optimization problems that are feasible is paramount to finding ones that are close to optimal. Especially for physical systems like power grids, close to optimal means nothing without feasibility. This work provides an important step toward ensuring that deep-learning models can produce predictions that satisfy constraints, with explicit guarantees on constraint enforcement,” says Kyri Baker, an associate professor at the University of Colorado Boulder, who was not involved with this work.

"A persistent challenge for machine learning-based optimization is feasibility. This work elegantly couples end-to-end learning with an unrolled feasibility-seeking procedure that minimizes equality and inequality violations. The results are very promising and I look forward to see where this research will head," adds Ferdinando Fioretto, an assistant professor at the University of Virginia, who was not involved with this work.

Study: Good management of aid projects reduces local violence

Mon, 11/03/2025 - 12:00am

Good management of aid projects in developing countries reduces violence in those areas — but poorly managed projects increase the chances of local violence, according to a new study by an MIT economist.

The research, examining World Bank projects in Africa, illuminates a major question surrounding international aid. Observers have long wondered if aid projects, by bringing new resources into developing countries, lead to conflict over those goods as an unintended consequence. Previously, some scholars have identified an increase in violence attached to aid, while others have found a decrease.

The new study shows those prior results are not necessarily wrong, but not entirely right, either. Instead, aid oversight matters. World Bank programs earning the highest evaluation scores for their implementation reduce the likelihood of conflict by up to 12 percent, compared to the worst-managed programs.

“I find that the management quality of these projects has a really strong effect on whether that project leads to conflict or not,” says MIT economist Jacob Moscona, who conducted the research. “Well-managed aid projects can actually reduce conflict, and poorly managed projects increase conflict, relative to no project. So, the way aid programs are organized is very important.”

The findings also suggest aid projects can work well almost anywhere. At times, observers have suggested the political conditions in some countries prevent aid from being effective. But the new study finds otherwise.

“There are ways these programs can have their positive effects without the negative consequences,” Moscona says. “And it’s not the result of what politics looks like on the receiving end; it’s about the organization itself.”

Moscona’s paper detailing the study, “The Management of Aid and Conflict in Africa,” is published in the November issue of the American Economic Journal: Economic Policy. Moscona, the paper’s sole author, is the 3M Career Development Assistant Professor in MIT’s Department of Economics.

Decisions on the ground

To conduct the study, Moscona examined World Bank data from the 1997-2014 time period, using the information compiled by AidData, a nonprofit group that also studies World Bank programs. Importantly, the World Bank conducts extensive evaluations of its projects and includes the identities of project leaders as part of those reviews.

“There are a lot of decisions on the ground made by managers of aid, and aid organizations themselves, that can have a huge impact on whether or not aid leads to conflict, and how aid resources are used and whether they are misappropriated or captured and get into the wrong hands,” Moscona says.

For instance, diligent daily checks on food distribution programs have substantially reduced the amount of food that is stolen or “leaks” out of a program. Other projects have created innovative ways of tagging small devices to ensure those objects are used by program participants, reducing appropriation by others.

Moscona combined the World Bank data with statistics from the Armed Conflict Location and Event Data Project (ACLED), a nonprofit that monitors political violence. That enabled him to evaluate how the quality of aid project implementation — and even the quality of the project leadership — influenced local outcomes.

For instance, by looking at the ratings of World Bank project leaders, Moscona found that shifting from a project leader at the 25th percentile, in terms of how frequently projects are linked with conflict, to one at the 75th percentile, increases the chances of local conflict by 15 percent.

“The magnitudes are pretty large, in terms of the probability that a conflict starts in the vicinity of a project,” Moscona observes.

Moscona’s research identified several other aspects of the interaction between aid and conflict that hold up over the region and time period. The establishment of aid programs does not seem to lead to long-term strategic activity by non-government forces, such as land acquisition or the establishment of rebel bases. The effects are also larger in areas that have had recent political violence. And armed conflict is greater when the resources at stake can be expropriated — such as food or medical devices.

“It matters most if you have more divertible resources, like food and medical devices that can be captured, as opposed to infrastructure projects,” Moscona says.

Reconciling the previous results

Moscona also found a clear trend in the data about the timing of violence in relation to aid. Government and other armed groups do not engage in much armed conflict when aid programs are being established; it is the appearance of desired goods themselves that sets off violent activity.

“You don’t see much conflict when the projects are getting off the ground,” Moscona says. “You really see the conflict start when the money is coming in or when the resources start to flow, which is consistent with the idea that the relevant mechanism is about aid resources and their misappropriation, rather than groups trying to delegitimize a project.”

All told, Moscona’s study finds a logical mechanism explaining the varying results other scholars have found with regard to aid and conflict. If aid programs are not equally well-administered, it stands to reason that their outcomes will not be identical, either.

“There wasn’t much work trying to make those two sets of results speak to each other,” says Moscona. “I see it less as overturning existing results than providing a way to reconcile different results and experiences.”

Moscona’s findings may also speak to the value of aid in general — and provide actionable ideas for institutions such as the World Bank. If better management makes such a difference, then the potential effectiveness of aid programs may increase.

“One goal is to change the conversation about aid,” Moscona says. The data, he suggests, shows that the public discourse about aid can be “less defeatist about the potential negative consequences of aid, and the idea that it’s out of the control of the people who administer it.” 

New nanoparticles stimulate the immune system to attack ovarian tumors

Fri, 10/31/2025 - 6:00am

Cancer immunotherapy, which uses drugs that stimulate the body’s immune cells to attack tumors, is a promising approach to treating many types of cancer. However, it doesn’t work well for some tumors, including ovarian cancer.

To elicit a better response, MIT researchers have designed new nanoparticles that can deliver an immune-stimulating molecule called IL-12 directly to ovarian tumors. When given along with immunotherapy drugs called checkpoint inhibitors, IL-12 helps the immune system launch an attack on cancer cells.

Studying a mouse model of ovarian cancer, the researchers showed that this combination treatment could eliminate metastatic tumors in more than 80 percent of the mice. When the mice were later injected with more cancer cells, to simulate tumor recurrence, their immune cells remembered the tumor proteins and cleared the new cancer cells as well.

“What’s really exciting is that we’re able to deliver IL-12 directly in the tumor space. And because of the way that this nanomaterial is designed to allow IL-12 to be borne on the surfaces of the cancer cells, we have essentially tricked the cancer into stimulating immune cells to arm themselves against that cancer,” says Paula Hammond, an MIT Institute Professor, MIT’s vice provost for faculty, and a member of the Koch Institute for Integrative Cancer Research.

Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Nature Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital, is the lead author of the paper.

“Hitting the gas”

Most tumors express and secrete proteins that suppress immune cells, creating a microenvironment in which the immune response is weakened. Among the main players that can kill tumor cells are T cells, but they get sidelined or blocked by the cancer cells and are unable to attack the tumor. Checkpoint inhibitors are an FDA-approved treatment designed to take those brakes off the immune system by removing the immune-suppressing proteins so that T cells can mount an attack on tumor cells.

For some cancers, including some types of melanoma and lung cancer, removing the brakes is enough to provoke the immune system into attacking cancer cells. However, ovarian tumors have many ways to suppress the immune system, so checkpoint inhibitors alone usually aren’t enough to launch an immune response.

“The problem with ovarian cancer is no one is hitting the gas. So, even if you take off the brakes, nothing happens,” Pires says.

IL-12 offers one way to “hit the gas,” by supercharging T cells and other immune cells. However, the large doses of IL-12 required to get a strong response can produce side effects due to generalized inflammation, such as flu-like symptoms (fever, fatigue, GI issues, and headaches), as well as more severe complications such as liver toxicity and cytokine release syndrome — which can be so severe that they may even lead to death.

In a 2022 study, Hammond’s lab developed nanoparticles that could deliver IL-12 directly to tumor cells, which allows larger doses to be given while avoiding the side effects seen when the drug is injected. However, these particles tended to release their payload all at once after reaching the tumor, which hindered their ability to generate a strong T cell response.

In the new study, the researchers modified the particles so that IL-12 would be released more gradually, over about a week. They achieved this by using a different chemical linker to attach IL-12 to the particles.

“With our current technology, we optimize that chemistry such that there’s a more controlled release rate, and that allowed us to have better efficacy,” Pires says.

The particles consist of tiny, fatty droplets known as liposomes, with IL-12 molecules tethered to the surface. For this study, the researchers used a linker called maleimide to attach IL-12 to the liposomes. This linker is more stable than the one they used in the previous generation of particles, which was susceptible to being cleaved by proteins in the body, leading to premature release.

To make sure that the particles get to the right place, the researchers coat them with a layer of a polymer called poly-L-glutamate (PLE), which helps them directly target ovarian tumor cells. Once they reach the tumors, the particles bind to the cancer cell surfaces, where they gradually release their payload and activate nearby T cells.

Disappearing tumors

In tests in mice, the researchers showed that the IL-12-carrying particles could effectively recruit and stimulate T cells that attack tumors. The cancer models used for these studies are metastatic, so tumors developed not only in the ovaries but throughout the peritoneal cavity, which includes the surface of the intestines, liver, pancreas, and other organs. Tumors could even be seen in the lung tissues.

First, the researchers tested the IL-12 nanoparticles on their own, and they showed that this treatment eliminated tumors in about 30 percent of the mice. They also found a significant increase in the number of T cells that accumulated in the tumor environment.

Then, the researchers gave the particles to mice along with checkpoint inhibitors. More than 80 percent of the mice that received this dual treatment were cured. This happened even when the researchers used models of ovarian cancer that are highly resistant to immunotherapy or to the chemotherapy drugs usually used for ovarian cancer.

Patients with ovarian cancer are usually treated with surgery followed by chemotherapy. While this may be initially effective, cancer cells that remain after surgery are often able to grow into new tumors. Establishing an immune memory of the tumor proteins could help to prevent that kind of recurrence.

In this study, when the researchers injected tumor cells into the cured mice five months after the initial treatment, the immune system was still able to recognize and kill the cells.

“We don’t see the cancer cells being able to develop again in that same mouse, meaning that we do have an immune memory developed in those animals,” Pires says.

The researchers are now working with MIT’s Deshpande Center for Technological Innovation to spin out a company that they hope could further develop the nanoparticle technology. In a study published earlier this year, Hammond’s lab reported a new manufacturing approach that should enable large-scale production of this type of nanoparticle.

The research was funded by the National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, the Ragon Institute of MGH, MIT, and Harvard, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Using classic physical phenomena to solve new problems

Fri, 10/31/2025 - 12:00am

Quenching, the rapid cooling of a hot surface by a liquid, is a remarkably effective way to carry heat away. But in extreme environments, like nuclear power plants and aboard spaceships, a lot rides on the efficiency and speed of the process.

It’s why Marco Graffiedi, a fifth-year doctoral student at MIT’s Department of Nuclear Science and Engineering (NSE), is researching the phenomenon to help develop the next generation of spaceships and nuclear plants.

Growing up in small-town Italy

Graffiedi’s parents encouraged a sense of exploration, giving him responsibilities for family projects even at a young age. When they restored a countryside cabin in a small town near Palazzolo, in the hills between Florence and Bologna, the then-14-year-old Marco got a project of his own. He had to ensure the animals on the property had enough accessible water without overfilling the storage tank. Marco designed and built a passive hydraulic system that effectively solved the problem and is still functional today.

His proclivity for science continued in high school in Lugo, where Graffiedi enjoyed recreating classical physics phenomena through experiments. Incidentally, the high school is named after Gregorio Ricci-Curbastro, a mathematician whose work laid the mathematical foundation for the theory of relativity — history that is not lost on Graffiedi. After high school, Graffiedi attended the International Physics Olympiad in Bangkok, a formative event that cemented his love for physics.

A gradual shift toward engineering

A passion for physics and basic sciences notwithstanding, Graffiedi wondered if he’d be a better fit for engineering, where he could use the study of physics, chemistry, and math as tools to build something.

Following that path, he completed a bachelor’s and master’s in mechanical engineering — because an undergraduate degree in Italy takes only three years, pretty much everyone does a master’s, Graffiedi laughs — at the Università di Pisa and the Scuola Superiore Sant’Anna (School of Engineering). The Sant’Anna is a highly selective institution that most students attend to complement their university studies.

Graffiedi’s university studies gradually moved him toward the field of environmental engineering. He researched concentrated solar power, studying the associated thermal cycle and trying to improve collection in order to bring down costs. While the project was not very successful, it reinforced Graffiedi’s impression of the necessity of alternative energies. Still firmly planted in energy studies, Graffiedi worked on fracture mechanics for his master’s thesis, in collaboration with (what was then) GE Oil and Gas, researching how to improve the effectiveness of centrifugal compressors. And a summer internship at Fermilab had Graffiedi working on the thermal characterization of superconductive coatings.

With his studies behind him, Graffiedi was still unsure about his professional path. Through the Edison Program at GE Oil and Gas, where he worked shortly after graduation, Graffiedi got to test drive many fields — from mechanical and thermal engineering to exploring gas turbines and combustion. He eventually became a test engineer, coordinating a team of engineers to test a new upgrade to the company’s gas turbines. “I set up the test bench, understanding how to instrument the machine, collect data, and run the test,” Graffiedi remembers. “There was a lot you need to think about, from a little turbine blade with sensors on it to the location of safety exits on the test bench.”

The move toward nuclear engineering

As fun as the test engineering job was, Graffiedi started to crave more technical knowledge and wanted to pivot to science. As part of his exploration, he came across nuclear energy and, understanding it to be the future, decided to lean on his engineering background to apply to MIT NSE.

He found a fit in Professor Matteo Bucci’s group and decided to explore boiling and quenching. The move from science to engineering, and back to science, was now complete.

NASA, the primary sponsor of the research, is interested in preventing boiling of cryogenic fuels, because boiling leads to loss of fuel and the resulting vapor will need to be vented to avoid overpressurizing a fuel tank.

Graffiedi’s primary focus is on quenching, which will play an important role in refueling in space — and in the cooling of nuclear cores. When a cryogen is used to cool down a surface, it undergoes what is known as the Leidenfrost effect: the liquid first forms a thin vapor film that acts as an insulator and prevents further cooling. To facilitate rapid cooling, it’s important to accelerate the collapse of the vapor film. Graffiedi is exploring the mechanics of the quenching process on a microscopic level, studies that are important for land and space applications.

Boiling can be used for yet another modern application: to improve the efficiency of cooling systems for data centers. The rapid growth of data centers and electric transportation systems demands effective heat transfer mechanisms to avoid overheating. Immersion cooling using dielectric fluids — fluids that do not conduct electricity — is one way to do so. These fluids remove heat from a surface through boiling. For effective boiling, the fluid must overcome the Leidenfrost effect and break the vapor film that forms. The fluid must also have a high critical heat flux (CHF), which is the maximum heat flux at which boiling can effectively transfer heat from a heated surface to a liquid. Because dielectric fluids have lower CHF than water, Graffiedi is exploring ways to raise these limits. In particular, he is investigating how high electric fields can be used to enhance CHF, and even how boiling can be used to cool electronic components in the absence of gravity. He published this research in Applied Thermal Engineering in June.

Beyond boiling

Graffiedi’s love of science and engineering shows in his commitment to teaching as well. He has been a teaching assistant for four classes at NSE, winning awards for his contributions. His many additional achievements include winning the Manson Benedict Award presented to an NSE graduate student for excellence in academic performance and professional promise in nuclear science and engineering, and a service award for his role as past president of the MIT Division of the American Nuclear Society.

Boston has a fervent Italian community, Graffiedi says, and he enjoys being a part of it. Fittingly, the MIT Italian club is called MITaly. When he’s not at work or otherwise engaged, Graffiedi loves Latin dancing, something he makes time for at least a couple of times a week. While he has his favorite Italian restaurants in the city, Graffiedi is grateful for another set of skills his parents gave him when he was just 11: making perfect pizza and pasta.

Q&A: How MITHIC is fostering a culture of collaboration at MIT

Thu, 10/30/2025 - 3:45pm

The MIT Human Insight Collaborative (MITHIC) is a presidential initiative with a mission of elevating human-centered research and teaching and connecting scholars in the humanities, arts, and social sciences with colleagues across the Institute.

Since its launch in 2024, MITHIC has funded 31 projects led by teaching and research staff representing 22 different units across MIT. The collaborative is holding its annual event on Nov. 17. 

In this Q&A, Keeril Makan, associate dean in the MIT School of Humanities, Arts, and Social Sciences, and Maria Yang, interim dean of the MIT School of Engineering, discuss the value of MITHIC and the ways it’s accelerating new research and collaborations across the Institute. Makan is the Michael (1949) and Sonja Koerner Music Composition Professor and faculty lead for MITHIC. Yang is the William E. Leonhard (1940) Professor in the Department of Mechanical Engineering and co-chair of MITHIC’s SHASS+ Connectivity Fund.

Q: You each come from different areas of MIT. Looking at MITHIC from your respective roles, why is this initiative so important for the Institute?

Makan: The world is counting on MIT to develop solutions to some of the world’s greatest challenges, such as artificial intelligence, poverty, and health care. These are all issues that arise from human activity, a thread that runs through much of the research we’re focused on in SHASS. Through MITHIC, we’re embedding human-centered thinking and connecting the Institute’s top scholars in the work needed to find innovative ways of addressing these problems.

Yang: MITHIC is very important to MIT, and I think of this from the point of view of an engineer, which is my background. Engineers often think about the technology first, which is absolutely important. But for that technology to have real impact, you have to think about the human insights that make that technology relevant and deployable in the world. So really having a deep understanding of that is core to MITHIC and MIT’s engineering enterprise.

Q: How does MITHIC fit into MIT’s broader mission? 

Makan: MITHIC highlights how the work we do in the School of Humanities, Arts, and Social Sciences is aligned with MIT’s mission, which is to address the world’s great problems. But MITHIC has also connected all of MIT in this endeavor. We have faculty from all five schools and the MIT Schwarzman College of Computing involved in evaluating MITHIC project proposals. Each of them represents a different point of view and engages with these projects, which originate in SHASS but actually cut across many different fields. Seeing their perspectives on these projects has been inspiring.

Yang: I think of MIT’s main mission as using technology and many other things to make impact in the world, especially social impact. The kind of interdisciplinary work that MITHIC catalyzes really enables all of that work to happen in a new and profound way. The SHASS+ Connectivity Fund, which connects SHASS faculty and researchers with colleagues outside of SHASS, has resulted in collaborations that were not possible before. One example is a project being led by professors Mark Rau, who has a shared appointment between Music and Electrical Engineering and Computer Science, and Antoine Allanore in Materials Science and Engineering. The two of them are looking at how they can take ancient unplayable instruments and recreate them using new technologies for scanning and fabrication. They’re also working with the Museum of Fine Arts, so it’s a whole new type of collaboration that exemplifies MITHIC. 

Q: What has been the community response to MITHIC in its first year?

Makan: It’s been very strong. We found a lot of pent-up demand, both from faculty in SHASS and faculty in the sciences and engineering. Either there were preexisting collaborations that they could take to the next level through MITHIC, or there was the opportunity to meet someone new and talk to someone about a problem and how they could collaborate. MITHIC also hosted a series of Meeting of the Minds events, which are a chance to have faculty and members of the community get to know one another on a certain topic. This community building has been exciting, and led to an overwhelming number of applications last year. There has also been significant student involvement, with several projects bringing on UROPs [Undergraduate Research Opportunities Program projects] and PhD students to help with their research. MITHIC gives a real morale boost and a lot of hope that there is a focus upon building collaborations at MIT and on not forgetting that the world needs humanists, artists, and social scientists.

Yang: One faculty member told me the SHASS+ Connectivity Fund has given them hope for the kind of research that we do because of the cross collaboration. There’s a lot of excitement and enthusiasm for this type of work.

Q: The SHASS+ Connectivity Fund is designed to support interdisciplinary collaborations at MIT. What’s an example of a SHASS+ project that’s worked particularly well? 

Makan: One exciting collaboration is between professors Jörn Dunkel in Mathematics and In Song Kim in Political Science. In Song is someone who has done a lot of work studying lobbying and its effect upon the legislative process. He met Jörn, I believe, at one of MIT’s daycare centers, so it’s a relationship that started in a very informal fashion. But they found they actually had ways of looking at math and quantitative analysis that could complement one another. Their work is creating a new subfield and taking the research in a direction that would not be possible without this funding.

Yang: One of the SHASS+ projects that I think is really interesting is between professors Marzyeh Ghassemi in Electrical Engineering and Computer Science and Esther Duflo in Economics. The two of them are looking at how they can use AI to help health diagnostics in low-resource global settings, where there isn’t a lot of equipment or technology to do basic health diagnostics. They can use handheld, low-cost equipment to do things like predict if someone is going to have a heart attack. And they are not only developing the diagnostic tool, but evaluating the fairness of the algorithm. The project is an excellent example of using a MITHIC grant to make impact in the world.

Q: What has been MITHIC’s impact in terms of elevating research and teaching within SHASS?

Makan: In addition to the SHASS+ Connectivity Fund, there are two other possibilities to help support both SHASS research as well as educational initiatives: the Humanities Cultivation Fund and the SHASS Education Innovation Fund. And both of these are providing funding in excess of what we normally see within SHASS. It both recognizes the importance of the work of our faculty and it also gives them the means to actually take ideas to a much further place. 

One of the projects that MITHIC is helping to support is the Compass Initiative. Compass was started by Lily Tsai, one of our professors in Political Science, along with other faculty in SHASS to create essentially an introductory class to the different methodologies within SHASS. So we have philosophers, music historians, etc., all teaching together, all addressing how we interact with one another, what it means to be a good citizen, what it means to be socially aware and civically engaged. This is a class that is very timely for MIT and for the world. And we were able to give it robust funding so they can take this and develop it even further. 

MITHIC has also been able to take local initiatives in SHASS and elevate them. There has been a group of anthropologists, historians, and urban planners that have been working together on a project called the Living Climate Futures Lab. This is a group interested in working with frontline communities around climate change and sustainability. They work to build trust with local communities and start to work with them on thinking about how climate change affects them and what solutions might look like. This is a powerful and uniquely SHASS approach to climate change, and through MITHIC, we’re able to take this seed effort, robustly fund it, and help connect it to the larger climate project at MIT. 

Q: What excites you most about the future of MITHIC at MIT?

Yang: We have a lot of MIT efforts that are trying to break people out of their disciplinary silos, and MITHIC really is a big push on that front. It’s a presidential initiative, so it’s high on the priority list of what people are thinking about. We’ve already done our first round, and the second round is going to be even more exciting, so it’s only going to gain in force. In SHASS+, we’re actually having two calls for proposals this academic year instead of just one. I feel like there’s still so much possibility to bring together interdisciplinary research across the Institute.

Makan: I’m excited about how MITHIC is changing the culture of MIT. MIT thinks of itself in terms of engineering, science, and technology, and this is an opportunity to think about those STEM fields within the context of human activity and humanistic thinking. Having this shift at MIT in how we approach solving problems bodes well for the world, and it places SHASS as this connective tissue at the Institute. It connects the schools and it can also connect the other initiatives, such as manufacturing and health and life sciences. There’s an opportunity for MITHIC to seed all these other initiatives with the work that goes on in SHASS.

Battery-powered appliances make it easy to switch from gas to electric

Thu, 10/30/2025 - 12:00am

As batteries have gotten cheaper and more powerful, they have enabled the electrification of everything from vehicles to lawn equipment, power tools, and scooters. But electrifying homes has been a slower process. That’s because switching from gas appliances often requires ripping out drywall, running new wires, and upgrading the electrical box.

Now the startup Copper, founded by Sam Calisch SM ’14, PhD ’19, has developed a battery-equipped kitchen range that can plug into a standard 120-volt wall outlet. The induction range features a lithium iron phosphate battery that charges when energy is cheapest and cleanest, then delivers power when you’re ready to cook.

“We’re making ‘going electric’ like an appliance swap instead of a construction project,” says Calisch. “If you have a gas stove today, there is almost certainly an outlet within reach because the stove has an oven light, clock, or electric igniters. That’s big if you’re in a single-family home, but in apartments it’s an existential factor. Rewiring a 100-unit apartment building is such an expensive proposition that basically no one’s doing it.”

Copper has shipped about 1,000 of its battery-powered ranges to date, often to developers and owners of large apartment complexes. The company also has an agreement with the New York City Housing Authority for at least 10,000 units.

Once installed, the ranges can contribute to a distributed, cleaner, and more resilient energy network. In fact, Copper recently piloted a program in California to supply cheap, clean power to the grid from its home batteries at times when the grid would otherwise need to fire up a gas-powered plant to meet spiking electricity demand.

“After these appliances are installed, they become a grid asset,” Calisch says. “We can manage the fleet of batteries to help provide firm power and help grids deliver more clean electricity. We use that revenue, in turn, to further drive down the cost of electrification.”

Finding a mission

Calisch has been working on climate technologies his entire career. It all started at the clean technology incubator Otherlab, founded by Saul Griffith SM ’01, PhD ’04.

“That’s where I caught the bug for technology and product development for climate impact,” Calisch says. “But I realized I needed to up my game, so I went to grad school in [MIT Professor] Neil Gershenfeld’s lab, the Center for Bits and Atoms. I got to dabble in software engineering, mechanical engineering, electrical engineering, mathematical modeling, all with the lens of building and iterating quickly.”

Calisch stayed at MIT for his PhD, where he worked on approaches in manufacturing that used fewer materials and less energy. After finishing his PhD in 2019, Calisch helped start a nonprofit called Rewiring America focused on advocating for electrification. Through that work, he collaborated with U.S. Senate offices on the Inflation Reduction Act.

The cost of lithium-ion batteries has decreased by about 97 percent since their commercial debut in 1991. As more products have gone electric, the manufacturing process for everything from phones to drones, robots, and electric vehicles has converged around an electric tech stack of batteries, electric motors, power electronics, and chips. The countries that master the electric tech stack will be at a distinct manufacturing advantage.

Calisch started Copper to boost the supply chain for batteries while contributing to the electrification movement.

“Appliances can help deploy batteries, and batteries help deploy appliances,” Calisch says. “Appliances can also drive down the installed cost of batteries.”

The company is starting with the kitchen range because its peak power draw is among the highest in the home, and flattening that peak brings big benefits. Ranges are also meaningful: They are where people gather and cook each night, and people take pride in their kitchen ranges more than, say, a water heater.

Copper’s 30-inch induction range heats up more quickly and reaches more precise temperatures than its gas counterpart. Installing it is as easy as swapping a fridge or dishwasher. Thanks to its 5-kilowatt-hour battery, the range even works when the power goes out.
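A rough back-of-the-envelope calculation shows why a battery makes the swap possible. The burner power figure below is an illustrative assumption, not a Copper specification; the 120-volt outlet and 5-kilowatt-hour battery come from the article.

# Rough sanity check of battery buffering on a standard 120 V outlet (Python).
outlet_w = 120 * 15 * 0.8    # 120 V, 15 A circuit at the usual 80% continuous-load limit ≈ 1,440 W
cook_draw_w = 7_000          # assumed peak draw with several induction burners and the oven on
battery_wh = 5_000           # 5 kWh battery, per the article

deficit_w = cook_draw_w - outlet_w            # power the battery supplies while cooking at full blast
print(round(battery_wh / deficit_w, 1))       # ≈ 0.9 hours of all-burners-on cooking per charge
print(round(battery_wh / outlet_w, 1))        # ≈ 3.5 hours to refill from the wall between meals

In other words, the wall outlet trickle-charges the battery between meals, and the battery covers the short, intense bursts of cooking that would otherwise require a dedicated 240-volt circuit.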

“Batteries have become 10 times cheaper and are now both affordable and create tangible improvements in quality of life,” Calisch says. “It’s a new notion of climate impact that isn’t about turning down thermostats and suffering for the planet, it’s about adopting new technologies that are better.”

Scaling impact

Calisch says there’s no way for the U.S. to maintain resilient energy systems in the future without a lot of batteries. Because of power transmission and regulatory limitations, those batteries can’t all be located out on the grid.

“We see an analog to the internet,” Calisch says. “In order to deliver millions of times more information across the internet, we didn’t add millions of times more wires. We added local storage and caching across the network. That’s what increased throughput. We’re doing the same thing for the electric grid.”

This summer, Copper raised $28 million to scale up production to meet growing demand for its battery-equipped appliances. Copper is also working to license its technology to other appliance manufacturers to help speed the electric transition.

“These electric technologies have the potential to improve people’s lives and, as a byproduct, take us off of fossil fuels,” Calisch says. “We’re in the business of identifying points of friction for that transition. We are not an appliance company; we’re an energy company.”

Looking back, Calisch credits MIT with equipping him with the knowledge needed to run a technical business.

“My time at MIT gave me hands-on experience with a variety of engineering systems,” Calisch says. “I can talk to our embedded engineering team or electrical engineering team or mechanical engineering team and understand what they’re saying. That’s been enormously useful for running a company.”

He adds: “I also developed an expansive view of infrastructure at MIT, which has been instrumental in launching Copper and thinking about the electrical grid not just as wires on the street, but all of the loads in our buildings. It’s about making homes not just consumers of electricity, but participants in this broader network.”

Study reveals the role of geography in the opioid crisis

Thu, 10/30/2025 - 12:00am

The U.S. opioid crisis has varied in severity across the country, leading to extended debate about how and why it has spread.

Now, a study co-authored by MIT economists sheds new light on these dynamics, examining the role that geography has played in the crisis. The results show how state-level policies inadvertently contributed to the rise of opioid addiction, and how addiction itself is a central driver of the long-term problem.

The research analyzes data about people who moved within the U.S., as a way of addressing a leading question about the crisis: How much of the problem is attributable to local factors, and to what extent do people have individual characteristics making them prone to opioid problems?

“We find a very large role for place-based factors, but that doesn’t mean there aren’t person-based factors as well,” says MIT economist Amy Finkelstein, co-author of a new paper detailing the study’s findings. “As is usual, it’s rare to find an extreme answer, either one or the other.”

In scrutinizing the role of geography, the scholars developed new insights about the spread of the crisis in relation to the dynamics of addiction. The study concludes that laws restricting pain clinics, or “pill mills,” where opioids were often prescribed, reduced risky opioid use by 5 percent over the 2006-2019 study period. Due to the path of addiction, enacting those laws near the onset of the crisis, in the 1990s, could have reduced risky use by 30 percent over that same time.

“What we do find is that pill mill laws really matter,” says MIT PhD student Dean Li, a co-author of the paper. “The striking thing is that they mattered a lot, and a lot of the effect was through transitions into opioid addiction.”

The paper, “What Drives Risky Prescription Opioid Use: Evidence from Migration,” appears in the Quarterly Journal of Economics. The authors are Finkelstein, who is the John and Jennie S. MacDonald Professor of Economics; Matthew Gentzkow, a professor of economics at Stanford University; and Li, a PhD student in MIT’s Department of Economics.

The opioid crisis, as the scholars note in the paper, is one of the biggest U.S. health problems in recent memory. As of 2017, there were more than twice as many U.S. deaths from opioids as from homicide. There were also at least 10 times as many opioid deaths compared to the number of deaths from cocaine during the 1980s-era crack epidemic in the U.S.

Many accounts and analyses of the crisis have converged on the increase in medically prescribed opioids starting in the 1990s as a crucial part of the problem; this was in turn a function of aggressive marketing by pharmaceutical companies, among other things. But explanations of the crisis beyond that have tended to fracture. Some analyses emphasize the personal characteristics of those who fall into opioid use, such as a past history of substance use, mental health conditions, age, and more. Other analyses focus on place-based factors, including the propensity of area medical providers to prescribe opioids.

To conduct the study, the scholars examined data on prescription opioid use from adults in the Social Security Disability Insurance program from 2006 to 2019, covering about 3 million cases in all. They defined “risky” use as an average daily morphine-equivalent dose of more than 120 milligrams, which has been shown to increase drug dependence.
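For readers unfamiliar with the measure, a worked example helps: each prescription is converted to morphine milligram equivalents using a standard conversion factor, summed, and divided by the number of days in the period. The sketch below is illustrative only; the conversion factors are commonly cited values, and the data fields are hypothetical rather than the study's actual variables.

# Illustrative computation of average daily morphine-equivalent dose (Python).
MME_FACTOR = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0}  # commonly cited factors

def avg_daily_mme(prescriptions, days_in_period):
    # Each prescription is a (drug, mg_per_pill, pills_dispensed) tuple.
    total = sum(MME_FACTOR[drug] * mg * pills for drug, mg, pills in prescriptions)
    return total / days_in_period

rx = [("oxycodone", 30, 120), ("hydrocodone", 10, 60)]   # hypothetical 30-day fills
dose = avg_daily_mme(rx, days_in_period=30)              # (5,400 + 600) / 30 = 200 mg/day
print(dose, "-> risky" if dose > 120 else "-> not risky")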

By studying people who move, the scholars created a kind of natural experiment — Finkelstein has used this same method to examine questions about disparities in health care costs and longevity across the U.S. In this case, by focusing on the opioid consumption patterns of the same people as they lived in different places, the scholars can disentangle the extent to which place-based and personal factors drive usage.

Overall, the study found a somewhat greater role for place-based factors than for personal characteristics in accounting for the drivers of risky opioid use. To see the magnitude of place-based effects, consider someone moving to a state with a 3.5 percentage point higher rate of risky use — akin to moving from the state with the 10th lowest rate of risky use to the state with the 10th highest rate. On average, that person’s probability of risky opioid use would increase by a full percentage point in the first year, then by 0.3 percentage points in each subsequent year.
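Taken at face value, those yearly increments accumulate. A small worked example, under the simplifying assumption that the average effects simply add over time, gives a sense of the scale.

# Cumulative place effect after moving, assuming the average yearly increments add (Python).
def added_risk_pp(years_since_move):
    # +1.0 percentage point in the first year, +0.3 per subsequent year.
    return 0.0 if years_since_move < 1 else 1.0 + 0.3 * (years_since_move - 1)

for yr in (1, 3, 5, 10):
    print(yr, round(added_risk_pp(yr), 1))   # 1.0, 1.6, 2.2, 3.7 percentage points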

Some of the study’s key findings involve the precise mechanisms at work beneath these top-line numbers.

In the research, the scholars examine what they call the “addiction channel,” in which opioid users fall into addiction, and the “availability channel,” in which the already-addicted find ways to sustain their use. Over the 2006-2019 period, they find, people falling into addiction through new prescriptions had an impact on overall opioid uptake that was 2.5 times as large as that of existing users getting continued access to prescribed opioids.

When people who are not already risky users of opioids move to places with higher rates of risky opioid use, Finkelstein observes, “One thing you can see very clearly in the data is that in the addiction channel, there’s no immediate change in behavior, but gradually as they’re in this new place you see an increase in risky opioid use.”

She adds: “This is consistent with a model where people move to a new place, have a back problem or car accident and go to a hospital, and if the doctor is more likely to prescribe opioids, there’s more of a risk they’re going to become addicted.”

By contrast, Finkelstein says, “If we look at people who are already risky users of opioids and they move to a new place with higher rates of risky opioid use, you see there’s an immediate increase in their opioid use, which suggests it’s just more available. And then you also see the gradual increase indicating more addiction.”

By looking at state-level policies, the researchers found this trend to be particularly pronounced in over a dozen states that lagged in enacting restrictions on pain clinics, or “pill mills,” where providers had more latitude to prescribe opioids.

In this way the research does not just evaluate the impact of place versus personal characteristics; it quantifies the problem of addiction as an additional dimension of the issue. While many analyses have sought to explain why people first use opioids, the current study reinforces the importance of preventing the onset of addiction, especially because addicted users may later seek out nonprescription opioids, exacerbating the problem even further.

“The persistence of addiction is a huge problem,” Li says. “Even after the role of prescription opioids has subsided, the opioid crisis persists. And we think this is related to the persistence of addiction. Once you have this set in, it’s so much harder to change, compared to stopping the onset of addiction in the first place.”

Research support was provided by the National Institute on Aging, the Social Security Administration, and the Stanford Institute for Economic Policy Research.

Injectable antenna could safely power deep-tissue medical implants

Wed, 10/29/2025 - 5:00pm

Researchers from the MIT Media Lab have developed an antenna — about the size of a fine grain of sand — that can be injected into the body to wirelessly power deep-tissue medical implants, such as pacemakers in cardiac patients and neuromodulators in people suffering from epilepsy or Parkinson’s disease.

“This is the next major step in miniaturizing deep-tissue implants,” says Baju Joy, a PhD student in the Media Lab’s Nano-Cybernetic Biotrek research group. “It enables battery-free implants that can be placed with a needle, instead of major surgery.”

A paper detailing this work was published in the October issue of IEEE Transactions on Antennas and Propagation. Joy is joined on the paper by lead author Yubin Cai, a PhD student at the Media Lab; Benoît X. E. Desbiolles and Viktor Schell, former MIT postdocs; Shubham Yadav, an MIT PhD student in media arts and sciences; David C. Bono, an instructor in the MIT Department of Materials Science and Engineering; and senior author Deblina Sarkar, the AT&T Career Development Associate Professor at the Media Lab and head of the Nano-Cybernetic Biotrek group.

Deep-tissue implants are currently powered either with a several-centimeters-long battery that is surgically implanted in the body, requiring periodic replacement, or with a surgically placed magnetic coil, also of a centimeter-scale size, that can harvest power wirelessly. The coil method functions only at high frequencies, which can cause tissue heating, limiting how much power can be safely delivered to the implant when miniaturized to sub-millimeter sizes.

“After that limit, you start damaging the cells,” says Joy.

As the team states in its IEEE Transactions on Antennas and Propagation paper, “developing an antenna at ultra-small dimensions (less than 500 micrometers) which can operate efficiently in the low-frequency band is challenging.”

The 200-micrometer antenna — developed through research led by Sarkar — operates at low frequencies (109 kHz) thanks to a novel technology in which a magnetostrictive film, which deforms when a magnetic field is applied, is laminated with a piezoelectric film, which converts deformation to electric charge. When an alternating magnetic field is applied, magnetic domains within the magnetostrictive film contort it in the same way that a piece of fabric interwoven with pieces of metal would contort if subjected to a strong magnet. The mechanical strain in the magnetostrictive layer causes the piezoelectric layer to generate electric charges across electrodes placed above and below.

“We are leveraging this mechanical vibration to convert the magnetic field to an electric field,” Joy says.

Sarkar says the newly developed antenna delivers four to five orders of magnitude more power than implantable antennas of similar size that rely on metallic coils and operate in the GHz frequency range.

“Our technology has the potential to introduce a new avenue for minimally invasive bioelectric devices that can operate wirelessly deep within the human body,” she says.

The magnetic field that activates the antenna is provided by a device similar to a rechargeable wireless cell phone charger, and is small enough to be applied to the skin as a stick-on patch or slipped into a pocket close to the skin surface.

Because the antenna is fabricated with the same technology as a microchip, it can be easily integrated with already-existing microelectronics.

“These electronics and electrodes can be easily made to be much smaller than the antenna itself, and they would be integrated with the antenna during nanofabrication,” Joy says, adding that the researchers’ work leverages 50 years of research and development applied to making transistors and other electronics smaller and smaller. “The other components can be tiny, and the entire system can be placed with a needle injection.”

Manufacture of the antennas could be easily scaled up, the researchers say, and multiple antennas and implants could be injected to treat large areas of the body.

Another possible application of this antenna, in addition to pacemaking and neuromodulation, is glucose sensing in the body. Circuits with optical sensors for detecting glucose already exist, but they would benefit greatly from a wireless power supply that can be integrated noninvasively inside the body.

“That’s just one example,” Joy says. “We can leverage all these other techniques that are also developed using the same fabrication methods, and then just integrate them easily to the antenna.”

Burning things to make things

Wed, 10/29/2025 - 4:35pm

Around 80 percent of global energy production today comes from the combustion of fossil fuels. Combustion, or the process of converting stored chemical energy into thermal energy through burning, is vital for a variety of common activities including electricity generation, transportation, and domestic uses like heating and cooking — but it also yields a host of environmental consequences, contributing to air pollution and greenhouse gas emissions.

Sili Deng, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT, is leading research to drive the transition from the heavy dependence on fossil fuels to renewable energy with storage.

“I was first introduced to flame synthesis in my junior year in college,” Deng says. “I realized you can actually burn things to make things, [and] that was really fascinating.”

Deng says she ultimately picked combustion as a focus of her work because she likes the intellectual challenge the concept offers. “In combustion you have chemistry, and you have fluid mechanics. Each subject is very rich in science. This also has very strong engineering implications and applications.”

Deng’s research group targets three areas: building up fundamental knowledge on combustion processes and emissions; developing alternative fuels and metal combustion to replace fossil fuels; and synthesizing flame-based materials for catalysis and energy storage, which can bring down the cost of manufacturing battery materials.

One focus of the team has been on low-cost, low-emission manufacturing of cathode materials for lithium-ion batteries. Lithium-ion batteries play an increasingly critical role in transportation electrification (e.g., batteries for electric vehicles) and grid energy storage for electricity that is generated from renewable energy sources like wind and solar. Deng’s team has developed a technology they call flame-assisted spray pyrolysis, or FASP, which can help reduce the high manufacturing costs associated with cathode materials.

FASP is based on flame synthesis, a technology that dates back nearly 3,000 years. In ancient China, this was the primary way black ink materials were made. “[People burned] vegetables or woods, such that afterwards they can collect the solidified smoke,” Deng explains. “For our battery applications, we can try to fit in the same formula, but of course with new tweaks.”

The team is also interested in developing alternative fuels, including looking at the use of metals like aluminum to power rockets. “We’re interested in utilizing aluminum as a fuel for civil applications,” Deng says, because aluminum is abundant in the earth, cheap, and available globally. “What we are trying to do is to understand [aluminum combustion] and be able to tailor its ignition and propagation properties.”

Among other accolades, Deng is a 2025 recipient of the Hiroshi Tsuji Early Career Researcher Award from the Combustion Institute, an award that recognizes excellence in fundamental or applied combustion science research.

Study: Identifying kids who need help learning to read isn’t as easy as A, B, C

Wed, 10/29/2025 - 11:45am

In most states, schools are required to screen students as they enter kindergarten — a process that is meant to identify students who may need extra help learning to read. However, a new study by MIT researchers suggests that these screenings may not be working as intended in all schools.

The researchers’ survey of about 250 teachers found that many felt they did not receive adequate training to perform the tests, and about half reported that they were not confident that children who need extra instruction in reading end up receiving it.

When performed successfully, these screens can be essential tools to make sure children get the extra help they need to learn to read. However, the new findings suggest that many school districts may need to tweak how they implement the screenings and analyze the results, the researchers say.

“This result demonstrates the need to have a systematic approach for how the basic science on how children learn to read is translated into educational opportunity,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli is the senior author of the new open-access study, which appears today in Annals of Dyslexia. Ola Ozernov-Palchik, an MIT research scientist who is also a research assistant professor at Boston University Wheelock College of Education and Human Development, is the lead author of the study.

Boosting literacy

Over the past 20 years, national reading proficiency scores in the United States have trended up, but only slightly. In 2022, 33 percent of fourth-graders achieved reading proficiency, compared to 29 percent in 1992, according to the National Assessment of Educational Progress reading report card. (The highest level achieved in the past 20 years was 37 percent, in 2017.)

In hopes of boosting those rates, most states have passed laws requiring students to be screened for potential reading struggles early in elementary school. In most cases, the screenings are required two or three times per year, in kindergarten, first grade, and second grade.

These tests are designed to identify students who have difficulty with skills such as identifying letters and the sounds they make, blending sounds to make words, and recognizing words that rhyme. Students with low scores in these measures can then be offered extra interventions designed to help them catch up.

“The indicators of future reading disability or dyslexia are present as early as within the first few months of kindergarten,” Ozernov-Palchik says. “And there’s also an overwhelming body of evidence showing that interventions are most effective in the earliest grades.”

In the new study, the researchers wanted to evaluate how effectively these screenings are being implemented in schools. With help from the National Center for Improving Literacy, they posted on social media sites seeking classroom teachers and reading specialists who are responsible for administering literacy screening tests.

The survey respondents came from 39 states and represented public and private schools, located in urban, suburban, and rural areas. The researchers asked those teachers dozens of questions about their experience with the literacy screenings, including questions about their training, the testing process itself, and the results of the screenings.

One of the significant challenges reported by the respondents was a lack of training. About 75 percent reported that they received fewer than three hours of training on how to perform the screens, and 44 percent received no training at all or less than an hour of training.

“Under ideal conditions, there is an expert who trains the educators, they provide practice opportunities, they provide feedback, and they observe the educators administer the assessment,” Ozernov-Palchik says. “None of this was done in many of the cases.”

Instead, many educators reported that they spent their own time figuring out how to give the evaluations, sometimes working with colleagues. And, new hires who arrived at a school after the initial training was given were often left on their own to figure it out.

Another major challenge was suboptimal conditions for administering the tests. About 80 percent of teachers reported interruptions during the screenings, and 40 percent had to do the screens in noisy locations such as a school hallway. More than half of the teachers also reported technical difficulties in administering the tests, and that rate was higher among teachers who worked at schools with a higher percentage of students from low socioeconomic status (SES) backgrounds.

Teachers also reported difficulties when it came to evaluating students categorized as English language learners (ELL). Many teachers relayed that they hadn’t been trained on how to distinguish students who were having trouble reading from those who struggled on the tests because they didn’t speak English well.

“The study reveals that there’s a lot of difficulty understanding how to handle English language learners in the context of screening,” Ozernov-Palchik says. “Overall, those kids tend to be either over-identified or under-identified as needing help, but they’re not getting the support that they need.”

Unrealized potential

Most concerning, the researchers say, is that in many schools, the results of the screening tests are not being used to get students the extra help that they need. Only 44 percent of the teachers surveyed said that their schools had a formal process for creating intervention plans for students after the screening was performed.

“Even though most educators said they believe that screening is important to do, they’re not feeling that it has the potential to drive change the way that it’s currently implemented,” Ozernov-Palchik says.

In the study, the researchers recommended several steps that state legislatures or individual school districts can take to make the screening process run more smoothly and successfully.

“Implementation is the key here,” Ozernov-Palchik says. “Teachers need more support and professional development. There needs to be systematic support as they administer the screening. They need to have designated spaces for screening, and explicit instruction in how to handle children who are English language learners.”

The researchers also recommend that school districts train an individual to take charge of interpreting the screening results and analyzing the data, to make sure that the screenings are leading to improved success in reading.

In addition to advocating for those changes, the researchers are working on a technology platform that uses artificial intelligence to provide more individualized reading instruction, which could give students targeted support in the areas where they struggle the most.

The research was funded by Schmidt Futures, the Chan Zuckerberg Initiative for the Reach Every Reader project, and the Halis Family Foundation.

This is your brain without sleep

Wed, 10/29/2025 - 6:00am

Nearly everyone has experienced it: After a night of poor sleep, you don’t feel as alert as you should. Your brain might seem foggy, and your mind drifts off when you should be paying attention.

A new study from MIT reveals what happens inside the brain as these momentary failures of attention occur. The scientists found that during these lapses, a wave of cerebrospinal fluid (CSF) flows out of the brain — a process that typically occurs during sleep and helps to wash away waste products that have built up during the day. This flushing is believed to be necessary for maintaining a healthy, normally functioning brain.

When a person is sleep-deprived, it appears that their body attempts to catch up on this cleansing process by initiating pulses of CSF flow. However, this comes at a cost of dramatically impaired attention.

“If you don’t sleep, the CSF waves start to intrude into wakefulness where normally you wouldn’t see them. However, they come with an attentional tradeoff, where attention fails during the moments that you have this wave of fluid flow,” says Laura Lewis, the Athinoula A. Martinos Associate Professor of Electrical Engineering and Computer Science, a member of MIT’s Institute for Medical Engineering and Science and the Research Laboratory of Electronics, and an associate member of the Picower Institute for Learning and Memory.

Lewis is the senior author of the study, which appears today in Nature Neuroscience. MIT visiting graduate student Zinong Yang is the lead author of the paper.

Flushing the brain

Although sleep is a critical biological process, it’s not known exactly why it is so important. It appears to be essential for maintaining alertness, and it has been well-documented that sleep deprivation leads to impairments of attention and other cognitive functions.

During sleep, the cerebrospinal fluid that cushions the brain helps to remove waste that has built up during the day. In a 2019 study, Lewis and colleagues showed that CSF flow during sleep follows a rhythmic pattern in and out of the brain, and that these flows are linked to changes in brain waves during sleep.

That finding led Lewis to wonder what might happen to CSF flow after sleep deprivation. To explore that question, she and her colleagues recruited 26 volunteers who were tested twice — once following a night of sleep deprivation in the lab, and once when they were well-rested.

In the morning, the researchers monitored several different measures of brain and body function as the participants performed a task that is commonly used to evaluate the effects of sleep deprivation.

During the task, each participant wore an electroencephalogram (EEG) cap that could record brain waves while they were also in a functional magnetic resonance imaging (fMRI) scanner. The researchers used a modified version of fMRI that allowed them to measure not only blood oxygenation in the brain, but also the flow of CSF in and out of the brain. They also measured each subject’s heart rate, breathing rate, and pupil diameter.

The participants performed two attentional tasks while in the fMRI scanner, one visual and one auditory. For the visual task, they looked at a screen displaying a fixation cross. At random intervals, the cross would turn into a square, and the participants were told to press a button whenever they saw this happen. For the auditory task, they pressed the button in response to a beep rather than a visual change.

Sleep-deprived participants performed much worse than well-rested participants on these tasks, as expected. Their response times were slower, and for some of the stimuli, the participants never registered the change at all.
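
To make the behavioral comparison concrete: each session boils down to two simple numbers, the average reaction time on trials where the participant responded and the fraction of stimuli that were missed entirely. The short Python sketch below is a hypothetical illustration of that bookkeeping only; the function name and the response-time values are invented for the example and are not the study’s data or analysis code.

```python
# Illustrative sketch (not the study's actual analysis): summarize one session of
# a button-press task, where None marks a stimulus the participant never registered.

def summarize_performance(response_times):
    """Return (mean reaction time on hits, miss rate) for one session."""
    hits = [rt for rt in response_times if rt is not None]
    misses = len(response_times) - len(hits)
    mean_rt = sum(hits) / len(hits) if hits else float("nan")
    miss_rate = misses / len(response_times) if response_times else float("nan")
    return mean_rt, miss_rate

# Hypothetical sessions, response times in seconds.
well_rested = [0.31, 0.29, 0.35, 0.33, 0.30]
sleep_deprived = [0.42, None, 0.55, None, 0.48]

print(summarize_performance(well_rested))     # faster responses, no misses
print(summarize_performance(sleep_deprived))  # slower responses, two misses
```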

During these momentary lapses of attention, the researchers identified several physiological changes that occurred at the same time. Most significantly, they found a flux of CSF out of the brain just as those lapses occurred. After each lapse, CSF flowed back into the brain.

“The results are suggesting that at the moment that attention fails, this fluid is actually being expelled outward away from the brain. And when attention recovers, it’s drawn back in,” Lewis says.

The researchers hypothesize that when the brain is sleep-deprived, it begins to compensate for the loss of the cleansing that normally occurs during sleep, even though these pulses of CSF flow come with the cost of attention loss.

“One way to think about those events is because your brain is so in need of sleep, it tries its best to enter into a sleep-like state to restore some cognitive functions,” Yang says. “Your brain’s fluid system is trying to restore function by pushing the brain to iterate between high-attention and high-flow states.”

A unified circuit

The researchers also found several other physiological events linked to attentional lapses, including decreases in breathing and heart rate, along with constriction of the pupils. They found that pupil constriction began about 12 seconds before CSF flowed out of the brain, and pupils dilated again after the attentional lapse.

“What’s interesting is it seems like this isn’t just a phenomenon in the brain, it’s also a body-wide event. It suggests that there’s a tight coordination of these systems, where when your attention fails, you might feel it perceptually and psychologically, but it’s also reflecting an event that’s happening throughout the brain and body,” Lewis says.

This close linkage between disparate events may indicate that there is a single circuit that controls both attention and bodily functions such as fluid flow, heart rate, and arousal, according to the researchers.

“These results suggest to us that there’s a unified circuit that’s governing both what we think of as very high-level functions of the brain — our attention, our ability to perceive and respond to the world — and then also really basic fundamental physiological processes like fluid dynamics of the brain, brain-wide blood flow, and blood vessel constriction,” Lewis says.

In this study, the researchers did not explore what circuit might be controlling this switching, but one good candidate, they say, is the noradrenergic system. Recent research has shown that this system, which regulates many cognitive and bodily functions through the neurotransmitter norepinephrine, oscillates during normal sleep.

The research was funded by the National Institutes of Health, a National Defense Science and Engineering Graduate Research Fellowship, a NAWA Fellowship, a McKnight Scholar Award, a Sloan Fellowship, a Pew Biomedical Scholar Award, a One Mind Rising Star Award, and the Simons Collaboration on Plasticity in the Aging Brain.

New method could improve manufacturing of gene-therapy drugs

Tue, 10/28/2025 - 4:50pm

Some of the most expensive drugs currently in use are gene therapies to treat specific diseases, and their high cost limits their availability for those who need them. Part of the reason for the cost is that the manufacturing process yields as much as 90 percent non-active material, and separating out these useless parts is slow, leads to significant losses, and is not well adapted to large-scale production. Separation accounts for almost 70 percent of the total gene therapy manufacturing cost. But now, researchers at MIT’s Department of Chemical Engineering and Center for Biomedical Innovation have found a way to greatly improve that separation process.

The findings are described in the journal ACS Nano, in a paper by MIT Research Scientist Vivekananda Bal, Edward R. Gilliland Professor Richard Braatz, and five others.

“Since 2017, there have been around 10,000 clinical trials of gene therapy drugs,” Bal says. Of those, about 60 percent are based on adeno-associated virus, which is used as a carrier for the modified gene or genes. These viruses consist of a shell structure, known as a capsid, that protects the genetic material within, but the production systems used to manufacture these drugs tend to produce large quantities of empty capsids with no genetic material inside.

These empty capsids, which can make up anywhere from half to 90 percent of the yield, are useless therapeutically, and in fact can be counterproductive because they can add to any immune reaction in the patient without providing any benefit. They must be removed prior to formulation as part of the manufacturing process. The existing purification processes are not scalable; they involve multiple stages, have long processing times, and incur high product losses and high costs.

Separating full from empty capsids is complicated by the fact that in almost every way, they appear nearly identical. “They both have similar structure, the same protein sequences,” Bal says. “They also have similar molecular weight, and similar density.” Given the similarity, it’s extremely challenging to separate them. “How do you come up with a method?”

Most systems presently use a method based on chromatography, in which the mixture passes through a column of adsorbent material; slight differences in their properties cause full and empty capsids to pass through at different rates, so that they can be separated out. Because the differences are so slight, the process requires multiple rounds of processing, in addition to filtration steps, adding to the time and cost. The method is also inefficient, wasting up to 30 or 40 percent of the product, Bal says. And the resulting product is still only about two-thirds pure, with a third of inactive material remaining.

There is another purification method that is widely used in the small molecule pharmaceutical industry, which uses a preferential crystallization process instead of chromatography, but this method had not been tried for protein purification — specifically, capsid-based drugs — before. Bal decided to try it, since with this method “its operating time is low and the product loss is also very low, and the purity achieved is very, very high because of the high selectivity,” he says. The method separates out empty from full capsids in the solution, as well as separating out cell debris and other useless material, all in one step, without requiring the significant pre-processing and post-processing steps needed by the other methods.

“The time required for purification using the crystallization method is around four hours, compared to that required for the chromatography method, which is about 37 to 40 hours,” he says. “So basically, it is about 10 times more effective in terms of operating time.” This novel method will reduce the cost of gene therapy drugs by a factor of five to 10, he says.
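
For readers who want to check the quoted figures, the arithmetic is simple: dividing a 37-to-40-hour chromatography run by a roughly four-hour crystallization run gives a factor of about nine to 10, matching the “about 10 times” estimate above. A minimal sketch, using only the numbers quoted in this article:

```python
# Back-of-the-envelope check of the operating-time speedup quoted above
# (illustrative only; the hours come from the article's quotes, not new data).
crystallization_hours = 4
chromatography_hours = (37, 40)

speedups = [t / crystallization_hours for t in chromatography_hours]
print(f"Speedup in operating time: {speedups[0]:.2f}x to {speedups[1]:.2f}x")
# -> about 9x to 10x
```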

The method relies on a very slight difference in the electrical potential of the full versus empty capsids. DNA molecules have a slight negative charge, whereas the surface of the capsids has a positive charge. “Because of that, the overall charge density distribution of the full capsids will be different from that of the empty capsids,” he says. That difference leads to a difference in the crystallization rates, which can be used to create conditions that favor the crystallization of the full capsids while leaving the empty ones behind.

Tests proved the effectiveness of the method, which can be easily adapted to large-scale pharmaceutical manufacturing processes, he says. The team has applied for a patent through MIT’s Technology Licensing Office, and is already in discussions with a number of pharmaceutical companies about beginning trials of the system, which could lead to the system becoming commercialized within a couple of years, Bal says.

“They’re basically collaborating,” he says of the companies. “They’re transferring their samples for a trial with our method,” and ultimately the process will either be licensed to a company, or form the basis of a new startup company, he says.

In addition to Bal and Braatz, the research team also included Jacqueline Wolfrum, Paul Barone, Stacy Springs, Anthony Sinskey, and Robert Kotin, all of MIT’s Center for Biomedical Innovation. The work was supported by the Massachusetts Life Sciences Center, Sanofi S.A., Sartorius AG, Artemis Life Sciences, and the U.S. Food and Drug Administration.

The joy of life (sciences)

Tue, 10/28/2025 - 4:30pm

For almost 30 years, Mary Gallagher has supported award-winning faculty members and their labs in the same way she tends the soil beneath her garden. In both, she pairs diligence and experience with a delight in the way that interconnected ecosystems contribute to the growth of a plant, or an idea, seeded in the right place.

Gallagher, a senior administrative assistant in the Department of Biology, has spent much of her career at MIT. Her mastery in navigating the myriad tasks required by administrators, and her ability to build connections, have supported and elevated everyone she interacts with, at the Institute and beyond.

Oh, the people you’ll know

Gallagher didn’t start her career at MIT. Her first role following graduation from the University of Vermont in the early 1980s was at a nearby community arts center, where she worked alongside a man who would become a household name in American politics. 

“This guy had just been elected mayor, shockingly, of Burlington, Vermont, by under 100 votes, unseating the incumbent. He went in and created this arts council and youth office,” Gallagher recalls.

That political newcomer was none other than a young Bernie Sanders, now the longest-serving independent senator in U.S. congressional history. 

Gallagher arrived at MIT in 1996, becoming an administrative assistant (aka “lab admin”) in what was then called the MIT Energy Laboratory. Shortly after her arrival, Cecil and Ida Green Professor of Physics and Engineering Systems Ernest Moniz transformed the laboratory into the MIT Energy Initiative (MITEI).

Gallagher quickly learned how versatile the work of an administrator can be. As MITEI rapidly grew, she interacted with people across campus and a vast array of disciplines, including mechanical engineering, political science, and economics.

“Admin jobs at MIT are really crazy because of the depth of work that we’re willing to do to support the institution. I was hired to do secretarial work, and next thing I know, I was traveling all the time, and planning a five-day, 5,000-person event down in D.C.,” Gallagher says. “I developed crazy computer and event-planner skills.”

Although such tasks may seem daunting to some, Gallagher has been thrilled with the opportunities she’s had to meet so many people and develop so many new skills. As a lab admin in MITEI for 18 years, she mastered navigating MIT administration, lab finances, and technical support. When Moniz left MITEI to lead the U.S. Department of Energy under President Obama, she moved to the Department of Biology at MIT.

Mutual thriving

Over the years, Gallagher has fostered the growth of students and colleagues at MIT, and vice versa. 

Friend and former colleague Samantha Farrell recalls her first days at MITEI as a rather nervous and very "green" temp, when Gallagher offered her an excellent cappuccino from her new Nespresso coffee machine.

“I treasure her friendship and knowledge,” Farrell says. “She taught me everything I needed to know about being an admin and working in research.”

Gallagher’s experience has also set faculty across the Institute up for success. 

According to one principal investigator she currently supports, Novartis Professor of Biology Leonard Guarente, Gallagher is “extremely impactful and, in short, an ideal administrative assistant."

Similarly, professor of biology Daniel Lew is grateful that her extensive MIT experience was available as he moved his lab to the Institute in recent years. “Mary was invaluable in setting up and running the lab, teaching at MIT, and organizing meetings and workshops,” Lew says. “She is a font of knowledge about MIT.”

A willingness to share knowledge, resources, and sometimes a cappuccino is just as critical as a willingness to learn, especially at a teaching institution like MIT. So it goes without saying that the students at MIT have left their mark on Gallagher in turn — including teaching her how to format a digital table of contents on her very first day at MIT.

“Working with undergrads and grad students is my favorite part of MIT. Their generosity leaves me breathless,” says Gallagher. “No matter how busy they are, they’re always willing to help another person.” 

Campus community

Gallagher cites the decline in community following the Covid-19 pandemic shutdown as one of her most significant challenges. 

Prior to Covid, Gallagher says, “MIT had this great sense of community. Everyone had projects, volunteered, and engaged. The campus was buzzing, it was a hoot!” 

She nurtured that community, from active participation in the MIT Women’s League to organizing an award-winning relaunch of Artist Behind the Desk. This subgroup of the MIT Working Group for Support Staff Issues hosted lunchtime recitals and visual art shows to bring together staff artists around campus, for which the group received a 2005 MIT Excellence Award for Creating Connections.

Moreover, Gallagher is an integral part of the smaller communities within the labs she supports.

Professor of biology and American Cancer Society Professor Graham Walker, yet another Department of Biology faculty member Gallagher supports, says, “Mary’s personal warmth and constant smile has lit up my lab for many years, and we are all grateful to have her as such a good colleague and friend.”

She strives to restore the sense of community that the campus used to have, but recognizes that striving for bygone days is futile.

“You can never go back in time and make the future what it was in the past,” she says. “You have to reimagine how we can make ourselves special in a new way.”

Spreading her roots

Gallagher’s life has been inextricably shaped by the Institute, and MIT, in turn, would not be what it is if not for Gallagher’s willingness to share her wisdom on the complexities of administration alongside the “joie de vivre” of her garden’s butterflies.

She recently bought a home in rural New Hampshire, trading the buzzing crowds of campus for the buzzing of local honeybees. Her work ethic is reflected in her ongoing commitment to curiosity, through reading about native plant life and documenting pollinating insects as they wander about her flowers. 

Just as she can admire each bug and flower for the role it plays in the larger system, Gallagher has participated in and contributed to a culture of appreciating the role of every individual within the whole.

“At MIT’s core, they believe that everybody brings something to the table,” she says. “I wouldn’t be who I am if I didn’t work at MIT and meet all these people.”
