MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Fraternities and sororities at MIT raise funds for local charities

Fri, 12/12/2025 - 4:15pm

Throughout campus and across the river in Boston and Brookline, MIT hosts a vibrant network of 43 fraternities and sororities, with more than 35 percent of undergraduate students belonging to one of these value-based communities. Each fraternity and sorority is a unique community that not only fosters leadership and builds lifelong friendships, but also takes its role in giving back seriously.

Keeping up a 143-year-long tradition of philanthropy, several fraternities and sororities raised funds for a variety of local charities this fall, including the Breast Cancer Research Foundation, Boston Area Rape Crisis Center, and Dignity Matters of Boston.

With donations still coming in, Liz Jason, associate dean of Fraternities, Sororities and Independent Living Groups (FSILG) at MIT, says, “Philanthropy is a defining tradition within our FSILG community; it’s where values become action. When chapters give back, they strengthen their bonds, uplift others, and demonstrate what it truly means to be part of MIT: using talent, passion, and collective effort to make a real difference.”

To raise money, the fraternities and sororities hosted a variety of fun, clever, and even unique events and challenges over the course of the fall semester.

Sorority Alpha Chi Omega held an event called Walk a Mile in Her Shoes, where participants donned heels for a relay race-style event to raise awareness of gender stereotypes, domestic violence, and sexual assault. They also held a bake sale at the event, with funds going to the Boston Area Rape Crisis Center.

The Interfraternity Council (IFC) hosted a Greek Carnival on Kresge Oval in October to benefit the Boston Area Rape Crisis Center and to raise awareness about sexual violence. They held a variety of games and activities, including a dunk tank, a bake sale, a tug-of-war competition, and other field-day games.

“In my own chapter, Delta Tau Delta, I’ve seen an interest in increasing our philanthropic efforts, and as a member of the IFC Executive Board, I realized we could take the initiative to reduce barriers to entry for all chapters through a single large fundraising event,” says senior Luc Gaitskell.

In mid-November, the MIT Panhellenic Association created an event in which members of the community donated clothing, and then Panhel used the clothing to set up a one-time thrift shop where community members could come buy second-hand clothes at discounted prices. All the money raised was donated to Dignity Matters.

“Service has always been at the heart of what MIT Panhel does,” says senior Sabrina Chen. “We chose to partner with Dignity Matters because their mission of helping individuals stay healthy and regain self-confidence resonates with our commitment to supporting women and advancing equity. Our thrift shop was a perfect way to raise money for the organization while encouraging affordable, sustainable fashion.”

Division of Student Life vice chancellor Suzy Nelson explains, “Our students are committed to a range of causes; their dedication reflects not only their generosity, but also the spirit of engaging the MIT community in giving back through philanthropy.”

Students interested in joining a fraternity, sorority, or an independent living group can find more information on the Division of Student Life website.

MIT HEALS leadership charts a bold path for convergence in health and life sciences

Fri, 12/12/2025 - 4:00pm

In February, President Sally Kornbluth announced the appointment of Professor Angela Koehler as faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS), with professors Iain Cheeseman and Katharina Ribbeck as associate directors. Since then, the leadership team has moved quickly to shape HEALS into an ambitious, community-wide platform for catalyzing research, translation, and education at MIT and beyond — at a moment when advances in computation, biology, and engineering are redefining what’s possible in health and the life sciences.

Rooted in MIT’s long-standing strengths in foundational discovery, convergence, and translational science, HEALS is designed to foster connections across disciplines — linking life scientists and engineers with clinicians, computational scientists, humanists, operations researchers, and designers. The initiative builds on a simple premise: that solving today’s most pressing challenges in health and life sciences requires bold thinking, deep collaboration, and sustained investment in people.

“HEALS is an opportunity to rethink how we support talent, unlock scientific ideas, and translate them into impact,” says Koehler, the Charles W. and Jennifer C. Johnson Professor in the Department of Biological Engineering and associate director of the Koch Institute for Integrative Cancer Research. “We’re building on MIT’s best traditions — convergence, experimentation, and entrepreneurship — while opening new channels for interdisciplinary research and community building.”

Koehler says her own path has been shaped by that same belief in convergence. Early collaborations between chemists, engineers, and clinicians convinced her that bringing diverse people together — what she calls “induced proximity” — can spark discoveries that wouldn’t emerge in isolation.

A culture of connection

Since stepping into their roles, the HEALS leadership team has focused on building a collaborative ecosystem that enables researchers to take on bold, interdisciplinary challenges in health and life sciences. Rather than creating a new center or department, their approach emphasizes connecting the MIT community across existing boundaries — disciplinary, institutional, and cultural.

“We want to fund science that wouldn’t otherwise happen — projects that bridge gaps, open new doors, and bring researchers together in ways that are genuinely constructive and collaborative,” says Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology, core member of the Whitehead Institute for Biomedical Research, and associate head of the Department of Biology.

That vision is already taking shape through initiatives like the MIT HEALS seed grants, which support bold new collaborations between MIT principal investigators; the MIT–Mass General Brigham Seed Program, which supports joint research between investigators at MIT and clinicians at MGB; and the Biswas Postdoctoral Fellowship Program, designed to bring top early-career researchers to MIT to pursue cross-cutting work in areas such as computational biology, biomedical engineering, and therapeutic discovery.

The leadership team sees these programs not as endpoints, but as starting points for a broader shift in how MIT supports health and life sciences research.

For Cheeseman, whose lab is working to build on its fundamental discoveries about how human cells function to impact the treatment of cancer and rare diseases, HEALS represents a way to connect deep biological discovery with the translational insights emerging from MIT’s engineering and clinical communities. He puts it simply: “To me, this is deeply personal, recognizing the limitations that existed for my own work and hoping to unlock these possibilities for researchers across MIT.”

Training the next generation

Ribbeck, a biologist focused on mucus and microbial ecosystems, sees HEALS as a way to train scientists who are as comfortable discussing patient needs as they are conducting experiments at the bench. She emphasizes that preparing the next generation of researchers means equipping them with fluency in areas like clinical language, regulatory processes, and translational pathways — skills many current investigators lack. “Many PIs, although they do clinical research, may not have dedicated support for taking their findings to the next level — how to design a clinical trial, or what regulatory questions need to be addressed — reflecting a broader structural gap in translational training,” she says.

A central focus for the HEALS leadership team is building new models for training researchers to move fluidly between disciplines, institutions, and methods of translation. Ribbeck and Koehler stress the importance of giving students and postdocs hands-on opportunities that connect research with real-world experience. That means expanding programs like the Undergraduate Research Opportunities Program (UROP), the Advanced UROP (SuperUROP), and the MIT New Engineering Education Transformation, and creating new ways for trainees to engage with industry, clinical partners, and entrepreneurship. They are learning at the intersection of engineering, biology, and medicine — and increasingly across disciplines that span economics, design, the social sciences, and the humanities, where students are already creating collaborations that do not yet have formal pathways. 

Koehler, drawing from her leadership at the Deshpande Center for Technological Innovation and the Koch Institute, notes that “if we invest in the people, the solutions to problems will naturally arise.” She envisions HEALS as a platform for induced proximity — not just of disciplines, but of people at different career stages, working together in environments that support both risk-taking and mentorship.

“For me, HEALS builds on what I’ve seen work at MIT — bringing people with different skill sets together to tackle challenges in life sciences and medicine,” she says. “It’s about putting community first and empowering the next generation to lead across disciplines.”

A platform for impact

Looking ahead, the HEALS leadership team envisions the collaborative as a durable platform for advancing health and life sciences at MIT. That includes launching flagship events, supporting high-risk, high-reward ideas, and developing partnerships across the biomedical ecosystem in Boston and beyond. As they see it, MIT is uniquely positioned for this moment: More than three-quarters of the Institute’s faculty work in areas that touch health and life sciences, giving HEALS a rare opportunity to bring that breadth together in new configurations and amplify impact across disciplines.

From the earliest conversations, the leaders have heard a clear message from faculty across MIT — a strong appetite for deeper connection, for working across boundaries, and for tackling urgent societal challenges together. That shared sense of momentum is what gave rise to HEALS, and it now drives the team’s focus on building the structures that can support a community that wants to collaborate at scale.

“Faculty across MIT are already reaching out — looking to connect with clinics, collaborate on new challenges, and co-create solutions,” says Koehler. “That hunger for connection is why HEALS was created. Now we have to build the structures that support it.”

Cheeseman adds that this collaborative model is what makes MIT uniquely positioned to lead. “When you bring together people from different fields who are motivated by impact,” he says, “you create the conditions for discoveries that none of us could achieve alone.”

Enabling small language models to solve complex reasoning tasks

Fri, 12/12/2025 - 3:30pm

As language models (LMs) improve at tasks like image generation, trivia questions, and simple math, you might think that human-like reasoning is around the corner. In reality, they still trail us by a wide margin on complex tasks. Try playing Sudoku with one, for instance, where you fill in numbers one through nine in such a way that each appears only once across the columns, rows, and sections of a nine-by-nine grid. Your AI opponent will either fail to fill in boxes on its own or do so inefficiently, although it can verify if you’ve filled yours out correctly.
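
That asymmetry — checking a completed grid is far easier than filling one in — can be made concrete with a short verifier. The sketch below is purely illustrative and is not part of the MIT work; it simply spells out the rule the puzzle imposes.

```python
# Minimal sketch (not from the MIT work): verifying a completed 9x9 Sudoku grid.
# Checking a finished solution is straightforward; producing one under the constraints is the hard part.

def is_valid_sudoku(grid):
    """Return True if every row, column, and 3x3 box contains 1 through 9 exactly once."""
    expected = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [{grid[r][c] for r in range(9)} for c in range(9)]
    boxes = [
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
        for br in range(0, 9, 3)
        for bc in range(0, 9, 3)
    ]
    return all(group == expected for group in rows + cols + boxes)
```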

Whether an LM is trying to solve advanced puzzles, design molecules, or write math proofs, the system struggles to answer open-ended requests that have strict rules to follow. The model is better at telling users how to approach these challenges than attempting them itself. Moreover, hands-on problem-solving requires LMs to consider a wide range of options while following constraints. Small LMs can’t do this reliably on their own; large language models (LLMs) sometimes can, particularly if they’re optimized for reasoning tasks, but they take a while to respond, and they use a lot of computing power.

This predicament led researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to develop a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries.

The inner workings of DisCIPL are much like contracting a company for a particular job. You provide a “boss” model with a request, and it carefully considers how to go about doing that project. Then, the LLM relays these instructions and guidelines in a clear way to smaller models. It corrects follower LMs’ outputs where needed — for example, replacing one model’s phrasing that doesn’t fit in a poem with a better option from another.

The LLM communicates with its followers using a language they all understand — that is, a programming language for controlling LMs called “LLaMPPL.” Developed by MIT's Probabilistic Computing Project in 2023, this language allows users to encode specific rules that steer a model toward a desired result. For example, LLaMPPL can be used to produce error-free code by incorporating the rules of a particular programming language within its instructions. Directions like “write eight lines of poetry where each line has exactly eight words” are encoded in LLaMPPL, cueing smaller models to contribute to different parts of the answer.
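
LLaMPPL’s actual interface is not reproduced here; the following is only a loose Python sketch of the underlying idea, with the hard rule encoded as a predicate and a made-up `toy_follower` function standing in for a small follower model. In DisCIPL itself, the constraints steer probabilistic, token-level generation rather than the simple propose-and-check loop shown here.

```python
# Loose illustration (not the LLaMPPL or DisCIPL API): a hard rule is encoded as a
# predicate, and proposals from a toy stand-in "follower" are kept only if they satisfy it.
import random

WORDS = "river stone light winter bird shadow ember glass moon salt".split()

def line_ok(line: str, words_per_line: int = 8) -> bool:
    """The hard rule for a single line of the poem."""
    return len(line.split()) == words_per_line

def toy_follower() -> str:
    """Toy stand-in for a small follower LM: proposes a line of 6-10 random words."""
    return " ".join(random.choice(WORDS) for _ in range(random.randint(6, 10)))

def constrained_poem(total_lines: int = 8, max_tries_per_line: int = 50) -> str:
    """Build the poem line by line, keeping only proposals that satisfy the rule."""
    lines = []
    while len(lines) < total_lines:
        for _ in range(max_tries_per_line):
            draft = toy_follower()
            if line_ok(draft):
                lines.append(draft)
                break
        else:
            raise RuntimeError("follower never satisfied the constraint")
    return "\n".join(lines)

print(constrained_poem())
```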

MIT PhD student Gabriel Grand, who is the lead author on a paper presenting this work, says that DisCIPL allows LMs to guide each other toward the best responses, which improves their overall efficiency. “We’re working toward improving LMs’ inference efficiency, particularly on the many modern applications of these models that involve generating outputs subject to constraints,” adds Grand, who is also a CSAIL researcher. “Language models are consuming more energy as people use them more, which means we need models that can provide accurate answers while using minimal computing power.”

“It's really exciting to see new alternatives to standard language model inference,” says University of California at Berkeley Assistant Professor Alane Suhr, who wasn’t involved in the research. “This work invites new approaches to language modeling and LLMs that significantly reduce inference latency via parallelization, require significantly fewer parameters than current LLMs, and even improve task performance over standard serialized inference. The work also presents opportunities to explore transparency, interpretability, and controllability of model outputs, which is still a huge open problem in the deployment of these technologies.”

An underdog story

You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results.

The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. It brainstormed a plan for several “Llama-3.2-1B” models (smaller systems developed by Meta), in which those LMs filled in each word (or token) of the response.

This collective approach competed against three comparable ones: a follower-only baseline powered by Llama-3.2-1B, GPT-4o working on its own, and the industry-leading o1 reasoning system that helps ChatGPT figure out more complex questions, such as coding requests and math problems.

DisCIPL first demonstrated an ability to write sentences and paragraphs that follow explicit rules. The models were given very specific prompts — for example, writing a sentence that has exactly 18 words, where the fourth word must be “Glasgow,” the eighth must be “in,” and the 11th must be “and.” The system was remarkably adept at handling this request, crafting coherent outputs with accuracy similar to o1’s.

Faster, cheaper, better

This experiment also revealed that key components of DisCIPL were much cheaper than state-of-the-art systems. For instance, whereas existing reasoning models like OpenAI’s o1 perform reasoning in text, DisCIPL “reasons” by writing Python code, which is more compact. In practice, the researchers found that DisCIPL led to 40.1 percent shorter reasoning and 80.2 percent cost savings over o1.

DisCIPL’s efficiency gains stem partly from using small Llama models as followers, which are 1,000 to 10,000 times cheaper per token than comparable reasoning models. This means that DisCIPL is more “scalable” — the researchers were able to run dozens of Llama models in parallel for a fraction of the cost.

Those weren’t the only surprising findings, according to CSAIL researchers. Their system also performed well against o1 on real-world tasks, such as making ingredient lists, planning out a travel itinerary, and writing grant proposals with word limits. Meanwhile, GPT-4o struggled with these requests, and with writing tests, it often couldn’t place keywords in the correct parts of sentences. The follower-only baseline essentially finished in last place across the board, as it had difficulties with following instructions.

“Over the last several years, we’ve seen some impressive results from approaches that use language models to ‘auto-formalize’ problems in math and robotics by representing them with code,” says senior author Jacob Andreas, who is an MIT electrical engineering and computer science associate professor and CSAIL principal investigator. “What I find most exciting about this paper is the fact that we can now use LMs to auto-formalize text generation itself, enabling the same kinds of efficiency gains and guarantees that we’ve seen in these other domains.” 

In the future, the researchers plan to expand this framework into a fully recursive approach, where the same model can serve as both the leader and the followers. Grand adds that DisCIPL could be extended to mathematical reasoning tasks, where answers are harder to verify. They also intend to test the system on its ability to meet users’ fuzzy preferences, which, unlike hard constraints, can’t be outlined in code so explicitly. Thinking even bigger, the team hopes to use the largest possible models available, although they note that such experiments are computationally expensive.

Grand and Andreas wrote the paper alongside CSAIL principal investigator and MIT Professor Joshua Tenenbaum, as well as MIT Department of Brain and Cognitive Sciences Principal Research Scientist Vikash Mansinghka and Yale University Assistant Professor Alex Lew SM ’20 PhD ’25. CSAIL researchers presented the work at the Conference on Language Modeling in October and IVADO’s “Deploying Autonomous Agents: Lessons, Risks and Real-World Impact” workshop in November.

Their work was supported, in part, by the MIT Quest for Intelligence, Siegel Family Foundation, the MIT-IBM Watson AI Lab, a Sloan Research Fellowship, Intel, the Air Force Office of Scientific Research, the Defense Advanced Research Projects Agency, the Office of Naval Research, and the National Science Foundation.

New MIT program to train military leaders for the AI age

Fri, 12/12/2025 - 1:10pm

Artificial intelligence can enhance decision-making and enable action with reduced risk and greater precision, making it a critical tool for national security. A new program offered jointly by the MIT departments of Mechanical Engineering (Course 2, MechE) and Electrical Engineering and Computer Science (Course 6, EECS) will provide breadth and depth in technical studies for naval officers, as well as a path for non-naval officers studying at MIT to grow in their understanding of applied AI for naval and military applications.

“The potential for artificial intelligence is just starting to be fully realized. It’s a tool that dramatically improves speed, efficiency, and decision-making with countless applications,” says Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering. “AI is a force multiplier that can be used for data processing, decision support, unmanned and autonomous systems, cyber defense, logistics and supply chains, energy management, and many other fields.”

The program, called “2N6: Applied Artificial Intelligence Program for Naval Officers,” comprises a two-year master of science degree in mechanical engineering with an accompanying AI certificate awarded by the MIT Schwarzman College of Computing.

“The officers entering this program will learn from the world’s experts, and conduct cutting-edge relevant research, and will exit the program best prepared for their roles as leaders across the U.S. naval enterprise,” says MacLean.

The 2N6 curriculum is application focused, and the content is built to satisfy the U.S. Navy’s sub-specialty code for Applied Artificial Intelligence. Students will learn core AI concepts, as well as applications to special topics, such as decision-making for computational exercises; AI for manufacturing and design, with special emphasis on navy applications; and AI for marine autonomy of surface and underwater vehicles.

“The expanding influence of artificial intelligence is redefining our approach to problem-solving. AI holds the potential to address some of the most pressing issues in nearly every field,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I’m honored that the college can contribute to and support such a vital program that will equip our nation’s naval officers with the technical expertise they need for mission-relevant challenges.”

MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The 2N program will celebrate its 125th year at MIT in 2026.

“In MechE, we are embracing the use of AI to explore new frontiers in research and education, with deep grounding in the fundamentals, design, and scaling of physical systems,” says John Hart, the Class of 1922 Professor and head of MechE. “With the 2N6 program, we’re proud to be at the helm of such an important charge in training the next generation of leaders for the Navy.”

“Breakthroughs in artificial intelligence are reshaping society and advancing human decision-making and creativity,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, head of EECS, and MathWorks Professor. “We are delighted to partner with the Department of Mechanical Engineering in launching this important collaboration with the U.S. Navy. The program will explore not only the forefront of AI advances, but also its effective application in Navy operations.”

2N6 was created following a visit to campus from Admiral Samuel Paparo, commander of the U.S. Indo-Pacific Command, with MIT Provost Anantha Chandrakasan, who was then dean of engineering and chief innovation and strategy officer.

“[Admiral Paparo] was given an overview of some of the cutting-edge work and research that MIT has done and is doing in the field of AI, [and was introduced to the 2N program],” says MacLean. “The admiral made the connection, envisioning an applied AI program similar to 2N.”

2N6 will run as a pilot program for at least two years. The program’s first cohort will comprise only U.S. Navy officers, with plans to expand more broadly.

“We are thrilled to build on the long-standing relationship between MIT and the U.S. Navy with this new program,” says Themis Sapsis, William I. Koch Professor in mechanical engineering and the director of the Center for Ocean Engineering at MIT. “It is specifically designed to train naval officers on the fundamentals and applications of AI, but also involve them in research that has direct impact to the Navy. We believe that 2N6 can model a new paradigm for advanced AI education focused more broadly on supporting national security.”

A better DNA material for genetic medicine

Fri, 12/12/2025 - 12:00pm

To our immune system, a potentially lifesaving gene therapy can look a lot like a dangerous infection. That’s because most genetic medicine uses viruses or double-stranded DNA to deliver genetic information to target cells. DNA in its traditional double helix form can lead to toxic immune stimulation and be difficult to package into cellular delivery vehicles. As a result, the reach of genetic medicine is limited today.

Kano Therapeutics is taking a different approach to genetic therapies. The company is developing gene-editing technologies using circular single-stranded DNA (cssDNA), a biomolecule that is less toxic than double stranded DNA and more stable than RNA, and could be delivered more efficiently to many parts of the body to treat genetic diseases, cancers, and more.

The company, which was founded by former MIT postdoc Floris Engelhardt, professor of biological engineering Mark Bathe, and John Vroom MBA ’22, is developing a platform for manufacturing cssDNA of customized lengths and sequences, which could deliver genetic material to fix or replace faulty genes.

“We can work with CRISPR and other gene-editing technologies,” Engelhardt says. “CRISPR finds a location in a genome, binds to it, and cuts at that location. That allows you to edit a gene or stop a gene from functioning. But what if you have a loss-of-function disease where you need to insert a new piece of genetic code? Our approach allows you to replace whole genes or add genetic information.”

Making DNA flexible

Around 2019, Bathe’s lab published research describing ways to engineer the sequence and length of cssDNA molecules, which have been used in labs for decades but have increasingly drawn interest for improving gene therapies. Several pharmaceutical companies immediately reached out.

“Single-stranded DNA is a little like messenger RNA, which can code for any protein in any cell, tumor, or organ,” Bathe says. “It fundamentally encodes for a protein, so it can be used across diseases, including rare diseases that may only affect a few people in the country.”

Engelhardt had also worked on cssDNA as a PhD student in Munich. She met Bathe at a conference.

“We were considering collaborating on research,” Engelhardt recalls. “Then Mark heard I was finishing my PhD and said, ‘Wait a minute. Instead of collaborating, I should hire you.’”

Within 48 hours of submitting her PhD thesis, Engelhardt received an email asking her to apply to Bathe’s lab as a postdoc. She was drawn to the position because she would be focusing on research that had the potential to help patients.

“MIT is very good at creating industry-focused postdocs,” Engelhardt says. “I was inspired by the idea of doing postdoc work with the goal of spinning out a company, as opposed to doing solely academic-focused research.”

Bathe and Engelhardt learned from members of the pharmaceutical industry how single-stranded DNA could help overcome limitations in gene and cell therapies. Although CRISPR-based treatments have recently been approved for a few genetic diseases, CRISPR’s effectiveness has been limited by its potential toxicity and inefficient delivery to specific sites in the body. Also, those treatments can only be administered once because CRISPR often gets labeled as foreign by our immune systems and rejected from the body.

Engelhardt began exploring MIT’s resources to help commercialize her research. She met Vroom through an online “founder speed dating” event at MIT. She also received support from the Venture Mentoring Service, took classes at MIT’s Sloan School of Management, and worked with MIT’s Industrial Liaison Program. Early on, Bathe suggested Engelhardt work with MIT’s Technology License Office, something she says she tells every founder to do the moment they start thinking about commercializing their research.

In 2021, Kano won the $20,000 first place prize at the MIT Sloan Healthcare Innovation Prize (SHIP) to commercialize a new way to design and manufacture single-stranded DNA. Kano uses fermentation to produce its cssDNA less expensively than approaches based on chemical DNA synthesis.

“No one had the ability to access this type of genetic material, and so a lot of our work was around creating the highest-quality, economically scalable process to allow circular single-stranded DNA to be commercially viable,” Engelhardt says.

Engelhardt and Vroom began meeting with investors as soon as Engelhardt finished her postdoc work in 2021. The founders worked to raise money over the next year while Vroom finished his MBA.

Today, Kano’s circular ssDNA can be used to insert entire genes, up to 10,000 nucleotides long, into the body. Kano is planning to partner with pharmaceutical companies to make their gene therapies more targeted and potent. For instance, pharmaceutical partners could use Kano’s platform to join the CD19 and CD20 genes, which are expressed in certain tumor cells, and stipulate that only if both genes bind to a cell receptor do they enter that cell’s genome and make edits.

Overall, Engelhardt says working with circular single-stranded DNA makes Kano’s approach more flexible than platforms like CRISPR.

“We realized working with pharmaceutical companies early on in my postdoc there was a lack of design understanding because of the lack of access to these molecules,” Engelhardt says. “When it comes to gene or cell therapies, people just think of the gene itself, not the flanking sequences or anything else that goes around the gene. Now that the DNA isn’t stuck in a double helix all the time, I can create small, three-dimensional structures — think loops or hairpins — that work, for example, as a binding protein that pulls it into the nucleus. That unlocks a completely new path for DNA because it makes it engineerable — not only on a structural level but also a sequence level.”

Partnering for impact

To facilitate more partnerships, Kano is signing agreements with partners that give it a smaller percentage of eventual drug royalties but allow it to work with many companies at the same time. In a recent collaboration with Merck KGaA, Kano combined its cssDNA platform with the company’s lipid nanoparticle solutions for delivering gene therapies. Kano is also in discussions with other large pharmaceutical companies to jointly bring cancer drugs into the clinic over the next two years.

“That’s exciting because we’ll be implementing our DNA into partners’ drug system, so when they file their new drug and dose their first patients, our DNA is going to be the therapeutic information carrier for efficacy,” Engelhardt says. “As a first-time founder, this is where you want to go. We talk about patient impact all the time, and this is how we’re going to get it.”

Kano is also developing the first databank mapping cssDNA designs to activity, to speed up the development of new treatments.

“Right now, there is no understanding of how to design DNA for these therapies,” Engelhardt says. “Everyone who wants to differentiate needs to come up with a new editing tool, a new delivery tool, and there’s no connecting company that can enable those areas of expertise. When partners come to us, we can say, ‘The gene sequence is all yours.’ But often it’s not just about the sequence. It’s also about the promoter or flanking sequence that allows you to insert your DNA into the genome, or that makes DNA package well into your delivery nanoparticle. At Kano, we’re building the best knowledgebase to use DNA material to treat diseases.”

Making clean energy investments more successful

Fri, 12/12/2025 - 11:20am

Governments and companies constantly face decisions about how to allocate finite amounts of money to clean energy technologies that can make a difference to the world’s climate, its economies, and society as a whole. The process is inherently uncertain, but research has shown that it is possible to predict which technologies will be most successful, and basing such decisions on data can lead to better-informed choices that produce the desired results.

The role of these predictive tools, and the areas where further research is needed, are addressed in a perspective article published Nov. 24 in Nature Energy, by professor Jessika Trancik of MIT’s Sociotechnical Systems Research Center and Institute of Data, Systems, and Society and 13 co-authors from institutions around the world.

She and her co-authors span engineering and social science and share “a common interest in understanding how to best use data and models to inform decisions that influence how technology evolves,” Trancik says. They are interested in “analyzing many evolving technologies — rather than focusing on developing only one particular technology — to understand which ones can deliver.” Their paper is aimed at companies and governments, as well as researchers. “Increasingly, companies have as much agency as governments over these technology portfolio decisions,” she says, “although government policy can still do a lot because it can provide a sort of signal across the market.”

The study looked at three stages of the process, starting with forecasting the actual technological changes that are likely to play important roles in coming years, then looking at how those changes could affect economic, social, and environmental conditions, and finally, how to apply these insights into the actual decision-making processes as they occur.

Forecasting usually falls into two categories, either data-driven or expert-driven, or a combination of those. That provides an estimate of how technologies may be improving, as well as an estimate of the uncertainties in those predictions. Then in the next step, a variety of models are applied that are “very wide ranging,” Trancik says, “different models that cover energy systems, transportation systems, electricity, and also integrated assessment models that look at the impact of technology on the environment and on the economy.”

And then, the third step is “finding structured ways to use the information from predictive models to interact with people that may be using that information to inform their decision-making process,” she says. “In all three of these steps, you need to recognize the vast uncertainty and tease out the predictive aspects. How you deal with uncertainty is really important.”

In the implementation of these decisions, “people may have different objectives, or they may have the same objective but different beliefs about how to get there. And so, part of the research is bringing in this quantitative analysis, these research results, into that process,” Trancik says. And a very important aspect of that third step, she adds, is “recognizing that it’s not just about presenting the model results and saying, ‘here you go, this is the right answer.’ Rather, you have to bring people into the process of designing the studies and interacting with the modeling results.”

She adds that “the role of research is to provide information to, in this case, the decision-making processes. It’s not the role of the researchers to push for one outcome or another, in terms of balancing the trade-offs,” such as between economic, environmental, and social equity concerns. It’s about providing information, not just for the decision-makers themselves, but also for the public who may influence those decisions. “I do think it’s relevant for the public to think about this, and to think about the agency that actually they could have over how technology is evolving.”

In the study, the team highlighted priorities for further research that needs to be done. Those priorities, Trancik says, include “streamlining and validating models, and also streamlining data collection,” because these days “we often have more data than we need, just tons of data,” and yet “there’s often a scarcity of data in certain key areas like technology performance and evolution. How technologies evolve is just so important in influencing our daily lives, yet it’s hard sometimes to access good representative data on what’s actually happening with this technology.” But she sees opportunities for concerted efforts to assemble large, comprehensive data on technology from publicly available sources.

Trancik points out that many models are developed to represent some real-world process, and “it’s very important to test how well that model does against reality,” for example by using the model to “predict” some event whose outcome is already known and then “seeing how far off you are.” That’s easier to do with a more streamlined model, she says.

“It’s tempting to develop a model that includes many, many parameters and lots of different detail. But often what you need to do is only include detail that’s relevant for the particular question you’re asking, and that allows you to make your model simpler.” Sometimes that means you can simplify the decision down to just solving an equation, and other times, “you need to simulate things, but you can still validate the model against real-world data that you have.”
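
As a hedged illustration of that kind of validation — not an example from the paper, and with entirely made-up numbers — one could fit a simple power-law “experience curve” to early cost data and then check its predictions against later observations whose outcomes are already known.

```python
# Illustrative backtest (not from the paper): fit a simple power-law experience curve
# to early, hypothetical cost data, then "predict" later points whose values are known.
import numpy as np

# Hypothetical data: cumulative production (arbitrary units) and unit cost.
production = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
cost = np.array([100, 83, 70, 58, 49, 40, 34, 28], dtype=float)

train = slice(0, 5)    # the "past," used to fit the model
test = slice(5, None)  # the "future," held out to check the prediction

# Fit log(cost) = a + b * log(production) on the training window.
b, a = np.polyfit(np.log(production[train]), np.log(cost[train]), 1)
predicted = np.exp(a + b * np.log(production[test]))

percent_error = 100 * (predicted - cost[test]) / cost[test]
print("Predicted later costs:", np.round(predicted, 1))
print("Percent error vs. known outcomes:", np.round(percent_error, 1))
```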

“The scale of energy and climate problems means there is much more to do,” says Gregory Nemet, faculty chair in business and regulation at the University of Wisconsin at Madison, who was a co-author of the paper. He adds, “While we can’t accurately forecast individual technologies on their own, a variety of methods have been developed that in conjunction can enable decision-makers to make public dollars go much further, and enhance the likelihood that future investments create strong public benefits.”

This work is perhaps particularly relevant now, Trancik says, in helping to address global challenges including climate change and meeting energy demand, which were in focus at the global climate conference COP 30 that just took place in Brazil. “I think with big societal challenges like climate change, always a key question is, ‘how do you make progress with limited time and limited financial resources?’” This research, she stresses, “is all about that. It’s about using data, using knowledge that’s out there, expertise that’s out there, drawing out the relevant parts of all of that, to allow people and society to be more deliberate and successful about how they’re making decisions about investing in technology.”

As with other areas such as epidemiology, where the power of analytical forecasting may be more widely appreciated, she says, “in other areas of technology as well, there’s a lot we can do to anticipate where things are going, how technology is evolving at the global or at the national scale … There are these macro-level trends that you can steer in certain directions, that we actually have more agency over as a society than we might recognize.”

The study included researchers in Massachusetts, Wisconsin, Colorado, Maryland, Maine, California, Austria, Norway, Mexico, Finland, Italy, the U.K., and the Netherlands. 

President Tharman Shanmugaratnam of Singapore visits MIT

Fri, 12/12/2025 - 10:00am

President Tharman Shanmugaratnam of the Republic of Singapore visited MIT on Tuesday, meeting campus leaders while receiving the Miriam Pozen Prize and delivering a lecture on fiscal policy at the MIT Sloan School of Management.

“We really have to re-orient fiscal policy and develop new fiscal compacts,” said Tharman in his remarks, referring to the budget policy challenges countries face at a time of expanding government debt.

His talk, “The Compacts We Need: Fiscal Choices and Risk-sharing for Sustained Prosperity,” was delivered before a capacity audience of students, faculty, administrators, and staff at MIT’s Samberg Center.

Tharman is a trained economist who for many years ran Singapore’s central bank and has become a notable presence in global policymaking circles. Presenting a crisp summary of global trends, he observed that debt levels in major economies are at or beyond levels once regarded as unsustainable.

“There is no realistic solution to putting government debts back on a sustainable path other than having to make major adjustments to taxes and spending,” he said. However, he emphasized that his remarks were distinctly not “a call for austerity.” Instead, as he outlined, well-considered public investment can reduce the need for additional spending and thus be fiscally sound over time.

For instance, he noted, sound policy approaches can reduce individuals’ health care needs by better providing the conditions in which people stay healthy. Lowering some of these individual burdens and investing in community-building policies can help society both fiscally and by enhancing social solidarity.

“The challenge is to make these adjustments while re-fashioning fiscal policy so that people can see the adjustments — they can see the value in government spending that their taxes are contributing to — and to make adjustments in a way that doesn’t reduce growth,” Tharman said. “You do need growth for solidarity.”

In this sense, he proposed, “We need new fiscal compacts, new retirement compacts, and new global compacts to address the risks that are posed in the minds of individuals, as well as the largest risks” in society. Countries are vulnerable to a variety of shocks, he noted, calling climate change the “defining challenge of our time.” And yet, he added, for all of this, sensible policymaking can encourage people, creating more support for public-minded governance.

“It is that sharing of hopes and aspirations that is at the heart of true solidarity, not the sharing of fears,” Tharman concluded.

Before the lecture, Tharman was greeted by MIT Provost Anantha Chandrakasan, who presented him with a small gift from the MIT Glass Lab, and MIT Sloan Dean Richard Locke. Locke then made welcoming remarks at the event, praising Tharman’s “remarkable leadership in international financial policy, among other things.” After the lecture, Tharman also met with a group of MIT students from Singapore.

The Miriam Pozen Prize is awarded every two years by the MIT Golub Center for Finance and Policy, part of MIT Sloan. The prize, which recognizes extraordinary contributions to financial policy, was created to draw attention to the important research on financial policy conducted at the Golub Center, whose mission is to support research and educational initiatives related to governments’ roles as financial institutions and as regulators of the global financial system. It is named for the mother of MIT Sloan Senior Lecturer Robert C. Pozen, who is also the former executive chairman of MFS Investment Management, and a former vice chairman of Fidelity Investments and president of Fidelity Management and Research Company.

In introductory remarks, Robert Pozen said he was “deeply honored” to present the prize, adding, “It’s very unusual to have someone who is both a brilliant economist and an effective political leader, and that combination is exactly what we’re trying to honor and recognize.”

The previous recipients of the award are Mario Draghi PhD ’77, the former prime minister of Italy and president of the European Central Bank; and the late Stanley Fischer PhD ’69, an influential MIT economist who later became governor of the Bank of Israel, and then vice-chairman of the U.S. Federal Reserve. Draghi received the honor in 2023, and Fischer in 2021.

Tharman was first elected to his current office in 2023. In Singapore, he previously served as, among other roles, deputy prime minister, minister for finance, minister for education, and chairman of the Monetary Authority of Singapore.

Tharman holds a BA in economics from the London School of Economics, an MA in economics from the University of Cambridge, and an MPA from the Harvard Kennedy School at Harvard University.

MIT and Singapore have developed a sustained and productive relationship in research and education over the last quarter-century. The Singapore-MIT Alliance for Research and Technology (SMART), formally launched in 2007, is MIT’s first research center located outside of the United States, featuring work in several interdisciplinary areas of innovation.

The MIT-Singapore program also provides MIT students with research, work, and educational opportunities in Singapore. Additionally, MIT Institute Professor Emeritus Thomas Magnanti, who was present at Tuesday’s event, was the founding president of the Singapore University of Technology and Design, in 2009.

Tuesday’s event also included introductory remarks from Deborah J. Lucas, Sloan Distinguished Professor of Finance at MIT Sloan and director of the MIT Golub Center for Finance and Policy; Peter Fischer, Golub Distinguished Senior Fellow at MIT Sloan and a former under secretary in the U.S. Treasury Department; and Robert C. Merton, School of Management Distinguished Professor of Finance at MIT Sloan.

In her comments, Lucas said that Tharman “personifies the qualities the award was created to honor,” while Fischer cited his emphasis on “the betterment of humankind.”

Merton praised Tharman’s “deep commitment for advancing financial policy in a way that serves both national and global arenas.” He added: “You have always believed that policy is not just about numbers, but about people. And that sound financial [policies] serve the many, not just the few.”

New method improves the reliability of statistical estimations

Fri, 12/12/2025 - 12:00am

Let’s say an environmental scientist is studying whether exposure to air pollution is associated with lower birth weights in a particular county.

They might train a machine-learning model to estimate the magnitude of this association, since machine-learning methods are especially good at learning complex relationships.

Standard machine-learning methods excel at making predictions and sometimes provide uncertainties, like confidence intervals, for these predictions. However, they generally don’t provide estimates or confidence intervals when determining whether two variables are related. Other methods have been developed specifically to address this association problem and provide confidence intervals. But, in spatial settings, MIT researchers found these confidence intervals can be completely off the mark.

When variables like air pollution levels or precipitation change across different locations, common methods for generating confidence intervals may claim a high level of confidence when, in fact, the estimation completely failed to capture the actual value. These faulty confidence intervals can mislead the user into trusting a model that failed.

After identifying this shortfall, the researchers developed a new method designed to generate valid confidence intervals for problems involving data that vary across space. In simulations and experiments with real data, their method was the only technique that consistently generated accurate confidence intervals.

This work could help researchers in fields like environmental science, economics, and epidemiology better understand when to trust the results of certain experiments.

“There are so many problems where people are interested in understanding phenomena over space, like weather or forest management. We’ve shown that, for this broad class of problems, there are more appropriate methods that can get us better performance, a better understanding of what is going on, and results that are more trustworthy,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society, an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and senior author of this study.

Broderick is joined on the paper by co-lead authors David R. Burt, a postdoc, and Renato Berlinghieri, an EECS graduate student; and Stephen Bates, an assistant professor in EECS and a member of LIDS. The research was recently presented at the Conference on Neural Information Processing Systems.

Invalid assumptions

Spatial association involves studying how a variable and a certain outcome are related over a geographic area. For instance, one might want to study how tree cover in the United States relates to elevation.

To solve this type of problem, a scientist could gather observational data from many locations and use it to estimate the association at a different location where they do not have data.

The MIT researchers realized that, in this case, existing methods often generate confidence intervals that are completely wrong. A model might say it is 95 percent confident its estimation captures the true relationship between tree cover and elevation, when it didn’t capture that relationship at all.

After exploring this problem, the researchers determined that the assumptions these confidence interval methods rely on don’t hold up when data vary spatially.

Assumptions are like rules that must be followed to ensure results of a statistical analysis are valid. Common methods for generating confidence intervals operate under various assumptions.

First, they assume that the source data, which is the observational data one gathered to train the model, is independent and identically distributed. This assumption implies that the chance of including one location in the data has no bearing on whether another is included. But, for example, U.S. Environmental Protection Agency (EPA) air sensors are placed with other air sensor locations in mind.

Second, existing methods often assume that the model is perfectly correct, but this assumption is never true in practice. Finally, they assume the source data are similar to the target data where one wants to estimate.

But in spatial settings, the source data can be fundamentally different from the target data because the target data are in a different location than where the source data were gathered.

For instance, a scientist might use data from EPA pollution monitors to train a machine-learning model that can predict health outcomes in a rural area where there are no monitors. But EPA pollution monitors are typically placed in urban areas, where there is more traffic and heavy industry, so the urban air quality data will differ substantially from the air quality in the rural area.

In this case, estimates of association using the urban data suffer from bias because the target data are systematically different from the source data.
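
The article does not describe the researchers' new method in detail, but the failure mode it identifies can be illustrated with a toy simulation. The sketch below, in Python, uses an invented spatial trend and invented monitor locations purely for illustration; it shows how a confidence interval built under the interchangeability assumption can confidently miss the true value at a distant target location:

# Toy illustration (not the authors' method): an i.i.d.-style confidence
# interval built from "urban" source data misses the true value at a "rural"
# target location when the quantity varies smoothly over space.
import numpy as np

rng = np.random.default_rng(0)

def pollution(s):
    # Hypothetical smooth spatial trend: highest at the urban end (s = 0).
    return 50.0 - 30.0 * s

# Source monitors clustered near s = 0; the target location sits at s = 1.
source_locations = rng.uniform(0.0, 0.3, size=200)
source_readings = pollution(source_locations) + rng.normal(0.0, 2.0, size=200)
target_truth = pollution(1.0)  # 20.0

# Naive 95 percent interval that treats source and target as interchangeable.
mean = source_readings.mean()
se = source_readings.std(ddof=1) / np.sqrt(len(source_readings))
lower, upper = mean - 1.96 * se, mean + 1.96 * se

print(f"naive 95% CI: ({lower:.1f}, {upper:.1f}); true rural value: {target_truth:.1f}")
# The interval hugs the urban mean (around 45) and confidently excludes 20.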

A smooth solution

The new method for generating confidence intervals explicitly accounts for this potential bias.

Instead of assuming the source and target data are similar, the researchers assume the data vary smoothly over space.

For instance, with fine particulate air pollution, one wouldn’t expect the pollution level on one city block to be starkly different from the pollution level on the next block. Instead, pollution levels would smoothly taper off as one moves away from a pollution source.

“For these types of problems, this spatial smoothness assumption is more appropriate. It is a better match for what is actually going on in the data,” Broderick says.

When they compared their method to other common techniques, they found it was the only one that could consistently produce reliable confidence intervals for spatial analyses. In addition, their method remains reliable even when the observational data are distorted by random errors.

In the future, the researchers want to apply this analysis to different types of variables and explore other applications where it could provide more reliable results.

This research was funded, in part, by an MIT Social and Ethical Responsibilities of Computing (SERC) seed grant, the Office of Naval Research, Generali, Microsoft, and the National Science Foundation (NSF).

School of Science welcomed new faculty in 2024

Thu, 12/11/2025 - 4:55pm

The School of Science welcomed 11 new faculty members in 2024.

Shaoyun Bai researches symplectic topology, the study of even-dimensional spaces whose properties are reflected by two-dimensional surfaces inside them. He is interested in this area’s interaction with other fields, including algebraic geometry, algebraic topology, geometric topology, and dynamics. He has been developing new tool kits for counting problems from moduli spaces, which have been applied to classical questions, including the Arnold conjecture, periodic points of Hamiltonian maps, higher-rank Casson invariants, enumeration of embedded curves, and topology of symplectic fibrations.

Bai completed his undergraduate studies at Tsinghua University in 2017 and earned his PhD in mathematics from Princeton University in 2022, advised by John Pardon. Bai then held visiting positions at MSRI (now known as Simons Laufer Mathematical Sciences Institute) as a McDuff Postdoctoral Fellow and at the Simons Center for Geometry and Physics, and he was a Ritt Assistant Professor at Columbia University. He joined the MIT Department of Mathematics as an assistant professor in 2024.

Abigail Bodner investigates turbulence in the upper ocean using remote sensing measurements, in-situ ocean observations, numerical simulations, climate models, and machine learning. Her research explores how the small-scale physics of turbulence near the ocean surface impacts the large-scale climate.

Bodner earned a BS and an MS from Tel Aviv University, where she studied mathematics as well as geophysics, atmospheric and planetary sciences. She then went on to Brown University, earning an MS in applied mathematics before completing her PhD studies in 2021 in Earth, environmental, and planetary science. Prior to coming to MIT, Bodner was a Simons Society Junior Fellow at New York University. Bodner joined the Department of Earth, Atmospheric and Planetary Sciences (EAPS) faculty in 2024, with a shared appointment in the Department of Electrical Engineering and Computer Science.

Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity. 

Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova, and a master’s degree in mathematics from Université Sorbonne Paris Cité (USPC), then proceeded to complete a PhD in mathematics at the Institut für Mathematik at the Universität Zürich. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.

Linlin Fan aims to decipher the neural codes underlying learning and memory and to identify their physical basis. Her research focuses on the learning rules of brain circuits — what kinds of activity trigger the encoding and storing of information — how these learning rules are implemented, and how memories can be inferred from mapping neural functional connectivity patterns. To answer these questions, Fan’s group leverages high-precision, all-optical technologies to map and control the electrical activity of neurons within the brain.

Fan earned her PhD at Harvard University after undergraduate studies at Peking University in China. She joined the MIT Department of Brain and Cognitive Sciences as the Samuel A. Goldblith Career Development Professor of Applied Biology, and the Picower Institute for Learning and Memory as an investigator in January 2024. Previously, Fan worked as a postdoc at Stanford University.

Whitney Henry investigates ferroptosis, a type of cell death dependent on iron, to uncover how oxidative stress, metabolism, and immune signaling intersect to shape cell fate decisions. Her research has defined key lipid metabolic and iron homeostatic programs that regulate ferroptosis susceptibility. By uncovering the molecular factors influencing ferroptosis susceptibility, investigating its effects on the tumor microenvironment, and developing innovative methods to manipulate ferroptosis resistance in living organisms, Henry’s lab aims to gain a comprehensive understanding of the therapeutic potential of ferroptosis, especially to target highly metastatic, therapy-resistant cancer cells.

Henry received her bachelor's degree in biology with a minor in chemistry from Grambling State University and her PhD from Harvard University. Following her doctoral studies, she worked at the Whitehead Institute for Biomedical Research and was supported by fellowships from the Jane Coffin Childs Memorial Fund for Medical Research and the Ludwig Center at MIT. Henry joined the MIT faculty in 2024 as an assistant professor in the Department of Biology and a member of the Koch Institute for Integrative Cancer Research, and was recently named the Robert A. Swanson (1969) Career Development Professor of Life Sciences and an HHMI Freeman Hrabowski Scholar.

Gian Michele Innocenti is an experimental physicist who probes new regimes of quantum chromodynamics (QCD) through collisions of ultrarelativistic heavy ions at the Large Hadron Collider. He has developed advanced analysis techniques and data-acquisition strategies that enable novel measurements of open heavy-flavor and jet production in hadronic and ultraperipheral heavy-ion collisions, shedding light on the properties of high-temperature QCD matter and parton dynamics in Lorentz-contracted nuclei. He leads the MIT Pixel𝜑 program, which exploits CMOS MAPS technology to build a high-precision tracking detector for the ePIC experiment at the Electron–Ion Collider.

Innocenti received his PhD in particle and nuclear physics at the University of Turin in Italy in early 2014. He then joined the MIT heavy-ion group in the Laboratory for Nuclear Science in 2014 as a postdoc, followed by a staff research physicist position at CERN in 2018. Innocenti joined the MIT Department of Physics as an assistant professor in January 2024.

Mathematician Christoph Kehle's research interests lie at the intersection of analysis, geometry, and partial differential equations. In particular, he focuses on the Einstein field equations of general relativity and our current understanding of gravitation, which describe how matter and energy shape spacetime. His work addresses the Strong Cosmic Censorship conjecture, singularities in black hole interiors, and the dynamics of extremal black holes.

Prior to joining MIT, Kehle was a junior fellow at ETH Zürich and a member at the Institute for Advanced Study in Princeton. He earned his bachelor’s and master’s degrees at Ludwig Maximilian University and Technical University of Munich, and his PhD in 2020 from the University of Cambridge. Kehle joined the Department of Mathematics as an assistant professor in July 2024.

Aleksandr Logunov is a mathematician specializing in harmonic analysis and geometric analysis. He has developed novel techniques for studying the zeros of solutions to partial differential equations and has resolved several long-standing problems, including Yau’s conjecture, Nadirashvili’s conjecture, and Landis’ conjectures.

Logunov earned his PhD in 2015 from St. Petersburg State University. He then spent two years as a postdoc at Tel Aviv University, followed by a year as a member of the Institute for Advanced Study in Princeton. In 2018, he joined Princeton University as an assistant professor. In 2020, he spent a semester at Tel Aviv University as an IAS Outstanding Fellow, and in 2021, he was appointed full professor at the University of Geneva. Logunov joined MIT as a full professor in the Department of Mathematics in January 2024.

Lyle Nelson is a sedimentary geologist studying the co-evolution of life and surface environments across pivotal transitions in Earth history, especially during significant ecological change — such as extinction events and the emergence of new clades — and during major shifts in ocean chemistry and climate. Studying sedimentary rocks that were tectonically uplifted and are now exposed in mountain belts around the world, Nelson’s group aims to answer questions such as how the reorganization of continents influenced the carbon cycle and climate, the causes and effects of ancient ice ages, and what factors drove the evolution of early life forms and the rapid diversification of animals during the Cambrian period.

Nelson earned a bachelor’s degree in earth and planetary sciences from Harvard University in 2015 and then worked as an exploration geologist before completing his PhD at Johns Hopkins University in 2022. Prior to coming to MIT, he was an assistant professor in the Department of Earth Sciences at Carleton University in Ontario, Canada. Nelson joined the EAPS faculty in 2024.

Protein evolution is the process by which proteins change over time through mechanisms such as mutation or natural selection. Biologist Sergey Ovchinnikov uses phylogenetic inference, protein structure prediction/determination, protein design, deep learning, energy-based models, and differentiable programming to tackle evolutionary questions at environmental, organismal, genomic, structural, and molecular scales, with the aim of developing a unified model of protein evolution.

Ovchinnikov received his BS in micro/molecular biology from Portland State University in 2010 and his PhD in molecular and cellular biology from the University of Washington in 2017. He was next a John Harvard Distinguished Science Fellow at Harvard University until 2023. Ovchinnikov joined MIT as an assistant professor of biology in January 2024.

Shu-Heng Shao explores the structural aspects of quantum field theories and lattice systems. Recently, his research has centered on generalized symmetries and anomalies, with a particular focus on a novel type of symmetry without an inverse, referred to as non-invertible symmetries. These new symmetries have been identified in various quantum systems, including the Ising model, Yang-Mills theories, lattice gauge theories, and the Standard Model. They lead to new constraints on renormalization group flows, new conservation laws, and new organizing principles in classifying phases of quantum matter.

Shao obtained his BS in physics from National Taiwan University in 2010, and his PhD in physics from Harvard University in 2016. He was then a five-year long-term member at the Institute for Advanced Study in Princeton before he moved to the Yang Institute for Theoretical Physics at Stony Brook University as an assistant professor in 2021. In 2024, he joined the MIT faculty as an assistant professor of physics.

MIT researchers find new immunotherapeutic targets for glioblastoma

Thu, 12/11/2025 - 4:40pm

Glioblastoma is the most common form of brain cancer in adults, and its consequences are usually quick and fatal. After receiving standard-of-care treatment (surgery followed by radiation and chemotherapy), fewer than half of patients will survive longer than 15 months. Only 5 percent of patients survive longer than five years.

Researchers have explored immune checkpoint inhibitors as an avenue for boosting glioblastoma survival rates. This type of immunotherapy, which has proven effective against a range of tumor types, turns off a molecular switch that prevents T cells from attacking cancer cells. The patient’s own immune system is then able to clear the tumor. 

However, glioblastoma is unusually resistant to attack by T cells, rendering immune checkpoint inhibitors ineffective. The culprit is a different type of immune cell: macrophages, which are recruited to tumors, where they support tumor growth while suppressing the ability of T cells to infiltrate and attack.

A team of researchers led by Forest White at the MIT Koch Institute for Integrative Cancer Research used sophisticated immune profiling tools to map out how macrophages evolve from a first-line defense against cancer and other pathogens into a shield that protects the glioblastoma tumor — as well as how the tumor cells themselves are transformed by the encounter.

“Looking at the co-evolution of both cell types is key,” says White, who is also the Ned C. (1949) and Janet C. (Bemis) Rice Professor in the Department of Biological Engineering. “It’s a little bit like what happens when a new family moves into a neighborhood: The family members’ lives change, but so do the social dynamics of the people around them. Whether you’re mixing people or cells, you won’t be able to predict how they will interact, even if you know both well.”

“By looking at what happens when macrophages move into the tumor, we can observe changes to both types of cells that we wouldn’t otherwise be able to see,” says Yufei Cui, a PhD candidate in the White Laboratory. “We were able to identify new targets for both glioblastoma and macrophages that could be used to develop therapies that, when delivered in combination with immune checkpoint inhibitors, more effectively treat glioblastoma.”

The study, appearing recently in Cancer Research, includes Stefani Spranger, associate professor of biology and member of the MIT Koch Institute, and Darrell Irvine, former member of the Koch Institute and now professor at the Scripps Research Institute.

As in other cancers, macrophages play a pivotal role in glioblastoma development and resistance to immune therapies. In laboratory models, inhibiting the activity of tumor-associated macrophages has been found to slow glioblastoma growth, but that success has not translated to studies of human patients. While the overall strategy of targeting glioblastoma-associated macrophages is promising, new targets — derived from models that more accurately reproduce the cell interactions in patient tumors — need to be identified.

One approach to discovering such targets is a specialty of the White lab: profiling cells’ immunopeptidomes — the repertoires of antigens presented on the surfaces of cancer cells, macrophages, and many other types of cells. Surface-presented antigens are a window into the internal state of the cell: The antigens derive from proteins produced as the cell carries out different functions and responds to its environment. By binding to surface antigens, T cells and other immune cells can monitor cells for dysfunction and respond to them. 

The White lab has developed sophisticated methods for immunopeptidome profiling, combining techniques such as liquid chromatography and mass spectrometry to isolate cell surface antigens — in this case, from glioblastoma and macrophage cells cultured in isolation and together — and quantifying changes in expression over time. The researchers identified over 800 peptides in macrophages that either increased or decreased in expression when cultured with glioblastoma cells. Peptides with the biggest gains in expression under co-cultivation derived from 33 source proteins, mostly related to cytokine signaling that promotes tumor aggression and suppresses immune response to tumors.
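
The article does not detail the lab's quantification pipeline, but the basic comparison it describes — which peptides rise or fall in co-culture relative to monoculture — can be sketched as follows. The file name, column names, and twofold threshold below are hypothetical, chosen only for illustration:

# Hypothetical sketch of comparing peptide abundance between culture conditions;
# this is not the White lab's actual analysis pipeline.
import numpy as np
import pandas as pd

# Assumed table: one row per peptide, with measured intensities for macrophages
# cultured alone ("mono") and together with glioblastoma cells ("co").
df = pd.read_csv("macrophage_peptide_intensities.csv")  # hypothetical file

# Log2 fold change: positive values mean stronger presentation in co-culture.
df["log2_fc"] = np.log2(df["co_intensity"] / df["mono_intensity"])

# Keep peptides whose presentation changed at least twofold in either direction.
changed = df[df["log2_fc"].abs() >= 1.0]
print(len(changed), "peptides changed under co-cultivation")
print(changed.sort_values("log2_fc", ascending=False).head())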

Antigen presentation on glioblastoma cells was also transformed by interactions with macrophages. These antigens were associated with Rho GTPase, a signaling protein that belongs to the Ras superfamily, a class of proteins that is mutated in 30 percent of all cancers. Changes in Rho GTPase expression predispose cells to developing hallmark traits of cancer, such as prolonged cell longevity, abnormal growth, and metastasis. Antigen profiles of co-cultured glioblastoma cells revealed over 40 Rho GTPase-associated antigens with increased expression compared to tumor cells cultured in isolation.

Researchers compared antigen expression changes in co-cultured macrophage and glioblastoma cells to immunopeptidome profiles of mouse models and human tumor samples, finding that patterns observed in cell culture translated to animal models and, potentially, to patients.

Researchers selected six antigens showing increased expression in either glioblastoma cells or macrophages to test as therapeutic targets, developing an mRNA-based immunostimulatory therapy for each antigen. When mice with glioblastoma were treated, their tumors showed significantly slowed growth overall and, in a few cases, were completely eradicated.

In future work, the team plans to use their immunopeptidome profiling techniques to characterize co-cultured dendritic cells, which retrieve proteins from cancer cells and present them to T cells as antigens, as well as to explore antigen presentation of cells in live models of glioblastoma.

“This study demonstrates the promise of profiling cell surface antigens,” says Cui. “With quantitative accuracy and cell-type resolution, our approach could be used to design improved immunotherapies against many cancer types and other diseases.”

This work was supported, in part, by the National Cancer Institute (NCI) and the MIT Center for Precision Cancer Medicine. 

A new way to deliver antibodies could make treatment much easier for patients

Thu, 12/11/2025 - 10:45am

Antibody treatments for cancer and other diseases are typically delivered intravenously, because of the large volumes that are needed per dose. This means the patient has to go to a hospital for every treatment, where they may spend hours receiving the infusion.

MIT engineers have now taken a major step toward reformulating antibodies so that they can be injected using a standard syringe. The researchers found a way to create solid particles of highly concentrated antibodies, suspended in a solution. These particles carry enough antibodies that only about 2 milliliters of solution would be needed per dose.

This advance could make it much easier for patients to receive antibody treatments, and could make treatment more accessible for patients who have difficulty coming into a hospital, including older people.

“As the global population ages, making the treatment process more convenient and accessible for those populations is something that needs to be addressed,” says Talia Zheng, an MIT graduate student who is the lead author of the new study.

Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is the senior author of the open-access paper, which appears in Advanced Materials. MIT graduate student Lucas Attia and Janet Teng ’25 are also authors of the study.

Highly concentrated antibodies

Therapeutic antibody drugs such as rituximab, which is used to treat some cancers, consist of antibodies suspended in a water-based solution. In addition to cancers, antibodies are also used to treat infectious diseases, as well as autoimmune disorders such as rheumatoid arthritis, inflammatory bowel disease, and multiple sclerosis.

Because the antibody solutions are formulated at low concentrations (10 to 30 milligrams of antibody per milliliter of solution), patients need to be given at least 100 milliliters per dose, which is much too large to be injected using a standard syringe. To decrease this volume to the point where it could be injected, the antibody concentration would need to be at least 300 milligrams per milliliter, but that would make the solution much too thick to be injected.

“You can’t concentrate existing formulations to these concentrations,” Doyle says. “They’ll be very viscous and will exceed the force threshold of what you can inject into a patient.”

In 2023, Doyle’s lab developed a way to generate highly concentrated antibody formulations by encapsulating them into hydrogel particles. However, that process requires centrifugation, a step that would be difficult to scale up for manufacturing.

In their new study, the researchers took a different approach that allows them to create droplets suspended in an emulsion, similar to oil and vinegar. In this case, droplets containing antibodies dissolved in a watery solution are suspended in an organic solvent called pentanol.

These droplets can then be dehydrated, leaving behind highly concentrated solid antibodies — about 360 milligrams of antibody per milliliter of solution. These particles also include a small amount of polyethylene glycol (PEG), a polymer that helps stabilize the particles.

Once these solid particles form, the organic solvent surrounding them is removed and replaced with an aqueous solution (water containing dissolved salts and a small amount of stabilizing polymer), similar to the solution now used to infuse therapeutic antibodies.

This assembly process can be done rapidly using a microfluidic setup and does not require centrifugation, which should allow it to be scaled up much more easily using emulsification devices compliant with GMP (good manufacturing practice) regulations.

“Our first approach was a bit brute force, and when we were developing this new approach, we said it’s got to be simple if it’s going to be better and scalable,” Doyle says.

Injectable particles

The researchers showed that they could control the size of the particles — from about 60 to 200 microns in diameter — by changing the flow rate of the solutions that make up the droplets.

Using particles 100 microns in diameter, they tested the injectability of the solution using a mechanical force tester. Those studies showed that the force needed to push the plunger of a syringe containing the particle solution was less than 20 newtons.

“That is less than half of the maximum acceptable force that people usually try to aim for, so it’s very injectable,” Zheng says.

Using a 2-milliliter syringe, a typical size for subcutaneous injections, more than 700 milligrams of the target antibody could be given at once — enough for most therapeutic applications. The researchers also showed that their formulations remained stable under refrigeration for at least four months.
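
Those quantities are consistent with a quick back-of-the-envelope check using only figures quoted in the article; the arithmetic below is illustrative, not taken from the paper:

# Sanity check of the quoted figures; illustrative arithmetic only.
particle_conc = 360   # mg of antibody per mL of the particle formulation (reported)
syringe_volume = 2    # mL, a typical subcutaneous syringe

deliverable = particle_conc * syringe_volume
print(deliverable)    # 720 mg, consistent with "more than 700 milligrams" per injection

# A conventional 10-30 mg/mL formulation would need far larger volumes
# to deliver the same amount of antibody.
for conc in (10, 30):
    print(f"{deliverable / conc:.0f} mL at {conc} mg/mL")  # 72 mL and 24 mL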

The researchers now plan to test their antibody particles for therapeutic applications in animal models. They are also working on scaling up the manufacturing process, so they can make enough for large-scale testing.

The research was funded by the MIT Undergraduate Research Opportunities Program and the U.S. Department of Energy.

Lisa Su ’90, SM ’91, PhD ’94 to deliver MIT’s 2026 Commencement address

Thu, 12/11/2025 - 9:00am

Lisa Su ’90, SM ’91, PhD ’94, a leading executive in the semiconductor industry and head of the company Advanced Micro Devices (AMD), will deliver the address at the OneMIT Commencement Ceremony on Thursday, May 28.

As chair and CEO of AMD, Su has transformed the company, which is now a global leader in high-performance and AI computing. In addition to designing industry-leading CPUs and the specialized GPUs that enable AI applications, AMD builds technology that is the foundation of many of the world’s most advanced supercomputers and high-performance computing systems. The company continues to work on next-generation hardware and open software that will accelerate the adoption of AI, which Su has described as the most transformational technology of our time.

Su has maintained a close relationship with MIT since her days as a student. She was the speaker at the 2017 doctoral hooding ceremony, and in 2018 she established the Lisa Su Fellowship Fund. She served on the Electrical Engineering and Computer Science Visiting Committee for 10 years. In 2022, Building 12, which houses MIT.nano, was named in her honor.

“Long before she led the spectacular turnaround of AMD and lent her name to MIT’s world-class nano facility, Lisa Su was an MIT student who inspired and mentored her classmates. During her PhD studies, she created instructions that guided generations of student researchers in using some of the Institute’s most advanced equipment,” says MIT President Sally Kornbluth. “Lisa is renowned for her intellectual rigor, boldness, and originality, and we're absolutely thrilled that she has agreed to deliver the Commencement address to our graduates this year.”

“MIT has always held a special place in my life and career, and I’m thrilled to accept the invitation to speak at Commencement,” Su says. “The Class of 2026 will be graduating at an exciting time, as AI transforms our world and expands what is possible, and I look forward to celebrating them as they prepare to share their skills and ideas with the world.”

Born in Taiwan, Su grew up in Queens, New York. After earning bachelor’s, master’s, and doctoral degrees in electrical engineering from MIT, she worked at Texas Instruments, IBM, and Freescale Semiconductor, then joined AMD in 2012. In her current position, Su is a member of a small group: Only about 10 percent of Fortune 500 companies have female CEOs.

“Lisa Su has embraced MIT’s ‘mind and hand’ motto over the course of her career, first with important scientific discoveries in semiconductor design and engineering, and later as an extraordinary business executive leading the delivery of innovative products that play an essential role in the modern digital economy. We are very fortunate that she has agreed to share some of the lessons learned on her journey,” says Jim Poterba, the Mitsui Professor of Economics and chair of the Commencement Committee.

“Dr. Lisa Su is an inspiration to the MIT community for the way she combines exceptional engineering and leadership with meaningful, far-reaching impact in computing and countless other fields,” senior class president Heba Hussein says. “Her journey embodies the spirit of MIT, and the Class of 2026 is incredibly excited to welcome her at Commencement as we step into the world carrying the same MIT values!”

“I am excited to hear from someone that I know we can all learn something from. I think all MIT students respect the ‘lock-in’ that must have been required to achieve all that she has, with AMD and beyond,” says Alice Hall, president of the Undergraduate Association.

“Dr. Su is a world leader in manufacturing technologies and personifies MIT's values. As an alum, she has shared many experiences with current students, and I look forward to hearing about how these experiences shaped her successful career,” says Teddy Warner, president of the Graduate Student Council.

Su has received many honors, including two named for MIT alumni: the Global Semiconductor Association’s Dr. Morris Chang Exemplary Leadership Award and the Robert N. Noyce Medal. She was named TIME’s 2024 CEO of the Year and has been recognized as one of TIME’s 100 Most Influential People and Fortune’s Most Powerful People in Business. She received the 2024 Bower Award for Business Leadership and the Distinguished Leadership Award from the Committee for Economic Development (CED). Su is a member of the American Academy of Arts and Sciences and the National Academy of Engineering.

Su joins notable recent MIT Commencement speakers including science communicator Hank Green (2025); inventor and entrepreneur Noubar Afeyan (2024); YouTuber and inventor Mark Rober (2023); Director-General of the World Trade Organization Ngozi Okonjo-Iweala (2022); lawyer and social justice activist Bryan Stevenson (2021); and retired U.S. Navy four-star admiral William McRaven (2020). 

A new approach to carbon capture could slash costs

Thu, 12/11/2025 - 5:00am

Capturing carbon dioxide from industrial plants is an important strategy in the efforts to reduce the impact of global climate change. It’s used in many industries, including the production of petrochemicals, cement, and fertilizers.

MIT chemical engineers have now discovered a simple way to make carbon capture more efficient and affordable, by adding a common chemical compound to capture solutions. The innovation could cut costs significantly and enable the technology to run on waste heat or even sunlight, instead of energy-intensive heating.

Their new approach uses a chemical called tris — short for tris(hydroxymethyl)aminomethane — to stabilize the pH of the solution used to capture CO2, allowing the system to absorb more of the gas at relatively low temperature. The system can release CO2 at just 60 degrees Celsius (140 degrees Fahrenheit) — a dramatic improvement over conventional methods, which require temperatures exceeding 120 C to release captured carbon.

“It’s something that could be implemented almost immediately in fairly standard types of equipment,” says T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering Practice at MIT and the senior author of the study.

Youhong (Nancy) Guo, a recent MIT postdoc who is now an assistant professor of applied physical sciences at the University of North Carolina at Chapel Hill, is the lead author of the paper, which appears today in Nature Chemical Engineering.

More efficient capture

Using current technologies, around 0.1 percent of global carbon emissions is captured and either stored underground or converted into other products.

The most widely used carbon-capture method involves running waste gases through a solution that contains chemical compounds called amines. These solutions have a high pH, which allows them to absorb CO2, an acidic gas. In addition to traditional amines, basic compounds called carbonates, which are inexpensive and readily available, can also capture acidic CO2 gas. However, as CO2 is absorbed, the pH of the solution drops quickly, limiting the CO2 uptake capacity.

The most energy-intensive step comes once the CO2 is absorbed, because both amine and carbonate solutions must be heated to above 120 C to release the captured carbon. This regeneration step consumes enormous amounts of energy.

To make carbon capture by carbonates more efficient, the MIT team added tris into a potassium carbonate solution. This chemical, commonly used in lab experiments and found in some cosmetics and the Covid-19 mRNA vaccines, acts as a pH buffer — a solution that helps prevent the pH from changing.

When added to a carbonate solution, positively charged tris balances the negative charge of the bicarbonate ions formed when CO2 is absorbed. This stabilizes the pH, allowing the solution to absorb triple the amount of CO2.

As another advantage, tris is highly sensitive to temperature changes. When the solution full of CO2 is heated just slightly, to about 60 C, tris quickly releases protons, causing the pH to drop and the captured CO2 to bubble out.

“At room temperature, the solution can absorb more CO2, and with mild heating it can release the CO2. There is an instant pH change when we heat up the solution a little bit,” Guo says.
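
The paper's full chemistry is not reproduced in the article, but the behavior described above is consistent with the standard bicarbonate and tris acid-base equilibria; a rough sketch, not taken from the paper, written in LaTeX notation:

\mathrm{CO_2 + H_2O \;\rightleftharpoons\; HCO_3^- + H^+}

\mathrm{Tris + H^+ \;\rightleftharpoons\; TrisH^+}

At room temperature, the tris base takes up the protons generated as CO2 dissolves and forms bicarbonate, holding the pH steady so the carbonate solution can keep absorbing gas. Because tris's acid-base balance is strongly temperature-dependent, mild heating shifts the second equilibrium back, protons are released, the pH drops, and the first reaction runs in reverse, liberating CO2.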

“Potassium carbonate is one of the holy grail solvents for carbon capture due to its high chemical stability, low cost, and negligible emissions,” says David Heldebrant, an associate professor of chemical engineering and bioengineering at Washington State University, who was not involved in the study. “I believe this electrochemical tris-promoted potassium carbonate solvent system has a lot of promise for the field of carbon capture, especially since the researchers have been able to improve on the energetics by regenerating at atmospheric pressure, as compared to vacuum-assisted regeneration, which is normally done.”

A simple swap

To demonstrate their approach, the researchers built a continuous-flow reactor for carbon capture. First, gases containing CO2 are bubbled through a reservoir containing carbonate and tris, which absorbs the CO2. That solution then is pumped into a CO2 regeneration module, which is heated to about 60 C to release a pure stream of CO2.

Once the CO2 is released, the carbonate solution is cooled and returned to the reservoir for another round of CO2 absorption and regeneration.

Because the system can operate at relatively low temperatures, there is more flexibility in where the energy could come from, such as solar panels, electricity, or waste heat already generated by industrial plants.

Swapping in carbonate-tris solutions to replace conventional amines should be straightforward for industrial facilities, the researchers say. “One of the nice things about this is its simplicity, in terms of overall design. It’s a drop-in approach that allows you to readily change over from one kind of solution to another,” Hatton says.

When carbon is captured from industrial plants, some of it can be diverted into the manufacture of other useful products, but most of it will likely end up being stored in underground geological formations, Hatton says.

“You can only use a small fraction of the captured CO2 for producing chemicals before you saturate the market,” he says.

Guo is now exploring whether other additives could make the carbon capture process even more efficient by speeding up CO2 absorption rates.

The authors acknowledge Eni S.p.A. for the fruitful discussions under the MIT–Eni research framework agreement.

New materials could boost the energy efficiency of microelectronics

Thu, 12/11/2025 - 12:00am

MIT researchers have developed a new fabrication method that could enable the production of more energy-efficient electronics by stacking multiple functional components on top of an existing circuit.

In traditional circuits, logic devices that perform computation, like transistors, and memory devices that store data are built as separate components, forcing data to travel back and forth between them, which wastes energy.

This new electronics integration platform allows scientists to fabricate transistors and memory devices in one compact stack on a semiconductor chip. This eliminates much of that wasted energy while boosting the speed of computation.

Key to this advance is a newly developed material with unique properties and a more precise fabrication approach that reduces the number of defects in the material. This allows the researchers to make extremely tiny transistors with built-in memory that can perform faster than state-of-the-art devices while consuming less electricity than similar transistors.

By improving the energy efficiency of electronic devices, this new approach could help reduce the burgeoning electricity consumption of computation, especially for demanding applications like generative AI, deep learning, and computer vision tasks.

“We have to minimize the amount of energy we use for AI and other data-centric computation in the future because it is simply not sustainable. We will need new technology like this integration platform to continue that progress,” says Yanjie Shao, an MIT postdoc and lead author of two papers on these new transistors.

The new technique is described in two papers (one invited) that were presented at the IEEE International Electron Devices Meeting. Shao is joined on the papers by senior authors Jesús del Alamo, the Donner Professor of Engineering in the MIT Department of Electrical Engineering and Computer Science (EECS); Dimitri Antoniadis, the Ray and Maria Stata Professor of Electrical Engineering and Computer Science at MIT; as well as others at MIT, the University of Waterloo, and Samsung Electronics.

Flipping the problem

Standard CMOS (complementary metal-oxide semiconductor) chips traditionally have a front end, where the active components like transistors and capacitors are fabricated, and a back end that includes wires called interconnects and other metal bonds that connect components of the chip.

But some energy is lost when data travel across these connections, and slight misalignments can hamper performance. Stacking active components would reduce the distance data must travel and improve a chip’s energy efficiency.

Typically, it is difficult to stack silicon transistors on a CMOS chip because the high temperature required to fabricate additional devices on the front end would destroy the existing transistors underneath.

The MIT researchers turned this problem on its head, developing an integration technique to stack active components on the back end of the chip instead.

“If we can use this back-end platform to put in additional active layers of transistors, not just interconnects, that would make the integration density of the chip much higher and improve its energy efficiency,” Shao explains.

The researchers accomplished this using a new material, amorphous indium oxide, as the active channel layer of their back-end transistor. The active channel layer is where the transistor’s essential functions take place.

Due to the unique properties of indium oxide, they can “grow” an extremely thin layer of this material at a temperature of only about 150 degrees Celsius on the back end of an existing circuit without damaging the device on the front end.

Perfecting the process

They carefully optimized the fabrication process, which minimizes the number of defects in a layer of indium oxide material that is only about 2 nanometers thick.

A few defects, known as oxygen vacancies, are necessary for the transistor to switch on, but with too many defects it won’t work properly. This optimized fabrication process allows the researchers to produce an extremely tiny transistor that operates rapidly and cleanly, eliminating much of the additional energy required to switch a transistor between off and on.

Building on this approach, they also fabricated back-end transistors with integrated memory that are only about 20 nanometers in size. To do this, they added a layer of material called ferroelectric hafnium-zirconium-oxide as the memory component.

These compact memory transistors demonstrated switching speeds of only 10 nanoseconds, hitting the limit of the team’s measurement instruments. This switching also requires much lower voltage than similar devices, reducing electricity consumption.

And because the memory transistors are so tiny, the researchers can use them as a platform to study the fundamental physics of individual units of ferroelectric hafnium-zirconium-oxide.

“If we can better understand the physics, we can use this material for many new applications. The energy it uses is very minimal, and it gives us a lot of flexibility in how we can design devices. It really could open up many new avenues for the future,” Shao says.

The researchers also worked with a team at the University of Waterloo to develop a model of the performance of the back-end transistors, which is an important step before the devices can be integrated into larger circuits and electronic systems.

In the future, they want to build upon these demonstrations by integrating back-end memory transistors onto a single circuit. They also want to enhance the performance of the transistors and study how to more finely control the properties of ferroelectric hafnium-zirconium-oxide.

“Now, we can build a platform of versatile electronics on the back end of a chip that enable us to achieve high energy efficiency and many different functionalities in very small devices. We have a good device architecture and material to work with, but we need to keep innovating to uncover the ultimate performance limits,” Shao says.

This work is supported, in part, by Semiconductor Research Corporation (SRC) and Intel. Fabrication was carried out at the MIT Microsystems Technology Laboratories and MIT.nano facilities. 

PKG Center and the MIT Club of Princeton collaborate on food insecurity hackathon

Wed, 12/10/2025 - 4:50pm

On Nov. 8, the MIT Priscilla King Gray Public Service Center (MIT PKG Center) collaborated with the MIT Club of Princeton, New Jersey, and the Trenton Area Soup Kitchen (TASK) to prototype tech-driven interventions to the growing challenge of food insecurity in the Trenton, New Jersey region.  

Twelve undergraduates traveled to Trenton for a one-day social impact hackathon, working in teams with alumni active in the MIT Club of Princeton to address technical challenges posed by TASK. These included predicting the number of daily meals based on historical data for an organization serving over 12,000 meals each week, and gathering real-time feedback from hundreds of patrons with limited access to technology. 

The day culminated in a pitch session judged by MIT alumni and TASK leadership. The winning solution, developed by a cross-generational team of MIT alumni and students, addressed one of TASK’s most pressing challenges with a blend of technical ingenuity and human-centered design. Drawing on TASK datasets and external data such as weather and holidays, the team proposed a predictive dashboard that impressed judges with its practical utility, enabling the kitchen to reduce waste and distribute the appropriate number of meals to varied locations. TASK also appreciated several elements of solutions proposed to gather real-time feedback from patrons, and plans to experiment with them. 
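
The article does not describe how the winning team built its dashboard. Purely as an illustration of the general idea, a forecast of daily meal counts from historical data plus weather and holiday features might look like the sketch below; the file, column names, and model choice are all hypothetical:

# Hypothetical sketch of a meal-count forecast; not the winning team's implementation.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Assumed history: one row per day with meals served, weather, and a holiday flag.
df = pd.read_csv("task_meal_history.csv", parse_dates=["date"])  # hypothetical file
df["day_of_week"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month

features = ["day_of_week", "month", "high_temp_f", "precipitation_in", "is_holiday"]
model = GradientBoostingRegressor().fit(df[features], df["meals_served"])

# Predict tomorrow's demand from the weather forecast and the holiday calendar.
tomorrow = pd.DataFrame([{"day_of_week": 3, "month": 12, "high_temp_f": 38,
                          "precipitation_in": 0.2, "is_holiday": 0}])
print(model.predict(tomorrow))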

“The last few weeks have shown how quickly the need for food can escalate in a place like Trenton, where so many people are living below or close to the federal poverty line,” says TASK CEO Amy Flynn. “The issues we are facing are complex and unprecedented, and the hackathon was an opportunity to think about our challenges, and their solutions, in modern and innovative ways. TASK is very excited to be partnering with MIT, the PKG Center for Social Impact, and the local MIT Club of Princeton for this event, particularly at this critical time.”

Students will implement the winning intervention through the PKG Center’s Social Impact Internship Program during MIT’s Independent Activities Period (IAP) in January 2026. Alumni from the MIT Club of Princeton will also serve as mentors to students during their internship. 

Alumni connections

The PKG Center recently completed a new strategic plan, and heard through the process that alumni and students passionate about making a positive impact want more opportunities to interact with and learn from each other.

“A hackathon seemed like an ideal way to connect students and alumni, generating mentoring relationships while making a tangible impact,” says Alison Badgett, associate dean and director of the PKG Center. “We’re grateful to the MIT Club of Princeton and the Trenton Area Soup Kitchen for enabling us to pilot what we hope will be a regular event.”

The idea for a regional hackathon came from the Friends of the PKG Center, the center’s alumni advisory board, which grew 25 percent this year with the addition of several young alumni. Princeton-based alumni Eberhard Wunderlich SM ’75, PhD ’78 and Shahla Wunderlich PhD ’78 offered to help make the idea a reality by connecting PKG with local partners. 

"We have been longtime friends of the PKG Center and have observed over the years that MIT students are uniquely positioned to make a real impact. We were eager to connect the PKG Center with the MIT Club of Princeton and TASK because we knew this collaboration would be meaningful not only for students, alumni, and families, but also for many people in need within our community," said the Wunderlichs. “It was a wonderful experience working with such talented students. We were happy to participate and look forward to the project enhancing the operation of TASK, which provides meals and develops skills for independence for those in need in Mercer County, New Jersey.”

A legacy of innovation and impact

The hackathon was facilitated by Lauren Tyger, the PKG Center’s assistant dean for social innovation, who leads a growing suite of social innovation and entrepreneurship programming for the PKG Center. Tyger recruited the 12 undergraduate participants from PKG’s Social Innovation Exploration first-year pre-orientation program (FPOP), an intensive five-day hackathon exploring food insecurity through the lens of sustainability at MIT and in Cambridge, Massachusetts. 

“For students, the regional alumni-student hackathon was an opportunity to apply what they learned through PKG’s FPOP to a real-world challenge with TASK,” says Tyger. “We hope students will not only be inspired to implement their winning interventions through an IAP internship, but also to explore social enterprise solutions to food insecurity through our IDEAS Social Innovation Incubator, now in its 25th year.”

With the success of this event, the PKG Center is exploring opportunities to host more alumni-student hackathons with regional MIT clubs, as a way to celebrate the 25th anniversary of the IDEAS Social Innovation Challenge, which has invested $1.3 million in nearly 300 social enterprises since its inception in 2001. 

“Getting to work with TASK was amazing because it allowed me to put the skills I learned in PKG’s SIE FPOP to a real-world application that could help people,” says Vivian Dinh, a student who participated in the hackathon. “It was a great feeling to put together things that we learned in SIE like ideation strategies, interviewing skills, and prototyping into a product, and then see that TASK truly believed in our ideas. Overall, it was a very empowering experience, knowing that my skills and ideas could help a community.”

MIT study shows how vision can be rebooted in adults with amblyopia

Wed, 12/10/2025 - 4:20pm

In the vision disorder amblyopia (commonly known as “lazy eye”), impaired vision in one eye during development causes neural connections in the brain’s visual system to shift toward supporting the other eye, leaving the amblyopic eye less capable even after the original impairment is corrected. Current interventions are only effective during infancy and early childhood, while the neural connections are still being formed. 

Now a study in mice by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that if the retina of the amblyopic eye is temporarily and reversibly anesthetized just for a couple of days, the brain’s visual response to the eye can be restored, even in adulthood.

The open-access findings, published Nov. 25 in Cell Reports, may improve the clinical potential of the idea of temporarily anesthetizing a retina to restore the strength of the amblyopic eye’s neural connections. 

In 2021, the lab of Picower Professor Mark Bear and collaborators showed that anesthetizing the non-amblyopic eye could improve vision in the amblyopic one — an approach analogous in that way to the treatment used in childhood of patching the unimpaired eye. Those 2021 findings have now been replicated in adults of multiple species. But the new evidence on how inactivation works suggests that the proposed treatment also could be effective when applied directly to the amblyopic eye, Bear says, though a key next step will be to again show that it works in additional species and, ultimately, people.

“If it does, it’s a pretty substantial step forward, because it would be reassuring to know that vision in the good eye would not have to be interrupted by treatment,” says Bear, a faculty member in MIT’s Department of Brain and Cognitive Sciences. “The amblyopic eye, which is not doing much, could be inactivated and ‘brought back to life’ instead. Still, I think that especially with any invasive treatment, it’s extremely important to confirm the results in higher species with visual systems closer to our own.”

Madison Echavarri-Leet PhD ’25, whose doctoral thesis included this research, is the lead author of the study, which also demonstrates the underlying process in the brain that makes the potential treatment work.

A beneficial burst

Bear’s lab has been studying the science underlying amblyopia for decades, for instance by working to understand the molecular mechanisms that enable neural circuits to change their connections in response to visual experience or deprivation. The research has produced ideas about how to address amblyopia in adulthood. In a 2016 study with collaborators at Dalhousie University, they showed that temporarily anesthetizing both retinas could reverse vision loss in amblyopia. Then, five years later, they published the study showing that anesthetizing just the non-amblyopic eye produced visual recovery for the amblyopic eye.

Throughout that time, the lab weighed multiple hypotheses to explain how retinal inactivation works its magic. Lingering in the lab’s archive of results, Bear says, was an unexplored finding about the lateral geniculate nucleus (LGN), the structure that relays information from the eyes to the visual cortex, where vision is processed: back in 2008, they had found that blocking inputs from a retina to neurons in the LGN caused those neurons to fire synchronous “bursts” of electrical signals to downstream neurons in the visual cortex. Similar patterns of activity occur in the visual system before birth and guide early synaptic development.

The new study tested whether those bursts might have a role in the potential amblyopia treatments the lab was reporting. To get started, Echavarri-Leet and Bear’s team used a single injection of tetrodotoxin (TTX) to anesthetize retinas in the lab animals. They found that the bursting occurred not only in LGN neurons that received input from the anesthetized eye, but also in LGN neurons that received input from the unaffected eye.

From there, they showed that the bursting response depended on a particular “T-type” calcium channel in the LGN neurons. This was important because it gave the scientists a way to turn the bursting off. Once they gained that ability, they could test whether doing so prevented TTX from having a therapeutic effect in mice with amblyopia.

Sure enough, when the researchers genetically knocked out the channels and disrupted the bursting, they found that anesthetizing the non-amblyopic eye could no longer help amblyopic mice. That showed the bursting is necessary for the treatment to work.

Aiding amblyopia

Given their finding that bursting occurs when either retina is anesthetized, the scientists hypothesized it might be enough to just do it in the amblyopic eye. To test this, they ran an experiment in which some mice modeling amblyopia received TTX in their amblyopic eye and some did not. The injection took the retina offline for two days. After a week, the scientists then measured activity in neurons in the visual cortex to calculate a ratio of input from each eye. They found that the ratio was much more even in mice that received the treatment versus those left untreated, indicating that after the amblyopic eye was anesthetized, its input in the brain rose to be at parity with input from the non-amblyopic one.
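
The article does not give the exact metric, but a common convention in studies of this kind (not necessarily the one used in the paper) is an ocular dominance index of the form

\mathrm{ODI} \;=\; \frac{R_{\mathrm{contra}} - R_{\mathrm{ipsi}}}{R_{\mathrm{contra}} + R_{\mathrm{ipsi}}},

where R_contra and R_ipsi are the cortical responses driven by the eye contralateral and ipsilateral to the recorded hemisphere. Values near zero indicate roughly balanced input from the two eyes, which is the direction the treated mice shifted.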

Further testing is needed, Bear notes, but the team wrote in the study that the results were encouraging.

“We are cautiously optimistic that these findings may lead to a new treatment approach for human amblyopia, particularly given the discovery that silencing the amblyopic eye is effective,” the scientists wrote.

In addition to Echavarri-Leet and Bear, the paper’s authors are Tushar Chauhan, Teresa Cramer, and Ming-fai Fong.

The National Institutes of Health, the Swiss National Science Foundation, the Severin Hacker Vision Research Fund, and the Freedom Together Foundation supported the study.

Vine-inspired robotic gripper gently lifts heavy and fragile objects

Wed, 12/10/2025 - 2:00pm

In the horticultural world, some vines are especially grabby. As they grow, the woody tendrils can wrap around obstacles with enough force to pull down entire fences and trees.

Inspired by vines’ twisty tenacity, engineers at MIT and Stanford University have developed a robotic gripper that can snake around and lift a variety of objects, including a glass vase and a watermelon, offering a gentler approach compared to conventional gripper designs. A larger version of the robo-tendrils can also safely lift a human out of bed.

The new bot consists of a pressurized box, positioned near the target object, from which long, vine-like tubes inflate and grow, like socks being turned inside out. As they extend, the vines twist and coil around the object before continuing back toward the box, where they are automatically clamped in place and mechanically wound back up to gently lift the object in a soft, sling-like grasp.

The researchers demonstrated that the vine robot can safely and stably lift a variety of heavy and fragile objects. The robot can also squeeze through tight quarters and push through clutter to reach and grasp a desired object.

The team envisions that this type of robot gripper could be used in a wide range of scenarios, from agricultural harvesting to loading and unloading heavy cargo. In the near term, the group is exploring applications in eldercare settings, where soft inflatable robotic vines could help to gently lift a person out of bed.

“Transferring a person out of bed is one of the most physically strenuous tasks that a caregiver carries out,” says Kentaro Barhydt, a PhD candidate in MIT’s Department of Mechanical Engineering. “This kind of robot can help relieve the caretaker, and can be gentler and more comfortable for the patient.”

Barhydt, along with his co-first author from Stanford, O. Godson Osele, and their colleagues, present the new robotic design today in the journal Science Advances. The study’s co-authors are Harry Asada, the Ford Professor of Engineering at MIT, and Allison Okamura, the Richard W. Weiland Professor of Engineering at Stanford University, along with Sreela Kodali and Cosmia du Pasquier at Stanford University, and former MIT graduate student Chase Hartquist, now at the University of Florida, Gainesville.

Open and closed


The team’s Stanford collaborators, led by Okamura, pioneered the development of soft, vine-inspired robots that grow outward from their tips. These designs are largely built from thin yet sturdy pneumatic tubes that grow and inflate with controlled air pressure. As they grow, the tubes can twist, bend, and snake their way through the environment, and squeeze through tight and cluttered spaces.

Researchers have mostly explored vine robots for use in safety inspections and search and rescue operations. But at MIT, Barhydt and Asada, whose group has developed robotic aides for the elderly, wondered whether such vine-inspired robots could address certain challenges in eldercare — specifically, the challenge of safely lifting a person out of bed. Often in nursing and rehabilitation settings, this transfer process is done with a patient lift, operated by a caretaker who must first physically move a patient onto their side, then back onto a hammock-like sheet. The caretaker straps the sheet around the patient and hooks it onto the mechanical lift, which then can gently hoist the patient out of bed, similar to suspending a hammock or sling.

The MIT and Stanford team imagined that as an alternative, a vine-like robot could gently snake under and around a patient to create its own sort of sling, without a caretaker having to physically maneuver the patient. But in order to lift the sling, the researchers realized they would have to add an element that was missing in existing vine robot designs: Essentially, they would have to close the loop.

Most vine-inspired robots are designed as “open-loop” systems, meaning they act as open-ended strings that can extend and bend in different configurations, but they are not designed to secure themselves to anything to form a closed loop. If a vine robot could be made to transform from an open loop to a closed loop, Barhydt surmised that it could make itself into a sling around the object and pull itself up, along with whatever, or whomever, it might hold.

For their new study, Barhydt, Osele, and their colleagues outline the design for a new vine-inspired robotic gripper that combines both open- and closed-loop actions. In an open-loop configuration, a robotic vine can grow and twist around an object to create a firm grasp. It can even burrow under a human lying on a bed. Once a grasp is made, the vine can continue to grow back toward and attach to its source, creating a closed loop that can then be retracted to retrieve the object.
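For readers who think in control terms, a minimal sketch of that stage sequence follows. The stage names and the stub interface are hypothetical illustrations of the open-to-closed-loop idea, not the team’s actual control software.

```python
# Minimal sketch of the open-to-closed-loop grasp sequence described above.
# The stage names and the vine interface below are hypothetical placeholders,
# not the team's control software.
from enum import Enum, auto

class Stage(Enum):
    GROW = auto()      # open loop: the vine inflates and extends from its box
    WRAP = auto()      # open loop: the vine coils around the target object
    RETURN = auto()    # the vine grows back toward its source box
    CLAMP = auto()     # the vine is secured to the box, closing the loop
    RETRACT = auto()   # a winch winds the vine back, lifting the object

class VineStub:
    """Stand-in for a real controller; each method just reports its action."""
    def act(self, stage: Stage) -> None:
        print(f"executing stage: {stage.name}")

def grasp_and_lift(vine: VineStub) -> None:
    # Step through the stages in definition order: the open-loop stages
    # (GROW, WRAP, RETURN) position the vine, and the closed-loop stages
    # (CLAMP, RETRACT) turn it into a sling and lift.
    for stage in Stage:
        vine.act(stage)

grasp_and_lift(VineStub())
```

The key transition is the clamping step: everything before it is the open-loop behavior of existing vine robots, and everything after it is what closing the loop makes possible.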

“People might assume that in order to grab something, you just reach out and grab it,” Barhydt says. “But there are different stages, such as positioning and holding. By transforming between open and closed loops, we can achieve new levels of performance by leveraging the advantages of both forms for their respective stages.”

Gentle suspension

As a demonstration of their new open- and closed-loop concept, the team built a large-scale robotic system designed to safely lift a person up from a bed. The system comprises a set of pressurized boxes attached at either end of an overhead bar. An air pump inside the boxes slowly inflates and unfurls thin vine-like tubes that extend down toward the head and foot of a bed. The air pressure can be controlled to gently work the tubes under and around a person before the tubes stretch back up to their respective boxes. The vines then thread through a clamping mechanism that secures them to each box. A winch winds the vines back up toward the boxes, gently lifting the person in the process.

“Heavy but fragile objects, such as a human body, are difficult to grasp with the robotic hands that are available today,” Asada says. “We have developed a vine-like, growing robot gripper that can wrap around an object and suspend it gently and securely.”

"There’s an entire design space we hope this work inspires our colleagues to continue to explore,” says co-lead author Osele. “I especially look forward to the implications for patient transfer applications in health care.”

“I am very excited about future work to use robots like these for physically assisting people with mobility challenges,” adds co-author Okamura. “Soft robots can be relatively safe, low-cost, and optimally designed for specific human needs, in contrast to other approaches like humanoid robots.”

While the team’s design was motivated by challenges in eldercare, the researchers realized the new design could also be adapted to perform other grasping tasks. In addition to their large-scale system, they have built a smaller version that can attach to a commercial robotic arm. With this version, the team has shown that the vine robot can grasp and lift a variety of heavy and fragile objects, including a watermelon, a glass vase, a kettlebell, a stack of metal rods, and a playground ball. The vines can also snake through a cluttered bin to pull out a desired object.

“We think this kind of robot design can be adapted to many applications,” Barhydt says. “We are also thinking about applying this to heavy industry, and things like automating the operation of cranes at ports and warehouses.”

This work was supported, in part, by the National Science Foundation and the Ford Foundation.

When it comes to language, context matters

Wed, 12/10/2025 - 12:00am

In everyday conversation, it’s critical to understand not just the words that are spoken, but the context in which they are said. If it’s pouring rain and someone remarks on the “lovely weather,” you won’t understand their meaning unless you realize that they’re being sarcastic.

Making inferences about what someone really means when it doesn’t match the literal meaning of their words is a skill known as pragmatic language ability. This includes not only interpreting sarcasm but also understanding metaphors and white lies, among many other conversational subtleties.

“Pragmatics is trying to reason about why somebody might say something, and what is the message they’re trying to convey given that they put it in this particular way,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

New research from Fedorenko and her colleagues has revealed that these abilities can be grouped based on the types of inferences they require. In a study of 800 people, the researchers identified three clusters of pragmatic skills; the skills within each cluster rely on the same kind of inference and may share underlying neural processes.

One of these clusters includes inferences that are based on our knowledge of social conventions and rules. Another depends on knowledge of how the physical world works, while the last requires the ability to interpret differences in tone, which can indicate emphasis or emotion.

Fedorenko and Edward Gibson, an MIT professor of brain and cognitive sciences, are the senior authors of the study, which appears today in the Proceedings of the National Academy of Sciences. The paper’s lead authors are Sammy Floyd, a former MIT postdoc who is now an assistant professor of psychology at Sarah Lawrence College, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor of cognitive science at Carleton University.

The importance of context

Much past research on how people understand language has focused on processing the literal meanings of words and how they fit together. To really understand what someone is saying, however, we need to interpret those meanings based on context.

“Language is about getting meanings across, and that often requires taking into account many different kinds of information — such as the social context, the visual context, or the present topic of the conversation,” Fedorenko says.

As one example, the phrase “people are leaving” can mean different things depending on the context, Gibson points out. If it’s late at night and someone asks you how a party is going, you may say “people are leaving,” to convey that the party is ending and everyone’s going home.

“However, if it’s early, and I say ‘people are leaving,’ then the implication is that the party isn’t very good,” Gibson says. “When you say a sentence, there’s a literal meaning to it, but how you interpret that literal meaning depends on the context.”

About 10 years ago, with support from the Simons Center for the Social Brain at MIT, Fedorenko and Gibson decided to explore whether it might be possible to precisely distinguish the types of processing that go into pragmatic language skills.

One way that neuroscientists can approach a question like this is to use functional magnetic resonance imaging (fMRI) to scan the brains of participants as they perform different tasks. This allows them to link brain activity in different locations to different functions. However, the tasks that the researchers designed for this study didn’t easily lend themselves to being performed in a scanner, so they took an alternative approach.

This approach, known as “individual differences,” involves studying a large number of people as they perform a variety of tasks. This technique allows researchers to determine whether the same underlying brain processes may be responsible for performance on different tasks.

To do this, the researchers evaluate whether each participant tends to perform similarly on certain groups of tasks. For example, some people might perform well on tasks that require an understanding of social conventions, such as interpreting indirect requests and irony. The same people might do only so-so on tasks that require understanding how the physical world works, and poorly on tasks that require distinguishing meanings based on changes in intonation — the melody of speech. This would suggest that separate brain processes are being recruited for each set of tasks.
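As a rough illustration of how such an analysis can work, the sketch below correlates task scores across participants and then clusters the tasks. The data are randomly generated placeholders and the NumPy/SciPy pipeline is an assumption for illustration, not the study’s actual analysis code.

```python
# Minimal sketch of an individual-differences analysis (hypothetical data,
# not the authors' code). Tasks that draw on shared processes should show
# correlated scores across participants, so correlating tasks and then
# clustering that correlation pattern can reveal groups of tasks.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_participants, n_tasks = 400, 20                      # mirrors one half of the study
scores = rng.normal(size=(n_participants, n_tasks))    # placeholder score matrix

# Correlate every pair of tasks across participants (columns are tasks).
task_corr = np.corrcoef(scores, rowvar=False)

# Convert correlation to a distance and cluster the tasks hierarchically.
distance = 1.0 - task_corr
condensed = squareform(distance, checks=False)
tree = linkage(condensed, method="average")
labels = fcluster(tree, t=3, criterion="maxclust")     # ask for three clusters

print(labels)   # cluster assignment for each of the 20 tasks
```

Tasks whose scores rise and fall together across participants end up in the same cluster, which is the logic behind grouping the 20 tasks into the components described below.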

The first phase of the study was led by Jouravlev, who assembled existing tasks that require pragmatic skills and created many more, for a total of 20. These included tasks that require people to understand humor and sarcasm, as well as tasks where changes in intonation can affect the meaning of a sentence. For example, someone who says “I wanted blue and black socks,” with emphasis on the word “black,” is implying that the black socks were forgotten.

“People really find ways to communicate creatively and indirectly and non-literally, and this battery of tasks captures that,” Floyd says.

Components of pragmatic ability

The researchers recruited study participants from an online crowdsourcing platform to perform the tasks, which took about eight hours to complete. From this first set of 400 participants, the researchers found that the tasks formed three clusters, related to social context, general knowledge of the world, and intonation. To test the robustness of the findings, the researchers continued the study with another set of 400 participants, with this second half run by Floyd after Jouravlev had left MIT.

With the second set of participants, the researchers found that tasks clustered into the same three groups. They also confirmed that differences in general intelligence, or in auditory processing ability (which is important for the processing of intonation), did not affect the outcomes that they observed.

In future work, the researchers hope to use brain imaging to explore whether the pragmatic components they identified are correlated with activity in different brain regions. Previous work has found that brain imaging often mirrors the distinctions identified in individual difference studies, but can also help link the relevant abilities to specific neural systems, such as the core language system or the theory of mind system.

This set of tests could also be used to study people with autism, who sometimes have difficulty understanding certain social cues. Such studies could determine more precisely the nature and extent of these difficulties. Another possibility could be studying people who were raised in different cultures, which may have different norms around speaking directly or indirectly.

“In Russian, which happens to be my native language, people are more direct. So perhaps there might be some differences in how native speakers of Russian process indirect requests compared to speakers of English,” Jouravlev says.

The research was funded by the Simons Center for the Social Brain at MIT, the National Institutes of Health, and the National Science Foundation. 

MIT takes manufacturing education across the country

Wed, 12/10/2025 - 12:00am

MIT has long bolstered U.S. manufacturing by developing key innovations and production technologies, and training entrepreneurs. This fall, the Institute introduced a new tool for U.S. manufacturing: an education program for workers, held at collaborating institutions, which teaches core principles of production, helping employees and firms alike.

The new effort, the Technologist Advanced Manufacturing Program, or TechAMP, developed with U.S. Department of Defense funding, features a mix of in-person lab instruction at participating institutions, online lectures by MIT faculty and staff, and interactive simulations. There are also capstone projects, in which employees study manufacturing issues with the aim of saving their firms money.

Ultimately, TechAMP is a 12-month certificate program aimed at making the concept of the accredited “technologist” a vital part of the manufacturing enterprise. That could help workers advance in their careers. And it could help firms develop a more skilled workforce.

“We think there’s a gap between the traditional worker categories of engineer and technician, and this technologist training fills it,” says John Liu, a principal research scientist in MIT’s Department of Mechanical Engineering and co-principal investigator of the TechAMP program. “We’re very interested in creating new career pathways and allowing the manufacturing workforce to have a different kind of perspective. We want to formalize the path to becoming a technologist.”

Liu, who is also the principal investigator of the MIT Learning Engineering and Practice Group (LEAP), adds that the MIT program “is a pathway to leadership. No longer should a technician just think about one piece of equipment. They can think about the whole system, the whole operation, and help with decision-making.”

TechAMP launched this fall, in collaboration with multiple institutions, including the University of Massachusetts at Lowell, Cape Cod Community College, Ohio State University, the Community College of Rhode Island, the Connecticut Center for Advanced Technology, and the Berkshire Innovation Center in Pittsfield, Massachusetts. More than 70 people are in the initial cohort of students.

“MIT has embraced the idea that we’re reaching this new type of learner,” says Julie Diop, executive director of MIT’s Initiative for New Manufacturing (INM). TechAMP forms a key part of the education arm of that initiative, a campus-wide effort to reinvigorate U.S. manufacturing that was announced in May 2025. INM also collaborates with several industry firms embracing innovative approaches to manufacturing.

“Through TechAMP and other programs, we’re excited to reach beyond MIT’s traditional realm of manufacturing education and collaborate with companies of all sizes alongside our community college partners,” says John Hart, the Class of 1922 Professor of Mechanical Engineering, head of the Department of Mechanical Engineering at MIT, and faculty co-director of INM. “We hope that the program equips manufacturing technologists to be innovators and problem-solvers in their organizations, and to effectively deploy new technologies that can improve manufacturing productivity.”

INM is one of the key Institute-wide initiatives prioritized by MIT President Sally A. Kornbluth.

“Helping America build a future of new manufacturing is a perfect job for MIT,” Kornbluth said at the INM launch event in May. She continued: “I’m convinced that there is no more important work we can do to meet the moment and serve the nation now.”

A “confidence booster” for workers

TechAMP has been supported by two Department of Defense grants enabling the program’s development. MIT scholars collaborated with colleagues at Clemson University and Ohio State University to develop a number of the interactive simulations used in the course.

The coursework is built around a “hub-and-spoke” model that includes segments on core principles of manufacturing — that’s the hub — as well as six areas, or spokes, where companies have advised MIT that workers need more training.

The four parts of the hub comprise manufacturing process controls and their statistical analysis; understanding manufacturing systems, including workflow and efficiency; leadership skills; and operations management, from factory analysis to supply chain issues. These are also the core topics covered in MIT’s online MicroMasters certificate in manufacturing.

The six spokes may change or expand over time but currently consist of mechatronics, automation programming, robotics, machining, digital manufacturing, and design and manufacturing fundamentals.

Having the TechAMP curriculum revolve around concepts common to all manufacturing industries helps technologists-in-training better understand how their companies operate and how their own work relates to those principles.

“The hub concepts are what defines manufacturing,” Liu says. “We need to teach this undervalued set of principles to the workforce, including people without university degrees. If we do that, it means they have a timeless set of ideas. We can adapt ourselves to add industries like biomanufacturing, but we’re starting with the fundamentals.”

Students say they are enjoying the program.

“It’s been a confidence booster,” says Nicole Swan, an employee at the manufacturing firm Proterial, who is taking the TechAMP class at the Community College of Rhode Island campus in Westerly, Rhode Island. “This has really shown me so many different opportunities [for] what I could do in the future, and different avenues that are available.”

Direct value capture possible for firms

The TechAMP certificate program also involves a capstone project, in which the students analyze issues or challenges within their own firms. Ideally, those projects will lead to savings or added value, which could make it well worthwhile for manufacturing companies to pay for their employees to attend the TechAMP program — about 10 to 14 hours of work per week over the year.

“That could be a form of impact — direct value capture for the firm,” Diop says.

Some firms are already pleased with the development of TechAMP.

“There are so many manufacturing jobs that don’t need a four-year degree, but do require a very high skill level and good communications skills,” says Michael Trotta, CEO of Crystal Engineering, a versatile, 45-employee manufacturer in Newburyport, Massachusetts, whose products range from medical devices to aerospace and defense items. “I see TechAMP as a next logical step in developing a sustainable workforce.”

Trotta and three of his employees worked with MIT on the TechAMP project last spring, studying the curriculum material and providing feedback about it to the program leaders, in an effort to make the coursework as useful as possible.

"What we want workers to do is progress to a point where they become that technologist making not $20 an hour, but $40 or $50 an hour, because they have that skill set to run a lot more than just one piece of the process,” Trotta explains. “They’re able to communicate effectively with the engineers, with operations, to identify strengths and weaknesses, to help the firm drive success."

And while the position of “technologist” may not yet be in every manufacturer’s vocabulary, the MIT program leaders think it makes eminent sense as a way of further equipping workers who are currently regarded as technicians or machinists.

By analogy, Diop observes, “The role of nurse practitioner bridges the gap between nurse and doctor, and has changed how medicine is delivered.” Manufacturing, she adds, “has had a reputation for dead-end jobs, but if MIT can help break that image by providing a real pathway, I think that would be meaningful, especially for those without university degrees.”

Intriguingly — as shown by research from Ben Armstrong, executive director and a research scientist at MIT’s Industrial Performance Center — about 10 to 15 percent of titled engineers in manufacturing industries do not have engineering degrees, either. For that portion of the workforce as well, more formal training and credentials may prove useful over time.

TechAMP is new, evolving — and likely to be expanding soon. Diop and Liu are in talks with interested education networks in multiple manufacturing-heavy states, to see if they would like to partner with MIT. There is also new interest from more manufacturers, including some of the partners in MIT’s Initiative for New Manufacturing. Given that the initiative just launched in May, TechAMP has hit the ground running.

“There’s been a lot of excitement so far, we think,” Liu says. “And it’s coming from organizations and people who are eager to learn more.”  
