MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Martina Solano Soto wants to solve the mysteries of the universe, and MIT Open Learning is part of her plan

Thu, 04/24/2025 - 3:00pm

Martina Solano Soto is on a mission to pursue her passion for physics and, ultimately, to solve big problems. Since she was a kid, she has had a lot of questions: Why do animals exist? What are we doing here? Why don’t we know more about the Big Bang? And she has been determined to find answers. 

“That’s why I found MIT OpenCourseWare,” says Solano, of Girona, Spain. “When I was 14, I started to browse and wanted to find information that was reliable, dynamic, and updated. I found MIT resources by chance, and it’s one of the biggest things that has happened to me.” 

In addition to OpenCourseWare, which offers free, online, open educational resources from more than 2,500 courses that span the MIT undergraduate and graduate curriculum, Solano also took advantage of the MIT Open Learning Library. Part of MIT Open Learning, the library offers free courses and invites people to learn at their own pace while receiving immediate feedback through interactive content and exercises. 

Solano, who is now 17, has studied quantum physics via OpenCourseWare — also part of MIT Open Learning — and she has taken Open Learning Library courses on electricity and magnetism, calculus, quantum computation, and kinematics. She even created her own syllabus, complete with homework, to ensure she stayed on track and kept her goals in mind. Those goals include studying math and physics as an undergraduate. She also hopes to study general relativity and quantum mechanics at the doctoral level. “I really want to unify them to find a theory of quantum gravity,” she says. “I want to spend all my life studying and learning.” 

Solano was particularly motivated by Barton Zwiebach, professor of physics, whose courses Quantum Physics I and Quantum Physics II are available on MIT OpenCourseWare. She took advantage of all of the resources that were provided: video lectures, assignments, lecture notes, and exams.  

“I was fascinated by the way he explained. I just understood everything, and it was amazing,” she says. “Then, I learned about his book, 'A First Course in String Theory,' and it was because of him that I learned about black holes and gravity. I’m extremely grateful.” 

While Solano gives much credit to the variety and quality of Open Learning resources, she also stresses the importance of being organized. As a high school student, she has things other than string theory on her mind: her school, extracurriculars, friends, and family.  

For anyone in a similar position, she recommends “figuring out what you’re most interested in and how you can take advantage of the flexibility of Open Learning resources. Is there a half-hour before bed to watch a video, or some time on the weekend to read lecture notes? If you figure out how to make it work for you, it is definitely worth the effort.”  

“If you do that, you are going to grow academically and personally,” Solano says. “When you go to school, you will feel more confident.” 

And Solano is not slowing down. She plans to continue using Open Learning resources, this time turning her attention to graduate-level courses, all in service of her curiosity and drive for knowledge. 

“When I was younger, I read the book 'The God Equation,' by Michio Kaku, which explains quantum gravity theory. Something inside me awoke,” she recalls. “I really want to know what happens at the center of a black hole, and how we unify quantum mechanics, black holes, and general relativity. I decided that I want to invest my life in this.”  

She is well on her way. Last summer, Solano applied for and received a scholarship to study particle physics at the Autonomous University of Barcelona. This summer, she’s applying for opportunities to study the cosmos. All of this, she says, is only possible thanks to what she has learned with MIT Open Learning resources. 

“The applications ask you to explain what you like about physics, and thanks to MIT, I’m able to express that,” Solano says. “I’m able to go for these scholarships and really fight for what I dream.” 

Luna: A moon on Earth

Thu, 04/24/2025 - 11:00am

On March 6, MIT launched its first lunar landing mission since the Apollo era, sending three payloads — the AstroAnt, the RESOURCE 3D camera, and the HUMANS nanowafer — to the moon’s south polar region. The mission was based out of Luna, a mission control space designed by MIT Department of Architecture students and faculty in collaboration with the MIT Space Exploration Initiative, Inploration, and Simpson Gumpertz and Heger. It is installed in the MIT Media Lab ground-floor gallery and is open to the public as part of Artfinity, MIT’s Festival for the Arts. The installation allows visitors to observe payload operators at work and interact with the software used for the mission, thanks to virtual reality.

A central hub for mission operations, the control room is a structural and conceptual achievement: the result of a multidisciplinary approach that balances technical challenges with a vision for an immersive experience. “This will be our moon on Earth,” says Mateo Fernandez, a third-year MArch student and 2024 MAD Design Fellow, who designed and fabricated Luna in collaboration with Nebyu Haile, a PhD student in the Building Technology program in the Department of Architecture, and Simon Lesina Debiasi, a research assistant in the SMArchS Computation program and part of the Self-Assembly Lab. “The design was meant for people — for the researchers to be able to see what’s happening at all times, and for the spectators to have a 360-degree panoramic view of everything that’s going on,” explains Fernandez. “A key vision of the team was to create a control room that broke away from the traditional, closed-off model — one that instead invited the public to observe, ask questions, and engage with the mission,” adds Haile.

For this project, students were advised by Skylar Tibbits, founder and co-director of the Self-Assembly Lab, associate professor of design research, and the Morningside Academy for Design (MAD)’s assistant director for education; J. Roc Jih, associate professor of the practice in architectural design; John Ochsendorf, MIT Class of 1942 Professor with appointments in the departments of Architecture and Civil and Environmental Engineering, and founding director of MAD; and Brandon Clifford, associate professor of architecture. The team worked closely with Cody Paige, director of the Space Exploration Initiative at the Media Lab, and her collaborators, emphasizing that they “tried to keep things very minimal, very simple, because at the end of the day,” explains Fernandez, “we wanted to create a design that allows the researchers to shine and the mission to shine.”

“This project grew out of the Space Architecture class we co-taught with Cody Paige and astronaut and MIT AeroAstro [Department of Aeronautics and Astronautics] faculty member Jeff Hoffman” in the fall semester, explains Tibbits. “Mateo was part of that studio, and from there, Cody invited us to design the mission control project. We then brought Mateo onboard, Simon, Nebyu, and the rest of the project team.” According to Tibbits, “this project represents MIT’s mind-and-hand ethos. We had designers, architects, artists, computational experts, and engineers working together, reflecting the polymath vision — left brain, right brain, the creative and the technical coming together to make this possible.”

Luna was funded and informed by Tibbits and Jih’s project under the Professor Amar G. Bose Research Grant Program. “J. Jih and I had been doing research for the Bose grant around basalt and mono-material construction,” says Tibbits, adding that they “had explored foamed glass materials similar to pumice or foamed basalt, which are also similar to lunar regolith.” “FOAMGLAS is typically used for insulation, but it has diverse applications, including direct ground contact and exterior walls, with strong acoustic and thermal properties,” says Jih. “We helped Mateo understand how the material is used in architecture today, and how it could be applied in this project, aligning with our work on new material palettes and mono-material construction techniques.”

Additional funding came from Inploration, a project run by creative director, author, and curator Lawrence Azerrad, as well as expeditionary artist, curator, and analog astronaut artist Richelle Ellis, and Comcast, a Media Lab member company. It was also supported by the MIT Morningside Academy for Design through Fernandez’s Design Fellowship, and by industry members including Owens Corning (construction materials), Bose (communications), and MIT Media Lab member companies Dell Technologies (operations hardware) and Steelcase (operations seating). 

A moon on Earth

While the lunar mission ended prematurely, the team says it achieved success in the design and construction of a control room embodying MIT’s design approach and its capacity to explore new technologies while maintaining simplicity. Luna evokes the moon itself, appearing round or crescent-shaped depending on the viewer’s position.

“What’s remarkable is how close the final output is to Mateo’s original sketches and renderings,” Tibbits notes. “That often doesn’t happen — where the final built project aligns so precisely with the initial design intent.”

Luna’s entire structure is built from FOAMGLAS, a durable material composed of glass cells usually used for insulation. “FOAMGLAS is an interesting material,” says Lesina Debiasi, who supported fabrication efforts, ensuring a fast and safe process. “It’s relatively durable and light, but can easily be crumbled with a sharp edge or blade, requiring every step of the fabrication process — cutting, texturing, sealing — to be carefully controlled.”

Fernandez, whose design experience was influenced by the idea that “simple moves” are most powerful, explains: “We’re giving a second life to materials that are not thought of for building construction … and I think that’s an effective idea. Here, you don’t need wood, concrete, rebar — you can build with one material only.” While the interior of the dome-shaped construction is smooth, the exterior was hand textured to evoke the basalt-like surface of the moon.

The lightweight cellular glass produced by Owens Corning, which sponsored part of the material, is an unexpected choice for a compression structure — a type of architectural design in which stability is achieved through compressive force alone, and which usually calls for heavy materials. The control room uses no connections or additional supports; it depends on the precise placement, size, and weight of individual blocks to create a stable form from a succession of arches.

“Traditional compression structures rely on their own weight for stability, but using a material that is more than 10 times lighter than masonry meant we had to rethink everything. It was about finding the perfect balance between design vision and structural integrity,” reflects Haile, who was responsible for the structural calculations for the dome and its support.

Compression relies on gravity, and wouldn’t be a viable construction method on the moon itself. “We’re building using physics, loads, structures, and equilibrium to create this thing that looks like the moon, but depends on Earth’s forces to be built. I think people don’t see that at first, but there’s something cheeky and ironic about it,” confides Fernandez, acknowledging that the project merges historical building methods with contemporary design.

The location and purpose of Luna — both a work space and an installation engaging the public — implied balancing privacy and transparency to achieve functionality. “One of the most important design elements that reflected this vision was the openness of the dome,” says Haile. “We worked closely from the start to find the right balance — adjusting the angle and size of the opening to make the space feel welcoming, while still offering some privacy to those working inside.”

The power of collaboration

With the FOAMGLAS material, the team had to invent a fabrication process that would achieve the initial vision while maintaining structural integrity. Sourcing a material with properties radically different from conventional construction meant close collaboration on the engineering front, since the lightweight cellular glass demanded creative problem-solving: “What appears perfect in digital models doesn’t always translate seamlessly into the real world,” says Haile. “The slope, curves, and overall geometry directly determine whether the dome will stand, requiring Mateo and me to work in sync from the very beginning through the end of construction.” While the engineering was primarily led by Haile and Ochsendorf, the structural design was officially reviewed and approved by Paul Kassabian at Simpson Gumpertz and Heger (SGH), ensuring compliance with engineering standards and building codes.

“None of us had worked with FOAMGLAS before, and we needed to figure out how best to cut, texture, and seal it,” says Lesina Debiasi. “Since each row consists of a distinct block shape and specific angles, ensuring accuracy and repeatability across all the blocks became a major challenge. Since we had to cut each individual block four times before we were able to groove and texture the surface, creating a safe production process and mitigating the distribution of dust was critical,” he explains. “Working inside a tent, wearing personal protective equipment like masks, visors, suits, and gloves made it possible to work for an extended period with this material.”

In addition, manufacturing introduced small margins of error threatening the structural integrity of the dome, prompting hands-on experimentation. “The control room is built from 12 arches,” explains Fernandez. “When one of the arches closes, it becomes stable, and you can move on to the next one … Going from side to side, you meet at the middle and close the arch using a special block — a keystone, which was cut to measure,” he says. “In conversations with our advisors, we decided to account for irregularities in the final keystone of each row. Once this custom keystone sat in place, the forces would stabilize the arch and make it secure,” adds Lesina Debiasi.

“This project exemplified the best practices of engineers and architects working closely together from design inception to completion — something that was historically common but is less typical today,” says Haile. “This collaboration was not just necessary — it ultimately improved the final result.”

Fernandez, who is supported this year by the MAD Design Fellowship, expressed how “the fellowship gave [him] the freedom to explore [his] passions and also keep [his] agency.”

“In a way, this project embodies what design education at MIT should be,” Tibbits reflects. “We’re building at full scale, with real-world constraints, experimenting at the limits of what we know — design, computation, engineering, and science. It’s hands-on, highly experimental, and deeply collaborative, which is exactly what we dream of for MAD, and MIT’s design education more broadly.”

“Luna, our physical lunar mission control, highlights the incredible collaboration across the Media Lab, Architecture, and the School of Engineering to bring our lunar mission to the world. We are democratizing access to space for all,” says Dava Newman, Media Lab director and Apollo Professor of Astronautics.

A full list of contributors and supporters can be found at the Morningside Academy for Design’s website.

Six from MIT elected to American Academy of Arts and Sciences for 2025

Thu, 04/24/2025 - 12:00am

Six MIT faculty members are among the nearly 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 23.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT in 2025 are:

  • Lotte Bailyn, T. Wilson Professor of Management Emerita;
  • Gareth McKinley, School of Engineering Professor of Teaching Innovation;
  • Nasser Rabbat, Aga Khan Professor;
  • Susan Silbey, Leon and Anne Goldberg Professor of Humanities and professor of sociology and anthropology;
  • Anne Whiston Spirn, Cecil and Ida Green Distinguished Professor of Landscape Architecture and Planning; and
  • Catherine Wolfram, William Barton Rogers Professor in Energy and professor of applied economics.

“These new members’ accomplishments speak volumes about the human capacity for discovery, creativity, leadership, and persistence. They are a stellar testament to the power of knowledge to broaden our horizons and deepen our understanding,” says Academy President Laurie L. Patton. “We invite every new member to celebrate their achievement and join the Academy in our work to promote the common good.”

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.

Robotic system zeroes in on objects most relevant for helping humans

Thu, 04/24/2025 - 12:00am

For a robot, the real world is a lot to take in. Making sense of every data point in a scene can take a huge amount of computational effort and time. Using that information to then decide how to best help a human is an even thornier exercise.

Now, MIT roboticists have a way to cut through the data noise, to help robots focus on the features in a scene that are most relevant for assisting humans.

Their approach, which they aptly dub “Relevance,” enables a robot to use cues in a scene, such as audio and visual information, to determine a human’s objective and then quickly identify the objects that are most likely to be relevant in fulfilling that objective. The robot then carries out a set of maneuvers to safely offer the relevant objects or actions to the human.

The researchers demonstrated the approach with an experiment that simulated a conference breakfast buffet. They set up a table with various fruits, drinks, snacks, and tableware, along with a robotic arm outfitted with a microphone and camera. Applying the new Relevance approach, they showed that the robot was able to correctly identify a human’s objective and appropriately assist them in different scenarios.

In one case, the robot took in visual cues of a human reaching for a can of prepared coffee, and quickly handed the person milk and a stir stick. In another scenario, the robot picked up on a conversation between two people talking about coffee, and offered them a can of coffee and creamer.

Overall, the robot was able to predict a human’s objective with 90 percent accuracy and to identify relevant objects with 96 percent accuracy. The method also improved a robot’s safety, reducing the number of collisions by more than 60 percent, compared to carrying out the same tasks without applying the new method.

“This approach of enabling relevance could make it much easier for a robot to interact with humans,” says Kamal Youcef-Toumi, professor of mechanical engineering at MIT. “A robot wouldn’t have to ask a human so many questions about what they need. It would just actively take information from the scene to figure out how to help.”

Youcef-Toumi’s group is exploring how robots programmed with Relevance can help in smart manufacturing and warehouse settings, where they envision robots working alongside and intuitively assisting humans.

Youcef-Toumi, along with graduate students Xiaotong Zhang and Dingcheng Huang, will present their new method at the IEEE International Conference on Robotics and Automation (ICRA) in May. The work builds on another paper presented at ICRA the previous year.

Finding focus

The team’s approach is inspired by our own ability to gauge what’s relevant in daily life. Humans can filter out distractions and focus on what’s important, thanks to a region of the brain known as the Reticular Activating System (RAS). The RAS is a bundle of neurons in the brainstem that acts subconsciously to prune away unnecessary stimuli, so that a person can consciously perceive the relevant stimuli. The RAS helps to prevent sensory overload, keeping us, for example, from fixating on every single item on a kitchen counter, and instead helping us to focus on pouring a cup of coffee.

“The amazing thing is, these groups of neurons filter everything that is not important, and then it has the brain focus on what is relevant at the time,” Youcef-Toumi explains. “That’s basically what our proposition is.”

He and his team developed a robotic system that broadly mimics the RAS’s ability to selectively process and filter information. The approach consists of four main phases. The first is a watch-and-learn “perception” stage, during which a robot takes in audio and visual cues, for instance from a microphone and camera, that are continuously fed into an AI “toolkit.” This toolkit can include a large language model (LLM) that processes audio conversations to identify keywords and phrases, and various algorithms that detect and classify objects, humans, physical actions, and task objectives. The AI toolkit is designed to run continuously in the background, similarly to the subconscious filtering that the brain’s RAS performs.

The second stage is a “trigger check” phase, which is a periodic check that the system performs to assess if anything important is happening, such as whether a human is present or not. If a human has stepped into the environment, the system’s third phase will kick in. This phase is the heart of the team’s system, which acts to determine the features in the environment that are most likely relevant to assist the human.

To establish relevance, the researchers developed an algorithm that takes in real-time predictions made by the AI toolkit. For instance, the toolkit’s LLM may pick up the keyword “coffee,” and an action-classifying algorithm may label a person reaching for a cup as having the objective of “making coffee.” The team’s Relevance method would factor in this information to first determine the “class” of objects that have the highest probability of being relevant to the objective of “making coffee.” This might automatically filter out classes such as “fruits” and “snacks,” in favor of “cups” and “creamers.” The algorithm would then further filter within the relevant classes to determine the most relevant “elements.” For instance, based on visual cues of the environment, the system may label a cup closest to a person as more relevant — and helpful — than a cup that is farther away.
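The two-stage filtering described above — classes first, then elements within a class — can be sketched as follows. The object names, class probabilities, and distance heuristic below are hypothetical stand-ins, not the team's actual algorithm.

```python
# Illustrative sketch of two-stage relevance filtering. All names, scores,
# and the 0.5 threshold are made-up values, not the published method.

def filter_relevant(objects, class_probs, threshold=0.5):
    """Stage 1: keep objects whose class is probably relevant.
    Stage 2: rank survivors by a per-element cue (here, distance)."""
    relevant_classes = {c for c, p in class_probs.items() if p >= threshold}
    candidates = [o for o in objects if o["class"] in relevant_classes]
    # Mirror the cup example: closer objects are treated as more relevant.
    return sorted(candidates, key=lambda o: o["distance_m"])

scene = [
    {"name": "apple",   "class": "fruits",   "distance_m": 0.4},
    {"name": "mug A",   "class": "cups",     "distance_m": 0.3},
    {"name": "mug B",   "class": "cups",     "distance_m": 1.2},
    {"name": "creamer", "class": "creamers", "distance_m": 0.8},
]
# Hypothetical class probabilities for the objective "making coffee".
probs = {"fruits": 0.1, "snacks": 0.05, "cups": 0.9, "creamers": 0.7}

ranked = filter_relevant(scene, probs)
# "fruits" is filtered out; "mug A" ranks first because it is closest.
```

The class-level cut prunes most of the scene cheaply before the finer element-level ranking runs, which is what makes the approach fast.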

In the fourth and final phase, the robot would then take the identified relevant objects and plan a path to physically access and offer the objects to the human.

Helper mode

The researchers tested the new system in experiments that simulate a conference breakfast buffet. They chose this scenario based on the publicly available Breakfast Actions Dataset, which comprises videos and images of typical activities that people perform during breakfast time, such as preparing coffee, cooking pancakes, making cereal, and frying eggs. Actions in each video and image are labeled, along with the overall objective (e.g., frying eggs versus making coffee).

Using this dataset, the team tested various algorithms in their AI toolkit, so that, when observing a person's actions in a new scene, the algorithms could accurately label and classify the human's tasks and objectives, along with the associated relevant objects.

In their experiments, they set up a robotic arm and gripper and instructed the system to assist humans as they approached a table filled with various drinks, snacks, and tableware. They found that when no humans were present, the robot’s AI toolkit operated continuously in the background, labeling and classifying objects on the table.

When, during a trigger check, the robot detected a human, it snapped to attention, turning on its Relevance phase and quickly identifying objects in the scene that were most likely to be relevant, based on the human’s objective, which was determined by the AI toolkit.

“Relevance can guide the robot to generate seamless, intelligent, safe, and efficient assistance in a highly dynamic environment,” says co-author Zhang.

Going forward, the team hopes to apply the system to scenarios that resemble workplace and warehouse environments, as well as to other tasks and objectives typically performed in household settings.

“I would want to test this system in my home to see, for instance, if I’m reading the paper, maybe it can bring me coffee. If I’m doing laundry, it can bring me a laundry pod. If I’m doing repair, it can bring me a screwdriver,” Zhang says. “Our vision is to enable human-robot interactions that can be much more natural and fluent.”

This research was made possible by the support and partnership of King Abdulaziz City for Science and Technology (KACST) through the Center for Complex Engineering Systems at MIT and KACST.

Wearable device tracks individual cells in the bloodstream in real time

Wed, 04/23/2025 - 3:00pm

Researchers at MIT have developed a noninvasive medical monitoring device powerful enough to detect single cells within blood vessels, yet small enough to wear like a wristwatch. Crucially, the wearable device enables continuous monitoring of circulating cells in the human body.

The technology was presented online on March 3 by the journal npj Biosensing and is forthcoming in the journal’s print version.

The device — named CircTrek — was developed by researchers in the Nano-Cybernetic Biotrek research group, led by Deblina Sarkar, assistant professor at MIT and AT&T Career Development Chair at the MIT Media Lab. This technology could greatly facilitate early diagnosis of disease, detection of disease relapse, assessment of infection risk, and determination of whether a disease treatment is working, among other medical processes.

Whereas traditional blood tests are like a snapshot of a patient’s condition, CircTrek was designed to provide real-time assessment, described in the npj Biosensing paper as “an unmet goal to date.” A different technology that offers monitoring of cells in the bloodstream with some continuity, in vivo flow cytometry, “requires a room-sized microscope, and patients need to be there for a long time,” says Kyuho Jang, a PhD student in Sarkar’s lab.

CircTrek, on the other hand, which is equipped with an onboard Wi-Fi module, could even monitor a patient’s circulating cells at home and send that information to the patient’s doctor or care team.

“CircTrek offers a path to harnessing previously inaccessible information, enabling timely treatments, and supporting accurate clinical decisions with real-time data,” says Sarkar. “Existing technologies provide monitoring that is not continuous, which can lead to missing critical treatment windows. We overcome this challenge with CircTrek.”

The device works by directing a focused laser beam to stimulate cells beneath the skin that have been fluorescently labeled. Such labeling can be accomplished with a number of methods, including applying antibody-based fluorescent dyes to the cells of interest or genetically modifying such cells so that they express fluorescent proteins.

For example, a patient receiving CAR T cell therapy, in which immune cells are collected and modified in a lab to fight cancer (or, experimentally, to combat HIV or Covid-19), could have those cells labeled at the same time with fluorescent dyes or genetic modification so the cells express fluorescent proteins. Importantly, cells of interest can also be labeled with in vivo labeling methods approved in humans. Once the cells are labeled and circulating in the bloodstream, CircTrek is designed to apply laser pulses to enhance and detect the cells’ fluorescent signal while an arrangement of filters minimizes low-frequency noise such as heartbeats.

“We optimized the optomechanical parts to reduce noise significantly and only capture the signal from the fluorescent cells,” says Jang.
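The low-frequency rejection Jang describes can be illustrated with a minimal high-pass filter: subtract a slowly varying baseline (heartbeat-scale drift) so that only fast fluorescent spikes survive. This is a generic sketch of the principle only; CircTrek's actual filtering is optomechanical and electronic, and the cutoff, smoothing constant, and signal shapes below are invented.

```python
# Generic high-pass sketch of low-frequency noise rejection. The smoothing
# constant and the synthetic signal are illustrative, not CircTrek's design.

def high_pass(samples, alpha=0.05):
    """First-order high-pass: y[n] = x[n] - EMA(x)[n].
    The exponential moving average tracks slow drift (e.g., heartbeat),
    leaving only fast transients such as a passing fluorescent cell."""
    baseline, out = samples[0], []
    for x in samples:
        baseline = (1 - alpha) * baseline + alpha * x  # slow component
        out.append(x - baseline)                       # fast component
    return out

# Synthetic data: a slow linear drift plus one sharp "cell" spike at index 50.
signal = [10.0 + 0.01 * n for n in range(100)]
signal[50] += 5.0

filtered = high_pass(signal)
# After filtering, the spike at index 50 stands out while the drift is
# suppressed to a small residual.
```

The same idea, implemented in hardware, lets a tiny photodetector ignore the large but slow physiological background and register only single-cell events.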

Detecting the labeled CAR T cells, CircTrek could assess whether the cell therapy treatment is working. As an example, persistence of the CAR T cells in the blood after treatment is associated with better outcomes in patients with B-cell lymphoma.

To keep CircTrek small and wearable, the researchers were able to miniaturize the components of the device, such as the circuit that drives the high-intensity laser source and keeps the power level of the laser stable to avoid false readings.

The sensor that detects the fluorescent signals of the labeled cells is also minute, and yet it is capable of detecting a quantity of light equivalent to a single photon, Jang says.

The device’s subcircuits, including the laser driver and the noise filters, were custom-designed to fit on a circuit board measuring just 42 mm by 35 mm, allowing CircTrek to be approximately the same size as a smartwatch.

CircTrek was tested on an in vitro configuration that simulated blood flow beneath human skin, and its single-cell detection capabilities were verified through manual counting with a high-resolution confocal microscope. For the in vitro testing, a fluorescent dye called Cyanine5.5 was employed. That particular dye was selected because it reaches peak activation at wavelengths within skin tissue’s optical window, or the range of wavelengths that can penetrate the skin with minimal scattering.

The safety of the device was also investigated, in particular the temperature increase that the laser causes in skin tissue. A measured rise of 1.51 degrees Celsius at the skin surface is well below the level that would damage tissue, with enough margin that the device’s detection area and power could be safely increased to ensure that at least one blood vessel is observed.

While clinical translation of CircTrek will require further steps, Jang says its parameters can be modified to broaden its potential, so that doctors could be provided with critical information on nearly any patient.

A brief history of expansion microscopy

Wed, 04/23/2025 - 3:00pm

Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another, and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.

This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of MIT McGovern Institute for Brain Research investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.

“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute (HHMI) investigator, a professor of brain and cognitive sciences and biological engineering, and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.

Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.

“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”

Origins of ExM 

To develop expansion microscopy, Boyden and his team turned to hydrogel, a material with remarkable water-absorbing properties that had already been put to practical use; it’s layered inside disposable diapers to keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure while they absorbed hundreds of times their original weight in water, expanding the space between their chemical components as they swell.

After some experimentation, Boyden’s team settled on four key steps to enlarging tissue samples for better imaging. First, the tissue must be infused with a hydrogel. Components of the tissue, biomolecules, are anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Then the tissue is chemically softened and water is added. As the hydrogel absorbs the water, it swells and the tissue expands, growing evenly so the relative positions of its components are preserved.

Boyden and graduate students Fei Chen and Paul Tillberg’s first report on expansion microscopy was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects that are separated by about 300 nanometers — a limit imposed by the laws of physics. With expansion microscopy, Boyden’s group reported an effective resolution of about 70 nanometers, for a fourfold expansion.
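The resolution gain quoted above is, to first order, just division; the one-line sketch below (my illustration, not the paper’s full analysis, which also accounts for labeling chemistry and distortion) reproduces the reported figure approximately:

```python
def effective_resolution_nm(diffraction_limit_nm, expansion_factor):
    """Naive effective resolution: the optical diffraction limit divided by
    the physical expansion factor of the sample."""
    return diffraction_limit_nm / expansion_factor

# ~300 nm light-microscope limit, fourfold expansion
print(effective_resolution_nm(300, 4))  # 75.0 nm, in line with the ~70 nm reported
```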

Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with their own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible for most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now, anybody can go look at the building blocks of life and how they relate to each other.”

Empowering scientists

Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers reporting on discoveries they have made using expansion microscopy. For neuroscientists, the technique has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.

It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small, most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology, linked to function, and all those things — which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite, she can watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.

Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.

Always improving

Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher-resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.

They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. In the latter regard, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less-costly diagnoses.

Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now re-stain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.

But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet — but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.

Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now, you can get images that look a lot like electron microscopy images, but on regular old light microscopes — the kind that everybody has access to,” Boyden says.

Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are used with an ultra-fast imaging method developed by Eric Betzig, an HHMI investigator at the University of California at Berkeley, called lattice light-sheet microscopy, the entire brain of a fruit fly can be imaged at high resolution in just a few days.

And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then-MIT professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”

Expanding possibilities

Ten years past the first demonstration of expansion microscopy’s power, Boyden and his team are committed to continuing to make expansion microscopy more powerful. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify — so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.

Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoc in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebra fish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections with a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology Austria that takes advantage of both the high resolution and molecular information that expansion microscopy can reveal.

“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.

His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network — how life really operates,” he says.

New electronic “skin” could enable lightweight night-vision glasses

Wed, 04/23/2025 - 11:00am

MIT engineers have developed a technique to grow and peel ultrathin “skins” of electronic material. The method could pave the way for new classes of electronic devices, such as ultrathin wearable sensors, flexible transistors and computing elements, and highly sensitive and compact imaging devices. 

As a demonstration, the team fabricated a thin membrane of pyroelectric material — a class of heat-sensing material that produces an electric current in response to changes in temperature. The thinner the pyroelectric material, the better it is at sensing subtle thermal variations.
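The sensing principle described above follows the textbook pyroelectric relation i = p · A · dT/dt, where p is the pyroelectric coefficient, A the detector area, and dT/dt the rate of temperature change; the function and numbers below are illustrative and are not taken from the study:

```python
def pyroelectric_current(p_coeff_c_per_m2_k, area_m2, dT_dt_k_per_s):
    """Short-circuit pyroelectric current: i = p * A * dT/dt.
    Thinner films heat and cool faster, raising dT/dt and thus the signal."""
    return p_coeff_c_per_m2_k * area_m2 * dT_dt_k_per_s

# Illustrative values only: p ~ 1e-3 C/(m^2*K), a 60-square-micron pixel,
# and a 0.01 K/s temperature drift
i = pyroelectric_current(1e-3, 60e-12, 0.01)
print(f"{i:.2e} A")
```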

With their new method, the team fabricated the thinnest pyroelectric membrane yet, measuring 10 nanometers thick, and demonstrated that the film is highly sensitive to heat and radiation across the far-infrared spectrum.

The newly developed film could enable lighter, more portable, and highly accurate far-infrared (IR) sensing devices, with potential applications for night-vision eyewear and autonomous driving in foggy conditions. Current state-of-the-art far-IR sensors require bulky cooling elements. In contrast, the new pyroelectric thin film requires no cooling and is sensitive to much smaller changes in temperature. The researchers are exploring ways to incorporate the film into lighter, higher-precision night-vision glasses.

“This film considerably reduces weight and cost, making it lightweight, portable, and easier to integrate,” says Xinyuan Zhang, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE). “For example, it could be directly worn on glasses.”

The heat-sensing film could also have applications in environmental and biological sensing, as well as imaging of astrophysical phenomena that emit far-infrared radiation.

What’s more, the new lift-off technique is generalizable beyond pyroelectric materials. The researchers plan to apply the method to make other ultrathin, high-performance semiconducting films.

Their results are reported today in a paper appearing in the journal Nature. The study’s MIT co-authors are first author Xinyuan Zhang, Sangho Lee, Min-Kyu Song, Haihui Lan, Jun Min Suh, Jung-El Ryu, Yanjie Shao, Xudong Zheng, Ne Myo Han, and Jeehwan Kim, associate professor of mechanical engineering and of materials science and engineering, along with researchers at the University of Wisconsin-Madison led by Professor Chang-Beom Eom and authors from multiple other institutions.

Chemical peel

Kim’s group at MIT is finding new ways to make smaller, thinner, and more flexible electronics. They envision that such ultrathin computing “skins” can be incorporated into everything from smart contact lenses and wearable sensing fabrics to stretchy solar cells and bendable displays. To realize such devices, Kim and his colleagues have been experimenting with methods to grow, peel, and stack semiconducting elements, to fabricate ultrathin, multifunctional electronic thin-film membranes.

One method that Kim has pioneered is “remote epitaxy” — a technique where semiconducting materials are grown on a single-crystalline substrate, with an ultrathin layer of graphene in between. The substrate’s crystal structure serves as a scaffold along which the new material can grow. The graphene acts as a nonstick layer, similar to Teflon, making it easy for researchers to peel off the new film and transfer it onto flexible and stacked electronic devices. After peeling off the new film, the underlying substrate can be reused to make additional thin films.

Kim has applied remote epitaxy to fabricate thin films with various characteristics. In trying different combinations of semiconducting elements, the researchers happened to notice that a certain pyroelectric material, called PMN-PT, did not require an intermediate layer to assist in separating from its substrate. Just by growing PMN-PT directly on a single-crystalline substrate, the researchers could then remove the grown film, with no rips or tears to its delicate lattice.

“It worked surprisingly well,” Zhang says. “We found the peeled film is atomically smooth.”

Lattice lift-off

In their new study, the MIT and UW-Madison researchers took a closer look at the process and discovered that the key to the material’s easy-peel property was lead. The team, along with colleagues at Rensselaer Polytechnic Institute, found that the pyroelectric film contains, as part of its chemical structure, an orderly arrangement of lead atoms with a large “electron affinity,” meaning that lead attracts electrons and prevents the charge carriers from traveling and connecting to other materials such as an underlying substrate. The lead atoms act as tiny nonstick units, allowing the material as a whole to peel away, perfectly intact.

The team ran with the realization and fabricated multiple ultrathin films of PMN-PT, each about 10 nanometers thin. They peeled off the pyroelectric films and transferred them onto a small chip to form an array of 100 ultrathin heat-sensing pixels, each about 60 square microns (6 × 10⁻⁷ square centimeters). They exposed the films to ever-slighter changes in temperature and found the pixels were highly sensitive to small changes across the far-infrared spectrum.

The sensitivity of the pyroelectric array is comparable to that of state-of-the-art night-vision devices. These devices are currently based on photodetector materials, in which a change in temperature induces the material’s electrons to jump in energy and briefly cross an energy “band gap,” before settling back into their ground state. This electron jump serves as an electrical signal of the temperature change. However, this signal can be affected by noise in the environment, and to prevent such effects, photodetectors have to also include cooling devices that bring the instruments down to liquid nitrogen temperatures.

Current night-vision goggles and scopes are heavy and bulky. With the group’s new pyroelectric-based approach, night-vision devices could have the same sensitivity without the weight of cooling hardware.

The researchers also found that the films were sensitive beyond the range of current night-vision devices and could respond to wavelengths across the entire infrared spectrum. This suggests that the films could be incorporated into small, lightweight, and portable devices for various applications that require different infrared regions. For instance, when integrated into autonomous vehicle platforms, the films could enable cars to “see” pedestrians and vehicles in complete darkness or in foggy and rainy conditions. 

The film could also be used in gas sensors for real-time and on-site environmental monitoring, helping detect pollutants. In electronics, they could monitor heat changes in semiconductor chips to catch early signs of malfunctioning elements.

The team says the new lift-off method can be generalized to materials that may not themselves contain lead. In those cases, the researchers suspect that they can infuse Teflon-like lead atoms into the underlying substrate to induce a similar peel-off effect. For now, the team is actively working toward incorporating the pyroelectric films into a functional night-vision system.

“We envision that our ultrathin films could be made into high-performance night-vision goggles, considering its broad-spectrum infrared sensitivity at room-temperature, which allows for a lightweight design without a cooling system,” Zhang says. “To turn this into a night-vision system, a functional device array should be integrated with readout circuitry. Furthermore, testing in varied environmental conditions is essential for practical applications.”

This work was supported by the U.S. Air Force Office of Scientific Research.

New model predicts a chemical reaction’s point of no return

Wed, 04/23/2025 - 11:00am

When chemists design new chemical reactions, one useful piece of information involves the reaction’s transition state — the point of no return from which a reaction must proceed.

This information allows chemists to try to produce the right conditions that will allow the desired reaction to occur. However, current methods for predicting the transition state and the path that a chemical reaction will take are complicated and require a huge amount of computational power.

MIT researchers have now developed a machine-learning model that can make these predictions in less than a second, with high accuracy. Their model could make it easier for chemists to design chemical reactions that could generate a variety of useful compounds, such as pharmaceuticals or fuels.

“We’d like to be able to ultimately design processes to take abundant natural resources and turn them into molecules that we need, such as materials and therapeutic drugs. Computational chemistry is really important for figuring out how to design more sustainable processes to get us from reactants to products,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering, a professor of chemistry, and the senior author of the new study.

Former MIT graduate student Chenru Duan PhD ’22, who is now at Deep Principle; former Georgia Tech graduate student Guan-Horng Liu, who is now at Meta; and Cornell University graduate student Yuanqi Du are the lead authors of the paper, which appears today in Nature Machine Intelligence.

Better estimates

For any given chemical reaction to occur, it must go through a transition state, which takes place when it reaches the energy threshold needed for the reaction to proceed. These transition states are so fleeting that they’re nearly impossible to observe experimentally.

As an alternative, researchers can calculate the structures of transition states using techniques based on quantum chemistry. However, that process requires a great deal of computing power and can take hours or days to calculate a single transition state.

“Ideally, we’d like to be able to use computational chemistry to design more sustainable processes, but this computation in itself is a huge use of energy and resources in finding these transition states,” Kulik says.

In 2023, Kulik, Duan, and others reported on a machine-learning strategy that they developed to predict the transition states of reactions. This strategy is faster than using quantum chemistry techniques, but still slower than what would be ideal because it requires the model to generate about 40 structures, then run those predictions through a “confidence model” to predict which states were most likely to occur.

One reason why that model needs to be run so many times is that it uses randomly generated guesses for the starting point of the transition state structure, then performs dozens of calculations until it reaches its final, best guess. These randomly generated starting points may be very far from the actual transition state, which is why so many steps are needed.

The researchers’ new model, React-OT, described in the Nature Machine Intelligence paper, uses a different strategy. In this work, the researchers trained their model to begin from an estimate of the transition state generated by linear interpolation — a technique that estimates each atom’s position by moving it halfway between its position in the reactants and in the products, in three-dimensional space.

“A linear guess is a good starting point for approximating where that transition state will end up,” Kulik says. “What the model’s doing is starting from a much better initial guess than just a completely random guess, as in the prior work.”
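The linear-interpolation starting guess described above amounts to averaging each atom’s reactant and product coordinates. Here is a minimal sketch (toy coordinates and a hypothetical function name; React-OT itself then refines this guess with its learned model):

```python
def interpolate_guess(reactant_xyz, product_xyz):
    """Midpoint guess: place each atom halfway between its position
    in the reactants and its position in the products."""
    return [
        tuple((r + p) / 2 for r, p in zip(r_atom, p_atom))
        for r_atom, p_atom in zip(reactant_xyz, product_xyz)
    ]

# Toy two-atom example (coordinates in angstroms, purely illustrative)
reactants = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
products  = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(interpolate_guess(reactants, products))  # [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
```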

Because of this, it takes the model fewer steps and less time to generate a prediction. In the new study, the researchers showed that their model could make predictions with only about five steps, taking about 0.4 seconds. These predictions don’t need to be fed through a confidence model, and they are about 25 percent more accurate than the predictions generated by the previous model.

“That really makes React-OT a practical model that we can directly integrate to the existing computational workflow in high-throughput screening to generate optimal transition state structures,” Duan says.

“A wide array of chemistry”

To create React-OT, the researchers trained it on the same dataset that they used to train their older model. These data contain structures of reactants, products, and transition states, calculated using quantum chemistry methods, for 9,000 different chemical reactions, mostly involving small organic or inorganic molecules.

Once trained, the model performed well on other reactions from this set, which had been held out of the training data. It also performed well on other types of reactions that it hadn’t been trained on, and could make accurate predictions involving reactions with larger reactants, which often have side chains that aren’t directly involved in the reaction.

“This is important because there are a lot of polymerization reactions where you have a big macromolecule, but the reaction is occurring in just one part. Having a model that generalizes across different system sizes means that it can tackle a wide array of chemistry,” Kulik says.

The researchers are now working on training the model so that it can predict transition states for reactions between molecules that include additional elements, including sulfur, phosphorus, chlorine, silicon, and lithium.

“To quickly predict transition state structures is key to all chemical understanding,” says Markus Reiher, a professor of theoretical chemistry at ETH Zurich, who was not involved in the study. “The new approach presented in the paper could very much accelerate our search and optimization processes, bringing us faster to our final result. As a consequence, also less energy will be consumed in these high-performance computing campaigns. Any progress that accelerates this optimization benefits all sorts of computational chemical research.”

The MIT team hopes that other scientists will make use of their approach in designing their own reactions, and have created an app for that purpose.

“Whenever you have a reactant and product, you can put them into the model and it will generate the transition state, from which you can estimate the energy barrier of your intended reaction, and see how likely it is to occur,” Duan says.

The research was funded by the U.S. Army Research Office, the U.S. Department of Defense Basic Research Office, the U.S. Air Force Office of Scientific Research, the National Science Foundation, and the U.S. Office of Naval Research.

MIT engineers print synthetic “metamaterials” that are both strong and stretchy

Wed, 04/23/2025 - 5:00am

In metamaterials design, the name of the game has long been “stronger is better.”

Metamaterials are synthetic materials with microscopic structures that give the overall material exceptional properties. A huge focus has been in designing metamaterials that are stronger and stiffer than their conventional counterparts. But there’s a trade-off: The stiffer a material, the less flexible it is.

MIT engineers have now found a way to fabricate a metamaterial that is both strong and stretchy. The base material is typically highly rigid and brittle, but it is printed in precise, intricate patterns that form a structure that is both strong and flexible.

The key to the new material’s dual properties is a combination of stiff microscopic struts and a softer woven architecture. This microscopic “double network,” which is printed using a plexiglass-like polymer, produced a material that could stretch over four times its size without fully breaking. In comparison, the polymer in other forms has little to no stretch and shatters easily once cracked.

The researchers say the new double-network design can be applied to other materials, for instance to fabricate stretchy ceramics, glass, and metals. Such tough yet bendy materials could be made into tear-resistant textiles, flexible semiconductors, electronic chip packaging, and durable yet compliant scaffolds on which to grow cells for tissue repair.

“We are opening up this new territory for metamaterials,” says Carlos Portela, the Robert N. Noyce Career Development Associate Professor at MIT. “You could print a double-network metal or ceramic, and you could get a lot of these benefits, in that it would take more energy to break them, and they would be significantly more stretchable.”

Portela and his colleagues report their findings today in the journal Nature Materials. His MIT co-authors include first author James Utama Surjadi as well as Bastien Aymon and Molly Carton.

Inspired gel

Along with other research groups, Portela and his colleagues have typically designed metamaterials by printing or nanofabricating microscopic lattices using conventional polymers similar to plexiglass and ceramic. The specific pattern, or architecture, that they print can impart exceptional strength and impact resistance to the resulting metamaterial.

Several years ago, Portela was curious whether a metamaterial could be made from an inherently stiff material, but be patterned in a way that would turn it into a much softer, stretchier version.

“We realized that the field of metamaterials has not really tried to make an impact in the soft matter realm,” he says. “So far, we’ve all been looking for the stiffest and strongest materials possible.”

Instead, he looked for a way to synthesize softer, stretchier metamaterials. Rather than printing microscopic struts and trusses, similar to those of conventional lattice-based metamaterials, he and his team made an architecture of interwoven springs, or coils. They found that, while the material they used was itself stiff like plexiglass, the resulting woven metamaterial was soft and springy, like rubber.

“They were stretchy, but too soft and compliant,” Portela recalls.

In looking for ways to bulk up their softer metamaterial, the team found inspiration in an entirely different material: hydrogel. Hydrogels are soft, stretchy, Jell-O-like materials that are composed of mostly water and a bit of polymer structure. Researchers, including groups at MIT, have devised ways to make hydrogels that are both soft and stretchy, and also tough. They do so by combining polymer networks with very different properties, such as a network of molecules that is naturally stiff, which gets chemically cross-linked with another molecular network that is inherently soft. Portela and his colleagues wondered whether such a double-network design could be adapted to metamaterials.

“That was our ‘aha’ moment,” Portela says. “We thought: Can we get inspiration from these hydrogels to create a metamaterial with similar stiff and stretchy properties?”

Strut and weave

For their new study, the team fabricated a metamaterial by combining two microscopic architectures. The first is a rigid, grid-like scaffold of struts and trusses. The second is a pattern of coils that weave around each strut and truss. Both networks are made from the same acrylic plastic and are printed in one go, using a high-precision, laser-based printing technique called two-photon lithography.

The researchers printed samples of the new double-network-inspired metamaterial, each measuring in size from several square microns to several square millimeters. They put the material through a series of stress tests, in which they attached either end of the sample to a specialized nanomechanical press and measured the force it took to pull the material apart. They also recorded high-resolution videos to observe the locations and ways in which the material stretched and tore as it was pulled apart.

They found their new double-network design was able to stretch three times its own length, which also happened to be 10 times farther than a conventional lattice-patterned metamaterial printed with the same acrylic plastic. Portela says the new material’s combination of stretch and tear resistance comes from the interactions between the material’s rigid struts and the messier, coiled weave as the material is stressed and pulled.

“Think of this woven network as a mess of spaghetti tangled around a lattice. As we break the monolithic lattice network, those broken parts come along for the ride, and now all this spaghetti gets entangled with the lattice pieces,” Portela explains. “That promotes more entanglement between woven fibers, which means you have more friction and more energy dissipation.”

In other words, the softer structure wound throughout the material’s rigid lattice takes on more stress thanks to multiple knots or entanglements promoted by the cracked struts. As this stress spreads unevenly through the material, an initial crack is unlikely to go straight through and quickly tear the material. What’s more, the team found that if they introduced strategic holes, or “defects,” in the metamaterial, they could further dissipate any stress that the material undergoes, making it even stretchier and more resistant to tearing apart.

“You might think this makes the material worse,” says study co-author Surjadi. “But we saw once we started adding defects, we doubled the amount of stretch we were able to do, and tripled the amount of energy that we dissipated. That gives us a material that’s both stiff and tough, which is usually a contradiction.”

The team has developed a computational framework that can help engineers estimate how a metamaterial will perform given the pattern of its stiff and stretchy networks. They envision such a blueprint will be useful in designing tear-proof textiles and fabrics.

“We also want to try this approach on more brittle materials, to give them multifunctionality,” Portela says. “So far we’ve talked of mechanical properties, but what if we could also make them conductive, or responsive to temperature? For that, the two networks could be made from different polymers, that respond to temperature in different ways, so that a fabric can open its pores or become more compliant when it’s warm and can be more rigid when it’s cold. That’s something we can explore now.”

This research was supported, in part, by the U.S. National Science Foundation and the MIT MechE MathWorks Seed Fund. This work was performed, in part, through the use of MIT.nano’s facilities.

MIT D-Lab spinout provides emergency transportation during childbirth

Wed, 04/23/2025 - 12:00am

Amama has lived in a rural region of northern Ghana all her life. In 2022, she went into labor with her first child. Women in the region traditionally give birth at home with the help of a local birthing attendant, but Amama experienced last-minute complications, and the decision was made to go to a hospital. Unfortunately, there were no ambulances in the community and the nearest hospital was 30 minutes away, so Amama was forced to take a motorcycle taxi, leaving her husband and caregiver behind.

Amama spent the next 30 minutes traveling over bumpy dirt roads to get to the hospital. She was in pain and afraid. When she arrived, she learned her child had not survived.

Unfortunately, Amama’s story is not unique. Around the world, more than 700 women die every day due to preventable pregnancy and childbirth complications. A lack of transportation to hospitals contributes to those deaths.

Moving Health was founded by MIT students to give people like Amama a safer way to get to the hospital. The company, which was started as part of a class at MIT D-Lab, works with local communities in rural Ghana to offer a network of motorized tricycle ambulances to communities that lack emergency transportation options.

The locally made ambulances are designed for the challenging terrain of rural Ghana, equipped with medical supplies, and have space for caregivers and family members.

“We’re providing the first rural-focused emergency transportation network,” says Moving Health CEO and co-founder Emily Young ’18. “We’re trying to provide emergency transportation coverage for less cost and with a vehicle tailored to local needs. When we first started, a report estimated there were 55 ambulances in the country of over 30 million people. Now, there is more coverage, but still the last mile areas of the country do not have access to reliable emergency transportation.”

Today, Moving Health’s ambulances and emergency transportation network cover more than 100,000 people in northern Ghana who previously lacked reliable medical transportation.

One of those people is Amama. During her most recent pregnancy, she was able to take a Moving Health ambulance to the hospital. This time, she traveled in a sanitary environment equipped with medical supplies and surrounded by loved ones. When she arrived, she gave birth to healthy twins.

From class project to company

Young and Sade Nabahe ’17, SM ’21 met while taking Course 2.722J (D-Lab: Design), which challenges students to think like engineering consultants on international projects. Their group worked on ways to transport pregnant women in remote areas of Tanzania to hospitals more safely and quickly. Young credits D-Lab instructor Matt McCambridge with helping students explore the project outside of class. Fellow Moving Health co-founder Eva Boal ’18 joined the effort the following year.

The early idea was to build a trailer that could attach to any motorcycle and be used to transport women. Following the early class projects, the students received funding from MIT’s PKG Center and the MIT Undergraduate Giving Campaign, which they used to travel to Tanzania in the following year’s Independent Activities Period (IAP). That’s when they built their first prototype in the field.

The founders realized they needed to better understand the problem from the perspective of locals and interviewed over 250 pregnant women, clinicians, motorcycle drivers, and birth attendants.

“We wanted to make sure the community was leading the charge to design what this solution should be. We had to learn more from the community about why emergency transportation doesn’t work in these areas,” Young says. “We ended up redesigning our vehicle completely.”

Following their graduation from MIT in 2018, the founders bought one-way tickets to Tanzania and deployed a new prototype. A big part of their plans was creating a product that could be manufactured by the community to support the local economy.

Nabahe and Boal left the company in 2020, but word spread of Moving Health’s mission, and Young received messages from organizations in about 15 different countries interested in expanding the company’s trials.

Young found the most alignment in Ghana, where she met two local engineers, Ambra Jiberu and Sufiyanu Imoro, who were building cars from scratch and inventing innovative agricultural technologies. With these two engineers joining the team, she was confident they had the team to build a solution in Ghana.

Taking what they’d learned in Tanzania, the new team set up hundreds of interviews and focus groups to understand the Ghanaian health system. The team redesigned their product to be a fully motorized tricycle based on the most common mode of transportation in northern Ghana. Today Moving Health focuses solely on Ghana, with local manufacturing and day-to-day operations led by Country Director and CTO Isaac Quansah.

Moving Health is focused on building a holistic emergency transportation network. To do this, Moving Health’s team sets up community-run dispatch systems, which involves organizing emergency phone numbers, training community health workers, dispatchers, and drivers, and integrating all of that within the existing health care system. The company also conducts educational campaigns in the communities it serves.

Moving Health officially launched its ambulances in 2023. The ambulance has an enclosed space for patients, family members, and medical providers and includes a removable stretcher along with supplies like first aid equipment, oxygen, IVs, and more. It costs about one-tenth the price of a traditional ambulance.

“We’ve built a really cool, small-volume manufacturing facility, led by our local engineering team, that has incredible quality,” Young says. “We also have an apprenticeship program that our two lead engineers run that allows young people to learn more hard skills. We want to make sure we’re providing economic opportunities in these communities. It’s very much a Ghanaian-made solution.”

Unlike the national ambulances, Moving Health’s ambulances are stationed in rural communities, at community health centers, to enable faster response times.

“When the ambulances are stationed in these people’s communities, at their local health centers, it makes all the difference,” Young says. “We’re trying to create an emergency transportation solution that is not only geared toward rural areas, but also focused on pregnancy and prioritizing women’s voices about what actually works in these areas.”

A lifeline for mothers

When Young first got to Ghana, she met Sahada, a local woman who shared the story of her first birth at the age of 18. Sahada had intended to give birth in her community with the help of a local birthing attendant, but she began experiencing so much pain during labor the attendant advised her to go to the nearest hospital. With no ambulances or vehicles in town, Sahada’s husband called a motorcycle driver, who took her alone on the three-hour drive to the nearest hospital.

“It was rainy, extremely muddy, and she was in a lot of pain,” Young recounts. “She was already really worried for her baby, and then the bike slips and they crash. They get back on, covered in mud, she has no idea if the baby survived, and finally gets to the maternity ward.”

Sahada was able to give birth to a healthy baby boy, but her story stuck with Young.

“The experience was extremely traumatic, and what’s really crazy is that counts as a successful birth statistic,” Young says. “We hear that kind of story a lot.”

This year, Moving Health plans to expand into a new region of northern Ghana. The team is also exploring other ways its network can provide health care to rural regions. But no matter how the company evolves, the team remains grateful to have seen its D-Lab project turn into such an impactful solution.

“Our long-term vision is to prove that this can work on a national level and supplement the existing health system,” Young says. “Then we’re excited to explore mobile health care outreach and other transportation solutions. We’ve always been focused on maternal health, but we’re staying cognizant of other community ideas that might be able to help improve health care more broadly.”

“Periodic table of machine learning” could fuel AI discovery

Wed, 04/23/2025 - 12:00am

MIT researchers have created a periodic table that shows how more than 20 classical machine-learning algorithms are connected. The new framework sheds light on how scientists could fuse strategies from different methods to improve existing AI models or come up with new ones.

For instance, the researchers used their framework to combine elements of two different algorithms to create a new image-classification algorithm that performed 8 percent better than current state-of-the-art approaches.

The periodic table stems from one key idea: All these algorithms learn a specific kind of relationship between data points. While each algorithm may accomplish that in a slightly different way, the core mathematics behind each approach is the same.

Building on these insights, the researchers identified a unifying equation that underlies many classical AI algorithms. They used that equation to reframe popular methods and arrange them into a table, categorizing each based on the approximate relationships it learns.

Just like the periodic table of chemical elements, which initially contained blank squares that were later filled in by scientists, the periodic table of machine learning also has empty spaces. These spaces predict where algorithms should exist that have yet to be discovered.

The table gives researchers a toolkit to design new algorithms without the need to rediscover ideas from prior approaches, says Shaden Alshammari, an MIT graduate student and lead author of a paper on this new framework.

“It’s not just a metaphor,” adds Alshammari. “We’re starting to see machine learning as a system with structure that is a space we can explore rather than just guess our way through.”

She is joined on the paper by John Hershey, a researcher at Google AI Perception; Axel Feldmann, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Mark Hamilton, an MIT graduate student and senior engineering manager at Microsoft. The research will be presented at the International Conference on Learning Representations.

An accidental equation

The researchers didn’t set out to create a periodic table of machine learning.

After joining the Freeman Lab, Alshammari began studying clustering, a machine-learning technique that classifies images by learning to organize similar images into nearby clusters.

She realized the clustering algorithm she was studying was similar to another classical machine-learning algorithm, called contrastive learning, and began digging deeper into the mathematics. Alshammari found that these two disparate algorithms could be reframed using the same underlying equation.

“We almost got to this unifying equation by accident. Once Shaden discovered that it connects two methods, we just started dreaming up new methods to bring into this framework. Almost every single one we tried could be added in,” Hamilton says.

The framework they created, information contrastive learning (I-Con), shows how a variety of algorithms can be viewed through the lens of this unifying equation. It includes everything from classification algorithms that can detect spam to the deep learning algorithms that power large language models (LLMs).

The equation describes how such algorithms find connections between real data points and then approximate those connections internally.

Each algorithm aims to minimize the amount of deviation between the connections it learns to approximate and the real connections in its training data.
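Schematically, and in notation of our own rather than the paper’s, that deviation can be written as an expected divergence between the “true” connection distribution defined by the data and the one the algorithm learns:

```latex
\mathcal{L} \;=\; \mathbb{E}_{i}\left[\, D_{\mathrm{KL}}\!\big(\, p(\cdot \mid i) \,\big\|\, q_{\theta}(\cdot \mid i) \,\big) \right]
```

Here \(p(\cdot \mid i)\) describes which data points are actually connected to point \(i\) in the training data, and \(q_{\theta}(\cdot \mid i)\) is the algorithm’s learned approximation of those connections; on this reading, different choices of the two distributions recover different classical methods.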

They decided to organize I-Con into a periodic table to categorize algorithms based on how points are connected in real datasets and the primary ways algorithms can approximate those connections.

“The work went gradually, but once we had identified the general structure of this equation, it was easier to add more methods to our framework,” Alshammari says.

A tool for discovery

As they arranged the table, the researchers began to see gaps where algorithms could exist but had not yet been invented.

The researchers filled in one gap by borrowing ideas from a machine-learning technique called contrastive learning and applying them to image clustering. This resulted in a new algorithm that could classify unlabeled images 8 percent better than another state-of-the-art approach.

They also used I-Con to show how a data debiasing technique developed for contrastive learning could be used to boost the accuracy of clustering algorithms.

In addition, the flexible periodic table allows researchers to add new rows and columns to represent additional types of datapoint connections.

Ultimately, having I-Con as a guide could help machine learning scientists think outside the box, encouraging them to combine ideas in ways they wouldn’t necessarily have thought of otherwise, says Hamilton.

“We’ve shown that just one very elegant equation, rooted in the science of information, gives you rich algorithms spanning 100 years of research in machine learning. This opens up many new avenues for discovery,” he adds.

“Perhaps the most challenging aspect of being a machine-learning researcher these days is the seemingly unlimited number of papers that appear each year. In this context, papers that unify and connect existing algorithms are of great importance, yet they are extremely rare. I-Con provides an excellent example of such a unifying approach and will hopefully inspire others to apply a similar approach to other domains of machine learning,” says Yair Weiss, a professor in the School of Computer Science and Engineering at the Hebrew University of Jerusalem, who was not involved in this research.

This research was funded, in part, by the Air Force Artificial Intelligence Accelerator, the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions, and Quanta Computer.

Kripa Varanasi named faculty director of the Deshpande Center for Technological Innovation

Tue, 04/22/2025 - 3:00pm

Kripa Varanasi, professor of mechanical engineering, was named faculty director of the MIT Deshpande Center for Technological Innovation, effective March 1.

“Kripa is widely recognized for his significant contributions in the field of interfacial science, thermal fluids, electrochemical systems, and advanced materials. It’s remarkable to see the tangible impact Kripa’s ventures have made across such a wide range of fields,” says Anantha P. Chandrakasan, dean of the School of Engineering, chief innovation and strategy officer, and Vannevar Bush Professor of Electrical Engineering and Computer Science. “From energy and water conservation to consumer products and agriculture, his solutions are making a real difference. The Deshpande Center will benefit greatly from both his entrepreneurial expertise and deep technical insight.”

The MIT Deshpande Center for Technological Innovation is an interdepartmental center that empowers MIT students and faculty to make a difference in the world by helping them bring their innovative technologies from the lab to the marketplace in the form of breakthrough products and new companies. The center was established through a gift from philanthropist Gururaj “Desh” Deshpande and his wife, Jaishree.

“Kripa brings an entrepreneurial spirit, innovative thinking, and commitment to mentorship that has always been central to the Deshpande Center’s mission,” says Deshpande. “He is exceptionally well-positioned to help the next generation of MIT innovators turn bold ideas into real-world solutions that make a difference.”

Varanasi has seen the Deshpande Center’s influence on the MIT community since its founding in 2002, when he was a graduate student.

“The Deshpande Center was founded when I was a graduate student, and it truly inspired many of us to think about entrepreneurship and commercialization — with Desh himself being an incredible role model,” says Varanasi. “Over the years, the center has built a storied legacy as a one-of-a-kind institution for propelling university-invented technologies to commercialization. Many amazing companies have come out of this program, shaping industries and making a real impact.”

A member of the MIT faculty since 2009, Varanasi leads the interdisciplinary Varanasi Research Group, which focuses on understanding physico-chemical and biological phenomena at the interfaces of matter. His group develops novel surfaces, materials, and technologies that improve efficiency and performance across industries, including energy, decarbonization, life sciences, water, agriculture, transportation, and consumer products.

In addition to his academic work, Varanasi is a prolific entrepreneur who has co-founded six companies, including AgZen, Alsym Energy, CoFlo Medical, Dropwise, Infinite Cooling, and LiquiGlide, which was a Deshpande Center grantee in 2009. These ventures aim to translate research breakthroughs into products with global reach.

His companies have been widely recognized for driving innovation across a range of industries. LiquiGlide, which produces frictionless liquid coatings, was named one of Time and Forbes’ “Best Inventions of the Year” in 2012. Infinite Cooling, which offers a technology to capture and recycle power plant water vapor, has won the U.S. Department of Energy’s National Cleantech University Prize and top prizes at MassChallenge and the MIT $100K competition. It is also a participating company at this year’s IdeaStream: Next Gen event, hosted by the Deshpande Center.

Another company that Varanasi co-founded, AgZen, is pioneering feedback optimization for agrochemical application, allowing farmers to use 30 to 90 percent less pesticide and fertilizer while achieving 1 to 10 percent more yield. Meanwhile, Alsym Energy is advancing nonflammable, high-performance batteries for energy storage solutions that are lithium-free and capable of a wide range of storage durations.

Throughout his career, Varanasi has been recognized for both research excellence and mentorship. His honors include the National Science Foundation CAREER Award, DARPA Young Faculty Award, SME Outstanding Young Manufacturing Engineer Award, ASME’s Bergles-Rohsenow Heat Transfer Award and Gustus L. Larson Memorial Award, Boston Business Journal’s 40 Under 40, and MIT’s Frank E. Perkins Award for Excellence in Graduate Advising​.

Varanasi earned his undergraduate degree in mechanical engineering from the Indian Institute of Technology Madras, and his master’s degree and PhD from MIT. Prior to joining the Institute’s faculty, he served as lead researcher and project leader at the GE Global Research Center, where he received multiple internal awards for innovation and technical excellence​.

“It’s an honor to lead the Deshpande Center, and in collaboration with the MIT community, I look forward to building on its incredible foundation — fostering bold ideas, driving real-world impact from cutting-edge innovations, and making it a powerhouse for commercialization,” adds Varanasi.

As faculty director, Varanasi will work closely with Deshpande Center executive director Rana Gupta to guide the center’s support of MIT faculty and students developing technology-based ventures.

“With Kripa’s depth and background, we will capitalize on the initiatives started with Angela Koehler. Kripa shares our vision to grow and expand the center’s capabilities to serve more of MIT,” adds Gupta.

Varanasi succeeds Angela Koehler, associate professor of biological engineering, who served as faculty director from July 2023 through March 2025.

“Angela brought fresh vision and energy to the center,” he says. “She expanded its reach, introduced new funding priorities in climate and life sciences, and re-imagined the annual IdeaStream event as a more robust launchpad for innovation. We’re deeply grateful for her leadership.”

Koehler, who was recently appointed faculty lead of the MIT Health and Life Sciences Collaborative, will continue to play a key role in the Institute’s innovation and entrepreneurship ecosystem​.

3D modeling you can feel

Tue, 04/22/2025 - 3:00pm

Essential for many industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate different aspects of visual appearance, like color and form. While this makes sense as a first point of contact, these systems are still limited in their realism because they neglect something central to the human experience: touch.

Fundamental to the uniqueness of physical objects are their tactile properties, such as roughness, bumpiness, or the feel of materials like wood or stone. Existing modeling methods often require advanced computer-aided design expertise and rarely support tactile feedback that can be crucial for how we perceive and interact with the physical world.

With that in mind, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new system for stylizing 3D models using image prompts, effectively replicating both visual appearance and tactile properties.

The CSAIL team’s “TactStyle” tool allows creators to stylize 3D models based on images while also incorporating the expected tactile properties of the textures. TactStyle separates visual and geometric stylization, enabling the replication of both visual and tactile properties from a single image input.

PhD student Faraz Faruqi, lead author of a new paper on the project, says that TactStyle could have far-reaching applications, extending from home decor and personal accessories to tactile learning tools. TactStyle enables users to download a base design — such as a headphone stand from Thingiverse — and customize it with the styles and textures they desire. In education, learners can explore diverse textures from around the world without leaving the classroom, while in product design, rapid prototyping becomes easier as designers quickly print multiple iterations to refine tactile qualities.

“You could imagine using this sort of system for common objects, such as phone stands and earbud cases, to enable more complex textures and enhance tactile feedback in a variety of ways,” says Faruqi, who co-wrote the paper alongside MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. “You can create tactile educational tools to demonstrate a range of different concepts in fields such as biology, geometry, and topography.”

Traditional methods for replicating textures involve using specialized tactile sensors — such as GelSight, developed at MIT — that physically touch an object to capture its surface microgeometry as a “heightfield.” But this requires having a physical object or its recorded surface for replication. TactStyle allows users to replicate the surface microgeometry by leveraging generative AI to generate a heightfield directly from an image of the texture.

On top of that, for platforms like the 3D printing repository Thingiverse, it’s difficult to take individual designs and customize them. Indeed, if a user lacks sufficient technical background, changing a design manually runs the risk of actually “breaking” it so that it can’t be printed anymore. All of these factors spurred Faruqi to wonder about building a tool that enables customization of downloadable models on a high level, but that also preserves functionality.

In experiments, TactStyle showed significant improvements over traditional stylization methods by generating accurate correlations between a texture’s visual image and its heightfield. This enables the replication of tactile properties directly from an image. One psychophysical experiment showed that users perceive TactStyle’s generated textures as similar to both the expected tactile properties from visual input and the tactile features of the original texture, leading to a unified tactile and visual experience.

TactStyle leverages a preexisting method, called “Style2Fab,” to modify the model’s color channels to match the input image’s visual style. Users first provide an image of the desired texture, and then a fine-tuned variational autoencoder is used to translate the input image into a corresponding heightfield. This heightfield is then applied to modify the model’s geometry to create the tactile properties.

The color and geometry stylization modules work in tandem, stylizing both the visual and tactile properties of the 3D model from a single image input. Faruqi says that the core innovation lies in the geometry stylization module, which uses a fine-tuned diffusion model to generate heightfields from texture images — something previous stylization frameworks do not accurately replicate.
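The geometry-stylization step can be pictured as displacing each mesh vertex along its surface normal by an amount sampled from the generated heightfield. The sketch below is a minimal illustration of that displacement idea only — the function name, UV mapping, and toy data are our own assumptions, not TactStyle’s actual implementation:

```python
import numpy as np

def apply_heightfield(vertices, normals, heightfield, uv, scale=1.0):
    """Displace mesh vertices along their normals by values sampled
    from a 2D heightfield, a simplified stand-in for the geometry
    stylization step described above."""
    h, w = heightfield.shape
    # Map each vertex's UV coordinate (in [0, 1]^2) to a heightfield pixel.
    px = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    offsets = heightfield[py, px] * scale
    # Offset each vertex along its normal by the sampled height.
    return vertices + normals * offsets[:, None]

# Toy example: a flat 2x2 patch of vertices, all normals pointing in +z,
# displaced by a constant heightfield of 0.5 at scale 0.1.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
norms = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
uvs = verts[:, :2]
field = np.full((8, 8), 0.5)
styled = apply_heightfield(verts, norms, field, uvs, scale=0.1)
```

In the real system the heightfield is not constant but is produced by the fine-tuned generative model from the input texture image; the displacement principle, however, is the same.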

Looking ahead, Faruqi says the team aims to extend TactStyle to generate novel 3D models using generative AI with embedded textures. This requires exploring exactly the sort of pipeline needed to replicate both the form and function of the 3D models being fabricated. They also plan to investigate “visuo-haptic mismatches” to create novel experiences with materials that defy conventional expectations, like something that appears to be made of marble but feels like it’s made of wood.

Faruqi and Mueller co-authored the new paper alongside PhD students Maxine Perroni-Scharf and Yunyi Zhu, visiting undergraduate student Jaskaran Singh Walia, visiting master’s student Shuyue Feng, and assistant professor Donald Degraen of the Human Interface Technology (HIT) Lab NZ in New Zealand.

Norma Kamali is transforming the future of fashion with AI

Tue, 04/22/2025 - 2:00pm

What happens when a fashion legend taps into the transformative power of artificial intelligence? For more than five decades, fashion designer and entrepreneur Norma Kamali has pioneered bold industry shifts, creating iconic silhouettes worn by celebrities including Whitney Houston and Jessica Biel. Now, she is embracing a new frontier — one that merges creativity with algorithms and AI to redefine the future of her industry.

Through MIT Professional Education’s online “Applied Generative AI for Digital Transformation” course, which she completed in 2023, Kamali explored AI’s potential to serve as creative partner and ensure the longevity and evolution of her brand.

Kamali’s introduction to AI began with a meeting in Abu Dhabi, where industry experts, inspired by her Walmart collection, suggested developing an AI-driven fashion platform. Intrigued by the idea, but wary of the concept of “downloading her brain,” Kamali instead envisioned a system that could expand upon her 57-year archive — a closed-loop AI tool trained solely on her work. “I thought, AI could be my Karl Lagerfeld,” she says, referencing the designer’s reverence for archival inspiration.

To bring this vision to life, Kamali sought a deeper understanding of generative AI — so she headed to MIT Professional Education, an arm of MIT that has taught and inspired global professionals for more than 75 years. “I wasn’t sure how much I could actually do,” she recalls. “I had all these preconceived notions, but the more I learned, the more ideas I had.” Initially intimidated by the technical aspects of AI, she persevered, diving into prompts and training data, and exploring its creative potential. “I was determined,” she says. “And then suddenly, I was playing.”

Experimenting with her proprietary AI model, created by Maison Meta, Kamali used AI to reinterpret one of her signature styles — black garments adorned with silver studs. By prompting AI with iterations of her existing silhouettes, she witnessed unexpected and thrilling results. “It was magic,” she says. “Art, technology, and fashion colliding in ways I never imagined.” Even AI’s so-called “hallucinations” — distortions often seen as errors — became a source of inspiration. “Some of the best editorial fashion is absurd,” she notes. “AI-generated anomalies created entirely new forms of art.”

Kamali’s approach to AI reflects a broader shift across industries, where technology is not just a tool but a catalyst for reinvention. Bhaskar Pant, executive director of MIT Professional Education, underscores this transformation. “While everyone is speculating about the impact of AI, we are committed to advancing AI’s role in helping industries and leaders achieve breakthroughs, higher levels of productivity, and, as in this case, unleash creativity. Professionals must be empowered to harness AI’s potential in ways that not only enhance their work, but redefine what’s possible. Norma’s journey is a testament to the power of lifelong learning — demonstrating that innovation is ageless, fueled by curiosity and ambition.”

The experience also deepened Kamali’s perspective on AI’s role in the creative process. “AI doesn’t have a heartbeat,” she asserts. “It can’t replace human passion. But it can enhance creativity in ways we’re only beginning to understand.” Kamali also addressed industry fears about job displacement, arguing that the technology is already reshaping fashion’s labor landscape. “Sewing talent is harder to find. Designers need new tools to adapt.”

Beyond its creative applications, Kamali sees AI as a vehicle for sustainability. A longtime advocate for reducing dry cleaning — a practice linked to chemical exposure — she envisions AI streamlining fabric selection, minimizing waste, and enabling on-demand production. “Imagine a system where you design your wedding dress online, and a robot constructs it, one garment at a time,” she says. “The possibilities are endless.”

Abel Sanchez, MIT research scientist and lead instructor for MIT Professional Education’s Applied Generative AI for Digital Transformation course, emphasizes the transformative potential of AI across industries. “AI is a force reshaping the foundations of every sector, including fashion. Generative AI is unlocking unprecedented digital transformation opportunities, enabling organizations to rethink processes, design, and customer engagement. Norma is at the forefront of this shift, exploring how AI can propel the fashion industry forward, spark new creative frontiers, and redefine how designers interact with technology.”

Kamali’s experience in the course sparked an ongoing exchange of ideas with Sanchez, further fueling her curiosity. “AI is evolving so fast, I know I’ll need to go back,” she says. “MIT gave me the foundation, but this is just the beginning.” For those hesitant to embrace AI, she offers a striking analogy: “Imagine landing in a small town, in a foreign country, where you don’t speak the language, don’t recognize the food, and feel completely lost. That’s what it will be like if you don’t learn AI. The train has left the station — it’s time to get on board.”

With her AI-generated designs now featured on her website alongside her traditional collections, Kamali is proving that technology and creativity aren’t at odds — they’re collaborators. And as she continues to push the boundaries of both, she remains steadfast in her belief: “Learning is the adventure of life. Why stop now?”

“Biomedical Lab in a Box” empowers engineers in low- and middle-income countries

Tue, 04/22/2025 - 2:00pm

Globally, and especially in low- and middle-income countries (LMICs), a significant portion of the population lacks access to essential health-care services. Although there are many contributing factors that create barriers to access, in many LMICs failing or obsolete equipment plays a significant role.

“Those of us who have investigated health-care systems in LMICs are familiar with so-called ‘equipment graveyards,’” says Nevan Hanumara SM ’06, PhD ’12, a research scientist in MIT’s Department of Mechanical Engineering, describing piles of broken, imported equipment, often bearing stickers indicating their origins from donor organizations.

“Looking at the root causes of medical equipment failing and falling out of service in LMICs, we find that the local biomedical engineers truly can’t do the maintenance, due to a cascade of challenges,” he says.

Among these challenges are design weaknesses (systems designed for temperate, air-conditioned hospitals and stabilized power don't fare well in areas with inconsistent power supply, dust, high heat and humidity, and continuous use); a lack of supply chain (parts ordered in the U.S. can arrive in days, whereas parts ordered to East Africa may take months); and limited access to knowledgeable professionals (outside of major metropolitan areas, biomedical engineers are scarce).

Hanumara, Leroy Sibanda SM ’24, a recent graduate with a dual degree in management and electrical engineering and computer science (EECS), and Anthony Pennes ’16, a technical instructor in EECS, began to ponder what could be changed if local biomedical engineers were actually involved with the design of the equipment that they’re charged with maintaining.

Pennes, who staffs class 2.75/6.4861 (Medical Device Design), among other courses, developed hands-on biosensing and mechatronics exercises as class activities several years ago. Hanumara became interested in expanding that curriculum to produce something that could have a larger impact.

Working as a team, and with support from MIT International Science and Technology Initiatives (MISTI), the MIT Jameel World Education Lab, and the Priscilla King Gray Public Service Center, the trio created a hands-on course, exercises, and curriculum, supported by what they’ve now dubbed a “Biomed Lab in a Box” kit.

Sibanda, who hails from Bulawayo, Zimbabwe, brings additional lived experience to the project. He says friends up and down the continent speak of strong, practical primary and secondary education, followed by a tertiary education that places a heavy emphasis on theory. The consequence, he says, is a wealth of graduates who are brilliant at the theory but less experienced with advanced practical work.

“Anyone who has ever had to build systems that need to stand up to real-world conditions understands the chasm between knowing how to calculate the theoretically perfect ‘x’ and being capable of implementing a real-world solution with the materials available,” says Sibanda.

Hanumara and Sibanda traveled to Nairobi, Kenya, and Mbarara, Uganda, in late 2024 to test their kit and their theory, teaching three-day long biomedical innovation mini-courses at both Kenyatta University and Mbarara University of Science and Technology (MUST), with Pennes providing remote support from MIT’s campus.

With a curriculum based on 2.75, the labs were designed to connect the theoretical to the physical, increasing in complexity and confronting students with the real challenges of biomedical hardware and sensing, such as weak signals, ambient noise, motion artifacts, debugging, and precision assembly.

Pennes says the goal for the mini-courses was to shape the project around the real-world experiences of the region’s biomedical engineering students. “One of the problems that they experience in this region is not simply a lack of equipment, but the lack of ability to maintain it,” he says. “Some organization will come in and donate thousands of dollars of surgical lighting; then a power supply will burn out, and the organization will never come back to fix it.”

But that’s just the beginning of the problem, he adds. Engineers often find that the design isn’t open, and there’s no manual, making it impossible to find a circuit design for what’s inside the donated, proprietary system. “You have to poke and prod around the disassembled gear to see if you can discern the makers’ original goals in wiring it, and figure out a fix,” says Pennes.

In one example, he recalls seeing a donated screen for viewing X-rays — the lightbox kind, used to backlight film so that technicians can read the image — with a burned-out bulb. “The screen is lit by a proprietary bulb, so when it burned out, they could not replace it,” he recounts.

Local biomedical engineers ultimately realized that they could take a number of off-the-shelf fluorescent bulbs and angle them to fit inside the box. “Then they sort of MacGyver’d the wiring to make them all work. You get the medical technology to work however you can.”

It’s this hands-on, imaginative approach to problem-solving that the team hopes to promote — and it’s one that’s very familiar at MIT. “We’re not just ideas people, where we write a paper and we’re done with it — we want to see it applied,” says Hanumara. “It’s why so many startups come out of MIT.”

Course modules presented at Kenyatta and MUST included “Breadboarding an optical LED – photodetector pulse detector,” “Soldering a PCB and testing a 3-lead EKG,” and “Assembling and programming a syringe pump.” Each module is designed to be a self-contained learning experience, and the kit is accompanied by a USB flash drive with a 96-page lab manual written by Sibanda, and all the needed software, which is important to have when internet access is unreliable. The third exercise, relating to the syringe pump, is available via open access from the journal Biomedical Engineering Education.
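The syringe pump module turns motor motion into fluid delivery, and the core calculation is simple geometry. The sketch below converts a stepper motor’s step rate into a flow rate; the motor resolution, lead-screw pitch, and syringe bore are hypothetical illustrative values, not the kit’s actual specifications.

```python
import math

def flow_rate_ml_per_min(steps_per_sec, steps_per_rev,
                         screw_pitch_mm, syringe_diameter_mm):
    """Convert stepper speed to volumetric flow for a lead-screw syringe pump.

    Assumed geometry (illustrative, not the kit's actual parts):
    one full motor revolution advances the plunger by one screw pitch.
    """
    revs_per_sec = steps_per_sec / steps_per_rev
    plunger_mm_per_sec = revs_per_sec * screw_pitch_mm
    area_mm2 = math.pi * (syringe_diameter_mm / 2) ** 2
    flow_mm3_per_sec = plunger_mm_per_sec * area_mm2  # 1 mm^3 = 1 microliter
    return flow_mm3_per_sec * 60 / 1000              # convert to mL/min

# Example: 200-step/rev motor driven at 100 steps/s,
# 2 mm lead-screw pitch, 20 mm syringe bore
rate = flow_rate_ml_per_min(100, 200, 2.0, 20.0)
```

Working through the same arithmetic by hand, and comparing it against the pump’s measured output, is exactly the kind of theory-to-hardware exercise the labs aim for.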

“Our mission was to expose eager, young biomedical engineers to the hands-on, ‘mens-et-manus’ (‘mind-and-hand’) culture which is the cornerstone of MIT, and encourage them to develop their talents and aspirations as engineers and innovators,” says Hanumara. “We wanted to help empower them to participate in developing high-quality, contextually appropriate, technologies that improve health-care delivery in their own region.”

A LinkedIn post written by Hanumara shared reflections from students on their experiences with the material. “Every lab — from pulse oximetry and EKGs to syringe pump prototyping — brought classroom concepts to life, showing me the real-world applications of what we study,” wrote Muthoni Muriithi, a student at Kenyatta University. “Using breadboards, coding microcontrollers, soldering components, and analyzing biological data in real time helped me grasp how much careful design and precision go into creating reliable health-care tools.”

Feedback provided by students at both institutions is already helping to inform updates to the materials and future pilot programs.

Sibanda says another key thing the team is tracking is what happens beyond the sessions, after the instructors leave. “It’s not just about offering the resource,” he says. “It’s important to understand what students find to be the most valuable, especially on their own.”

Hanumara concurs. “[Pennes] designed the core board that we’re using to be multifunctional. We didn’t touch any of the functions he built in — we want to see what the students will do with them. We also want to see what they can do with the mental framework,” he says, adding that this approach is important to empower students to explore, invent, and eventually scale up their own ideas.

Further, the project addresses another challenge the team identified early on: supply chain issues. In keeping with the mission of local capacity building, the entire kit was assembled in Nairobi by Gearbox Europlacer, which operates the only automated circuit board line in East Africa and is licensed to produce Raspberry Pi microcontrollers. “We did not tell the students anything,” says Hanumara, “but left it to them to notice that their circuit boards and microcontrollers said ‘Made in Kenya.’”

“The insistence on local manufacturing keeps us from falling into the trap that so much equipment donated into East Africa creates — you have one of these items, and if some part of it breaks you can never replace it,” says Pennes. “Having locally sourced items instead means that if you need another component, or devise an interesting side project, you have a shopping list and you can go get whatever you need.”

“Building off our ‘Biomed Lab in a Box’ experiment,” says Hanumara, “we aim to work with our colleagues in East Africa to further explore what can be designed and built with the eager, young talent and capabilities in the region.”

Hanumara’s LinkedIn post also thanked collaborating professors June Madete and Dean Johnes Obungoloch, from Kenyatta and MUST, respectively, and Latiff Cherono, managing director of Gearbox. The team hopes to eventually release the whole course in open-source format. 

Julie Lucas to step down as MIT’s vice president for resource development

Tue, 04/22/2025 - 10:45am

Julie A. Lucas has decided to step down as MIT’s vice president for resource development, President Sally Kornbluth announced today. Lucas has set her last day as June 30, which coincides with the close of the Institute’s fiscal year, to ensure a smooth transition for staff and donors. 

Lucas has led fundraising at the Institute since 2014. During that time, MIT’s average annual fundraising has increased 96 percent to $611 million, up from $313 million in the decade before her arrival. MIT’s annual fundraising totals have exceeded the Institute’s annual $500 million fundraising target for nine straight fiscal years, including a few banner fiscal years with results of $700 million to $900 million.

“Before I arrived at MIT, Julie built a fundraising operation worthy of the Institute’s world-class stature,” Kornbluth says. “I have seen firsthand how Julie’s expertise, collegial spirit, and commitment to our mission resonate with alumni and friends, motivating them to support the Institute.”

Lucas spearheaded the MIT Campaign for a Better World, which concluded in 2021 and raised $6.2 billion, setting a record as the Institute’s largest fundraising initiative. Emphasizing the Institute’s hands-on approach to solving the world’s toughest challenges — and centered on its strengths in education, research, and innovation — the campaign attracted participation from more than 112,000 alumni and friends around the globe, including nearly 56,000 new donors.  

“From the moment I met Julie Lucas, I knew she was the right person to serve as MIT’s chief philanthropic leader of our capital campaign,” says MIT President Emeritus L. Rafael Reif. “Julie is both a ‘maker’ and a ‘doer,’ well attuned to our ‘mens et manus’ motto. The Institute has benefited immensely from her impressive set of skills and ability to convey a coherent message that has inspired and motivated alumni and friends, foundations and corporations, to support MIT.” 

Under Lucas, MIT’s Office of Resource Development (RD) created new fundraising programs and processes, and introduced expanded ways of giving. For example, RD established the Institute’s planned giving program, which supports donors who want to make a lasting impact at MIT through philanthropic vehicles such as bequests, retirement plan distributions, life-income gifts, and gifts of complex assets. She also played a lead role in creating a donor-advised fund at MIT that, since its inception in 2017, has seen almost $120 million in contributions.  

“Julie is a remarkable fundraiser and leader — and when it comes to Julie’s leadership of Resource Development, the results speak for themselves,” says Mark Gorenberg ’76, chair of the MIT Corporation, who has participated in multiple MIT committees and campaigns over the last two decades. “These tangible fundraising outcomes have helped to facilitate innovations and discoveries, expand educational programs and facilities, support faculty and researchers, and ensure that an MIT education is affordable and accessible to the brightest minds from around the world.”

Prior to joining MIT, Lucas served in senior fundraising roles at the University of Southern California and Fordham Law School, as well as New York University and its business and law schools. 

While Lucas readies herself for the next phase in her career, she remains grateful for her time at the Institute. 

“Philanthropy is a powerful fuel for good in our world,” Lucas says. “My decision to step down was difficult. I feel honored and thankful that my work — and the work of the team of professionals I lead in Resource Development — has helped continue the amazing trajectory of MIT research and innovation that benefits all of us by solving humanity’s greatest challenges, both now and in the future.”

Lucas currently serves on the steering committee and is the immediate past chair of CASE 50, the Council for Advancement and Support of Education group that includes the top 50 fundraising institutions in the world. In addition, she is chair of the 2025 CASE Summit for Leaders in Advancement and a founding member of Aspen Leadership Group’s Chief Development Officer Network.

Astronomers discover a planet that’s rapidly disintegrating, producing a comet-like tail

Tue, 04/22/2025 - 10:30am

MIT astronomers have discovered a planet some 140 light-years from Earth that is rapidly crumbling to pieces.

The disintegrating world is about the mass of Mercury, although it circles about 20 times closer to its star than Mercury does to the sun, completing an orbit every 30.5 hours. At such close proximity to its star, the planet is likely covered in magma that is boiling off into space. As the roasting planet whizzes around its star, it is shedding an enormous amount of surface minerals and effectively evaporating away.

The astronomers spotted the planet using NASA’s Transiting Exoplanet Survey Satellite (TESS), an MIT-led mission that monitors the nearest stars for transits, or periodic dips in starlight that could be signs of orbiting exoplanets. The signal that tipped the astronomers off was a peculiar transit, with a dip that fluctuated in depth every orbit.

The scientists confirmed that the signal is of a tightly orbiting rocky planet that is trailing a long, comet-like tail of debris.

“The extent of the tail is gargantuan, stretching up to 9 million kilometers long, or roughly half of the planet’s entire orbit,” says Marc Hon, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research.

It appears that the planet is disintegrating at a dramatic rate, shedding an amount of material equivalent to one Mount Everest each time it orbits its star. At this pace, given its small mass, the researchers predict that the planet may completely disintegrate in about 1 million to 2 million years.
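A rough back-of-envelope check shows how the reported figures hang together. The numbers below are assumed round values (Everest’s mass in particular is only an order-of-magnitude estimate), not the team’s actual model inputs.

```python
# Back-of-envelope disintegration timescale: a planet near Mercury's mass
# shedding roughly one Everest-mass of rock every 30.5-hour orbit.
# All inputs are rough assumed values, not the paper's model.

PLANET_MASS_KG = 3.3e23    # Mercury's mass, the upper end of the range
EVEREST_MASS_KG = 1.6e15   # common order-of-magnitude estimate
ORBIT_HOURS = 30.5
HOURS_PER_YEAR = 8766

orbits_to_destruction = PLANET_MASS_KG / EVEREST_MASS_KG
lifetime_years = orbits_to_destruction * ORBIT_HOURS / HOURS_PER_YEAR
# With these inputs the lifetime lands in the hundreds of thousands of
# years, the same order as the reported 1-to-2-million-year estimate
# given the large uncertainty in the assumed masses.
```

The point of the sketch is only that a Mercury-scale mass divided by an Everest-scale loss per orbit yields a lifetime that is astronomically brief.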

“We got lucky with catching it exactly when it’s really going away,” says Avi Shporer, a collaborator on the discovery who is also at the TESS Science Office. “It’s like on its last breath.”

Hon and Shporer, along with their colleagues, have published their results today in the Astrophysical Journal Letters. Their MIT co-authors include Saul Rappaport, Andrew Vanderburg, Jeroen Audenaert, William Fong, Jack Haviland, Katharine Hesse, Daniel Muthukrishna, Glen Petitpas, Ellie Schmelzer, Sara Seager, and George Ricker, along with collaborators from multiple other institutions.

Roasting away

The new planet, which scientists have tagged as BD+05 4868 Ab, was detected almost by happenstance.

“We weren’t looking for this kind of planet,” Hon says. “We were doing the typical planet vetting, and I happened to spot this signal that appeared very unusual.”

The typical signal of an orbiting exoplanet looks like a brief dip in a light curve, which repeats regularly, indicating that a compact body such as a planet is briefly passing in front of, and temporarily blocking, the light from its host star.

What Hon and his colleagues detected from the host star BD+05 4868 A, located in the constellation Pegasus, broke from this typical pattern. Though a transit appeared every 30.5 hours, the brightness took much longer to return to normal, suggesting a long trailing structure still blocking starlight. Even more intriguing, the depth of the dip changed with each orbit, suggesting that whatever was passing in front of the star wasn’t always the same shape or blocking the same amount of light.

“The shape of the transit is typical of a comet with a long tail,” Hon explains. “Except that it’s unlikely that this tail contains volatile gases and ice as expected from a real comet — these would not survive long at such close proximity to the host star. Mineral grains evaporated from the planetary surface, however, can linger long enough to present such a distinctive tail.”

Given its proximity to its star, the team estimates that the planet is roasting at around 1,600 degrees Celsius, or close to 3,000 degrees Fahrenheit. As the star roasts the planet, any minerals on its surface are likely boiling away and escaping into space, where they cool into a long and dusty tail.

The dramatic demise of this planet is a consequence of its low mass, which is between that of Mercury and the moon. More massive terrestrial planets like the Earth have a stronger gravitational pull and therefore can hold onto their atmospheres. For BD+05 4868 Ab, the researchers suspect there is very little gravity to hold the planet together.

“This is a very tiny object, with very weak gravity, so it easily loses a lot of mass, which then further weakens its gravity, so it loses even more mass,” Shporer explains. “It’s a runaway process, and it’s only getting worse and worse for the planet.”

Mineral trail

Of the nearly 6,000 planets that astronomers have discovered to date, scientists know of only three other disintegrating planets beyond our solar system. Each of those crumbling worlds was spotted more than 10 years ago in data from NASA’s Kepler Space Telescope, trailing a similar comet-like tail. BD+05 4868 Ab has the longest tail and the deepest transits of the four known disintegrating planets.

“That implies that its evaporation is the most catastrophic, and it will disappear much faster than the other planets,” Hon explains.

The planet’s host star is relatively close, and thus brighter than the stars hosting the other three disintegrating planets, making this system ideal for further observations using NASA’s James Webb Space Telescope (JWST), which can help determine the mineral makeup of the dust tail by identifying which colors of infrared light it absorbs.

This summer, Hon and graduate student Nicholas Tusay from Penn State University will lead observations of BD+05 4868 Ab using JWST. “This will be a unique opportunity to directly measure the interior composition of a rocky planet, which may tell us a lot about the diversity and potential habitability of terrestrial planets outside our solar system,” Hon says.

The researchers also will look through TESS data for signs of other disintegrating worlds.

“Sometimes with the food comes the appetite, and we are now trying to initiate the search for exactly these kinds of objects,” Shporer says. “These are weird objects, and the shape of the signal changes over time, which is something that’s difficult for us to find. But it’s something we’re actively working on.”

This work was supported, in part, by NASA.

MIT’s McGovern Institute is shaping brain science and improving human lives on a global scale

Fri, 04/18/2025 - 10:40am

In 2000, Patrick J. McGovern ’59 and Lore Harp McGovern made an extraordinary gift to establish the McGovern Institute for Brain Research at MIT, driven by their deep curiosity about the human mind and their belief in the power of science to change lives. Their $350 million pledge began with a simple yet audacious vision: to understand the human brain in all its complexity, and to leverage that understanding for the betterment of humanity.
 
Twenty-five years later, the McGovern Institute stands as a testament to the power of interdisciplinary collaboration, continuing to shape our understanding of the brain and improve the quality of life for people worldwide.

In the beginning

“This is, by any measure, a truly historic moment for MIT,” said MIT’s 15th president, Charles M. Vest, during his opening remarks at an event in 2000 to celebrate the McGovern gift agreement. “The creation of the McGovern Institute will launch one of the most profound and important scientific ventures of this century in what surely will be a cornerstone of MIT scientific contributions from the decades ahead.”
 
Vest tapped Phillip A. Sharp, MIT Institute professor emeritus of biology and Nobel laureate, to lead the institute, and appointed six MIT professors — Emilio Bizzi, Martha Constantine-Paton, Ann Graybiel PhD ’71, H. Robert Horvitz ’68, Nancy Kanwisher ’80, PhD ’86, and Tomaso Poggio — to represent its founding faculty. Construction began in 2003 on Building 46, a 376,000-square-foot research complex at the northeastern edge of campus. MIT’s new “gateway from the north” would eventually house the McGovern Institute, the Picower Institute for Learning and Memory, and MIT’s Department of Brain and Cognitive Sciences.

Robert Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT, succeeded Sharp as director of the McGovern Institute in 2005, and assembled a distinguished roster of 22 faculty members, including a Nobel laureate, a Breakthrough Prize winner, two National Medal of Science/Technology awardees, and 15 members of the American Academy of Arts and Sciences.
 
A quarter century of innovation

On April 11, 2025, the McGovern Institute celebrated its 25th anniversary with a half-day symposium featuring presentations by MIT Institute Professor Robert Langer, alumni speakers from various McGovern labs, and Desimone, who is in his 20th year as director of the institute.

Desimone highlighted the institute’s recent discoveries, including the development of the CRISPR genome-editing system, which has culminated in the world’s first CRISPR gene therapy approved for humans — a remarkable achievement that is ushering in a new era of transformative medicine. In other milestones, McGovern researchers developed the first prosthetic limb fully controlled by the body’s nervous system; a flexible probe that taps into gut-brain communication; an expansion microscopy technique that paves the way for biology labs around the world to perform nanoscale imaging; and advanced computational models that demonstrate how we see, hear, use language, and even think about what others are thinking. Equally transformative has been the McGovern Institute’s work in neuroimaging, uncovering the architecture of human thought and establishing markers that signal the early emergence of mental illness, before symptoms even appear.

Synergy and open science
 
“I am often asked what makes us different from other neuroscience institutes and programs around the world,” says Desimone. “My answer is simple. At the McGovern Institute, the whole is greater than the sum of its parts.”
 
Many discoveries at the McGovern Institute have depended on collaborations across multiple labs, ranging from biological engineering to human brain imaging and artificial intelligence. In modern brain research, significant advances often require the joint expertise of people working in neurophysiology, behavior, computational analysis, neuroanatomy, and molecular biology. More than a dozen different MIT departments are represented by McGovern faculty and graduate students, and this synergy has led to insights and innovations that are far greater than what any single discipline could achieve alone.
 
Also baked into the McGovern ethos is a spirit of open science, where newly developed technologies are shared with colleagues around the world. Through hospital partnerships, for example, McGovern researchers are testing their tools and therapeutic interventions in clinical settings, accelerating their discoveries into real-world solutions.

The McGovern legacy  

Hundreds of scientific papers have emerged from McGovern labs over the past 25 years, but most faculty would argue that it’s the people — the young researchers — that truly define the McGovern Institute. Award-winning faculty often attract the brightest young minds, but many McGovern faculty also serve as mentors, creating a diverse and vibrant scientific community that is setting the global standard for brain research and its applications. Kanwisher, for example, has guided more than 70 doctoral students and postdocs who have gone on to become leading scientists around the world. Three of her former students, Evelina Fedorenko PhD ’07, Josh McDermott PhD ’06, and Rebecca Saxe PhD ’03, the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, are now her colleagues at the McGovern Institute. Other McGovern alumni shared stories of mentorship, science, and real-world impact at the 25th anniversary symposium.

Looking to the future, the McGovern community is more committed than ever to unraveling the mysteries of the brain and making a meaningful difference in the lives of individuals on a global scale.
 
“By promoting team science, open communication, and cross-discipline partnerships,” says institute co-founder Lore Harp McGovern, “our culture demonstrates how individual expertise can be amplified through collective effort. I am honored to be the co-founder of this incredible institution — onward to the next 25 years!”

Equipping living cells with logic gates to fight cancer

Fri, 04/18/2025 - 12:00am

One of the most exciting developments in cancer treatment is a wave of new cell therapies that train a patient’s immune system to attack cancer cells. Such therapies have saved the lives of patients with certain aggressive cancers and few other options. Most of these therapies work by teaching immune cells to recognize and attack specific proteins on the surface of cancer cells.

Unfortunately, most proteins found on cancer cells aren’t unique to tumors. They’re also often present on healthy cells, making it difficult to target cancer aggressively without triggering dangerous attacks on other tissue. The problem has limited the application of cell therapies to a small subset of cancers.

Now Senti Bio is working to create smarter cell therapies using synthetic biology. The company, which was founded by former MIT faculty member and current MIT Research Associate Tim Lu ’03, MEng ’03, PhD ’08 and Professor James Collins, is equipping cells with gene circuits that allow the cells to sense and respond to their environments.

Lu, who studied computer science as an undergraduate at MIT, describes Senti’s approach as programming living cells to behave more like computers — responding to specific biological cues with “if/then” logic, just like computer code.

“We have innovated a cell therapy that says, ‘Kill anything displaying the cancer target, but spare anything that has this healthy target,’” Lu explains. “Despite the promise of certain cancer targets, problems can arise when they are expressed on healthy cells that we want to protect. Our logic gating technology was designed to recognize and avoid killing those healthy cells, which introduces a whole spectrum of additional cancers that don’t have a single clean target that we can now potentially address. That’s the power of embedding these cells with logic.”
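In Boolean terms, the rule Lu describes — kill anything displaying the cancer target, spare anything with the healthy target — is an A AND NOT B gate. A toy sketch of that decision logic, with invented marker names purely for illustration:

```python
def kill_decision(has_cancer_target: bool, has_healthy_marker: bool) -> bool:
    """A AND NOT B: engage only cells that display the cancer target
    and lack the protective healthy-cell marker."""
    return has_cancer_target and not has_healthy_marker

# Truth table over three hypothetical cell types
cells = [
    {"name": "tumor cell",   "cancer": True,  "healthy": False},
    {"name": "healthy cell", "cancer": True,  "healthy": True},   # shared target, but spared
    {"name": "bystander",    "cancer": False, "healthy": False},
]
decisions = {c["name"]: kill_decision(c["cancer"], c["healthy"]) for c in cells}
```

The middle row captures the clinical problem the gate solves: a healthy cell can display the same cancer-associated target, and the NOT input is what spares it.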

The company’s lead drug candidate aims to help patients with acute myeloid leukemia (AML) who have experienced a relapse or are unresponsive to other therapies. The prognosis for such patients is poor, but early data from the company’s first clinical trial showed that two of the first three patients Senti treated experienced complete remission, where subsequent bone marrow tests couldn’t detect a single cancer cell.

“It’s essentially one of the best responses you can get in this disease, so we were really excited to see that,” says Lu, who served on MIT’s faculty until leaving to lead Senti in 2022.

Senti is expecting to release more patient data at the upcoming American Association for Cancer Research (AACR) meeting at the end of April.

“Our groundbreaking work at Senti is showing that one can harness synthetic biology technologies to create programmable, smart medicines for treating patients with cancer,” says Collins, who is currently MIT’s Termeer Professor of Medical Engineering and Science. “This is tremendously exciting and demonstrates how one can utilize synthetic biological circuits, in this case logic gates, to design highly effective, next-generation living therapeutics.”

From computer science to cancer care

Lu was inspired as an undergraduate studying electrical engineering and computer science by the Human Genome Project, an international race to sequence the human genome. Later, he entered the Harvard-MIT Health Sciences and Technology (HST) program, through which he earned a PhD from MIT in electrical and biomedical imaging and an MD from Harvard. During that time, he worked in the lab of his eventual Senti co-founder James Collins, a synthetic biology pioneer.

In 2010, Lu joined MIT as an assistant professor with a joint appointment in the departments of Biological Engineering and of Electrical Engineering and Computer Science. Over the course of the next 14 years, Lu led the Synthetic Biology Group at MIT and started several biotech companies, including Engine Biosciences and Tango Therapeutics, which are also developing precision cancer treatments.

In 2015, a group of researchers including Lu and MIT Institute Professor Phillip Sharp published research showing they could use gene circuits to get immune cells to selectively respond to tumor cells in their environment.

“One of the first things we published focused on the idea of logic gates in living cells,” Lu says. “A computer has ‘and’ gates, ‘or’ gates, and ‘not’ gates that allow it to perform computations, and we started publishing gene circuits that implement logic into living cells. These allow cells to detect signals and then make logical decisions like, ‘Should we switch on or off?’”

Around that time, the first cell therapies and cancer immunotherapies began to be approved by the Food and Drug Administration, and the founders saw their technology as a way to take those approaches to the next level. They officially founded Senti Bio in 2016, with Lu taking a sabbatical from MIT to serve as CEO.

The company licensed technology from MIT and subsequently advanced the cellular logic gates so they could work with multiple types of engineered immune cells, including T cells and “natural killer” cells. Senti’s cells can respond to specific proteins that exist on the surface of both cancer and healthy cells to increase selectivity.

“We can now create a cell therapy where the cell makes a decision as to whether to kill a cancer cell or spare a healthy cell even when those cells are right next to each other,” Lu says. “If you can’t distinguish between cancerous and healthy cells, you get unwanted side effects, or you may not be able to hit the cancer as hard as you’d like. But once you can do that, there’s a lot of ways to maximize your firepower against the cancer cells.”

Hope for patients

Senti’s lead clinical trial is focusing on patients with relapsed or refractory blood cancers, including AML.

“Obviously the most important thing is getting a good response for patients,” Lu says. “But we’re also doing additional scientific work to confirm that the logic gates are working the way we expect them to in humans. Based on that information, we can then deploy logic gates into additional therapeutic indications such as solid tumors, where you have a lot of the same problems with finding a target.”

Another company, which has partnered with Senti to use some of its technology, also has an early clinical trial underway in liver cancer. Senti is also partnering with other companies to apply its gene circuit technology in areas like regenerative medicine and neuroscience.

“I think this is broader than just cell therapies,” Lu says. “We believe if we can prove this out in AML, it will lead to a fundamentally new way of diagnosing and treating cancer, where we’re able to definitively identify and target cancer cells and spare healthy cells. We hope it will become a whole new class of medicines moving forward.”

Making AI-generated code more accurate in any language

Fri, 04/18/2025 - 12:00am

Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.

Some methods exist for ensuring LLMs conform to the rules of whatever language they are generating text in, but many of these methods either distort the model’s intended meaning or are too time-consuming to be feasible for complex tasks.

A new approach developed by researchers at MIT and elsewhere automatically guides an LLM to generate text that adheres to the rules of the relevant language, such as a particular programming language, and is also error-free. Their method allows an LLM to allocate effort toward outputs that are most likely to be valid and accurate, while discarding unpromising outputs early in the process. This probabilistic approach boosts computational efficiency.

Due to these efficiency gains, the researchers’ architecture enabled small LLMs to outperform much larger models in generating accurate, properly structured outputs for several real-world use cases, including molecular biology and robotics.

In the long run, this new architecture could help nonexperts control AI-generated content. For instance, it could allow businesspeople to write complex queries in SQL, a language for database manipulation, using only natural language prompts.

“This work has implications beyond research. It could improve programming assistants, AI-powered data analysis, and scientific discovery tools by ensuring that AI-generated outputs remain both useful and correct,” says João Loula, an MIT graduate student and co-lead author of a paper on this framework.

Loula is joined on the paper by co-lead authors Benjamin LeBrun, a research assistant at the Mila-Quebec Artificial Intelligence Institute, and Li Du, a graduate student at Johns Hopkins University; co-senior authors Vikash Mansinghka ’05, MEng ’09, PhD ’09, a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences; Alexander K. Lew SM ’20, an assistant professor at Yale University; Tim Vieira, a postdoc at ETH Zurich; and Timothy J. O’Donnell, an associate professor at McGill University and a Canada CIFAR AI Chair at Mila, who led the international team; as well as several others. The research will be presented at the International Conference on Learning Representations.

Enforcing structure and meaning

One common approach for controlling the structured text generated by LLMs involves checking an entire output, like a block of computer code, to make sure it is valid and will run error-free. If not, the user must start over from scratch, consuming computational resources with each failed attempt.
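This check-at-the-end baseline can be sketched in a few lines. Here a canned `generate` function is a hypothetical stand-in for an LLM call, and Python's own parser serves as the structural validity check:

```python
import ast

def generate(attempt: int) -> str:
    """Stand-in for an LLM sampling a candidate Python snippet.
    In practice this would be a model call; here it alternates
    between an invalid and a valid canned output."""
    candidates = [
        "def f(x) return x + 1",   # missing colon -> syntax error
        "def f(x): return x + 1",  # valid
    ]
    return candidates[attempt % len(candidates)]

def generate_valid_python(max_tries: int = 10):
    """Naive check-at-the-end loop: sample a whole program, then
    validate it by parsing; on failure, discard it and retry.
    Every failed attempt wastes an entire generation."""
    for attempt in range(max_tries):
        candidate = generate(attempt)
        try:
            ast.parse(candidate)  # structural validity check
            return candidate
        except SyntaxError:
            continue  # throw the whole output away and start over
    return None
```

The wasted work grows with every rejected sample, which is exactly the inefficiency the researchers' approach is designed to avoid.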

On the other hand, a programmer could stop to check the output along the way. While this can ensure the code adheres to the programming language and is structurally valid, incrementally correcting the code may cause it to drift from the meaning the user intended, hurting its accuracy in the long run.

“It is much easier to enforce structure than meaning. We can quickly check whether something is in the right programming language, but to check its meaning you have to execute the code. Our work is also about dealing with these different types of information,” Loula says.

The researchers’ approach involves engineering knowledge into the LLM to steer it toward the most promising outputs. These outputs are more likely to follow the structural constraints defined by a user, and to have the meaning the user intends.

“We are not trying to train an LLM to do this. Instead, we are engineering some knowledge that an expert would have and combining it with the LLM’s knowledge, which offers a very different approach to scaling than you see in deep learning,” Mansinghka adds.

They accomplish this using a technique called sequential Monte Carlo, in which multiple generations sampled in parallel from an LLM compete with one another. The model dynamically allocates resources to different threads of parallel computation based on how promising their output appears.

Each output is given a weight that represents how likely it is to be structurally valid and semantically accurate. At each step in the computation, the model focuses on those with higher weights and throws out the rest.
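A toy version of this weight-and-resample loop can make the idea concrete. In the sketch below, a balanced-parentheses task stands in for real code generation; the `extend` and `weigh` functions are illustrative assumptions, not the paper's actual implementation:

```python
import random

def resample_by_weight(particles, weights, k, rng):
    """Focus computation on promising threads: draw k particles
    in proportion to their weights, discarding the rest."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(particles, weights=probs, k=k)

def smc_generate(extend, weigh, steps, num_particles=4, seed=0):
    """Toy sequential Monte Carlo loop over partial outputs.
    `extend` grows each partial output by one step (a stand-in for
    sampling a token from an LLM); `weigh` scores how likely a
    partial output is to end up valid."""
    rng = random.Random(seed)
    particles = ["" for _ in range(num_particles)]
    for _ in range(steps):
        particles = [extend(p, rng) for p in particles]  # propose
        weights = [weigh(p) for p in particles]          # score
        particles = resample_by_weight(particles, weights, num_particles, rng)
    return max(particles, key=weigh)

# Toy domain: build a string of parentheses; the weight function
# penalizes prefixes that close more brackets than they open.
def extend(p, rng):
    return p + rng.choice("()")

def weigh(p):
    depth = 0
    for ch in p:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return 1e-9  # structurally doomed prefix: tiny weight
    return 1.0

best = smc_generate(extend, weigh, steps=6)
```

The key property mirrored here is that doomed partial outputs receive near-zero weight and are pruned at each resampling step, so effort concentrates on candidates that can still succeed.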

In a sense, it is like the LLM has an expert looking over its shoulder to ensure it makes the right choices at each step, while keeping it focused on the overall goal. The user specifies their desired structure and meaning, as well as how to check the output, then the researchers’ architecture guides the LLM to do the rest.

“We’ve worked out the hard math so that, for any kinds of constraints you’d like to incorporate, you are going to get the proper weights. In the end, you get the right answer,” Loula says.

Boosting small models

To test their approach, they applied the framework to LLMs tasked with generating four types of outputs: Python code, SQL database queries, molecular structures, and plans for a robot to follow.

When compared to existing approaches, the researchers’ method performed more accurately while requiring less computation.

In Python code generation, for instance, the researchers’ architecture enabled a small, open-source model to outperform a specialized, commercial closed-source model that is more than double its size.

“We are very excited that we can allow these small models to punch way above their weight,” Loula says.

Moving forward, the researchers want to use their technique to control larger chunks of generated text, rather than working one small piece at a time. They also want to combine their method with learning, so that as they control the outputs a model generates, it learns to be more accurate.

In the long run, this project could have broader applications for non-technical users. For instance, it could be combined with systems for automated data modeling and for querying generative models of databases.

The approach could also enable machine-assisted data analysis systems, where the user can converse with software that accurately models the meaning of the data and the questions asked by the user, adds Mansinghka.

“One of the fundamental questions of linguistics is how the meaning of words, phrases, and sentences can be grounded in models of the world, accounting for uncertainty and vagueness in meaning and reference. LLMs, predicting likely token sequences, don’t address this problem. Our paper shows that, in narrow symbolic domains, it is technically possible to map from words to distributions on grounded meanings. It’s a small step towards deeper questions in cognitive science, linguistics, and artificial intelligence needed to understand how machines can communicate about the world like we do,” says O’Donnell.

This research is funded, in part, by the Canada CIFAR AI Chairs Program, the MIT Quest for Intelligence, and Convergent Research. 
