Feed aggregator
Quest to retake $20B in climate money puts agencies at ‘significant’ risk, attorney warned
Connecticut considers borrowing money to cut electricity bills
California cap-and-trade negotiator sets broad scope for reauthorization
Meat giant accused of deforestation gets nod to join NY stock exchange
Fashion is the next frontier for clean tech as textile waste mounts
84% of world’s coral reefs hit by worst bleaching event on record
More Americans breathe unhealthy air due to wildfires, extreme heat
Six from MIT elected to American Academy of Arts and Sciences for 2025
Six MIT faculty members are among the nearly 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 23.
One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.
Those elected from MIT in 2025 are:
- Lotte Bailyn, T. Wilson Professor of Management Emerita;
- Gareth McKinley, School of Engineering Professor of Teaching Innovation;
- Nasser Rabbat, Aga Khan Professor;
- Susan Silbey, Leon and Anne Goldberg Professor of Humanities and professor of sociology and anthropology;
- Anne Whiston Spirn, Cecil and Ida Green Distinguished Professor of Landscape Architecture and Planning; and
- Catherine Wolfram, William Barton Rogers Professor in Energy and professor of applied economics.
“These new members’ accomplishments speak volumes about the human capacity for discovery, creativity, leadership, and persistence. They are a stellar testament to the power of knowledge to broaden our horizons and deepen our understanding,” says Academy President Laurie L. Patton. “We invite every new member to celebrate their achievement and join the Academy in our work to promote the common good.”
Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.
Robotic system zeroes in on objects most relevant for helping humans
For a robot, the real world is a lot to take in. Making sense of every data point in a scene can take a huge amount of computational effort and time. Using that information to then decide how to best help a human is an even thornier exercise.
Now, MIT roboticists have a way to cut through the data noise, to help robots focus on the features in a scene that are most relevant for assisting humans.
Their approach, which they aptly dub “Relevance,” enables a robot to use cues in a scene, such as audio and visual information, to determine a human’s objective and then quickly identify the objects that are most likely to be relevant in fulfilling that objective. The robot then carries out a set of maneuvers to safely offer the relevant objects or actions to the human.
The researchers demonstrated the approach with an experiment that simulated a conference breakfast buffet. They set up a table with various fruits, drinks, snacks, and tableware, along with a robotic arm outfitted with a microphone and camera. Applying the new Relevance approach, they showed that the robot was able to correctly identify a human’s objective and appropriately assist them in different scenarios.
In one case, the robot took in visual cues of a human reaching for a can of prepared coffee, and quickly handed the person milk and a stir stick. In another scenario, the robot picked up on a conversation between two people talking about coffee, and offered them a can of coffee and creamer.
Overall, the robot was able to predict a human’s objective with 90 percent accuracy and to identify relevant objects with 96 percent accuracy. The method also improved a robot’s safety, reducing the number of collisions by more than 60 percent, compared to carrying out the same tasks without applying the new method.
“This approach of enabling relevance could make it much easier for a robot to interact with humans,” says Kamal Youcef-Toumi, professor of mechanical engineering at MIT. “A robot wouldn’t have to ask a human so many questions about what they need. It would just actively take information from the scene to figure out how to help.”
Youcef-Toumi’s group is exploring how robots programmed with Relevance can help in smart manufacturing and warehouse settings, where they envision robots working alongside and intuitively assisting humans.
Youcef-Toumi, along with graduate students Xiaotong Zhang and Dingcheng Huang, will present their new method at the IEEE International Conference on Robotics and Automation (ICRA) in May. The work builds on another paper presented at ICRA the previous year.
Finding focus
The team’s approach is inspired by our own ability to gauge what’s relevant in daily life. Humans can filter out distractions and focus on what’s important, thanks to a region of the brain known as the Reticular Activating System (RAS). The RAS is a bundle of neurons in the brainstem that acts subconsciously to prune away unnecessary stimuli, so that a person can consciously perceive the relevant stimuli. The RAS helps to prevent sensory overload, keeping us, for example, from fixating on every single item on a kitchen counter, and instead helping us to focus on pouring a cup of coffee.
“The amazing thing is, these groups of neurons filter everything that is not important, and then it has the brain focus on what is relevant at the time,” Youcef-Toumi explains. “That’s basically what our proposition is.”
He and his team developed a robotic system that broadly mimics the RAS’s ability to selectively process and filter information. The approach consists of four main phases. The first is a watch-and-learn “perception” stage, during which a robot takes in audio and visual cues, for instance from a microphone and camera, that are continuously fed into an AI “toolkit.” This toolkit can include a large language model (LLM) that processes audio conversations to identify keywords and phrases, and various algorithms that detect and classify objects, humans, physical actions, and task objectives. The AI toolkit is designed to run continuously in the background, similarly to the subconscious filtering that the brain’s RAS performs.
The second stage is a “trigger check” phase, which is a periodic check that the system performs to assess if anything important is happening, such as whether a human is present or not. If a human has stepped into the environment, the system’s third phase will kick in. This phase is the heart of the team’s system, which acts to determine the features in the environment that are most likely relevant to assist the human.
To establish relevance, the researchers developed an algorithm that takes in real-time predictions made by the AI toolkit. For instance, the toolkit’s LLM may pick up the keyword “coffee,” and an action-classifying algorithm may label a person reaching for a cup as having the objective of “making coffee.” The team’s Relevance method would factor in this information to first determine the “class” of objects that have the highest probability of being relevant to the objective of “making coffee.” This might automatically filter out classes such as “fruits” and “snacks,” in favor of “cups” and “creamers.” The algorithm would then further filter within the relevant classes to determine the most relevant “elements.” For instance, based on visual cues of the environment, the system may label a cup closest to a person as more relevant — and helpful — than a cup that is farther away.
In the fourth and final phase, the robot would then take the identified relevant objects and plan a path to physically access and offer the objects to the human.
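To make the filtering step concrete, the sketch below implements the class-then-element relevance ranking described above in a minimal, runnable form. The categories, probabilities, and thresholds are invented for illustration; this is not the team's actual implementation.

```python
# Minimal, self-contained sketch of the relevance filtering described above:
# first keep object classes likely to matter for the inferred objective,
# then rank the surviving elements (here, by proximity to the person).
# All names, categories, probabilities, and thresholds are illustrative
# assumptions, not the authors' code.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str                 # e.g., "cup"
    category: str             # e.g., "cups"
    distance_to_human: float  # meters, from visual tracking

# Assumed output of the "AI toolkit": P(class is relevant | objective).
CLASS_PRIOR = {
    "making coffee": {"cups": 0.9, "creamers": 0.8, "fruits": 0.1, "snacks": 0.1},
}

def relevant_objects(objects, objective, top_k=2, threshold=0.5):
    """Class-level filter followed by element-level ranking."""
    prior = CLASS_PRIOR.get(objective, {})
    candidates = [o for o in objects if prior.get(o.category, 0.0) >= threshold]
    return sorted(candidates, key=lambda o: o.distance_to_human)[:top_k]

if __name__ == "__main__":
    scene = [
        SceneObject("near cup", "cups", 0.3),
        SceneObject("far cup", "cups", 1.2),
        SceneObject("creamer", "creamers", 0.6),
        SceneObject("apple", "fruits", 0.4),
    ]
    # A trigger check (phase two) would gate this call on a person being
    # detected; phase four would then plan a safe path to offer each object.
    for obj in relevant_objects(scene, "making coffee"):
        print("offer:", obj.name)
```

In this toy scene, the filter discards "fruits" and "snacks" for the objective of "making coffee," then ranks the remaining cups and creamer by how close they sit to the person, mirroring the two-stage check described above.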
Helper mode
The researchers tested the new system in experiments that simulate a conference breakfast buffet. They chose this scenario based on the publicly available Breakfast Actions Dataset, which comprises videos and images of typical activities that people perform during breakfast time, such as preparing coffee, cooking pancakes, making cereal, and frying eggs. Actions in each video and image are labeled, along with the overall objective (for example, frying eggs versus making coffee).
Using this dataset, the team tested various algorithms in their AI toolkit, such that, when receiving actions of a person in a new scene, the algorithms could accurately label and classify the human tasks and objectives, and the associated relevant objects.
In their experiments, they set up a robotic arm and gripper and instructed the system to assist humans as they approached a table filled with various drinks, snacks, and tableware. They found that when no humans were present, the robot’s AI toolkit operated continuously in the background, labeling and classifying objects on the table.
When, during a trigger check, the robot detected a human, it snapped to attention, turning on its Relevance phase and quickly identifying objects in the scene that were most likely to be relevant, based on the human’s objective, which was determined by the AI toolkit.
“Relevance can guide the robot to generate seamless, intelligent, safe, and efficient assistance in a highly dynamic environment,” says co-author Zhang.
Going forward, the team hopes to apply the system to scenarios that resemble workplace and warehouse environments, as well as to other tasks and objectives typically performed in household settings.
“I would want to test this system in my home to see, for instance, if I’m reading the paper, maybe it can bring me coffee. If I’m doing laundry, it can bring me a laundry pod. If I’m doing repair, it can bring me a screwdriver,” Zhang says. “Our vision is to enable human-robot interactions that can be much more natural and fluent.”
This research was made possible by the support and partnership of King Abdulaziz City for Science and Technology (KACST) through the Center for Complex Engineering Systems at MIT and KACST.
Wearable device tracks individual cells in the bloodstream in real time
Researchers at MIT have developed a noninvasive medical monitoring device powerful enough to detect single cells within blood vessels, yet small enough to wear like a wristwatch. Importantly, the wearable device can enable continuous monitoring of circulating cells in the human body.
The technology was described in a paper published online March 3 by the journal npj Biosensing and is forthcoming in the journal’s print version.
The device — named CircTrek — was developed by researchers in the Nano-Cybernetic Biotrek research group, led by Deblina Sarkar, assistant professor at MIT and AT&T Career Development Chair at the MIT Media Lab. This technology could greatly facilitate early diagnosis of disease, detection of disease relapse, assessment of infection risk, and determination of whether a disease treatment is working, among other medical processes.
Whereas traditional blood tests offer only a snapshot of a patient’s condition, CircTrek was designed to provide real-time assessment, described in the npj Biosensing paper as “an unmet goal to date.” A different technology that offers monitoring of cells in the bloodstream with some continuity, in vivo flow cytometry, “requires a room-sized microscope, and patients need to be there for a long time,” says Kyuho Jang, a PhD student in Sarkar’s lab.
CircTrek, on the other hand, which is equipped with an onboard Wi-Fi module, could even monitor a patient’s circulating cells at home and send that information to the patient’s doctor or care team.
“CircTrek offers a path to harnessing previously inaccessible information, enabling timely treatments, and supporting accurate clinical decisions with real-time data,” says Sarkar. “Existing technologies provide monitoring that is not continuous, which can lead to missing critical treatment windows. We overcome this challenge with CircTrek.”
The device works by directing a focused laser beam to stimulate cells beneath the skin that have been fluorescently labeled. Such labeling can be accomplished with a number of methods, including applying antibody-based fluorescent dyes to the cells of interest or genetically modifying such cells so that they express fluorescent proteins.
For example, a patient receiving CAR T cell therapy, in which immune cells are collected and modified in a lab to fight cancer (or, experimentally, to combat HIV or Covid-19), could have those cells labeled at the same time with fluorescent dyes or genetic modification so the cells express fluorescent proteins. Importantly, cells of interest can also be labeled with in vivo labeling methods approved in humans. Once the cells are labeled and circulating in the bloodstream, CircTrek is designed to apply laser pulses to enhance and detect the cells’ fluorescent signal while an arrangement of filters minimizes low-frequency noise such as heartbeats.
“We optimized the optomechanical parts to reduce noise significantly and only capture the signal from the fluorescent cells,” says Jang.
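The paper's exact filter design is not detailed here, but the principle of rejecting slow, periodic baseline drift (a resting heartbeat sits around 1–2 Hz) while preserving the brief spikes produced by individual labeled cells can be illustrated with a simple digital high-pass filter. Everything in the sketch below, from the sampling rate to the synthetic signal, is an assumption made for illustration, not CircTrek's actual design.

```python
# Illustrative only: suppress slow, heartbeat-like baseline drift while
# keeping brief spikes from single labeled cells passing the detector.
import numpy as np
from scipy.signal import butter, filtfilt

np.random.seed(0)               # reproducible synthetic example
fs = 2_000                      # Hz, assumed sampling rate
t = np.arange(0, 5, 1 / fs)     # 5 seconds of signal

heartbeat = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm baseline drift
spikes = np.zeros_like(t)
for t0 in (1.0, 2.7, 4.1):                      # three simulated cell transits
    spikes += np.exp(-((t - t0) ** 2) / (2 * 0.002 ** 2))  # ~ms-wide pulses
signal = heartbeat + spikes + 0.02 * np.random.randn(t.size)

# 4th-order Butterworth high-pass at 10 Hz removes the slow drift
b, a = butter(4, 10, btype="highpass", fs=fs)
filtered = filtfilt(b, a, signal)

# Simple threshold detection on the filtered trace
detections = t[filtered > 0.5]
print("candidate cell events near t =", np.unique(np.round(detections, 1)), "s")
```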
Detecting the labeled CAR T cells, CircTrek could assess whether the cell therapy treatment is working. As an example, persistence of the CAR T cells in the blood after treatment is associated with better outcomes in patients with B-cell lymphoma.
To keep CircTrek small and wearable, the researchers were able to miniaturize the components of the device, such as the circuit that drives the high-intensity laser source and keeps the power level of the laser stable to avoid false readings.
The sensor that detects the fluorescent signals of the labeled cells is also minute, and yet it is capable of detecting a quantity of light equivalent to a single photon, Jang says.
The device’s subcircuits, including the laser driver and the noise filters, were custom-designed to fit on a circuit board measuring just 42 mm by 35 mm, allowing CircTrek to be approximately the same size as a smartwatch.
CircTrek was tested on an in vitro configuration that simulated blood flow beneath human skin, and its single-cell detection capabilities were verified through manual counting with a high-resolution confocal microscope. For the in vitro testing, a fluorescent dye called Cyanine5.5 was employed. That particular dye was selected because it reaches peak activation at wavelengths within skin tissue’s optical window, or the range of wavelengths that can penetrate the skin with minimal scattering.
The safety of the device, particularly the temperature increase the laser causes in experimental skin tissue, was also investigated. The measured rise of 1.51 degrees Celsius at the skin surface is well below the level that would damage tissue, leaving enough margin that the device’s detection area and power could be safely increased to ensure that at least one blood vessel is observed.
While clinical translation of CircTrek will require further steps, Jang says its parameters can be modified to broaden its potential, so that doctors could be provided with critical information on nearly any patient.
A brief history of expansion microscopy
Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another, and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.
This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of MIT McGovern Institute for Brain Research investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.
“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute (HHMI) investigator, a professor of brain and cognitive sciences and biological engineering, and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.
Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.
“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”
Origins of ExM
To develop expansion microscopy, Boyden and his team turned to hydrogel, a material with remarkable water-absorbing properties that had already been put to practical use; it’s layered inside disposable diapers to keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure while absorbing hundreds of times their original weight in water, expanding the space between their chemical components as they swell.
After some experimentation, Boyden’s team settled on four key steps to enlarging tissue samples for better imaging. First, the tissue must be infused with a hydrogel. Components of the tissue, biomolecules, are anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Then the tissue is chemically softened and water is added. As the hydrogel absorbs the water, it swells and the tissue expands, growing evenly so the relative positions of its components are preserved.
Boyden and graduate students Fei Chen and Paul Tillberg’s first report on expansion microscopy was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects that are separated by about 300 nanometers — a limit imposed by the laws of physics. With expansion microscopy, Boyden’s group reported an effective resolution of about 70 nanometers, for a fourfold expansion.
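The arithmetic behind that figure is straightforward: in the idealized case, physical expansion divides the microscope's diffraction-limited resolution by the expansion factor (real gels introduce some distortion, so the reported value is approximate).

```latex
% Idealized effective resolution after physical expansion
% (ignores distortions introduced by the gel):
\[
d_{\text{eff}} \approx \frac{d_{\text{diffraction}}}{E}
\qquad\Rightarrow\qquad
\frac{300\ \text{nm}}{4} \approx 75\ \text{nm},
\]
% consistent with the roughly 70-nanometer effective resolution reported
% for a fourfold expansion.
```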
Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with their own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible for most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now, anybody can go look at the building blocks of life and how they relate to each other.”
Empowering scientists
Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers reporting on discoveries they have made using expansion microscopy. For neuroscientists, the technique has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.
It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small, most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology, linked to function, and all those things — which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite, she can watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.
Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.
Always improving
Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher-resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.
They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. In the latter regard, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less-costly diagnoses.
Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now re-stain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.
But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet — but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.
Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now, you can get images that look a lot like electron microscopy images, but on regular old light microscopes — the kind that everybody has access to,” Boyden says.
Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are used with an ultra-fast imaging method developed by Eric Betzig, an HHMI investigator at the University of California at Berkeley, called lattice light-sheet microscopy, the entire brain of a fruit fly can be imaged at high resolution in just a few days.
And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then-MIT professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”
Expanding possibilities
Ten years past the first demonstration of expansion microscopy’s power, Boyden and his team are committed to continuing to make expansion microscopy more powerful. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify — so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.
Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoc in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebra fish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections with a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology Austria that takes advantage of both the high resolution and molecular information that expansion microscopy can reveal.
“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.
His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network — how life really operates,” he says.
Regulating AI Behavior with a Hypervisor
Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.”
Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed. ...
New electronic “skin” could enable lightweight night-vision glasses
MIT engineers have developed a technique to grow and peel ultrathin “skins” of electronic material. The method could pave the way for new classes of electronic devices, such as ultrathin wearable sensors, flexible transistors and computing elements, and highly sensitive and compact imaging devices.
As a demonstration, the team fabricated a thin membrane of pyroelectric material — a class of heat-sensing material that produces an electric current in response to changes in temperature. The thinner the pyroelectric material, the better it is at sensing subtle thermal variations.
With their new method, the team fabricated the thinnest pyroelectric membrane yet, measuring 10 nanometers thick, and demonstrated that the film is highly sensitive to heat and radiation across the far-infrared spectrum.
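The intuition that "thinner is better" follows from the textbook pyroelectric-detector relations sketched below; these are idealized expressions, not figures from the paper.

```latex
% Textbook pyroelectric-detector relations (idealized; not values from the paper).
% The short-circuit current scales with the rate of temperature change of a film
% of area A and pyroelectric coefficient p:
\[
i_p = p \, A \, \frac{dT}{dt},
\]
% and for an absorbed heat flux \Phi per unit area, a film of thickness d,
% density \rho, and specific heat c heats up at a rate inversely proportional
% to its thickness, so a thinner film responds more strongly to the same input:
\[
\frac{dT}{dt} \approx \frac{\Phi}{\rho \, c \, d}
\quad\Rightarrow\quad
i_p \propto \frac{1}{d}.
\]
```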
The newly developed film could enable lighter, more portable, and highly accurate far-infrared (IR) sensing devices, with potential applications for night-vision eyewear and autonomous driving in foggy conditions. Current state-of-the-art far-IR sensors require bulky cooling elements. In contrast, the new pyroelectric thin film requires no cooling and is sensitive to much smaller changes in temperature. The researchers are exploring ways to incorporate the film into lighter, higher-precision night-vision glasses.
“This film considerably reduces weight and cost, making it lightweight, portable, and easier to integrate,” says Xinyuan Zhang, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE). “For example, it could be directly worn on glasses.”
The heat-sensing film could also have applications in environmental and biological sensing, as well as imaging of astrophysical phenomena that emit far-infrared radiation.
What’s more, the new lift-off technique is generalizable beyond pyroelectric materials. The researchers plan to apply the method to make other ultrathin, high-performance semiconducting films.
Their results are reported today in a paper appearing in the journal Nature. The study’s MIT co-authors are first author Xinyuan Zhang, Sangho Lee, Min-Kyu Song, Haihui Lan, Jun Min Suh, Jung-El Ryu, Yanjie Shao, Xudong Zheng, Ne Myo Han, and Jeehwan Kim, associate professor of mechanical engineering and of materials science and engineering, along with researchers at the University of Wisconsin-Madison led by Professor Chang-Beom Eom and authors from multiple other institutions.
Chemical peel
Kim’s group at MIT is finding new ways to make smaller, thinner, and more flexible electronics. They envision that such ultrathin computing “skins” can be incorporated into everything from smart contact lenses and wearable sensing fabrics to stretchy solar cells and bendable displays. To realize such devices, Kim and his colleagues have been experimenting with methods to grow, peel, and stack semiconducting elements, to fabricate ultrathin, multifunctional electronic thin-film membranes.
One method that Kim has pioneered is “remote epitaxy” — a technique where semiconducting materials are grown on a single-crystalline substrate, with an ultrathin layer of graphene in between. The substrate’s crystal structure serves as a scaffold along which the new material can grow. The graphene acts as a nonstick layer, similar to Teflon, making it easy for researchers to peel off the new film and transfer it onto flexible and stacked electronic devices. After peeling off the new film, the underlying substrate can be reused to make additional thin films.
Kim has applied remote epitaxy to fabricate thin films with various characteristics. In trying different combinations of semiconducting elements, the researchers happened to notice that a certain pyroelectric material, called PMN-PT, did not require an intermediate layer in order to separate from its substrate. Just by growing PMN-PT directly on a single-crystalline substrate, the researchers could then remove the grown film, with no rips or tears to its delicate lattice.
“It worked surprisingly well,” Zhang says. “We found the peeled film is atomically smooth.”
Lattice lift-off
In their new study, the MIT and UW-Madison researchers took a closer look at the process and discovered that the key to the material’s easy-peel property was lead. Working with colleagues at Rensselaer Polytechnic Institute, the team found that the pyroelectric film’s chemical structure contains an orderly arrangement of lead atoms with a large “electron affinity,” meaning that lead attracts electrons and prevents the charge carriers from traveling and bonding to another material, such as an underlying substrate. The lead atoms act as tiny nonstick units, allowing the material as a whole to peel away, perfectly intact.
The team ran with the realization and fabricated multiple ultrathin films of PMN-PT, each about 10 nanometers thin. They peeled off the pyroelectric films and transferred them onto a small chip to form an array of 100 ultrathin heat-sensing pixels, each about 60 square microns. They exposed the films to ever-slighter changes in temperature and found the pixels were highly sensitive to small changes across the far-infrared spectrum.
The sensitivity of the pyroelectric array is comparable to that of state-of-the-art night-vision devices. These devices are currently based on photodetector materials, in which a change in temperature induces the material’s electrons to jump in energy and briefly cross an energy “band gap,” before settling back into their ground state. This electron jump serves as an electrical signal of the temperature change. However, this signal can be affected by noise in the environment, and to prevent such effects, photodetectors have to also include cooling devices that bring the instruments down to liquid nitrogen temperatures.
Current night-vision goggles and scopes are heavy and bulky. With the group’s new pyroelectric-based approach, night-vision devices could achieve the same sensitivity without the weight of cooling hardware.
The researchers also found that the films were sensitive beyond the range of current night-vision devices and could respond to wavelengths across the entire infrared spectrum. This suggests that the films could be incorporated into small, lightweight, and portable devices for various applications that require different infrared regions. For instance, when integrated into autonomous vehicle platforms, the films could enable cars to “see” pedestrians and vehicles in complete darkness or in foggy and rainy conditions.
The film could also be used in gas sensors for real-time and on-site environmental monitoring, helping detect pollutants. In electronics, they could monitor heat changes in semiconductor chips to catch early signs of malfunctioning elements.
The team says the new lift-off method can be generalized to materials that may not themselves contain lead. In those cases, the researchers suspect that they can infuse Teflon-like lead atoms into the underlying substrate to induce a similar peel-off effect. For now, the team is actively working toward incorporating the pyroelectric films into a functional night-vision system.
“We envision that our ultrathin films could be made into high-performance night-vision goggles, considering its broad-spectrum infrared sensitivity at room-temperature, which allows for a lightweight design without a cooling system,” Zhang says. “To turn this into a night-vision system, a functional device array should be integrated with readout circuitry. Furthermore, testing in varied environmental conditions is essential for practical applications.”
This work was supported by the U.S. Air Force Office of Scientific Research.
New model predicts a chemical reaction’s point of no return
When chemists design new chemical reactions, one useful piece of information involves the reaction’s transition state — the point of no return from which a reaction must proceed.
This information helps chemists design the conditions that will allow the desired reaction to occur. However, current methods for predicting the transition state and the path that a chemical reaction will take are complicated and require a huge amount of computational power.
MIT researchers have now developed a machine-learning model that can make these predictions in less than a second, with high accuracy. Their model could make it easier for chemists to design chemical reactions that could generate a variety of useful compounds, such as pharmaceuticals or fuels.
“We’d like to be able to ultimately design processes to take abundant natural resources and turn them into molecules that we need, such as materials and therapeutic drugs. Computational chemistry is really important for figuring out how to design more sustainable processes to get us from reactants to products,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering, a professor of chemistry, and the senior author of the new study.
Former MIT graduate student Chenru Duan PhD ’22, who is now at Deep Principle; former Georgia Tech graduate student Guan-Horng Liu, who is now at Meta; and Cornell University graduate student Yuanqi Du are the lead authors of the paper, which appears today in Nature Machine Intelligence.
Better estimates
For any given chemical reaction to occur, it must go through a transition state, which takes place when it reaches the energy threshold needed for the reaction to proceed. These transition states are so fleeting that they’re nearly impossible to observe experimentally.
As an alternative, researchers can calculate the structures of transition states using techniques based on quantum chemistry. However, that process requires a great deal of computing power and can take hours or days to calculate a single transition state.
“Ideally, we’d like to be able to use computational chemistry to design more sustainable processes, but this computation in itself is a huge use of energy and resources in finding these transition states,” Kulik says.
In 2023, Kulik, Duan, and others reported on a machine-learning strategy that they developed to predict the transition states of reactions. This strategy is faster than using quantum chemistry techniques, but still slower than what would be ideal because it requires the model to generate about 40 structures, then run those predictions through a “confidence model” to predict which states were most likely to occur.
One reason why that model needs to be run so many times is that it uses randomly generated guesses for the starting point of the transition state structure, then performs dozens of calculations until it reaches its final, best guess. These randomly generated starting points may be very far from the actual transition state, which is why so many steps are needed.
The researchers’ new model, React-OT, described in the Nature Machine Intelligence paper, uses a different strategy. In this work, the researchers trained their model to begin from an estimate of the transition state generated by linear interpolation — a technique that estimates each atom’s position by moving it halfway between its position in the reactants and in the products, in three-dimensional space.
“A linear guess is a good starting point for approximating where that transition state will end up,” Kulik says. “What the model’s doing is starting from a much better initial guess than just a completely random guess, as in the prior work.”
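As a concrete illustration of that initialization, the short numpy sketch below builds the halfway guess from matched reactant and product geometries. The coordinates are invented, and React-OT's actual preprocessing (atom mapping, structure alignment, and so on) is more involved than this.

```python
# Minimal sketch of the linear-interpolation initial guess described above:
# place each atom halfway between its reactant and product positions.
# Coordinates are invented for illustration.
import numpy as np

# (N_atoms, 3) Cartesian coordinates in angstroms, atoms in matching order
reactant = np.array([[0.00, 0.00, 0.00],
                     [1.50, 0.00, 0.00],
                     [2.90, 0.40, 0.00]])
product = np.array([[0.10, 0.20, 0.00],
                    [1.20, 0.80, 0.00],
                    [2.20, 1.60, 0.00]])

def linear_guess(r, p, alpha=0.5):
    """Interpolate each atom's position between reactant (alpha=0) and
    product (alpha=1); alpha=0.5 gives the halfway guess used as the
    starting point for the model's refinement steps."""
    return (1.0 - alpha) * r + alpha * p

ts_guess = linear_guess(reactant, product)
print(ts_guess)
```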
Because of this, it takes the model fewer steps and less time to generate a prediction. In the new study, the researchers showed that their model could make predictions with only about five steps, taking about 0.4 seconds. These predictions don’t need to be fed through a confidence model, and they are about 25 percent more accurate than the predictions generated by the previous model.
“That really makes React-OT a practical model that we can directly integrate to the existing computational workflow in high-throughput screening to generate optimal transition state structures,” Duan says.
“A wide array of chemistry”
To create React-OT, the researchers trained it on the same dataset that they used to train their older model. These data contain structures of reactants, products, and transition states, calculated using quantum chemistry methods, for 9,000 different chemical reactions, mostly involving small organic or inorganic molecules.
Once trained, the model performed well on other reactions from this set, which had been held out of the training data. It also performed well on other types of reactions that it hadn’t been trained on, and could make accurate predictions involving reactions with larger reactants, which often have side chains that aren’t directly involved in the reaction.
“This is important because there are a lot of polymerization reactions where you have a big macromolecule, but the reaction is occurring in just one part. Having a model that generalizes across different system sizes means that it can tackle a wide array of chemistry,” Kulik says.
The researchers are now working on training the model so that it can predict transition states for reactions between molecules that include additional elements, including sulfur, phosphorus, chlorine, silicon, and lithium.
“To quickly predict transition state structures is key to all chemical understanding,” says Markus Reiher, a professor of theoretical chemistry at ETH Zurich, who was not involved in the study. “The new approach presented in the paper could very much accelerate our search and optimization processes, bringing us faster to our final result. As a consequence, also less energy will be consumed in these high-performance computing campaigns. Any progress that accelerates this optimization benefits all sorts of computational chemical research.”
The MIT team hopes that other scientists will make use of their approach in designing their own reactions, and have created an app for that purpose.
“Whenever you have a reactant and product, you can put them into the model and it will generate the transition state, from which you can estimate the energy barrier of your intended reaction, and see how likely it is to occur,” Duan says.
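The barrier estimate Duan refers to follows from the predicted transition-state structure in the standard way (the usual definition, not a formula quoted from the paper):

```latex
% Standard definition of the reaction barrier from a predicted transition state:
\[
\Delta E^{\ddagger} = E(\mathrm{TS}) - E(\mathrm{reactants}),
\]
% which can then feed an Arrhenius- or Eyring-type estimate of how readily
% the reaction proceeds.
```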
The research was funded by the U.S. Army Research Office, the U.S. Department of Defense Basic Research Office, the U.S. Air Force Office of Scientific Research, the National Science Foundation, and the U.S. Office of Naval Research.