New prediction model could improve the reliability of fusion power plants
Tokamaks are machines that are meant to hold and harness the power of the sun. These fusion machines use powerful magnets to contain a plasma hotter than the sun’s core and push the plasma’s atoms to fuse and release energy. If tokamaks can operate safely and efficiently, the machines could one day provide clean and limitless fusion energy.
Today, there are a number of experimental tokamaks in operation around the world, with more underway. Most are small-scale research machines built to investigate how the devices can spin up plasma and harness its energy. One of the challenges that tokamaks face is how to safely and reliably turn off a plasma current that is circulating at speeds of up to 100 kilometers per second, at temperatures of over 100 million degrees Celsius.
Such “rampdowns” are necessary when a plasma becomes unstable. To prevent the plasma from further disrupting and potentially damaging the device’s interior, operators ramp down the plasma current. But occasionally the rampdown itself can destabilize the plasma. In some machines, rampdowns have caused scrapes and scarring to the tokamak’s interior — minor damage that still requires considerable time and resources to repair.
Now, scientists at MIT have developed a method to predict how plasma in a tokamak will behave during a rampdown. The team combined machine-learning tools with a physics-based model of plasma dynamics to simulate a plasma’s behavior and any instabilities that may arise as the plasma is ramped down and turned off. The researchers trained and tested the new model on plasma data from an experimental tokamak in Switzerland. They found the method quickly learned how plasma would evolve as it was tuned down in different ways. What’s more, the method achieved a high level of accuracy using a relatively small amount of data. This training efficiency is promising, given that each experimental run of a tokamak is expensive and quality data is limited as a result.
The new model, which the team highlights this week in an open-access Nature Communications paper, could improve the safety and reliability of future fusion power plants.
“For fusion to be a useful energy source it’s going to have to be reliable,” says lead author Allen Wang, a graduate student in aeronautics and astronautics and a member of the Disruption Group at MIT’s Plasma Science and Fusion Center (PSFC). “To be reliable, we need to get good at managing our plasmas.”
The study’s MIT co-authors include PSFC Principal Research Scientist and Disruptions Group leader Cristina Rea, and members of the Laboratory for Information and Decision Systems (LIDS) Oswin So, Charles Dawson, and Professor Chuchu Fan, along with Mark (Dan) Boyer of Commonwealth Fusion Systems and collaborators from the Swiss Plasma Center in Switzerland.
“A delicate balance”
Tokamaks are experimental fusion devices that were first built in the Soviet Union in the 1950s. The device gets its name from a Russian acronym that translates to a “toroidal chamber with magnetic coils.” Just as its name describes, a tokamak is toroidal, or donut-shaped, and uses powerful magnets to contain and spin up a gas to temperatures and energies high enough that atoms in the resulting plasma can fuse and release energy.
Today’s tokamak experiments are relatively small in scale and low in energy, with few approaching the size and output needed to generate safe, reliable, usable energy. Disruptions in experimental, low-energy tokamaks are generally not an issue. But as fusion machines scale up to grid-scale dimensions, controlling much higher-energy plasmas at all phases will be paramount to maintaining a machine’s safe and efficient operation.
“Uncontrolled plasma terminations, even during rampdown, can generate intense heat fluxes damaging the internal walls,” Wang notes. “Quite often, especially with the high-performance plasmas, rampdowns actually can push the plasma closer to some instability limits. So, it’s a delicate balance. And there’s a lot of focus now on how to manage instabilities so that we can routinely and reliably take these plasmas and safely power them down. And there are relatively few studies done on how to do that well.”
Bringing down the pulse
Wang and his colleagues developed a model to predict how a plasma will behave during tokamak rampdown. While they could have simply applied machine-learning tools such as a neural network to learn signs of instabilities in plasma data, “you would need an ungodly amount of data” for such tools to discern the very subtle and ephemeral changes in extremely high-temperature, high-energy plasmas, Wang says.
Instead, the researchers paired a neural network with an existing model that simulates plasma dynamics according to the fundamental rules of physics. With this combination of machine learning and a physics-based plasma simulation, the team found that only a couple hundred pulses at low performance, and a small handful of pulses at high performance, were sufficient to train and validate the new model.
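The paper’s actual architecture is far more elaborate, but the core idea of pairing a physics model with a learned correction can be sketched in a few lines. Everything below, from the decay law to the one-parameter “network,” is an invented stand-in for illustration, not the team’s model; the point is that when physics supplies most of the dynamics, only a small residual must be learned from data:

```python
# Toy "physics" model: plasma current decays exponentially during rampdown.
def physics_step(I, dt, tau=0.5):
    return I - dt * I / tau

# The "true" dynamics include an effect the physics model misses
# (a stand-in for unmodeled instability physics).
def true_step(I, dt, tau=0.5):
    return I - dt * (I / tau + 0.8 * I**2)

# A one-parameter "neural network": learned residual = w * I**2.
# Fitting it on a handful of short pulses recovers the missing term,
# which is why the hybrid needs far less data than a pure neural net.
w, dt = 0.0, 0.01
for _ in range(200):                      # gradient descent on w
    I, grad = 1.0, 0.0
    for _ in range(50):                   # roll out one short pulse
        pred = physics_step(I, dt) - dt * w * I**2
        target = true_step(I, dt)
        grad += 2 * (pred - target) * (-dt * I**2)
        I = target
    w -= 50.0 * grad

print(f"learned residual coefficient: {w:.3f}")  # converges toward 0.8
```

With the physics handling the bulk exponential decay, the learned piece is a single coefficient here, a loose analogue of how the hybrid approach let the team train on only a few hundred pulses.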
The data they used for the new study came from the TCV, the Swiss “variable configuration tokamak” operated by the Swiss Plasma Center at EPFL (the Swiss Federal Institute of Technology Lausanne). The TCV is a small experimental fusion device used for research purposes, often as a test bed for next-generation device solutions. Wang used the data from several hundred TCV plasma pulses that included properties of the plasma such as its temperature and energies during each pulse’s ramp-up, run, and ramp-down. He trained the new model on this data, then tested it and found it was able to accurately predict the plasma’s evolution given the initial conditions of a particular tokamak run.
The researchers also developed an algorithm to translate the model’s predictions into practical “trajectories,” or plasma-managing instructions that a tokamak controller can automatically carry out, for instance adjusting the magnets or temperature to maintain the plasma’s stability. They implemented the algorithm on several TCV runs and found that it produced trajectories that safely ramped down a plasma pulse, in some cases faster, and without disruptions, compared to runs without the new method.
“At some point the plasma will always go away, but we call it a disruption when the plasma goes away at high energy. Here, we ramped the energy down to nothing,” Wang notes. “We did it a number of times. And we did things much better across the board. So, we had statistical confidence that we made things better.”
The work was supported in part by Commonwealth Fusion Systems (CFS), an MIT spinout that intends to build the world’s first compact, grid-scale fusion power plant. The company is developing a demo tokamak, SPARC, designed to produce net-energy plasma, meaning that it should generate more energy than it takes to heat up the plasma. Wang and his colleagues are working with CFS on ways that the new prediction model and tools like it can better predict plasma behavior and prevent costly disruptions to enable safe and reliable fusion power.
“We’re trying to tackle the science questions to make fusion routinely useful,” Wang says. “What we’ve done here is the start of what is still a long journey. But I think we’ve made some nice progress.”
Additional support for the research came through the framework of the EUROfusion Consortium, via the Euratom Research and Training Program, with funding from the Swiss State Secretariat for Education, Research, and Innovation.
Printable aluminum alloy sets strength records, may enable lighter aircraft parts
MIT engineers have developed a printable aluminum alloy that can withstand high temperatures and is five times stronger than traditionally manufactured aluminum.
The new printable metal is made from a mix of aluminum and other elements that the team identified using a combination of simulations and machine learning, which significantly pruned the number of possible combinations of materials to search through. While traditional methods would require simulating over 1 million possible combinations of materials, the team’s new machine learning-based approach needed only to evaluate 40 possible compositions before identifying an ideal mix for a high-strength, printable aluminum alloy.
When they printed the alloy and tested the resulting material, the team confirmed that, as predicted, the aluminum alloy was as strong as the strongest aluminum alloys that are manufactured today using traditional casting methods.
The researchers envision that the new printable aluminum could be made into stronger, more lightweight and temperature-resistant products, such as fan blades in jet engines. Fan blades are traditionally cast from titanium — a material that is more than 50 percent heavier and up to 10 times costlier than aluminum — or made from advanced composites.
“If we can use lighter, high-strength material, this would save a considerable amount of energy for the transportation industry,” says Mohadeseh Taheri-Mousavi, who led the work as a postdoc at MIT and is now an assistant professor at Carnegie Mellon University.
“Because 3D printing can produce complex geometries, save material, and enable unique designs, we see this printable alloy as something that could also be used in advanced vacuum pumps, high-end automobiles, and cooling devices for data centers,” adds John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT.
Hart and Taheri-Mousavi provide details on the new printable aluminum design in a paper published in the journal Advanced Materials. The paper’s MIT co-authors include Michael Xu, Clay Houser, Shaolou Wei, James LeBeau, and Greg Olson, along with Florian Hengsbach and Mirko Schaper of Paderborn University in Germany, and Zhaoxuan Ge and Benjamin Glaser of Carnegie Mellon University.
Micro-sizing
The new work grew out of an MIT class that Taheri-Mousavi took in 2020, which was taught by Greg Olson, professor of the practice in the Department of Materials Science and Engineering. As part of the class, students learned to use computational simulations to design high-performance alloys. Alloys are materials that are made from a mix of different elements, the combination of which imparts exceptional strength and other unique properties to the material as a whole.
Olson challenged the class to design an aluminum alloy that would be stronger than the strongest printable aluminum alloy designed to date. As with most materials, the strength of aluminum depends in large part on its microstructure: The smaller and more densely packed its microscopic constituents, or “precipitates,” the stronger the alloy would be.
With this in mind, the class used computer simulations to methodically combine aluminum with various types and concentrations of elements, to simulate and predict the resulting alloy’s strength. However, the exercise failed to produce a stronger result. At the end of the class, Taheri-Mousavi wondered: Could machine learning do better?
“At some point, there are a lot of things that contribute nonlinearly to a material’s properties, and you are lost,” Taheri-Mousavi says. “With machine-learning tools, they can point you to where you need to focus, and tell you for example, these two elements are controlling this feature. It lets you explore the design space more efficiently.”
Layer by layer
In the new study, Taheri-Mousavi picked up where Olson’s class left off, looking to identify a stronger recipe for aluminum alloy. This time, she used machine-learning techniques designed to efficiently comb through data such as the properties of elements, to identify key connections and correlations that should lead to a more desirable outcome or product.
She found that, using just 40 compositions mixing aluminum with different elements, their machine-learning approach quickly homed in on a recipe for an aluminum alloy with higher volume fraction of small precipitates, and therefore higher strength, than what the previous studies identified. The alloy’s strength was even higher than what they could identify after simulating over 1 million possibilities without using machine learning.
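The study’s actual search method is more sophisticated than anything shown here, but the general pattern, using a cheap surrogate to decide which expensive simulation to run next, can be sketched as follows. The “strength” function, the nearest-neighbor surrogate, and all numbers are toy stand-ins invented for illustration:

```python
import random

random.seed(0)

# Toy stand-in for an expensive physics simulation of alloy strength
# (two solute fractions in; predicted strength out). Not the real model.
def simulated_strength(x, y):
    return -(x - 0.6)**2 - (y - 0.3)**2 + 0.2 * x * y

# A coarse grid of candidate compositions stands in for the huge design space.
candidates = [(i / 50, j / 50) for i in range(51) for j in range(51)]

# Evaluate a small random seed set, then repeatedly run the simulation on
# whichever candidate a crude surrogate rates best, with a bonus for
# unexplored regions. This is why ~40 evaluations can beat brute force.
evaluated = {}
for p in random.sample(candidates, 5):
    evaluated[p] = simulated_strength(*p)

def surrogate(p):
    dists = sorted((((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5, v)
                   for q, v in evaluated.items())
    mean = sum(v for _, v in dists[:3]) / 3   # exploit: nearby results
    return mean + 0.5 * dists[0][0]           # explore: distance bonus

for _ in range(35):                           # 40 simulations in total
    nxt = max((p for p in candidates if p not in evaluated), key=surrogate)
    evaluated[nxt] = simulated_strength(*nxt)

best = max(evaluated, key=evaluated.get)
print(f"best composition after 40 evaluations: {best}")
```

Each round spends one expensive evaluation where the model expects the most payoff, rather than sweeping the entire grid of thousands of compositions.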
To physically produce this new strong, small-precipitate alloy, the team realized 3D printing would be the way to go instead of traditional metal casting, in which molten liquid aluminum is poured into a mold and left to cool and harden. The longer this cooling time is, the more likely the individual precipitates are to grow.
The researchers showed that 3D printing, broadly also known as additive manufacturing, can be a faster way to cool and solidify the aluminum alloy. Specifically, they considered laser powder bed fusion (LPBF) — a technique by which a powder is deposited, layer by layer, on a surface in a desired pattern and then quickly melted by a laser that traces over the pattern. The melted pattern is thin enough that it solidifies quickly before another layer is deposited and similarly “printed.” The team found that LPBF’s inherently rapid cooling and solidification enabled the small-precipitate, high-strength aluminum alloy that their machine learning method predicted.
“Sometimes we have to think about how to get a material to be compatible with 3D printing,” says study co-author John Hart. “Here, 3D printing opens a new door because of the unique characteristics of the process — particularly, the fast cooling rate. Very rapid freezing of the alloy after it’s melted by the laser creates this special set of properties.”
Putting their idea into practice, the researchers ordered a formulation of printable powder, based on their new aluminum alloy recipe. They sent the powder — a mix of aluminum and five other elements — to collaborators in Germany, who printed small samples of the alloy using their in-house LPBF system. The samples were then sent to MIT where the team ran multiple tests to measure the alloy’s strength and image the samples’ microstructure.
Their results confirmed the predictions made by their initial machine learning search: The printed alloy was five times stronger than a cast counterpart and 50 percent stronger than alloys designed using conventional simulations without machine learning. The new alloy’s microstructure also consisted of a higher volume fraction of small precipitates, and was stable at high temperatures of up to 400 degrees Celsius — a very high temperature for aluminum alloys.
The researchers are applying similar machine-learning techniques to further optimize other properties of the alloy.
“Our methodology opens new doors for anyone who wants to design alloys for 3D printing,” Taheri-Mousavi says. “My dream is that one day, passengers looking out their airplane window will see fan blades of engines made from our aluminum alloys.”
This work was carried out, in part, using MIT.nano’s characterization facilities.
Study sheds light on musicians’ enhanced attention
In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.
When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McGovern Institute PI Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions.
“People can hear, understand, and prioritize multiple sounds around them that flow on a moment-to-moment basis,” explains Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology at MIT. “This study reveals the specific brain mechanisms that successfully process simultaneous sounds on a moment-to-moment basis and promote attention to the most important sounds. It also shows how musical training alters that processing in the mind and brain, offering insight into how experience shapes the way we listen and pay attention.”
The research team, which also included senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their open-access findings Sept. 17 in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.
Overcoming challenges
Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”
Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.
Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the volume of each melody oscillated, rising and falling with a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower-pitch sound and the 43-Hertz activity corresponds specifically to the higher-pitch sound,” Manting explains. “It is very clean and very clear.”
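As a rough illustration of why frequency tagging separates the two responses so cleanly, one can simulate a single recorded channel in which each melody’s neural response rides at its own tag frequency, then read the two responses off the spectrum. The 39 Hz and 43 Hz tags match the figures quoted above; the amplitudes and noise level are invented for illustration:

```python
import numpy as np

fs, dur = 1000, 10.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

# Each melody's volume oscillates at its own tag frequency, so the neural
# response to each melody is "stamped" with that frequency (simplified model).
tag_low, tag_high = 39.0, 43.0             # Hz, as in the experiment
signal = (0.8 * np.sin(2 * np.pi * tag_low * t)     # response to lower melody
          + 0.5 * np.sin(2 * np.pi * tag_high * t)  # response to higher melody
          + rng.normal(0.0, 1.0, t.size))           # background brain noise

# Spectrum of the simulated MEG channel
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f_hz):
    """Spectral magnitude at the bin nearest f_hz."""
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# The two tagged responses stand out sharply against nearby frequencies
print(power_at(39.0), power_at(43.0), power_at(41.0))
```

Even with noise much larger than either response, ten seconds of signal is enough for the two tagged peaks to tower over the untagged 41 Hz bin, which is what lets each melody’s brain response be attributed unambiguously.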
When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher-pitched or the lower-pitched melody. When the music stopped, they were asked about the final notes of the target tune: Did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.
Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.
To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune — even, in some cases, when the notes of the distracting tune played at the exact same time.
Top-down versus bottom-up attention
What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus — the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention — but more so in some people than in others.
“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.
Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.
She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.
Matthew Shoulders named head of the Department of Chemistry
Matthew D. Shoulders, the Class of 1942 Professor of Chemistry, a MacVicar Faculty Fellow, and an associate member of the Broad Institute of MIT and Harvard, has been named head of the MIT Department of Chemistry, effective Jan. 16, 2026.
“Matt has made pioneering contributions to the chemistry research community through his research on mechanisms of proteostasis and his development of next-generation techniques to address challenges in biomedicine and agriculture,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “He is also a dedicated educator, beloved by undergraduates and graduates alike. I know the department will be in good hands as we double down on our commitment to world-leading research and education in the face of financial headwinds.”
Shoulders succeeds Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, who has been at the helm since October 2019.
“I am tremendously grateful to Troy for his leadership the past six years, building a fantastic community here in our department. We face challenges, but also many exciting opportunities, as a department in the years to come,” says Shoulders. “One thing is certain: Chemistry innovations are critical to solving pressing global challenges. Through the research that we do and the scientists we train, our department has a huge role to play in shaping the future.”
Shoulders studies how cells fold proteins, and he develops and applies novel protein engineering techniques to challenges in biotechnology. His work across chemistry and biochemistry fields, including proteostasis, extracellular matrix biology, virology, evolution, and synthetic biology, is not only yielding important insights into topics like how cells build healthy tissues and how proteins evolve, but also influencing approaches to disease therapy and biotechnology development.
“Matt is an outstanding researcher whose work touches on fundamental questions about how the cell machinery directs the synthesis and folding of proteins. His discoveries about how that machinery breaks down as a result of mutations or in response to stress have a fundamental impact on how we think about and treat human diseases,” says Van Voorhis.
In one part of his current research program, Shoulders is studying how protein folding systems in cells, known as chaperones, shape the evolution of their clients. Among other discoveries, his lab has shown that viral pathogens hijack human chaperones to enable their rapid evolution and escape from host immunity. In related recent work, the lab has discovered that these same chaperones can promote access to malignancy-driving mutations in tumors. Beyond fundamental insights into evolutionary biology, these findings hold potential to open new therapeutic strategies to target cancer and viral infections.
“Matt’s ability to see both the details and the big picture makes him an outstanding researcher and a natural leader for the department,” says Timothy Swager, the John D. MacArthur Professor of Chemistry. “MIT Chemistry can only benefit from his dedication to understanding and addressing the parts and the whole.”
Shoulders also leads a food security project through the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Shoulders, along with MIT Research Scientist Robbie Wilson, assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving one of the most inefficient aspects of photosynthesis, the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk, high-reward MIT Grand Challenge project in 2023, and it has received further support from federal research agencies and the Grantham Foundation for the Protection of the Environment.
“Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists, creating a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team is making a concerted effort using state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”
In addition to his research contributions, Shoulders has taught multiple classes for Course V, including 5.54 (Advances in Chemical Biology) and 5.111 (Principles of Chemical Science), along with a number of other key chemistry classes. His contributions to a 5.111 “bootcamp” through the MITx platform served to address gaps in the classroom curriculum by providing online tools to help undergraduate students better grasp the material in the chemistry General Institute Requirement (GIR). His development of Guided Learning Demonstrations to support first-year chemistry courses at MIT has helped bring the lab to the GIR, and also contributed to the popularity of 5.111 courses offered regularly via MITx.
“I have had the pleasure of teaching with Matt on several occasions, and he is a fantastic educator. He is an innovator both inside and outside the classroom and has an unwavering commitment to his students’ success,” says Van Voorhis of Shoulders, who was named a 2022 MacVicar Faculty Fellow, and who received a Committed to Caring award through the Office of Graduate Education.
Shoulders also founded the MIT Homeschool Internship Program for Science and Technology, which brings high school students to campus for paid summer research experiences in labs across the Institute.
He is a founding member of the Department of Chemistry’s Quality of Life Committee, which he has chaired for the last six years, helping to improve all aspects of opportunity, professional development, and experience in the department: “countless changes that have helped make MIT a better place for all,” as Van Voorhis notes, including creating a peer mentoring program for graduate students and establishing universal graduate student exit interviews to collect data for department-wide assessment and improvement.
At the Institute level, Shoulders has served on the Committee on Graduate Programs, Committee on Sexual Misconduct Prevention and Response (in which he co-chaired the provost's working group on the Faculty and Staff Sexual Misconduct Survey), and the Committee on Assessment of Biohazards and Embryonic Stem Cell Research Oversight, among other roles.
Shoulders graduated summa cum laude from Virginia Tech in 2004, earning a BS in chemistry with a minor in biochemistry. He earned a PhD in chemistry at the University of Wisconsin at Madison in 2009 under Professor Ronald Raines. Following an American Cancer Society Postdoctoral Fellowship at Scripps Research Institute, working with professors Jeffery Kelly and Luke Wiseman, Shoulders joined the MIT Department of Chemistry faculty as an assistant professor in 2012. Shoulders also serves as an associate member of the Broad Institute and an investigator at the Center for Musculoskeletal Research at Massachusetts General Hospital.
Among his many awards, Shoulders has received an NIH Director’s New Innovator Award under the NIH High-Risk, High-Reward Research Program; an NSF CAREER Award; an American Cancer Society Research Scholar Award; the Camille Dreyfus Teacher-Scholar Award; and most recently the Ono Pharma Foundation Breakthrough Science Award.
Report: Sustainability in supply chains is still a firm-level priority
Corporations are actively seeking sustainability advances in their supply chains — but many need to improve the business metrics they use in this area to realize more progress, according to a new report by MIT researchers.
During a time of shifting policies globally and continued economic uncertainty, the survey-based report finds 85 percent of companies say they are continuing supply chain sustainability practices at the same level as in recent years, or are increasing those efforts.
“What we found is strong evidence that sustainability still matters,” says Josué Velázquez Martínez, a research scientist and director of the MIT Sustainable Supply Chain Lab, which helped produce the report. “There are many things that remain to be done to accomplish those goals, but there’s a strong willingness from companies in all parts of the world to do something about sustainability.”
The new analysis, titled “Sustainability Still Matters,” was released today. It is the sixth annual report on the subject prepared by the MIT Sustainable Supply Chain Lab, which is part of MIT’s Center for Transportation and Logistics. The Council of Supply Chain Management Professionals collaborated on the project as well.
The report is based on a global survey, with responses from 1,203 professionals in 97 countries. This year, the report analyzes three issues in depth, including regulations and the role they play in corporate approaches to supply chain management. A second core topic is management and mitigation of what industry professionals call “Scope 3” emissions, which are those not from a firm itself, but from a firm’s supply chain. And a third issue of focus is the future of freight transportation, which by itself accounts for a substantial portion of supply chain emissions.
Broadly, the survey finds that for European-based firms, the principal driver of action in this area remains government mandates, such as the Corporate Sustainability Reporting Directive, which requires companies to publish regular reports on their environmental impact and the risks to society involved. In North America, firm leadership and investor priorities are more likely to be decisive factors in shaping a company’s efforts.
“In Europe the pressure primarily comes more from regulation, but in the U.S. it comes more from investors, or from competitors,” Velázquez Martínez says.
The survey responses on Scope 3 emissions reveal a number of opportunities for improvement. In business and sustainability terms, Scope 1 greenhouse gas emissions are those a firm produces directly. Scope 2 emissions are the energy it has purchased. And Scope 3 emissions are those produced across a firm’s value chain, including the supply chain activities involved in producing, transporting, using, and disposing of its products.
The report reveals that about 40 percent of firms keep close track of Scope 1 and 2 emissions, but far fewer tabulate Scope 3 on equivalent terms. And yet Scope 3 may account for roughly 75 percent of total firm emissions, on aggregate. About 70 percent of firms in the survey say they do not have enough data from suppliers to accurately tabulate the total greenhouse gas and climate impact of their supply chains.
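The scope arithmetic itself is straightforward; the hard part, as the survey shows, is collecting supplier data to fill in the Scope 3 rows. A minimal sketch, with invented figures chosen only to echo the roughly-75-percent share described above:

```python
# Illustrative (invented) annual emissions ledger for one firm, in tCO2e.
# Scope 1: direct; Scope 2: purchased energy; Scope 3: value chain.
ledger = [
    ("onsite fuel combustion",    "scope1", 1200),
    ("company vehicle fleet",     "scope1",  300),
    ("purchased electricity",     "scope2",  900),
    ("purchased goods/materials", "scope3", 4000),
    ("freight in and out",        "scope3", 1800),
    ("product use and disposal",  "scope3", 1400),
]

# Sum emissions by scope
totals = {}
for _activity, scope, tco2e in ledger:
    totals[scope] = totals.get(scope, 0) + tco2e

grand_total = sum(totals.values())
scope3_share = totals["scope3"] / grand_total
print(totals)
print(f"Scope 3 share: {scope3_share:.0%}")   # 75% with these numbers
```

A firm tracking only Scopes 1 and 2 would see 2,400 of these 9,600 tonnes, which is the measurement gap the report highlights.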
Certainly it can be hard to calculate the total emissions when a supply chain has many layers, including smaller suppliers lacking data capacity. But firms can upgrade their analytics in this area, too. For instance, 50 percent of North American firms are still using spreadsheets to tabulate emissions data, often making rough estimates that correlate emissions to simple economic activity. An alternative is life cycle assessment software that provides more sophisticated estimates of a product’s emissions, from the extraction of its materials to its post-use disposal. By contrast, only 32 percent of European firms are still using spreadsheets rather than life cycle assessment tools.
“You get what you measure,” Velázquez Martínez says. “If you measure poorly, you’re going to get poor decisions that most likely won’t drive the reductions you’re expecting. So we pay a lot of attention to that particular issue, which is decisive to defining an action plan. Firms pay a lot of attention to metrics in their financials, but in sustainability they’re often using simplistic measurements.”
When it comes to transportation, meanwhile, the report shows that firms are still grappling with the best ways to reduce emissions. Some see biofuels as the best short-term alternative to fossil fuels; others are investing in electric vehicles; some are waiting for hydrogen-powered vehicles to gain traction. Supply chains, after all, frequently involve long-haul trips. For firms, as for individual consumers, electric vehicles are more practical with a larger infrastructure of charging stations. There are advances on that front but more work to do as well.
That said, “Transportation has made a lot of progress in general,” Velázquez Martínez says, noting the increased acceptance of new modes of vehicle power.
Even as new technologies loom on the horizon, though, supply chain sustainability does not wholly depend on their introduction. One factor continuing to propel sustainability in supply chains is the incentives companies have to lower costs. In a competitive business environment, spending less on fossil fuels usually means savings. And firms can often find ways to alter their logistics to consume and spend less.
“Along with new technologies, there is another side of supply chain sustainability that is related to better use of the current infrastructure,” Velázquez Martínez observes. “There is always a need to revise traditional ways of operating to find opportunities for more efficiency.”
AI in the 2026 Midterm Elections
We are nearly one year out from the 2026 midterm elections, and it’s far too early to predict the outcomes. But it’s a safe bet that artificial intelligence technologies will once again be a major storyline.
The widespread fear that AI would be used to manipulate the 2024 U.S. election seems rather quaint in a year where the president posts AI-generated images of himself as the pope on official White House accounts. But AI is a lot more than an information manipulator. It’s also emerging as a politicized issue. Political first-movers are adopting the technology, and that’s opening a ...
Wright says it’s not windy in winter. Data says otherwise.
Shutdown, Trump budget threaten popular sea ice website
Gradual warming — not disasters — will be most harmful, paper says
Sustainable aviation fuel offers lifeline to ethanol, researchers say
Heat safety rules for illness also reduce injuries, study finds
Shift to a plant-based diet to avoid worst climate impacts, scientists say
World ablaze with more damaging fires now than in 1980s — study
Reddit’s former CEO wants you to buy a subscription for trees
Bank climate group formally winds down after Wall Street exodus
Chemists create red fluorescent dyes that may enable clearer biomedical imaging
MIT chemists have designed a new type of fluorescent molecule that they hope could be used for applications such as generating clearer images of tumors.
The new dye is based on a borenium ion — a positively charged form of boron that can emit light in the red to near-infrared range. Until recently, these ions have been too unstable to be used for imaging or other biomedical applications.
In a study appearing today in Nature Chemistry, the researchers showed that they could stabilize borenium ions by attaching them to a ligand. This approach allowed them to create borenium-containing films, powders, and crystals, all of which emit and absorb light in the red and near-infrared range.
That is important because near-IR light is easier to see when imaging structures deep within tissues, which could allow for clearer images of tumors and other structures in the body.
“One of the reasons why we focus on red to near-IR is because those types of dyes penetrate the body and tissue much better than light in the UV and visible range. Stability and brightness of those red dyes are the challenges that we tried to overcome in this study,” says Robert Gilliard, the Novartis Professor of Chemistry at MIT and the senior author of the study.
MIT research scientist Chun-Lin Deng is the lead author of the paper. Other authors include Bi Youan (Eric) Tra PhD ’25, former visiting graduate student Xibao Zhang, and graduate student Chonghe Zhang.
Stabilized borenium
Most fluorescent imaging relies on dyes that emit blue or green light. Those imaging agents work well in cells, but they are not as useful in tissue because low levels of blue and green fluorescence produced by the body interfere with the signal. Blue and green light also scatters in tissue, limiting how deeply it can penetrate.
Imaging agents that emit red fluorescence can produce clearer images, but most red dyes are inherently unstable and don’t produce a bright signal, because of their low quantum yields (the ratio of photons emitted as fluorescence to photons absorbed). For many red dyes, the quantum yield is only about 1 percent.
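The quantum yield arithmetic is simple; a quick sketch with illustrative numbers (not measurements from the study) shows why a 1 percent yield makes for a dim dye:

```python
# Quantum yield: fraction of absorbed photons re-emitted as fluorescence.
# The photon counts below are illustrative only.

def quantum_yield(photons_emitted: float, photons_absorbed: float) -> float:
    """Return the fluorescence quantum yield (a dimensionless ratio)."""
    return photons_emitted / photons_absorbed

# A typical unstable red dye: roughly 1 photon emitted per 100 absorbed.
print(quantum_yield(1, 100))   # 0.01, i.e. about 1 percent
# A yield "in the thirties" re-emits about a third of absorbed photons.
print(quantum_yield(33, 100))  # 0.33
```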
Among the molecules that can emit near-infrared light are borenium cations — positively charged ions containing an atom of boron attached to three other atoms.
When these molecules were first discovered in the mid-1980s, they were considered “laboratory curiosities,” Gilliard says. These molecules were so unstable that they had to be handled in a sealed container called a glovebox to protect them from exposure to air, which can lead them to break down.
Later, chemists realized they could make these ions more stable by attaching them to molecules called ligands. Working with these more stable ions, Gilliard’s lab discovered in 2019 that they had some unusual properties: Namely, they could respond to changes in temperature by emitting different colors of light.
However, at that point, “there was a substantial problem in that they were still too reactive to be handled in open air,” Gilliard says.
His lab began working on new ways to further stabilize them using ligands known as carbodicarbenes (CDCs), which they reported in a 2022 study. Due to this stabilization, the compounds can now be studied and handled without using a glovebox. They are also resistant to being broken down by light, unlike many previous borenium-based compounds.
In the new study, Gilliard began experimenting with the anions (negatively charged ions) that are a part of the CDC-borenium compounds. Interactions between these anions and the borenium cation generate a phenomenon known as exciton coupling, the researchers discovered. This coupling, they found, shifted the molecules’ emission and absorption properties toward the infrared end of the color spectrum. These molecules also generated a high quantum yield, allowing them to shine more brightly.
“Not only are we in the correct region, but the efficiency of the molecules is also very suitable,” Gilliard says. “We’re up to percentages in the thirties for the quantum yields in the red region, which is considered to be high for that region of the electromagnetic spectrum.”
Potential applications
The researchers also showed that they could convert their borenium-containing compounds into several different states, including solid crystals, films, powders, and colloidal suspensions.
For biomedical imaging, Gilliard envisions that these borenium-containing materials could be encapsulated in polymers, allowing them to be injected into the body to use as an imaging dye. As a first step, his lab plans to work with researchers in the chemistry department at MIT and at the Broad Institute of MIT and Harvard to explore the potential of imaging these materials within cells.
Because of their temperature responsiveness, these materials could also be deployed as temperature sensors, for example, to monitor whether drugs or vaccines have been exposed to temperatures that are too high or low during shipping.
“For any type of application where temperature tracking is important, these types of ‘molecular thermometers’ can be very useful,” Gilliard says.
If incorporated into thin films, these molecules could also be useful as organic light-emitting diodes (OLEDs), particularly in new types of materials such as flexible screens, Gilliard says.
“The very high quantum yields achieved in the near-IR, combined with the excellent environmental stability, make this class of compounds extremely interesting for biological applications,” says Frieder Jaekle, a professor of chemistry at Rutgers University, who was not involved in the study. “Besides the obvious utility in bioimaging, the strong and tunable near-IR emission also makes these new fluorophores very appealing as smart materials for anticounterfeiting, sensors, switches, and advanced optoelectronic devices.”
In addition to exploring possible applications for these dyes, the researchers are now working on extending their color emission further into the near-infrared region, which they hope to achieve by incorporating additional boron atoms. Those extra boron atoms could make the molecules less stable, so the researchers are also working on new types of carbodicarbenes to help stabilize them.
The research was funded by the Arnold and Mabel Beckman Foundation and the National Institutes of Health.
AI maps how a new antibiotic targets gut bacteria
For patients with inflammatory bowel disease, antibiotics can be a double-edged sword. The broad-spectrum drugs often prescribed for gut flare-ups can kill helpful microbes alongside harmful ones, sometimes worsening symptoms over time. When fighting gut inflammation, you don’t always want to bring a sledgehammer to a knife fight.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and McMaster University have identified a new compound that takes a more targeted approach. The molecule, called enterololin, suppresses a group of bacteria linked to Crohn’s disease flare-ups while leaving the rest of the microbiome largely intact. Using a generative AI model, the team mapped how the compound works, a process that usually takes years but was accelerated here to just months.
“This discovery speaks to a central challenge in antibiotic development,” says Jon Stokes, senior author of a new paper on the work, assistant professor of biochemistry and biomedical sciences at McMaster, and research affiliate at MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health. “The problem isn’t finding molecules that kill bacteria in a dish — we’ve been able to do that for a long time. A major hurdle is figuring out what those molecules actually do inside bacteria. Without that detailed understanding, you can’t develop these early-stage antibiotics into safe and effective therapies for patients.”
Enterololin is a stride toward precision antibiotics: treatments designed to knock out only the bacteria causing trouble. In mouse models of Crohn’s-like inflammation, the drug zeroed in on Escherichia coli, a gut-dwelling bacterium that can worsen flares, while leaving most other microbial residents untouched. Mice given enterololin recovered faster and maintained a healthier microbiome than those treated with vancomycin, a common antibiotic.
Pinning down a drug’s mechanism of action, the molecular target it binds inside bacterial cells, normally requires years of painstaking experiments. Stokes’ lab discovered enterololin using a high-throughput screening approach, but determining its target would have been the bottleneck. Here, the team turned to DiffDock, a generative AI model developed at CSAIL by MIT PhD student Gabriele Corso and MIT Professor Regina Barzilay.
DiffDock was designed to predict how small molecules fit into the binding pockets of proteins, a notoriously difficult problem in structural biology. Traditional docking algorithms search through possible orientations using scoring rules, often producing noisy results. DiffDock instead frames docking as a probabilistic reasoning problem: a diffusion model iteratively refines guesses until it converges on the most likely binding mode.
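The iterative-refinement idea can be caricatured in a few lines. The toy below is an invented illustration, not DiffDock’s actual model: the “pose” is a single number, the learned likelihood is a stand-in score function, and the refinement loop takes progressively smaller noisy steps, keeping moves the score rates as more likely.

```python
import random

def score(pose: float, target: float = 2.5) -> float:
    """Stand-in for a learned likelihood: higher when pose is near target."""
    return -abs(pose - target)

def refine(steps: int = 300, seed: int = 0) -> float:
    """Diffusion-style refinement sketch: denoise a random initial pose."""
    rng = random.Random(seed)
    pose = rng.uniform(-5.0, 5.0)            # noisy initial guess
    for t in range(steps):
        noise = 1.0 * (1.0 - t / steps)      # noise level shrinks over time
        candidate = pose + rng.gauss(0.0, noise)
        if score(candidate) > score(pose):   # keep likelier poses
            pose = candidate
    return pose

print(f"refined pose: {refine():.2f}")  # settles near the optimum at 2.5
```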
“In just a couple of minutes, the model predicted that enterololin binds to a protein complex called LolCDE, which is essential for transporting lipoproteins in certain bacteria,” says Barzilay, who also co-leads the Jameel Clinic. “That was a very concrete lead — one that could guide experiments, rather than replace them.”
Stokes’ group then put that prediction to the test. Using DiffDock predictions as an experimental GPS, they first evolved enterololin-resistant mutants of E. coli in the lab, which revealed that changes in the mutant’s DNA mapped to lolCDE, precisely where DiffDock had predicted enterololin to bind. They also performed RNA sequencing to see which bacterial genes switched on or off when exposed to the drug, and used CRISPR to selectively knock down expression of the expected target. These laboratory experiments all revealed disruptions in pathways tied to lipoprotein transport, exactly what DiffDock had predicted.
“When you see the computational model and the wet-lab data pointing to the same mechanism, that’s when you start to believe you’ve figured something out,” says Stokes.
For Barzilay, the project highlights a shift in how AI is used in the life sciences. “A lot of AI use in drug discovery has been about searching chemical space, identifying new molecules that might be active,” she says. “What we’re showing here is that AI can also provide mechanistic explanations, which are critical for moving a molecule through the development pipeline.”
That distinction matters because mechanism-of-action studies are often a major rate-limiting step in drug development. Traditional approaches can take 18 months to two years, or more, and cost millions of dollars. In this case, the MIT–McMaster team cut the timeline to about six months, at a fraction of the cost.
Enterololin is still in the early stages of development, but translation is already underway. Stokes’ spinout company, Stoked Bio, has licensed the compound and is optimizing its properties for potential human use. Early work is also exploring derivatives of the molecule against other resistant pathogens, such as Klebsiella pneumoniae. If all goes well, clinical trials could begin within the next few years.
The researchers also see broader implications. Narrow-spectrum antibiotics have long been sought as a way to treat infections without collateral damage to the microbiome, but they have been difficult to discover and validate. AI tools like DiffDock could make that process more practical, enabling a new generation of targeted antimicrobials to arrive more rapidly.
For patients with Crohn’s and other inflammatory bowel conditions, the prospect of a drug that reduces symptoms without destabilizing the microbiome could mean a meaningful improvement in quality of life. And in the bigger picture, precision antibiotics may help tackle the growing threat of antimicrobial resistance.
“What excites me is not just this compound, but the idea that we can start thinking about the mechanism of action elucidation as something we can do more quickly, with the right combination of AI, human intuition, and laboratory experiments,” says Stokes. “That has the potential to change how we approach drug discovery for many diseases, not just Crohn’s.”
“One of the greatest challenges to our health is the increase of antimicrobial-resistant bacteria that evade even our best antibiotics,” adds Yves Brun, professor at the University of Montreal and distinguished professor emeritus at Indiana University Bloomington, who wasn’t involved in the paper. “AI is becoming an important tool in our fight against these bacteria. This study uses a powerful and elegant combination of AI methods to determine the mechanism of action of a new antibiotic candidate, an important step in its potential development as a therapeutic.”
Corso, Barzilay, and Stokes wrote the paper with McMaster researchers Denise B. Catacutan, Vian Tran, Jeremie Alexander, Yeganeh Yousefi, Megan Tu, Stewart McLellan, and Dominique Tertigas, and professors Jakob Magolan, Michael Surette, Eric Brown, and Brian Coombes. Their research was supported, in part, by the Weston Family Foundation; the David Braley Centre for Antibiotic Discovery; the Canadian Institutes of Health Research; the Natural Sciences and Engineering Research Council of Canada; M. and M. Heersink; an Ontario Graduate Scholarship Award; the Jameel Clinic; and the U.S. Defense Threat Reduction Agency Discovery of Medical Countermeasures Against New and Emerging Threats program.
The researchers posted sequencing data in public repositories and released the DiffDock-L code openly on GitHub.
What Europe’s New Gig Work Law Means for Unions and Technology
At EFF, we believe that tech rights are workers’ rights. Since the pandemic, workers of all kinds have been subjected to increasingly invasive forms of bossware. These are the “algorithmic management” tools that surveil workers on and off the job, often running on devices that (nominally) belong to workers, hijacking our phones and laptops. On the job, digital technology can become both a system of ubiquitous surveillance and a means of total control.
Enter the EU’s Platform Work Directive (PWD). The PWD was finalized in 2024, and every EU member state will have to implement (“transpose”) it by 2026. The PWD contains far-reaching measures to protect workers from abuse, wage theft, and other unfair working conditions.
But the PWD isn’t self-enforcing! Over the decades that EFF has fought for user rights, we’ve proved that having a legal right on paper isn’t the same as having that right in the real world. And workers are rarely positioned to take on their bosses in court or at a regulatory body. To do that, they need advocates.
That’s where unions come in. Unions are well-positioned to defend their members – and all workers (EFF employees are proudly organized under the International Federation of Professional and Technical Engineers).
The European Trade Union Confederation has just published “Negotiating the Algorithm,” a visionary – but detailed and down-to-earth – manual for unions seeking to leverage the PWD to protect and advance workers’ interests in Europe.
The report notes the alarming growth of algorithmic management, with 79% of European firms employing some form of bossware. Report author Ben Wray enumerates many of the harms of algorithmic management, such as “algorithmic wage discrimination,” where each worker is offered a different payscale based on surveillance data that is used to infer how economically desperate they are.
Algorithmic management tools can also be used for wage theft, for example, by systematically undercounting the distances traveled by delivery drivers or riders. These tools can also subject workers to danger by penalizing workers who deviate from prescribed tasks (for example, when riders are downranked for taking an alternate route to avoid a traffic accident).
Gig workers live under the constant threat of being “deactivated” (kicked off the app) and feel pressure to do unpaid work for clients who can threaten their livelihoods with one-star reviews. Workers also face automated deactivation: a whole host of “anti-fraud” tripwires can see workers deactivated without appeal. These risks do not befall all workers equally: Black and brown workers face a disproportionate risk of deactivation when they fail facial recognition checks meant to prevent workers from sharing an account (facial recognition systems make more errors when dealing with darker skin tones).
Algorithmic management is typically accompanied by a raft of cost-cutting measures, and workers under algorithmic management often find that their employer’s human resources department has been replaced with chatbots, web-forms, and seemingly unattended email boxes. When algorithmic management goes wrong, workers struggle to reach a human being who can hear their appeal.
For these reasons and more, the ETUC believes that unions need to invest in technical capacity to protect workers’ interests in the age of algorithmic management.
The report sets out many technological activities that unions can get involved with. At the most basic level, unions can invest in developing analytical capabilities, so that when they request logs from algorithmic management systems as part of a labor dispute, they can independently analyze those files.
But that’s just table stakes. Unions should also consider investing in “counter apps” that help workers. There are apps that act as an external check on employers’ automation, like the UberCheats app, which double-checked the mileage that Uber drivers were paid for. There are apps that enable gig workers to collectively refuse lowball offers, raising the prevailing wage for all the workers in a region, such as the Brazilian StopClub app. Indonesian gig riders have a wide range of “tuyul” apps that let them modify the functionality of their dispatch apps. We love this kind of “adversarial interoperability.” Any time the users of technology get to decide how it works, we celebrate. And in the US, this sort of tech-enabled collective action by workers is likely to be shielded from antitrust liability even if the workers involved are classified as independent contractors.
Developing in-house tech teams also gives unions the know-how to develop the tools for organizers and workers to coordinate their efforts to protect workers. The report acknowledges that this is a lot of tech work to ask individual unions to fund, and it moots the possibility of unions forming cooperative ventures to do this work for the unions in the co-op. At EFF, we regularly hear from skilled people who want to become public interest technologists, and we bet there’d be plenty of people who’d jump at the chance to do this work.
The new Platform Work Directive gives workers and their representatives the right to challenge automated decision-making, to peer inside the algorithms used to dispatch and pay workers, to speak to a responsible human about disputes, and to have their privacy and other fundamental rights protected on the job. It represents a big step forward for workers’ rights in the digital age.
But as the European Trade Union Confederation’s report reminds us, these rights are only as good as workers’ ability to claim them. After 35 years of standing up for people’s digital rights, we couldn’t agree more.
Tile’s Lack of Encryption Is a Danger for Users Everywhere
In research shared with Wired this week, security researchers detailed a series of vulnerabilities and design flaws with Life360’s Tile Bluetooth trackers that make it easy for stalkers and the company itself to track the location of Tile devices.
Tile trackers are small Bluetooth trackers, similar to Apple’s AirTags, but they work on their own network, not Apple’s. We’ve been raising concerns about these types of trackers since they were first introduced and provide guidance for finding them if you think someone is using them to track you without your knowledge.
EFF has worked on improving the Detecting Unwanted Location Trackers standard that Apple, Google, and Samsung use, and these companies have at least made incremental improvements. But Tile has done little to mitigate the concerns we’ve raised around stalkers using their devices to track people.
One of the fundamentals of that standard is that Bluetooth trackers should rotate their MAC address, making them harder for a third party to track, and that they should encrypt the information they send. According to the researchers, Tile does neither.
This has a direct impact on the privacy of legitimate users and opens the device up to potentially even more dangerous stalking. Tile devices do have a rotating ID, but since the MAC address is static and unencrypted, anyone in the vicinity could pick up and track that Bluetooth device.
Other Bluetooth trackers don’t broadcast their MAC address, and instead use only a rotating ID, which makes it much harder for someone to record and track the movement of that tag. Apple, Google, and Samsung also all use end-to-end encryption when data about the location is sent to the companies’ servers, meaning the companies themselves cannot access that information.
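A rotating-identifier scheme of this kind can be sketched with a keyed hash. Everything below is invented for illustration (the key, the interval size, the 6-byte ID length); it is not Tile’s, Apple’s, or any vendor’s actual protocol. The point is that each broadcast interval gets a fresh ID that only the holder of the secret can link back to the tag:

```python
import hmac
import hashlib

# Hypothetical per-device secret, shared only with the owner's app at pairing.
SECRET = b"per-device-key-provisioned-at-pairing"

def ephemeral_id(secret: bytes, interval: int) -> bytes:
    """Derive the broadcast ID for one rotation interval via HMAC-SHA256."""
    msg = interval.to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha256).digest()[:6]  # 6-byte ID

def owner_recognizes(secret: bytes, seen_id: bytes, interval: int) -> bool:
    """The owner recomputes the expected ID; eavesdroppers cannot."""
    return hmac.compare_digest(seen_id, ephemeral_id(secret, interval))

id_now = ephemeral_id(SECRET, 1000)
id_next = ephemeral_id(SECRET, 1001)
print(id_now != id_next)                       # True: the ID changes each interval
print(owner_recognizes(SECRET, id_now, 1000))  # True: the owner can still match it
```

Because successive IDs are unlinkable without the secret, a passive observer cannot stitch one interval’s broadcasts to the next; a static, cleartext MAC address defeats exactly this property.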
In its privacy policy, Life360 states that, “You are the only one with the ability to see your Tile location and your device location.” But if the information from a tracker is sent to and stored by Tile in cleartext (i.e. unencrypted text) as the researchers believe, then the company itself can see the location of the tags and their owners, turning them from single item trackers into surveillance tools.
There are also issues with the “anti-theft mode” that Tile offers. The anti-theft setting hides the tracker from Tile’s “Scan and Secure” detection feature, so it can’t be easily found using the app. Ostensibly this is a feature meant to make it harder for a thief to just use the app to locate a tracker. In exchange for enabling the anti-theft feature, a user has to submit a photo ID and agree to pay a $1 million fine if they’re convicted of misusing the tracker.
But that’s only helpful if the stalker gets caught, which is a lot less likely when the person being tracked can’t use the anti-stalking protection feature in the app to find the tracker following them. As we’ve said before, it is impossible to make an anti-theft device that secretly notifies only the owner without also making a perfect tool for stalking.
Life360, the company that owns Tile, told Wired it “made a number of improvements” after the researchers reported them, but did not detail what those improvements are.
Many of these issues would be mitigated by doing what its competition is already doing: encrypting the broadcasts from its Bluetooth trackers and randomizing MAC addresses. Every company in the location tracker business has the responsibility to create safeguards for people, not just for their lost keys.