Feed aggregator
Continent-wide mapping shows increasing sensitivity of East Antarctica to meltwater ponding
Nature Climate Change, Published online: 04 July 2025; doi:10.1038/s41558-025-02363-5
This study provides a continent-wide assessment of surface meltwater area in Antarctica between 2006 and 2021, highlighting recent increases in magnitude and variability in East Antarctica, with indications that the ice-sheet surface is becoming increasingly prone to further meltwater ponding.
Rapid increases in satellite-observed ice sheet surface meltwater production
Nature Climate Change, Published online: 04 July 2025; doi:10.1038/s41558-025-02364-4
Surface melt is an important component of ice sheet dynamics, but for many remote regions the melt rates are mainly known from models. Here the authors present satellite observations of melt rates for Greenland and Antarctica, showing that East Antarctica has become a melting hotspot.
MIT and Mass General Hospital researchers find disparities in organ acceptance
In 1954, the world’s first successful organ transplant took place at Brigham and Women’s Hospital, in the form of a kidney donated from one twin to the other. At the time, a group of doctors and scientists had correctly theorized that the recipient’s antibodies were unlikely to reject an organ from an identical twin. One Nobel Prize and a few decades later, advancements in immune-suppressing drugs increased the viability of and demand for organ transplants. Today, over 1 million organ transplants have been performed in the United States, more than any other country in the world.
The impressive scale of this achievement was made possible due to advances in organ matching systems: The first computer-based organ matching system was released in 1977. Despite continued innovation in computing, medicine, and matching technology over the years, over 100,000 people in the U.S. are currently on the national transplant waiting list and 13 people die each day waiting for an organ transplant.
Most computational research in organ allocation is focused on the initial stages, when waitlisted patients are being prioritized for organ transplants. In a new paper presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Athens, Greece, researchers from MIT and Massachusetts General Hospital focused on the final, less-studied stage: organ offer acceptance, when an offer is made and the physician at the transplant center decides on behalf of the patient whether to accept or reject the offered organ.
“I don’t think we were terribly surprised, but we were obviously disappointed,” co-first author and MIT PhD student Hammaad Adam says. Using computational models to analyze transplantation data from over 160,000 transplant candidates in the Scientific Registry of Transplant Recipients (SRTR) between 2010 and 2020, the researchers found that physicians were overall less likely to accept liver and lung offers on behalf of Black candidates, resulting in additional barriers for Black patients in the organ offer acceptance process.
For livers, Black patients had 7 percent lower odds of offer acceptance than white patients. For lungs, the disparity was even larger: Black patients had 20 percent lower odds of offer acceptance than white patients with similar characteristics.
The data don’t necessarily point to clinician bias as the main influence. “The bigger takeaway is that even if there are factors that justify clinical decision-making, there could be clinical conditions that we didn’t control for, that are more common for Black patients,” Adam explains. If the wait-list system fails to account for certain patterns in decision-making, those patterns could create obstacles in the process even if the process itself is “unbiased.”
The researchers also point out that high variability in offer acceptance and risk tolerances among transplant centers is a potential factor complicating the decision-making process. Their FAccT paper references a 2020 paper published in JAMA Cardiology, which concluded that wait-list candidates listed at transplant centers with lower offer acceptance rates have a higher likelihood of mortality.
Another key finding was that an offer was more likely to be accepted if the donor and candidate were of the same race. The paper describes this trend as “concerning,” given the historical inequities in organ procurement that have limited donation from racial and ethnic minority groups.
Previous work from Adam and his collaborators has aimed to address this gap. Last year, they compiled and released Organ Retrieval and Collection of Health Information for Donation (ORCHID), the first multi-center dataset describing the performance of organ procurement organizations (OPOs). ORCHID contains 10 years’ worth of OPO data, and is intended to facilitate research that addresses bias in organ procurement.
“Being able to do good work in this field takes time,” says Adam, who notes that the entirety of the organ offer acceptance project took years to complete. To his knowledge, only one paper to date studies the association between offer acceptance and race.
While the bureaucratic and highly interdisciplinary nature of clinical AI projects can dissuade computer science graduate students from pursuing them, Adam committed to the project for the duration of his PhD in the lab of associate professor of electrical engineering Marzyeh Ghassemi, an affiliate of the MIT Jameel Clinic and the Institute for Medical Engineering and Science.
To graduate students interested in pursuing clinical AI research projects, Adam recommends that they “free [themselves] from the cycle of publishing every four months.”
“I found it freeing, to be honest — it’s OK if these collaborations take a while,” he says. “It’s hard to avoid that. I made the conscious choice a few years ago and I was happy doing that work.”
This work was supported with funding from the MIT Jameel Clinic. It was also supported, in part, by Takeda Development Center Americas Inc. (successor in interest to Millennium Pharmaceuticals Inc.), an NIH Ruth L. Kirschstein National Research Service Award, a CIFAR AI Chair at the Vector Institute, and by the National Institutes of Health.
Surveillance Used by a Drug Cartel
Once you build a surveillance system, you can’t control who will use it:
A hacker working for the Sinaloa drug cartel was able to obtain an FBI official’s phone records and use Mexico City’s surveillance cameras to help track and kill the agency’s informants in 2018, according to a new US justice department report.
The incident was disclosed in a justice department inspector general’s audit of the FBI’s efforts to mitigate the effects of “ubiquitous technical surveillance,” a term used to describe the global proliferation of cameras and the thriving trade in vast stores of communications, travel, and location data...
California asks judge to reject push to halt climate disclosure laws
Trump’s science guidelines could amplify climate skeptics
Workers died in high heat as OSHA debates protections
Human rights court will decide sweeping climate case today
Megabill passage all but assured
Republicans' megabill not so beautiful for the climate, analysts find
Sabin Center, activists launch state-focused ‘Model Climate Laws Initiative’
Brussels proposes softened 90 percent 2040 climate target
EU set to offer shields for exporters under carbon border levy
JPMorgan executive says US pressure can’t derail climate agenda
Study: Babies’ poor vision may help organize visual brain pathways
Incoming information from the retina is channeled into two pathways in the brain’s visual system: one that’s responsible for processing color and fine spatial detail, and another that’s involved in spatial localization and detecting high temporal frequencies. A new study from MIT provides an account of how these two pathways may be shaped by developmental factors.
Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. This means that early in life, they are seeing blurry, color-reduced imagery. The MIT team proposes that such blurry, color-limited vision may result in some brain cells specializing in low spatial frequencies and low color tuning, corresponding to the so-called magnocellular system. Later, with improved vision, cells may tune to finer details and richer color, consistent with the other pathway, known as the parvocellular system.
To test their hypothesis, the researchers trained computational models of vision on a trajectory of input similar to what human babies receive early in life — low-quality images early on, followed by full-color, sharper images later. They found that these models developed processing units with receptive fields exhibiting some similarity to the division of magnocellular and parvocellular pathways in the human visual system. Vision models trained on only high-quality images did not develop such distinct characteristics.
“The findings potentially suggest a mechanistic account of the emergence of the parvo/magno distinction, which is one of the key organizing principles of the visual pathway in the mammalian brain,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and the senior author of the study.
MIT postdocs Marin Vogelsang and Lukas Vogelsang are the lead authors of the study, which appears today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also authors of the paper.
Sensory input
The idea that low-quality visual input might be beneficial for development grew out of studies of children who were born blind but later had their sight restored. An effort from Sinha’s laboratory, Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision loss such as cataracts are relatively common. After their sight is restored, many of these children volunteer to participate in studies in which Sinha and his colleagues track their visual development.
In one of these studies, the researchers found that children who had cataracts removed exhibited a marked drop in object-recognition performance when the children were presented with black and white images, compared to colored ones. Those findings led the researchers to hypothesize that the reduced color input characteristic of early typical development, far from being a hindrance, allows the brain to learn to recognize objects even in images that have impoverished or shifted colors.
“Denying access to rich color at the outset seems to be a powerful strategy to build in resilience to color changes and make the system more robust against color loss in images,” Sinha says.
In that study, the researchers also found that when computational models of vision were initially trained on grayscale images, followed by color images, their ability to recognize objects was more robust than that of models trained only on color images. Similarly, another study from the lab found that models performed better when they were trained first on blurry images, followed by sharper images.
To build on those findings, the MIT team wanted to explore what might be the consequences of both of those features — color and visual acuity — being limited at the outset of development. They hypothesized that these limitations might contribute to the development of the magnocellular and parvocellular pathways.
In addition to being highly attuned to color, cells in the parvocellular pathway have small receptive fields, meaning that they receive input from more compact clusters of retinal ganglion cells. This helps them to process fine detail. Cells in the magnocellular pathway pool information across larger areas, allowing them to process more global spatial information.
To test their hypothesis that developmental progressions could contribute to the magno and parvo cell selectivities, the researchers trained models on two different sets of images. One model was presented with a standard dataset of images that are used to train models to categorize objects. The other dataset was designed to roughly mimic the input that the human visual system receives from birth. This “biomimetic” data consists of low-resolution, grayscale images in the first half of the training, followed by high-resolution, colorful images in the second half.
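As a rough illustration of that training recipe, the sketch below implements a developmental curriculum in PyTorch/torchvision: grayscale, low-resolution inputs for the first half of training and full-color, full-resolution inputs afterward. The resolutions, epoch split, and input size are illustrative assumptions, not the parameters used in the study.

```python
# Minimal sketch of a "biomimetic" developmental curriculum (illustrative values only).
from torchvision import transforms

def developmental_transform(epoch: int, total_epochs: int = 100) -> transforms.Compose:
    """Low-resolution grayscale inputs for the first half of training,
    full-resolution color inputs for the second half."""
    if epoch < total_epochs // 2:
        return transforms.Compose([
            transforms.Grayscale(num_output_channels=3),  # strip color, keep 3 channels for the network
            transforms.Resize(32),   # discard fine spatial detail (acts like blur)
            transforms.Resize(224),  # upsample back to the model's expected input size
            transforms.ToTensor(),
        ])
    return transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
    ])

# Usage: refresh the dataset's transform at the start of each epoch, e.g.
#   dataset.transform = developmental_transform(epoch)
```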
After the models were trained, the researchers analyzed the models’ processing units — nodes within the network that bear some resemblance to the clusters of cells that process visual information in the brain. They found that the models trained on the biomimetic data developed a distinct subset of units that are jointly responsive to low-color and low-spatial-frequency inputs, similar to the magnocellular pathway. Additionally, these biomimetic models exhibited groups of more heterogeneous parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. Such a distinction did not emerge in the models trained on full-color, high-resolution images from the start.
“This provides some support for the idea that the ‘correlation’ we see in the biological system could be a consequence of the types of inputs that are available at the same time in normal development,” Lukas Vogelsang says.
Object recognition
The researchers also performed additional tests to reveal what strategies the differently trained models were using for object recognition tasks. In one, they asked the models to categorize images of objects where the shape and texture did not match — for example, an animal with the shape of a cat but the texture of an elephant.
This is a technique several researchers in the field have employed to determine which image attributes a model is using to categorize objects: the overall shape or the fine-grained textures. The MIT team found that models trained on biomimetic input were markedly more likely to use an object’s shape to make those decisions, just as humans usually do. Moreover, when the researchers systematically removed the magnocellular-like units from the models, the models quickly lost their tendency to use shape to make categorizations.
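One way such an ablation could be implemented, sketched below under stated assumptions, is to zero out the identified feature-map channels with a forward hook and rerun the cue-conflict evaluation. The network, layer, and channel indices here are placeholders rather than the units identified in the paper.

```python
# Sketch of ablating "magnocellular-like" units via a forward hook (placeholder model and indices).
import torch
import torchvision.models as models

model = models.resnet50(weights=None)  # stand-in architecture; the study's models may differ
model.eval()

# Hypothetical channel indices; in the study these would come from the
# spatial-frequency and color tuning analysis, not be picked by hand.
magno_like_channels = [3, 17, 42, 101]

def ablate_channels(module, inputs, output):
    # Zero the selected feature-map channels so they contribute nothing downstream.
    output[:, magno_like_channels] = 0
    return output

hook = model.layer3.register_forward_hook(ablate_channels)

with torch.no_grad():
    batch = torch.randn(8, 3, 224, 224)  # placeholder for shape/texture cue-conflict images
    logits = model(batch)                # compare shape- vs. texture-based decisions with and without the hook

hook.remove()
```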
In another set of experiments, the researchers trained the models on videos instead of images, which introduces a temporal dimension. In addition to low spatial resolution and color sensitivity, the magnocellular pathway responds to high temporal frequencies, allowing it to quickly detect changes in the position of an object. When models were trained on biomimetic video input, the units most tuned to high temporal frequencies were indeed the ones that also exhibited magnocellular-like properties in the spatial domain.
Overall, the results support the idea that low-quality sensory input early in life may contribute to the organization of sensory processing pathways of the brain, the researchers say. The findings do not rule out innate specification of the magno and parvo pathways, but provide a proof of principle that visual experience over the course of development could also play a role.
“The general theme that seems to be emerging is that the developmental progression that we go through is very carefully structured in order to give us certain kinds of perceptual proficiencies, and it may also have consequences in terms of the very organization of the brain,” Sinha says.
The research was funded by the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.
Promoting targeted heat early warning systems for at-risk populations
Nature Climate Change, Published online: 03 July 2025; doi:10.1038/s41558-025-02374-2
Extreme heat poses a growing threat to vulnerable urban populations, and the existing heat early warning system usually operates at population level. Pairing emerging individualized and population early warning systems could directly and meaningfully extend protection to those most in need.
A new platform for developing advanced metals at scale
Companies building next-generation products and breakthrough technologies are often limited by the physical constraints of traditional materials. In aerospace, defense, energy, and industrial tooling, pushing those constraints introduces possible failure points into the system, but companies don’t have better options, given that producing new materials at scale involves multiyear timelines and huge expenses.
Foundation Alloy wants to break the mold. The company, founded by a team from MIT, is capable of producing a new class of ultra-high-performance metal alloys using a novel production process that doesn’t rely on melting raw materials. The company’s solid-state metallurgy technology, which simplifies development and manufacturing of next-generation alloys, was developed over many years of research by former MIT professor Chris Schuh and collaborators.
“This is an entirely new approach to making metals,” says CEO Jake Guglin MBA ’19, who co-founded Foundation Alloy with Schuh, Jasper Lienhard ’15, PhD ’22, and Tim Rupert PhD ’11. “It gives us a broad set of rules on the materials engineering side that allows us to design a lot of different compositions with previously unattainable properties. We use that to make products that work better for advanced industrial applications.”
Foundation Alloy says its metal alloys can be made twice as strong as traditional metals, with 10 times faster product development, allowing companies to test, iterate, and deploy new metals into products in months instead of years.
The company is already designing metals and shipping demonstration parts to companies manufacturing components for things like planes, bikes, and cars. It’s also making test parts for partners in industries with longer development cycles, such as defense and aerospace.
Moving forward, the company believes its approach enables companies to build higher-performing, more reliable systems, from rockets to cars, nuclear fusion reactors, and artificial intelligence chips.
“For advanced systems like rocket and jet engines, if you can run them hotter, you can get more efficient use of fuel and a more powerful system,” Guglin says. “The limiting factor is whether or not you have structural integrity at those higher temperatures, and that is fundamentally a materials problem. Right now, we’re also doing a lot of work in advanced manufacturing and tooling, which is the unsexy but super critical backbone of the industrial world, where being able to push properties up without multiplying costs can unlock efficiencies in operations, performance, and capacity, all in a way that’s only possible with different materials.”
From MIT to the world
Schuh joined MIT’s faculty in 2002 to study the processing, structure, and properties of metal and other materials. He was named head of the Department of Materials Science and Engineering in 2011 before becoming dean of engineering at Northwestern University in 2023, after more than 20 years at MIT.
“Chris wanted to look at metals from different perspectives and make things more economically efficient and higher performance than what’s possible with traditional processes,” Guglin says. “It wasn’t just for academic papers — it was about making new methods that would be valuable for the industrial world.”
Rupert and Lienhard completed their PhDs in Schuh’s lab, and Rupert, as a professor at the University of California at Irvine, went on to invent technologies complementary to the solid-state processes developed by Schuh and his collaborators.
Guglin came to MIT’s Sloan School of Management in 2017 eager to work with high-impact technologies.
“I wanted to go somewhere where I could find the types of fundamental technological breakthroughs that create asymmetric value — the types of things where if they didn’t happen here, they weren’t going to happen anywhere else,” Guglin recalls.
In one of his classes, a PhD student in Schuh’s lab practiced his thesis defense by describing his research on a new way to create metal alloys.
“I didn’t understand any of it — I have a philosophy background,” Guglin says. “But I heard ‘stronger metals’ and I saw the potential of this incredible platform Chris’ lab was working on, and it tied into exactly why I wanted to come to MIT.”
Guglin connected with Schuh, and the pair stayed in touch over the next several years as Guglin graduated and went to work for aerospace companies SpaceX and Blue Origin, where he saw firsthand the problems being caused by the metal parts supply chain.
In 2022, the pair finally decided to launch a company, adding Rupert and Lienhard and licensing technology from MIT and UC Irvine.
The founders’ first challenge was scaling up the technology.
“There’s a lot of process engineering to go from doing something once at 5 grams to doing it 100 times a week at 100 kilograms per batch,” Guglin says.
Today, Foundation Alloy starts with its customers’ material requirements and decides on a precise mixture of the powdered raw materials that every metal starts out as. From there, it uses a specialized industrial mixer — Guglin calls it an industrial KitchenAid blender — to create a metal powder that is homogeneous down to the atomic level.
“In our process, from raw material all the way through to the final part, we never melt the metal,” Guglin says. “That is uncommon if not unknown in traditional metal manufacturing.”
From there, the company’s material can be solidified using traditional methods like metal injection molding, pressing, or 3D printing. The final step is sintering in a furnace.
“We also do a lot of work around how the metal reacts in the sintering furnace,” Guglin says. “Our materials are specifically designed to sinter at relatively low temperatures, relatively quickly, and all the way to full density.”
The advanced sintering process uses an order of magnitude less heat, saving on costs while allowing the company to forego secondary processes for quality control. It also gives Foundation Alloy more control over the microstructure of the final parts.
“That’s where we get a lot of our performance boost from,” Guglin says. “And by not needing those secondary processing steps, we’re saving days if not weeks in addition to the costs and energy savings.”
A foundation for industry
Foundation Alloy is currently piloting its metals across the industrial base and has also received grants to develop parts for critical components of nuclear fusion reactors.
“The name Foundation Alloy in a lot of ways came from wanting to be the foundation for the next generation of industry,” Guglin says.
Unlike in traditional metals manufacturing, where new alloys require huge investments to scale, Guglin says the company’s process for developing new alloys is nearly the same as its production processes, allowing it to scale new materials production far more quickly.
“At the core of our approach is looking at problems like material scientists with a new technology,” Guglin says. “We’re not beholden to the idea that this type of steel must solve this type of problem. We try to understand why that steel is failing and then use our technology to solve the problem in a way that produces not a 10 percent improvement, but a two- or five-times improvement in terms of performance.”
Confronting the AI/energy conundrum
The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.
“We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as “local problems with electric supply and meeting our clean energy targets,” while seeking to “reap the benefits of AI without some of the harms.” The challenge of data center energy demand and the potential benefits of AI for the energy transition are both research priorities for MITEI.
AI’s startling energy demands
From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation's electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12-15 percent by 2030, largely driven by artificial intelligence applications.
Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”
Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven both by casual and institutional research needs relying on large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”
“The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and the former director at the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.
Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.
Strategies for clean energy solutions
The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.
Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist. Gençer’s analysis revealed that the central United States offers considerably lower costs due to complementary solar and wind resources. However, achieving zero-emission power would require massive battery deployments — five to 10 times more than moderate carbon scenarios — driving costs two to three times higher.
“If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.
Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how their needs for both reliability and carbon-free electricity are reshaping the power industry.
Can AI accelerate the energy transition?
Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT's Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”
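As a generic illustration of the idea of embedding physics into a neural network (not the specific architecture Donti described), the sketch below trains a small model to map nodal power injections to voltage angles while penalizing violations of a simplified DC power-flow balance; the network size, susceptance matrix, and training data are assumed for illustration.

```python
# Sketch of a physics-penalized power-flow surrogate (all values illustrative).
import torch
import torch.nn as nn

n_bus = 4
# Susceptance matrix B of a small assumed network; DC power flow requires p = B @ theta.
B = torch.tensor([[ 3., -1., -1., -1.],
                  [-1.,  2., -1.,  0.],
                  [-1., -1.,  3., -1.],
                  [-1.,  0., -1.,  2.]])

model = nn.Sequential(nn.Linear(n_bus, 64), nn.ReLU(), nn.Linear(64, n_bus))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    p = torch.randn(32, n_bus)
    p = p - p.mean(dim=1, keepdim=True)  # net injections sum to zero, as DC power flow requires
    theta = model(p)                     # predicted voltage angles for each sample
    residual = theta @ B.T - p           # how badly each prediction violates p = B @ theta
    loss = residual.pow(2).mean()        # physics penalty drives predictions toward feasible flows
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, stricter constraint handling (for example, layers that complete or project onto the feasible set) is often used instead of a soft penalty, which is part of how such methods reach large speedups without sacrificing feasibility.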
AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said. Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which represent about 1 percent of global warming impact.
AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.
Securing growth with sustainability
Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article that suggested that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.
Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.
Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that have valuable grid connections already in place. These approaches could provide substantial clean capacity across the United States at reasonable costs while minimizing reliability impacts.
Navigating the AI-energy paradox
The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge.
Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.
Participants in the symposium were polled about priorities for MIT’s research by Randall Field, MITEI director of research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”
In addition, the polling revealed that most attendees view AI’s potential regarding power as a “promise,” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.
3 Questions: How MIT’s venture studio is partnering with MIT labs to solve “holy grail” problems
MIT Proto Ventures is the Institute’s in-house venture studio — a program designed not to support existing startups, but to create entirely new ones from the ground up. Operating at the intersection of breakthrough research and urgent real-world problems, Proto Ventures proactively builds startups that leverage MIT technologies, talent, and ideas to address high-impact industry challenges.
Each venture-building effort begins with a “channel” — a defined domain such as clean energy, fusion, or AI in health care — where MIT is uniquely positioned to lead, and where there are pressing real-world problems needing solutions. Proto Ventures hires full-time venture builders, deeply technical entrepreneurs who embed in MIT labs, connect with faculty, scout promising inventions, and explore unmet market needs. These venture builders work alongside researchers and aspiring founders from across MIT who are accepted into Proto Ventures’ fellowship program to form new teams, shape business concepts, and drive early-stage validation. Once a venture is ready to spin out, Proto Ventures connects it with MIT’s broader innovation ecosystem, including incubation programs, accelerators, and technology licensing.
David Cohen-Tanugi SM ’12, PhD ’15, has been the venture builder for the fusion and clean energy channel since 2023.
Q: What are the challenges of launching startups out of MIT labs? In other words, why does MIT need a venture studio?
A: MIT regularly takes on the world’s “holy grail” challenges, such as decarbonizing heavy industry, preventing future pandemics, or adapting to climate extremes. Yet despite this extraordinary depth of research, too few of the technical breakthroughs in MIT labs turn into commercial efforts targeting these highest-impact problems.
There are a few reasons for this. Right now, it takes a great deal of serendipity for a technology or idea in the lab to evolve into a startup project within the Institute’s ecosystem. Great startups don’t just emerge from great technology alone — they emerge from combinations of great technology, unmet market needs, and committed people.
A second reason is that many MIT researchers don’t have the time, professional incentives, or skill set to commercialize a technology. They often lack someone that they can partner with, someone who is technical enough to understand the technology but who also has experience bringing technologies to market.
Finally, while MIT excels at supporting entrepreneurial teams that are already in motion — thanks to world-class accelerators, mentorship services, and research funding programs — what’s missing is actually further upstream: a way to deliberately uncover and develop venture opportunities that haven’t even taken shape yet.
MIT needs a venture studio because we need a new, proactive model for research translation — one that breaks down silos and that bridges deep technical talent with validated market needs.
Q: How do you add value for MIT researchers?
A: As a venture builder, I act as a translational partner for researchers — someone who can take the lead on exploring commercial pathways in partnership with the lab. Many faculty and researchers believe their work could have real-world applications but don’t have the time, entrepreneurial expertise, or interested graduate students to pursue them. Proto Ventures fills that gap.
Having done my PhD studies at MIT a decade ago, I’ve seen firsthand how many researchers are interested in impact beyond academia but don’t know where to start. I help them think strategically about how their work fits into the real market, I break down tactical blockers such as intellectual property conversations or finding a first commercial partner, and I roll up my sleeves to do customer discovery, identify potential co-founders, or locate new funding opportunities. Even when the outcome isn’t a startup, the process often reveals new collaborators, use cases, or research directions. We’re not just scouting for IP — we’re building a deeper culture of tech translation at MIT, one lab at a time.
Q: What counts as a success?
A: We’ve launched five startups across two channels so far, including one that will provide energy-efficient propulsion systems for satellites and another that is developing advanced power supply units for data centers.
But counting startups is not the only way to measure impact. While embedded at the MIT Plasma Science and Fusion Center, I have engaged with 75 researchers in translational activities — many for the first time. For example, I’ve helped research scientist Dongkeun Park craft funding proposals for next-generation MRI and aircraft engines enabled by high-temperature superconducting magnets. Working with Mike Nour from the MIT Sloan Executive MBA program, we’ve also developed an innovative licensing strategy for Professor Michael P. Short and his antifouling coating technology. Sometimes it takes an outsider like me to connect researchers across departments, suggest a new collaboration, or unearth an overlooked idea. Perhaps most importantly, we’ve validated that this model works: embedding entrepreneurial scientists in labs changes how research is translated.
We’ve also seen that researchers are eager to translate their work — they just need a structure and a partner to help them do it. That’s especially true in the hard tech in which MIT excels. That’s what Proto Ventures offers. And based on our early results, we believe this model could be transformative not just for MIT, but for research institutions everywhere.