Feed aggregator
GOP to California: Build back better
Republicans blame DEI for the LA fires. This fire captain disagrees.
Cholera tops Africa’s disease emergency events as climate change fuels contagion
Germany links up with Interpol to stamp out environmental crimes ‘worth billions’
Carbon burial in sediments below seaweed farms matches that of Blue Carbon habitats
Nature Climate Change, Published online: 17 January 2025; doi:10.1038/s41558-024-02238-1
To understand the potential for seaweed as a Blue Carbon strategy, the authors quantify carbon burial under 20 globally distributed seaweed farms. They attribute an average of 1.06 ± 0.74 tCO2e ha−1 yr−1 to seaweed farms, and show increased accumulation of carbon with farm age.
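For a sense of scale, that burial rate can be applied to a single farm. In the sketch below, the rate and its uncertainty band come from the paper’s abstract, while the 100-hectare farm size is a hypothetical chosen purely for illustration:

```python
# Scaling the reported burial rate to a hypothetical farm.
rate_tco2e = 1.06        # tCO2e per hectare per year, from the paper
uncertainty = 0.74       # reported +/- band
farm_ha = 100            # assumed farm size, for illustration only

low = (rate_tco2e - uncertainty) * farm_ha
mid = rate_tco2e * farm_ha
high = (rate_tco2e + uncertainty) * farm_ha
print(f"Estimated burial: {mid:.0f} tCO2e/yr (range {low:.0f}-{high:.0f})")
# -> Estimated burial: 106 tCO2e/yr (range 32-180)
```

The wide range reflects the large reported uncertainty: site-to-site variation spans nearly the whole estimate.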
Explained: Generative AI’s environmental impact
In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.
The excitement surrounding potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled rapid deployment of powerful models in many industries, the environmental consequences of this generative AI “gold rush” remain difficult to pin down, let alone mitigate.
The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.
Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.
Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.
“When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take,” says Elsa A. Olivetti, professor in the Department of Materials Science and Engineering and the lead of the Decarbonization Mission of MIT’s new Climate Project.
Olivetti is senior author of a 2024 paper, “The Climate and Sustainability Implications of Generative AI,” co-authored by MIT colleagues in response to an Institute-wide call for papers that explore the transformative potential of generative AI, in both positive and negative directions for society.
Demanding data centers
The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.
A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which has about 50,000 servers that the company uses to support cloud computing services.
While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.
“What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload,” says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. That would have made data centers the 11th-largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organization for Economic Co-operation and Development.
By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).
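As a quick sanity check, the sketch below recomputes the year-over-year growth and the ranking placement using only the numbers quoted above; the short consumption list is illustrative, not a complete OECD dataset:

```python
# Back-of-the-envelope check of the data center figures cited above.
na_power_2022_mw = 2_688   # North American data center power, end of 2022
na_power_2023_mw = 5_341   # ... and end of 2023
growth = (na_power_2023_mw - na_power_2022_mw) / na_power_2022_mw
print(f"North American data center power growth, 2022-2023: {growth:.0%}")  # ~99%

# Where 460 TWh of annual consumption falls relative to the two
# countries named above (illustrative list, not a full OECD ranking):
consumers_twh = {"France": 463, "data centers (2022)": 460, "Saudi Arabia": 371}
for name, twh in sorted(consumers_twh.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{twh:>5} TWh  {name}")
```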
While not all data center computation involves generative AI, the technology has been a major driver of increasing energy demands.
“The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants,” says Bashir.
The power needed to train and deploy a model like OpenAI’s GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated the training process alone consumed 1,287 megawatt hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.
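The homes-equivalent claim can be verified with simple arithmetic. In the sketch below, the training energy and emissions come from the study cited above, while the average U.S. household consumption of roughly 10,500 kilowatt-hours per year is an assumed figure:

```python
# Sanity check on the GPT-3 training estimate cited above.
training_mwh = 1_287            # from the 2021 Google / UC Berkeley paper
co2_tons = 552                  # ditto
avg_home_kwh_per_year = 10_500  # assumed average U.S. household usage

homes_for_one_year = training_mwh * 1_000 / avg_home_kwh_per_year
print(f"Homes powered for a year: {homes_for_one_year:.0f}")  # ~123

# Implied carbon intensity of the electricity used for training:
print(f"Implied grid intensity: {co2_tons / training_mwh:.2f} tCO2/MWh")  # ~0.43
```

The implied intensity of about 0.43 tons of CO2 per megawatt-hour is consistent with a moderately fossil-heavy grid mix.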
While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuations in energy use that occur over different phases of the training process, Bashir explains.
Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.
Increasing impacts from inference
Once a generative AI model is trained, the energy demands don’t disappear.
Each time a model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.
“But an everyday user doesn’t think too much about that,” says Bashir. “The ease-of-use of generative AI interfaces and the lack of information about the environmental impacts of my actions means that, as a user, I don’t have much incentive to cut back on my use of generative AI.”
With traditional AI, the energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.
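To see why inference can come to dominate, it helps to compare a one-time training cost against a recurring per-query cost. In the sketch below, the training figure is the GPT-3 estimate quoted earlier, while the per-query energy and the daily query volume are illustrative assumptions, not measurements:

```python
# Illustrative crossover point: when cumulative inference energy
# overtakes the one-time training energy. Per-query energy and query
# volume are assumed values, chosen only to show the shape of the math.
training_wh = 1_287 * 1_000_000   # 1,287 MWh expressed in watt-hours
wh_per_query = 3.0                # assumed energy per generative AI query
queries_per_day = 10_000_000      # assumed daily query volume

days_to_match_training = training_wh / (wh_per_query * queries_per_day)
print(f"Inference matches training energy after ~{days_to_match_training:.0f} days")
# With these assumptions, about 43 days -- after which every additional
# day of serving costs more energy than training ever did.
```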
Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.
While electricity demands of data centers may be getting the most attention in research literature, the amount of water consumed by these facilities has environmental impacts, as well.
Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt-hour of energy a data center consumes, it needs about two liters of water for cooling, says Bashir.
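Taken at face value, the two-liters-per-kilowatt-hour figure adds up quickly. The sketch below applies it to a hypothetical facility drawing one megawatt continuously; only the liters-per-kilowatt-hour ratio comes from the estimate above:

```python
# Rough annual cooling-water estimate from the 2 L/kWh figure above.
# The 1 MW facility size is a hypothetical, chosen for illustration.
liters_per_kwh = 2.0
facility_power_mw = 1.0
hours_per_year = 24 * 365

annual_kwh = facility_power_mw * 1_000 * hours_per_year      # 8,760,000 kWh
annual_liters = annual_kwh * liters_per_kwh
print(f"Annual cooling water: {annual_liters / 1e6:.1f} million liters")  # ~17.5
```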
“Just because this is called ‘cloud computing’ doesn’t mean the hardware lives in the cloud. Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity,” he says.
The computing hardware inside data centers brings its own, less direct environmental impacts.
While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU because the fabrication process is more complex. A GPU’s carbon footprint is compounded by the emissions related to material and product transport.
There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.
Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022. That number is expected to have increased by an even greater percentage in 2024.
The industry is on an unsustainable path, but there are ways to encourage responsible development of generative AI that supports environmental objectives, Bashir says.
He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.
“We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven’t had a chance to catch up with our abilities to measure and understand the tradeoffs,” Olivetti says.
MIT Global SCALE Network named No. 1 supply chain and logistics master’s program for 2024-25
The MIT Global Supply Chain and Logistics Excellence (SCALE) Network has once again been ranked as the world’s top master’s program for supply chain and logistics management by Eduniversal’s 2024/2025 Best Masters Rankings. This recognition marks the eighth consecutive No. 1 ranking since 2016, reaffirming MIT’s unparalleled leadership in supply chain education, research, and practice.
Eduniversal evaluates more than 20,000 postgraduate programs globally each year, considering academic reputation, graduate employability, and student satisfaction.
The MIT SCALE Network’s sustained top ranking reflects its commitment to fostering international diversity, delivering hands-on, project-based learning, and developing a generation of supply chain leaders ready to tackle global supply chain challenges.
A growing global network with local impact
This year’s ranking coincides with the MIT SCALE Network’s expansion of its global footprint, highlighted by the recent announcement of the UK SCALE Center at Loughborough University. The center, which will welcome its inaugural cohort in fall 2025, underscores MIT’s commitment to advancing supply chain innovation and creating transformative opportunities for students and researchers.
The UK SCALE Center joins the network’s global community of centers in the United States, China, Spain, Colombia, and Luxembourg. Together, these centers deliver world-class education and practical solutions that address critical supply chain challenges across industries, empowering a global alumni base of more than 1,900 leaders representing over 50 different countries.
"The launch of the UK SCALE Center represents a fantastic opportunity for Loughborough University to showcase our cutting-edge research and data-driven, forward-thinking approach to supporting the U.K. supply chain industry,” says Jan Godsell, dean of Loughborough Business School. “Through projects like the InterAct Network and our implementation of the Made Smarter Innovation 'Leading Digital Transformation' program, we’re offering businesses and industry professionals the essential training and leading insights into the future of the supply chain ecosystem, which I’m excited to build on with the creation of this new MSc in supply chain management."
Other MIT SCALE centers also emphasized the network’s transformative impact:
“The MIT SCALE Network provides NISCI students with the tools, expertise, and global connections to lead in today’s rapidly evolving supply chain environment,” says Jay Guo, director of the Ningbo China Institute for Supply Chain Innovation.
Susana Val, director of Zaragoza Logistics Center (ZLC), highlights the program’s reach and influence: “For the last 21 years, ZLC has educated over 5,000 logistics professionals from more than 90 nationalities. We are proud of this recognition and look forward to continuing our alliance with the MIT SCALE Network, upholding the rigor and quality that define our teaching.”
From Luxembourg, Benny Mantin, director of the Luxembourg Center for Logistics and Supply Chain Management (LCL), adds, “Our students greatly appreciate the LCL’s SCALE Network membership as it provides them with superb experience and ample opportunities to network and expand their scope.”
The global presence and collaborative approach of the MIT SCALE Network help define its mission: to deliver education and research that drive transformative impact in every corner of the world.
A legacy of leadership
This latest recognition from Eduniversal underscores the MIT SCALE Network’s leadership in supply chain education. For over two decades, its master’s programs have shaped graduates who tackle pressing challenges across industries and geographies.
"This recognition reflects the dedication of our faculty, researchers, and global partners to delivering excellence in supply chain education," says Yossi Sheffi, director of the MIT Center for Transportation and Logistics. “The MIT SCALE Network’s alumni are proof of the program’s impact, applying their skills to tackle challenges across every industry and continent.”
Maria Jesus Saenz, executive director of the MIT SCM Master’s Program, emphasizes the strength of the global alumni network: “The MIT SCALE Network doesn’t just prepare graduates — it connects them to a global community of supply chain leaders. This powerful ecosystem fosters collaboration and innovation that transcends borders, enabling our graduates to tackle the world’s most pressing supply chain challenges.”
Founded in 2003, the MIT SCALE Network connects world-class research centers across multiple continents, offering top-ranked master’s and executive education programs that combine academic rigor with real-world application. Graduates are among the most sought-after professionals in the global supply chain field.
Systemic Risk Reporting: A System in Crisis?
The first batch of reports assessing the so-called “systemic risks” posed by the largest online platforms are in. These reports are a result of the Digital Services Act (DSA), Europe’s new law regulating platforms like Google, Meta, Amazon or X, and have been eagerly awaited by civil society groups across the globe. In their reports, companies are supposed to assess whether their services contribute to a wide range of barely defined risks. These go beyond the dissemination of illegal content and include vaguely defined categories such as negative effects on the integrity of elections, impediments to the exercise of fundamental rights, or the undermining of civic discourse. We have previously warned that the subjectivity of these categories invites a politicization of the DSA.
In view of a new DSA investigation into TikTok’s potential role in Romania’s presidential election, we take a look at the reports and the framework that has produced them to understand their value and limitations.
A Short DSA Explainer
The DSA covers a lot of different services. It regulates online markets like Amazon or Shein, social networks like Instagram and TikTok, search engines like Google and Bing, and even app stores like those run by Apple and Google. Different obligations apply to different services, depending on their type and size. Generally, the lower the degree of control a service provider has over content shared via its product, the fewer obligations it needs to comply with.
For example, hosting services like cloud computing must provide points of contact for government authorities and users, and publish basic transparency reports. Online platforms, meaning any service that makes user-generated content available to the public, must meet additional requirements, like providing users with detailed information about content moderation decisions and the right to appeal. They must also comply with additional transparency obligations.
While the DSA is a necessary update to the EU’s liability rules and improves users’ rights, we have plenty of concerns about the route it takes:
- We worry about the powers it gives to authorities to request user data and the obligation on providers to proactively share user data with law enforcement.
- We are also concerned about the ways in which trusted flaggers could lead to the over-removal of speech, and
- We caution against the misuse of the DSA’s mechanism to deal with emergencies like a pandemic.
The most stringent DSA obligations apply to large online platforms and search engines that have more than 45 million users in the EU. The European Commission has so far designated more than 20 services as such “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). These companies, which include X, TikTok, Amazon, Google Search, Maps and Play, YouTube, and several porn platforms, must proactively assess and mitigate “systemic risks” related to the design, operation and use of their services. The DSA’s non-exhaustive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental rights, 3) threats to elections, civic discourse and public safety, and 4) negative effects and consequences in relation to gender-based violence, the protection of minors and public health, and on a person’s physical and mental well-being.
The DSA does not provide much guidance on how VLOPs and VLOSEs are supposed to analyze whether they contribute to this somewhat arbitrary-seeming list of risks. Nor does the law offer clear definitions of how these risks should be understood, leading to concerns that they could be interpreted widely and lead to the extensive removal of lawful but awful content. There is equally little guidance on risk mitigation: the DSA merely names a few measures that platforms can choose to employ. Some of these recommendations are incredibly broad, such as adapting the design, features or functioning of a service, or “reinforcing internal processes”. Others, like introducing age verification measures, are much more specific but come with a host of issues and can undermine fundamental rights themselves.
Risk Management Through the Lens of the Romanian Election
Per the DSA, platforms must annually publish reports detailing how they have analyzed and managed risks. These reports are complemented by separate reports compiled by external auditors, tasked with assessing platforms’ compliance with the risk management and other obligations put forward by the DSA.
To better understand the merits and limitations of these reports, let’s examine the example of the recent Romanian election. In late November 2024, an ultranationalist and pro-Russian candidate, Calin Georgescu, unexpectedly won the first round of Romania’s presidential election. After local civil society groups accused TikTok of amplifying pro-Georgescu content, and Romania’s intelligence services published a declassified brief alleging cyberattacks and influence operations, the Romanian constitutional court annulled the results of the election. Shortly after, the European Commission opened formal proceedings against TikTok for insufficiently managing systemic risks related to the integrity of the Romanian election. Specifically, the Commission’s investigation focuses on “TikTok's recommender systems, notably the risks linked to the coordinated inauthentic manipulation or automated exploitation of the service and TikTok's policies on political advertisements and paid-for political content.”
TikTok’s own risk assessment report dedicates eight pages to potential negative effects on elections and civic discourse. Curiously, TikTok’s definition of this particular category of risk focuses on the spread of election misinformation but makes no mention of coordinated inauthentic behavior or the manipulation of its recommender systems. This illustrates the wide margin platforms have to define systemic risks and implement their own mitigation strategies. Leaving it up to platforms to define relevant risks not only makes it impossible to compare the approaches taken by different companies; it can also lead to overly broad or narrow approaches, potentially undermining fundamental rights or running counter to the obligation to effectively deal with risks, as in this example. It should also be noted that mis- and disinformation are terms not defined by international human rights law, and are therefore not well suited as a robust basis on which freedom of expression may be restricted.
In its report, TikTok describes the measures taken to mitigate potential risks to elections and civic discourse. This overview broadly covers some election-specific interventions, like labels for content that has not been fact-checked but might contain misinformation, and outlines TikTok’s policies, like its ban on political ads, which is notoriously easy to circumvent. It gives no indication that the robustness and utility of these measures have been documented or tested, nor any benchmark for when TikTok considers a risk successfully mitigated. It does not, for example, contain figures on how many pieces of content receive certain labels, or how those labels influence users’ interactions with the content in question.
Similarly, the report does not contain any data on the efficacy of TikTok’s enforcement of its political ads ban. TikTok’s “methodology” for risk assessments, also included in the report, does not answer any of these questions either. And the report compiled by the external auditor, in this case KPMG, leaves us equally disappointed: KPMG concluded that it was impossible to assess TikTok’s systemic risk compliance because of two earlier, still-pending European Commission investigations into potential non-compliance with the systemic risk mitigation obligations.
Limitations of the DSA’s Risk Governance Approach
What, then, is the value of the risk and audit reports, published roughly a year after their finalization? The answer may be very little.
As explained above, companies have a lot of flexibility in how to assess and deal with risks. On the one hand, some degree of flexibility is necessary: every VLOP and VLOSE differs significantly in terms of product logics, policies, user base and design choices. On the other hand, the high degree of flexibility in determining what exactly a systemic risk is can lead to significant inconsistencies and render risk analysis unreliable. It also allows regulators to put forward their own definitions, thereby potentially expanding risk categories as they see fit to deal with emerging or politically salient issues.
Rather than making sense of diverse and possibly conflicting definitions of risks, companies and regulators should put forward joint benchmarks, and include civil society experts in the process.
Speaking of benchmarks: there is a critical lack of standardized processes, assessment methodologies and reporting templates. Most assessment reports contain very little information on how the actual assessments were carried out, and the auditors’ reports distinguish themselves through an almost complete lack of insight into the auditing process itself. This information is crucial: it is near impossible to adequately scrutinize the reports without knowing whether auditors were provided the necessary information, whether they ran into roadblocks when looking at specific issues, and how evidence was produced and documented. And without methodologies that are applicable across the board, it will remain very challenging, if not impossible, to compare the approaches taken by different companies.
The TikTok example shows that the risk and audit reports do not contain the “smoking gun” some might have hoped for. Besides the shortcomings explained above, this is due to the inherent limitations of the DSA itself. Although the DSA attempts to take a holistic approach to complex societal risks that cut across different but interconnected challenges, its reporting system can consider only the obligations the DSA itself puts forward. Any legal assessment framework will struggle to capture complex societal challenges like the integrity of elections or public safety. In addition, phenomena as complex as electoral processes and civic discourse are shaped by a range of different legal instruments, including European rules on political ads, data protection, cybersecurity and media pluralism, not to mention countless national laws. A risk report will therefore always fall short of delivering a definitive answer about the implications of large online services for complex societal processes.
The Way Forward
The reports do represent a slight improvement in companies’ accountability and transparency. Even if they may not include the hard evidence of non-compliance some had hoped for, they are a starting point for understanding how platforms attempt to grapple with the complex issues unfolding on their services. As such, they are, at best, the basis for an iterative approach to compliance. But many of the risks the DSA describes as systemic, and their relationships with online services, are still poorly understood.
Instead of relying on platforms or regulators to define how risks should be conceptualized and mitigated, a joint approach is needed—one that builds on expertise by civil society, academics and activists, and emphasizes best practices. A collaborative approach would help make sense of these complex challenges and how they can be addressed in ways that strengthen users’ rights and protect fundamental rights.
Judge dismisses NYC’s climate case against the fossil fuel industry
Trump transition official faced sexual misconduct accusation in 2020, DOE inspector general says
FBI Deletes PlugX Malware from Thousands of Computers
According to a DOJ press release, the FBI was able to delete the PlugX malware, used by China-based hackers, from “approximately 4,258 U.S.-based computers and networks.”
To retrieve information from and send commands to the hacked machines, the malware connects to a command-and-control server that is operated by the hacking group. According to the FBI, at least 45,000 IP addresses in the US had back-and-forths with the command-and-control server since September 2023.
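That check-in mechanism is what later made remote cleanup possible: infected machines execute whatever commands the command-and-control server hands them, so whoever controls the server controls the malware. The sketch below is a generic, highly simplified illustration of that pattern; PlugX’s actual protocol is custom and encrypted, and none of the names or endpoints here are real:

```python
# Minimal conceptual sketch of a seized command-and-control server that
# answers every beacon check-in with a self-delete instruction -- the
# general remediation pattern described above, NOT PlugX's real protocol.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SeizedC2Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Whatever an infected machine asks for, respond with one command.
        body = json.dumps({"command": "self_delete"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Hypothetical local address; a real takedown operates on the
    # actual seized infrastructure under court authorization.
    HTTPServer(("127.0.0.1", 8080), SeizedC2Handler).serve_forever()
```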
It was that very server that allowed the FBI to finally kill this pesky bit of malicious software. First, they tapped the know-how of French intelligence agencies, which had ...