Feed aggregator

EU will work on setting water use caps for thirsty data centers

ClimateWire News - Fri, 05/16/2025 - 6:11am
The European Commission will propose the measure by the end of 2026 as part of a scheme to make data centers more sustainable.

European Central Bank official warns against undermining ESG rules

ClimateWire News - Fri, 05/16/2025 - 6:08am
The European Commission has proposed amendments to environmental, social and governance legislation amid complaints that the rules pose too great a regulatory burden on business.

In India, Indigenous women seek to protect lands from climate change

ClimateWire News - Fri, 05/16/2025 - 6:06am
The women have created what are known as dream maps, showing their villages in their ideal states.

The U.S. Copyright Office’s Draft Report on AI Training Errs on Fair Use

EFF: Updates - Fri, 05/16/2025 - 12:53am

Within the next decade, generative AI could join computers and electricity as one of the most transformational technologies in history, with all of the promise and peril that implies. Governments’ responses to GenAI—including new legal precedents—need to thoughtfully address real-world harms without destroying the public benefits GenAI can offer. Unfortunately, the U.S. Copyright Office’s rushed draft report on AI training misses the mark.

The Report Bungles Fair Use

Released amidst a set of controversial job terminations, the Copyright Office’s report covers a wide range of issues with varying degrees of nuance. But on the core legal question—whether using copyrighted works to train GenAI is a fair use—it stumbles badly. The report misapplies long-settled fair use principles and ultimately puts a thumb on the scale in favor of copyright owners at the expense of creativity and innovation.

To work effectively, today’s GenAI systems need to be trained on very large collections of human-created works—probably millions of them. At this scale, locating copyright holders and getting their permission is daunting for even the biggest and wealthiest AI companies, and impossible for smaller competitors. If training makes fair use of copyrighted works, however, then no permission is needed.

Right now, courts are considering dozens of lawsuits that raise the question of fair use for GenAI training. Federal District Judge Vince Chhabria is poised to rule on this question, after hearing oral arguments in Kadrey v. Meta Platforms. The Third Circuit Court of Appeals is expected to consider a similar fair use issue in Thomson Reuters v. Ross Intelligence. Courts are well-equipped to resolve this pivotal issue by applying existing law to specific uses and AI technologies.

Courts Should Reject the Copyright Office’s Fair Use Analysis

The report’s fair use discussion contains some fundamental errors that place a thumb on the scale in favor of rightsholders. Though the report is non-binding, it could influence courts, including in cases like Kadrey, where plaintiffs have already filed a copy of the report and urged the court to defer to its analysis.   

Courts, however, need accept the Copyright Office’s draft conclusions only if they are persuasive. They are not.

The Office’s fair use analysis is not one the courts should follow. It repeatedly conflates the use of works for training models—a necessary step in the process of building a GenAI model—with the use of the model to create substantially similar works. It also misapplies basic fair use principles and embraces a novel theory of market harm that has never been endorsed by any court.

The first problem is the Copyright Office’s transformative use analysis. Highly transformative uses—those that serve a different purpose than that of the original work—are very likely to be fair. Courts routinely hold that using copyrighted works to build new software and technology—including search engines, video games, and mobile apps—is a highly transformative use because it serves a new and distinct purpose. Here, the original works were created for various purposes and using them to train large language models is surely very different.

The report attempts to sidestep that conclusion by repeatedly ignoring the actual use in question—training—and focusing instead on how the model may be ultimately used. If the model is ultimately used primarily to create a class of works that are similar to the original works on which it was trained, the Office argues, then the intermediate copying can’t be considered transformative. This fundamentally misunderstands transformative use, which should turn on whether a model itself is a new creation with its own distinct purpose, not whether any of its potential uses might affect demand for a work on which it was trained—a dubious standard that runs contrary to decades of precedent.

The Copyright Office’s transformative use analysis also suggests that the fair use analysis should consider whether works were obtained in “bad faith,” and whether developers respected the right “to control” the use of copyrighted works.  But the Supreme Court is skeptical that bad faith has any role to play in the fair use analysis and has made clear that fair use is not a privilege reserved for the well-behaved. And rightsholders don’t have the right to control fair uses—that’s kind of the point.

Finally, the Office adopts a novel and badly misguided theory of “market harm.” Traditionally, the fair use analysis requires courts to consider the effects of the use on the market for the work in question. The Copyright Office suggests instead that courts should consider the overall effects of using models to produce generally similar works. By this logic, if a model was trained on a Bridgerton novel—among millions of other works—and was later used by a third party to produce romance novels, that might harm series author Julia Quinn’s bottom line.

This market dilution theory has four fundamental problems. First, like the transformative use analysis, it conflates training with outputs. Second, it’s not supported by any relevant precedent. Third, it’s based entirely on speculation that Bridgerton fans will buy random “romance novels” instead of works produced by a bestselling author they know and love.  This relies on breathtaking assumptions that lack evidence, including that all works in the same genre are good substitutes for each other—regardless of their quality, originality, or acclaim. Lastly, even if competition from other, unique works might reduce sales, it isn’t the type of market harm that weighs against fair use.

Nor is lost revenue from licenses for fair uses a type of market harm that the law should recognize. Prioritizing private licensing market “solutions” over user rights would dramatically expand the market power of major media companies and chill the creativity and innovation that copyright is intended to promote. Indeed, the fair use doctrine exists in part to create breathing room for technological innovation, from the phonograph record to the videocassette recorder to the internet itself. Without fair use, crushing copyright liability could stunt the development of AI technology.

We’re still digesting this report, but our initial review suggests that, on balance, the Copyright Office’s approach to fair use for GenAI training isn’t a dispassionate report on how existing copyright law applies to this new and revolutionary technology. It’s a policy judgment about the value of GenAI technology for future creativity, by an office that has no business making new, free-floating policy decisions.

The courts should not follow the Copyright Office’s speculations about GenAI. They should follow precedent.

In Memoriam: John L. Young, Cryptome Co-Founder

EFF: Updates - Thu, 05/15/2025 - 3:57pm

John L. Young, who died March 28 at age 89 in New York City, was among the first people to see the need for an online library of official secrets, a place where the public could find out things that governments and corporations didn’t want them to know. He made real the idea – revolutionary in its time – that the internet could make more information available to more people than ever before.

John and architect Deborah Natsios, his wife, in 1996 founded Cryptome, an online library which collects and publishes data about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Its slogan: “The greatest threat to democracy is official secrecy which favors a few over the many.” And its invitation: “We welcome documents for publication that are prohibited by governments worldwide.”

Cryptome soon became known for publishing an encyclopedic array of government, court, and corporate documents. Cryptome assembled an indispensable, almost daily chronicle of the ‘crypto wars’ of the 1990s – when the first generation of internet lawyers and activists recognized the need to free up encryption from government control and undertook litigation, public activism and legislative steps to do so.  Cryptome became required reading for anyone looking for information about that early fight, as well as many others.    

John and Cryptome were also among the early organizers and sponsors of WikiLeaks, though like many others, he later broke with that organization over what he saw as its monetization. Cryptome later published WikiLeaks’ alleged internal emails. Transparency was the core of everything John stood for.

John was a West Texan by birth and an architect by training and trade. Even before he launched the website, his lifelong pursuit of not-for-profit, public-good ideals led him to seek access to documents about shadowy public development entities that seemed to ignore public safety, health, and welfare. As the digital age dawned, this expertise in and passion for exposing secrets evolved into Cryptome with John its chief information architect, designing and building a real-time archive of seminal debates shaping cyberspace’s evolving information infrastructures.

The FBI and Secret Service tried to chill his activities. Big Tech companies like Microsoft tried to bully him into pulling documents off the internet. But through it all, John remained a steadfast if iconoclastic librarian without fear or favor.

John served in the United States Army Corps of Engineers in Germany (1953–1956) and earned degrees in philosophy and architecture from Rice University (1957–1963) and his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council.

John was one of the early, under-recognized heroes of the digital age. He not only saw the promise of digital technology to help democratize access to information, he brought that idea into being and nurtured it for many years.  We will miss him and his unswerving commitment to the public’s right to know.

The Kids Online Safety Act Will Make the Internet Worse for Everyone

EFF: Updates - Thu, 05/15/2025 - 2:00pm

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

TAKE ACTION

KOSA will silence kids and adults

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet. 

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself preferences certain viewpoints over others. Plus, liability in KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

TAKE ACTION

TELL CONGRESS: OPPOSE KOSA

EFF to California Lawmakers: There’s a Better Way to Help Young People Online

EFF: Updates - Thu, 05/15/2025 - 11:46am

We’ve covered a lot of federal and state proposals that badly miss the mark when attempting to grapple with protecting young people’s safety online. These include bills that threaten to cut young people off from vital information, infringe on their First Amendment rights to speak for themselves, subject them (and adults) to invasive and insecure age verification technology, and expose them to danger by sharing personal information with people they may not want to see it.

Several such bills are moving through the California legislature this year, continuing a troubling years-long trend of lawmakers pushing similarly problematic proposals. This week, EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online.

We’re far from the only ones who have issues with this approach. Many of the laws California has passed attempting to address young people’s online safety have been subsequently challenged in court and stopped from going into effect.

Our letter outlines the legal, technical, and policy problems with proposed “solutions” including age verification mandates, age gating, mandatory parental controls, and proposals that will encourage platforms to take down speech that’s even remotely controversial.

We also note that the current approach completely ignores what we’ve heard from thousands of young people: the online platforms and communities they frequent can be among the safest spaces for them in the physical or digital world. These responses show the relationship between social media and young people’s mental health is far more nuanced than many lawmakers are willing to believe.

While our letter is addressed to California’s Assembly and Senate, they are not the only state lawmakers taking this approach. All lawmakers should listen to the people they’re trying to protect and find ways to help young people without hurting the spaces that are so important to them.

There are better paths that don’t hurt young people’s First Amendment rights and still help protect them against many of the harms that lawmakers have raised. In fact, elements of such approaches, such as data minimization, are already included in some of these otherwise problematic bills. A well-crafted privacy law that empowers everyone—children and adults—to control how their data is collected and used would be a crucial step in curbing many of these problems.

We recognize that many young people face real harms online, that families are grappling with how to deal with them, and that tech companies are not offering much help.

However, many of the California legislature’s proposals—this year, and for several years—miss the root of the problem. We call on lawmakers to work with us to enact better solutions.

With AI, researchers predict the location of virtually any protein within a human cell

MIT Latest News - Thu, 05/15/2025 - 10:30am

A protein located in the wrong part of a cell can contribute to several diseases, such as Alzheimer’s, cystic fibrosis, and cancer. But there are about 70,000 different proteins and protein variants in a single human cell, and since scientists can typically only test for a handful in one experiment, it is extremely costly and time-consuming to identify proteins’ locations manually.

A new generation of computational techniques seeks to streamline the process using machine-learning models that often leverage datasets containing thousands of proteins and their locations, measured across multiple cell lines. One of the largest such datasets is the Human Protein Atlas, which catalogs the subcellular behavior of over 13,000 proteins in more than 40 cell lines. But as enormous as it is, the Human Protein Atlas has only explored about 0.25 percent of all possible pairings of all proteins and cell lines within the database.

Now, researchers from MIT, Harvard University, and the Broad Institute of MIT and Harvard have developed a new computational approach that can efficiently explore the remaining uncharted space. Their method can predict the location of any protein in any human cell line, even when both protein and cell have never been tested before.

Their technique goes one step further than many AI-based methods by localizing a protein at the single-cell level, rather than as an averaged estimate across all the cells of a specific type. This single-cell localization could pinpoint a protein’s location in a specific cancer cell after treatment, for instance.

The researchers combined a protein language model with a special type of computer vision model to capture rich details about a protein and cell. In the end, the user receives an image of a cell with a highlighted portion indicating the model’s prediction of where the protein is located. Since a protein’s localization is indicative of its functional status, this technique could help researchers and clinicians more efficiently diagnose diseases or identify drug targets, while also enabling biologists to better understand how complex biological processes are related to protein localization.

“You could do these protein-localization experiments on a computer without having to touch any lab bench, hopefully saving yourself months of effort. While you would still need to verify the prediction, this technique could act like an initial screening of what to test for experimentally,” says Yitong Tseo, a graduate student in MIT’s Computational and Systems Biology program and co-lead author of a paper on this research.

Tseo is joined on the paper by co-lead author Xinyi Zhang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and the Eric and Wendy Schmidt Center at the Broad Institute; Yunhao Bai of the Broad Institute; and senior authors Fei Chen, an assistant professor at Harvard and a member of the Broad Institute, and Caroline Uhler, the Andrew and Erna Viterbi Professor of Engineering in EECS and the MIT Institute for Data, Systems, and Society (IDSS), who is also director of the Eric and Wendy Schmidt Center and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS). The research appears today in Nature Methods.

Collaborating models

Many existing protein prediction models can only make predictions based on the protein and cell data on which they were trained or are unable to pinpoint a protein’s location within a single cell.

To overcome these limitations, the researchers created a two-part method, called PUPS, for predicting the subcellular location of unseen proteins.

The first part utilizes a protein sequence model to capture the localization-determining properties of a protein and its 3D structure based on the chain of amino acids that forms it.

The second part incorporates an image inpainting model, which is designed to fill in missing parts of an image. This computer vision model looks at three stained images of a cell to gather information about the state of that cell, such as its type, individual features, and whether it is under stress.

PUPS joins the representations created by each model to predict where the protein is located within a single cell, using an image decoder to output a highlighted image that shows the predicted location.

“Different cells within a cell line exhibit different characteristics, and our model is able to understand that nuance,” Tseo says.

A user inputs the sequence of amino acids that form the protein and three cell stain images — one for the nucleus, one for the microtubules, and one for the endoplasmic reticulum. Then PUPS does the rest.
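The two-branch design described above can be sketched in a few lines of Python. This is a toy illustration only, not the actual PUPS implementation: the real system uses a trained protein language model and an image-inpainting network, and outputs a highlighted image rather than region scores. The `embed_protein`, `embed_cell`, and `predict_localization` stand-ins here are entirely hypothetical.

```python
# Toy sketch of a PUPS-style two-branch pipeline (illustrative only).

def embed_protein(sequence: str, dim: int = 8) -> list[float]:
    """Stand-in for a protein language model: map an amino-acid
    sequence to a fixed-length embedding vector."""
    vec = [0.0] * dim
    for i, aa in enumerate(sequence):
        vec[i % dim] += ord(aa) / 100.0
    n = max(len(sequence), 1)
    return [v / n for v in vec]

def embed_cell(nucleus, microtubules, er, dim: int = 8) -> list[float]:
    """Stand-in for the inpainting model's image encoder: summarize the
    three stain channels (here, flat lists of pixel intensities)."""
    vec = [sum(ch) / len(ch) for ch in (nucleus, microtubules, er)]
    return vec + [0.0] * (dim - len(vec))  # pad to a fixed size

def predict_localization(sequence, stains):
    """Join the two representations and 'decode' a per-region score
    (the real decoder outputs a highlighted cell image instead)."""
    joint = embed_protein(sequence) + embed_cell(*stains)
    regions = ["nucleus", "cytoplasm", "endoplasmic reticulum", "cell membrane"]
    scores = {r: sum(joint[i::len(regions)]) for i, r in enumerate(regions)}
    best = max(scores, key=scores.get)
    return best, scores
```

The key structural idea, which the sketch preserves, is that the protein branch and the cell-image branch are computed independently and only combined at the end, so either input can be one the model has never seen before.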

A deeper understanding

The researchers employed a few tricks during the training process to teach PUPS how to combine information from each model in such a way that it can make an educated guess on the protein’s location, even if it hasn’t seen that protein before.

For instance, they assign the model a secondary task during training: to explicitly name the compartment of localization, like the cell nucleus. This is done alongside the primary inpainting task to help the model learn more effectively.

A good analogy might be a teacher who asks their students to draw all the parts of a flower in addition to writing their names. This extra step was found to help the model improve its general understanding of the possible cell compartments.
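The auxiliary-task trick is a standard multi-task learning setup: the training loss combines the primary reconstruction objective with a weighted compartment-classification term. A minimal sketch, assuming a mean-squared-error inpainting loss and an `aux_weight` of 0.1 (both hypothetical; the paper's actual loss terms and weighting are not reproduced here):

```python
import math

def mse(pred, target):
    """Primary task: pixel-wise reconstruction error for the inpainted image."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

def cross_entropy(probs, true_idx):
    """Auxiliary task: negative log-probability of the true compartment."""
    return -math.log(max(probs[true_idx], 1e-12))

def total_loss(pred_img, target_img, compartment_probs, true_compartment,
               aux_weight=0.1):
    # The auxiliary term nudges the shared representation toward features
    # that identify compartments, which helps the primary localization task.
    return mse(pred_img, target_img) + aux_weight * cross_entropy(
        compartment_probs, true_compartment)
```

Because both terms are minimized through the same shared representation, gradients from the easier "name the compartment" task shape the features the harder inpainting task relies on.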

In addition, the fact that PUPS is trained on proteins and cell lines at the same time helps it develop a deeper understanding of where in a cell image proteins tend to localize.

PUPS can even understand, on its own, how different parts of a protein’s sequence contribute separately to its overall localization.

“Most other methods usually require you to have a stain of the protein first, so you’ve already seen it in your training data. Our approach is unique in that it can generalize across proteins and cell lines at the same time,” Zhang says.

Because PUPS can generalize to unseen proteins, it can capture changes in localization driven by unique protein mutations that aren’t included in the Human Protein Atlas.

The researchers verified that PUPS could predict the subcellular location of new proteins in unseen cell lines by conducting lab experiments and comparing the results. In addition, when compared to a baseline AI method, PUPS exhibited on average less prediction error across the proteins they tested.

In the future, the researchers want to enhance PUPS so the model can understand protein-protein interactions and make localization predictions for multiple proteins within a cell. In the longer term, they want to enable PUPS to make predictions in terms of living human tissue, rather than cultured cells.

This research is funded by the Eric and Wendy Schmidt Center at the Broad Institute, the National Institutes of Health, the National Science Foundation, the Burroughs Wellcome Fund, the Searle Scholars Foundation, the Harvard Stem Cell Institute, the Merkin Institute, the Office of Naval Research, and the Department of Energy.

Particles carrying multiple vaccine doses could reduce the need for follow-up shots

MIT Latest News - Thu, 05/15/2025 - 10:00am

Around the world, 20 percent of children are not fully immunized, leading to 1.5 million child deaths each year from diseases that are preventable by vaccination. About half of those underimmunized children received at least one vaccine dose but did not complete the vaccination series, while the rest received no vaccines at all.

To make it easier for children to receive all of their vaccines, MIT researchers are working to develop microparticles that can release their payload weeks or months after being injected. This could lead to vaccines that can be given just once, with several doses that would be released at different time points.

In a study appearing today in the journal Advanced Materials, the researchers showed that they could use these particles to deliver two doses of diphtheria vaccine — one released immediately, and the second two weeks later. Mice that received this vaccine generated as many antibodies as mice that received two separate doses two weeks apart.

The researchers now hope to extend those intervals, which could make the particles useful for delivering childhood vaccines that are given as several doses over a few months, such as the polio vaccine.

“The long-term goal of this work is to develop vaccines that make immunization more accessible — especially for children living in areas where it’s difficult to reach health care facilities. This includes rural regions of the United States as well as parts of the developing world where infrastructure and medical clinics are limited,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research.

Jaklenec and Robert Langer, the David H. Koch Institute Professor at MIT, are the senior authors of the study. Linzixuan (Rhoda) Zhang, an MIT graduate student who recently completed her PhD in chemical engineering, is the paper’s lead author.

Self-boosting vaccines

In recent years, Jaklenec, Langer, and their colleagues have been working on vaccine delivery particles made from a polymer called PLGA. In 2018, they showed they could use these types of particles to deliver two doses of the polio vaccine, which were released about 25 days apart.

One drawback to PLGA is that as the particles slowly break down in the body, the immediate environment can become acidic, which may damage the vaccine contained within the particles.

The MIT team is now working on ways to overcome that issue in PLGA particles and is also exploring alternative materials that would create a less acidic environment. In the new study, led by Zhang, the researchers decided to focus on another type of polymer, known as polyanhydride.

“The goal of this work was to advance the field by exploring new strategies to address key challenges, particularly those related to pH sensitivity and antigen degradation,” Jaklenec says.

Polyanhydrides, biodegradable polymers that Langer developed for drug delivery more than 40 years ago, are very hydrophobic. This means that as the polymers gradually erode inside the body, the breakdown products hardly dissolve in water and generate a much less acidic environment.

Polyanhydrides usually consist of chains of two different monomers that can be assembled in a huge number of possible combinations. For this study, the researchers created a library of 23 polymers, which differed from each other based on the chemical structures of the monomer building blocks and the ratio of the two monomers that went into the final product.

The researchers evaluated these polymers based on their ability to withstand temperatures of at least 104 degrees Fahrenheit (40 degrees Celsius, or slightly above body temperature) and whether they could remain stable throughout the process required to form them into microparticles.

To make the particles, the researchers developed a process called stamped assembly of polymer layers, or SEAL. First, they use silicon molds to form cup-shaped particles that can be filled with the vaccine antigen. Then, a cap made from the same polymer is applied and sealed using heat. Polymers that proved too brittle or didn’t seal completely were eliminated from the pool, leaving six top candidates.

The researchers used those polymers to design particles that would deliver diphtheria vaccine two weeks after injection, and gave them to mice along with vaccine that was released immediately. Four weeks after the initial injection, those mice showed comparable levels of antibodies to mice that received two doses two weeks apart.

Extended release

As part of their study, the researchers also developed a machine-learning model to help them explore the factors that determine how long it takes the particles to degrade once in the body. These factors include the type of monomers that go into the material, the ratio of the monomers, the molecular weight of the polymer, and the loading capacity or how much vaccine can go into the particle.

Using this model, the researchers were able to rapidly evaluate nearly 500 possible particles and predict their release time. They tested several of these particles in controlled buffers and showed that the model’s predictions were accurate.
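The screening workflow this enables can be illustrated with a toy regressor. Everything below is hypothetical — the feature values, release times, and the nearest-neighbor model are stand-ins for the authors' actual trained model — but it shows the shape of the task: map polymer design features to a predicted release time, then rank many candidate formulations cheaply before validating the promising ones in the lab.

```python
# Illustrative only: predicting a particle's release time from polymer
# design features with a toy nearest-neighbor regressor (made-up data).

# Feature vectors: (monomer ratio, molecular weight in kDa, vaccine loading %)
measured = [
    ((0.2, 10.0, 5.0), 14.0),   # hypothetical: releases after 14 days
    ((0.5, 20.0, 5.0), 28.0),
    ((0.8, 40.0, 10.0), 60.0),
]

def predict_release_days(features, data=measured, k=2):
    """Average the release times of the k most similar measured particles."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(data, key=lambda item: dist(item[0], features))[:k]
    return sum(days for _, days in nearest) / k

# Screen many candidate formulations computationally, then only the
# top-ranked ones would go on to buffer experiments for validation.
candidates = [(r / 10, w, 5.0) for r in range(1, 10) for w in (10.0, 20.0, 40.0)]
ranked = sorted(candidates, key=predict_release_days, reverse=True)
```

The real model would be trained on measured degradation data and a richer feature set, but the economics are the same: predictions are nearly free, so hundreds of formulations can be ranked before any particle is fabricated.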

In future work, this model could also help researchers to develop materials that would release their payload after longer intervals — months or even years. This could make them useful for delivering many childhood vaccines, which require multiple doses over several years.

“If we want to extend this to longer time points, let’s say over a month or even further, we definitely have some ways to do this, such as increasing the molecular weight or the hydrophobicity of the polymer. We can also potentially do some cross-linking. Those are further changes to the chemistry of the polymer to slow down the release kinetics or to extend the retention time of the particle,” Zhang says.

The researchers now hope to explore using these delivery particles for other types of vaccines. The particles could also prove useful for delivering other types of drugs that are sensitive to acidity and need to be given in multiple doses, they say.

“This technology has broad potential for single-injection vaccines, but it could also be adapted to deliver small molecules or other biologics that require durability or multiple doses. Additionally, it can accommodate drugs with pH sensitivities,” Jaklenec says.

The research was funded, in part, by the Koch Institute Support (core) Grant from the National Cancer Institute.

AI-Generated Law

Schneier on Security - Thu, 05/15/2025 - 7:00am

On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of legislation by up to 70%.” AI would create a “comprehensive legislative plan” spanning local and federal law and would be connected to public administration, the courts, and global policy trends.

The plan was widely greeted with astonishment. This sort of AI legislating would be a global “...

Why the courts could blunt Trump’s assault on state climate action

ClimateWire News - Thu, 05/15/2025 - 6:45am
Legal experts say it’s unlikely judges will bow to the president’s expansive arguments that state efforts to address global warming are unconstitutional.

How the fight over birthright citizenship could shape US energy policy

ClimateWire News - Thu, 05/15/2025 - 6:44am
The Supreme Court will hear oral arguments in a case that could limit federal judges' power to block Trump administration policies.

RFK recalls being told ‘people will die’ without LIHEAP

ClimateWire News - Thu, 05/15/2025 - 6:43am
The Health and Human Services secretary appeared sympathetic to people who need help paying for heat. He said boosting energy production would help.

Whitehouse warns law firms: Don't be ‘dragooned’ into Trump’s anti-climate agenda

ClimateWire News - Thu, 05/15/2025 - 6:43am
The top Democrat on the Senate Environment and Public Works Committee said he opened a probe into nine law firms that agreed to do pro bono work for the Trump administration.

Lawmakers push to legalize emissions-heavy ‘supersonic’ planes

ClimateWire News - Thu, 05/15/2025 - 6:40am
A bill to repeal the ban on overland supersonic flights could increase the demand for the gas-guzzling jets from around a dozen to as many as 240.

Highway nominee trumpets rule-killing agenda

ClimateWire News - Thu, 05/15/2025 - 6:39am
Lawmakers also pushed Sean McMaster on the administration's spending freezes.

China’s CO2 emissions plunged as clean energy production grew

ClimateWire News - Thu, 05/15/2025 - 6:38am
The decline in climate pollution came as energy demand was on the rise, underscoring the country’s expansion of renewable power.

Vermont delays enforcement of California EV regulations

ClimateWire News - Thu, 05/15/2025 - 6:37am
Republican Gov. Phil Scott said his state doesn't have "anywhere near enough charging infrastructure."

Newsom to propose extending a landmark California climate law

ClimateWire News - Thu, 05/15/2025 - 6:36am
His budget proposal would extend the state’s carbon trading program through 2045 and reserve at least $1 billion per year for high-speed rail.

New Heathrow runway will boost annual CO2 emissions by 2.4M tons, UK admits

ClimateWire News - Thu, 05/15/2025 - 6:35am
The climate impact of a third Heathrow runway was revealed in internal government documents obtained by POLITICO.
