Feed aggregator

The tenured engineers of 2025

MIT Latest News - Tue, 06/24/2025 - 3:10pm

In 2025, MIT granted tenure to 11 faculty members across the School of Engineering. This year’s tenured engineers hold appointments in the departments of Aeronautics and Astronautics, Biological Engineering, Chemical Engineering, Electrical Engineering and Computer Science (EECS) — which reports jointly to the School of Engineering and MIT Schwarzman College of Computing — Materials Science and Engineering, Mechanical Engineering, and Nuclear Science and Engineering.

“It is with great pride that I congratulate the 11 newest tenured faculty members in the School of Engineering. Their dedication to advancing their fields, mentoring future innovators, and contributing to a vibrant academic community is truly inspiring,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science who will assume the title of MIT provost July 1. “This milestone is not only a testament to their achievements, but a promise of even greater impact ahead.”

This year’s newly tenured engineering faculty include:

Bryan Bryson, the Phillip and Susan Ragon Career Development Professor in the Department of Biological Engineering, conducts research in infectious diseases and immunoengineering. He is interested in developing new tools to dissect the complex dynamics of bacterial infection at a variety of scales ranging from single cells to infected animals, sitting in both “reference frames” by taking both an immunologist’s and a microbiologist’s perspective.

Connor Coley is the Class of 1957 Career Development Professor and associate professor of chemical engineering, with a shared appointment in EECS. His research group develops new computational methods at the intersection of artificial intelligence and chemistry with relevance to small molecule drug discovery, chemical synthesis, and structure elucidation.

Mohsen Ghaffari is the Steven and Renee Finn Career Development Professor and an associate professor in EECS. His research explores the theory of distributed and parallel computation. He has done influential work on a range of algorithmic problems, including generic derandomization methods for distributed computing and parallel computing, improved distributed algorithms for graph problems, sublinear algorithms derived via distributed techniques, and algorithmic and impossibility results for massively parallel computation.

Rafael Gomez-Bombarelli, the Paul M. Cook Development Professor and associate professor of materials science and engineering, works at the interface between machine learning and atomistic simulations. He uses computational tools to tackle design of materials in complex combinatorial search spaces, such as organic electronic materials, energy storage polymers and molecules, and heterogeneous (electro)catalysts. 

Song Han, an associate professor in EECS, is a pioneer in model compression and TinyML. He has innovated in key areas of pruning, quantization, parallelization, KV cache optimization, long-context learning, and multi-modal representation learning to minimize generative AI costs, and he designed the first hardware accelerator (EIE) to exploit weight sparsity.

Kaiming He, the Douglass Ross (1954) Career Development Professor of Software Technology and an associate professor in EECS, is best known for his work on deep residual networks (ResNets). His research focuses on building computer models that can learn representations and develop intelligence from and for the complex world, with the long-term goal of augmenting human intelligence with more capable artificial intelligence.

Phillip Isola, the Class of 1948 Career Development Professor and associate professor in EECS, studies computer vision, machine learning, and AI. His research aims to uncover fundamental principles of intelligence, with a particular focus on how models and representations of the world can be acquired through self-supervised learning, from raw sensory experience alone, and without the use of labeled data.

Mingda Li is the Class of 1947 Career Development Professor and an associate professor in the Department of Nuclear Science and Engineering. His research centers on materials characterization and computation.

Richard Linares is an associate professor in the Department of Aeronautics and Astronautics. His research focuses on astrodynamics, space systems, and satellite autonomy. Linares develops advanced computational tools and analytical methods to address challenges associated with space traffic management, space debris mitigation, and space weather modeling.

Jonathan Ragan-Kelley, an associate professor in EECS, has designed everything from tools for visual effects in movies to the Halide programming language that’s widely used in industry for photo editing and processing. His research focuses on high-performance computer graphics and accelerated computing, at the intersection of graphics with programming languages, systems, and architecture.

Arvind Satyanarayan is an associate professor in EECS. His research areas cover data visualization, human-computer interaction, and artificial intelligence and machine learning. He leads the MIT Visualization Group, which uses interactive data visualization as a petri dish to study intelligence augmentation — how computation can help amplify human cognition and creativity while respecting our agency.

Why Are Hundreds of Data Brokers Not Registering with States?

EFF: Updates - Tue, 06/24/2025 - 1:58pm

Written in collaboration with Privacy Rights Clearinghouse

Hundreds of data brokers have not registered with state consumer protection agencies. These findings come as more states are passing data broker transparency laws that require brokers to provide information about their business and, in some cases, give consumers an easy way to opt out.

In recent years, California, Texas, Oregon, and Vermont have passed data broker registration laws that require brokers to identify themselves to state regulators and the public. A new analysis by Privacy Rights Clearinghouse (PRC) and the Electronic Frontier Foundation (EFF) reveals that many data brokers registered in one state aren’t registered in others.

Among companies that registered in at least one state, 291 did not register in California, 524 did not register in Texas, 475 in Oregon, and 309 in Vermont. These figures come from registry data collected in early April 2025.

PRC and EFF sent letters to state enforcement agencies urging them to investigate these findings. More investigation by states is needed to determine whether these registration discrepancies reflect widespread noncompliance, gaps and definitional differences in the various state laws, or some other explanation.

New data broker transparency laws are an essential first step to reining in the data broker industry. This is an ecosystem in which your personal data, taken from apps and other web services, can be bought and sold largely without your knowledge. The data can be highly sensitive, like location information, and can be used to target you with ads, discriminate against you, and even enhance government surveillance. The widespread sharing of this data also makes it more susceptible to data breaches. And its easy availability allows bad actors to obtain personal data for phishing, harassment, or stalking.

Consumers need robust deletion mechanisms to remove the data these companies store and sell. But the potential registration gaps we identified threaten to undermine such tools. California’s Delete Act will soon provide consumers with an easy tool to delete their data held by brokers—but it can only work if brokers register. California has already brought a handful of enforcement actions against brokers who failed to register under that law, and such compliance efforts are becoming even more critical as deletion mechanisms come online.

It is important to understand the scope of our analysis.

This analysis only includes companies that registered in at least one state. It does not capture data brokers that completely disregard state laws by failing to register in any state. A total of 750 data brokers have registered in at least one state. While harder to find, shady data brokers who have failed to register anywhere should remain a primary enforcement target.

This analysis also does not claim or prove that any of the data brokers we found broke the law. While the definition of “data broker” is similar across states, there are variations that could require a company to register in one state and not another. To take one example, a data broker registered in Texas that only brokers the data of Texas residents would not be legally required to register in California. To take another, a data broker that registered with Vermont in 2020 but has since changed its business model and is no longer a broker would not be required to register in 2025. More detail on variations in data broker laws is outlined in our letters to regulators.

States should investigate compliance with data broker registration requirements, enforce their laws, and plug any loopholes. Ultimately, consumers deserve protections regardless of where they reside, and Congress should also work to pass baseline federal data broker legislation that minimizes collection and includes strict use and disclosure limits, transparency obligations, and consumer rights.

Read more here:

California letter

Texas letter

Oregon letter

Vermont letter

Spreadsheet of data brokers

MIx helps innovators tackle challenges in national security

MIT Latest News - Tue, 06/24/2025 - 1:35pm

Startups and government defense agencies have historically seemed like polar opposites. Startups thrive on speed and risk, while defense agencies are more cautious. Over the past few years, however, things have changed. Many startups are eager to work with these organizations, which are always looking for innovative solutions to their hardest problems.

To help bridge that gap while advancing research along the way, MIT Lecturer Gene Keselman launched MIT’s Mission Innovation X (MIx) along with Sertac Karaman, a professor in the MIT Department of Aeronautics and Astronautics, and Fiona Murray, the William Porter Professor of Entrepreneurship at the MIT Sloan School of Management. MIx develops educational programming, supports research at MIT, and facilitates connections among government organizations, startups, and researchers.

“Startups know how to commercialize their tech, but they don’t necessarily know how to work with the government, and especially how to understand the needs of defense customers,” explains MIx Senior Program Manager Keenan Blatt. “There are a lot of different challenges when it comes to engaging with defense, not only from a procurement cycle and timeline perspective, but also from a culture perspective.”

MIx’s work helps innovators secure crucial early funding while giving defense agencies access to cutting-edge technologies, boosting America’s security capabilities in the process. Through the work, MIx has also become a thought leader in the emerging “dual-use” space, in which researchers and founders make strategic choices to advance technologies that have both civilian and defense applications.

Gene Keselman, the executive director of MIx as well as managing director of MIT’s venture studio Proto Ventures and a colonel in the U.S. Air Force Reserve, believes MIT is uniquely positioned to deliver on MIx’s mission.

“It’s not a coincidence MIx is happening at MIT,” says Keselman, adding that supporting national security “is part of MIT’s ethos.”

A history of service

MIx’s work has deep roots at the Institute.

“MIT has worked with the Department of Defense since at least the 1940s, but really going back to its founding years,” says Karaman, who is also the director of MIT’s Laboratory for Information and Decision Systems (LIDS), a research group with its own long history of working with the government.

“The difference today,” adds Murray, who teaches courses on building deep tech ventures and regional innovation ecosystems and is the vice chair of NATO’s Innovation Fund, “is that defense departments and others looking to support the defense, security, and resilience agenda are looking to several innovation ecosystem stakeholders — universities, startup ventures, and venture capitalists — for solutions, not only from the large prime contractors. We have learned this lesson from Ukraine, but the same ecosystem logic is at the core of our MIx offer.”

MIx was born out of the MIT Innovation Initiative in response to interest Keselman saw from researchers and defense officials in expanding MIT’s work with the defense and global security communities. About seven years ago, he hired Katie Person, who left MIT last year to become a battalion commander, to handle all that interest as a program manager with the initiative. MIx activities, like mentoring and educating founders, began shortly after, and MIx officially launched at MIT in 2021.

“It was a good example of the ways in which MIT responds to its students’ interests and external demand,” Keselman says.

One source of early interest was from startup founders who wanted to know how to work with the defense industry and commercialize technology that could have dual commercial and defense applications. That led the team to launch the Dual Use Ventures course, which helps startup founders and other innovators work with defense agencies. The course has since been offered annually during MIT’s Independent Activities Period (IAP) and tailored for NATO’s Defense Innovation Accelerator for the North Atlantic (DIANA).

Personnel from agencies including U.S. Special Operations Command were also interested in working with MIT students, which led the MIx team to develop course 15.362/6.9160 (Engineering Innovation: Global Security Systems), which is taken each spring by students across MIT and Harvard University.

“There are the government organizations that want to be more innovative and work with startups, and there are startups that want to get access to funding from government and have government as a customer,” Keselman says. “We’re kind of the middle layer, facilitating connections, educating, and partnering on research.”

MIx research activities give student and graduate researchers opportunities to work on pressing problems in the real world, and the MIT community has responded eagerly: More than 150 students applied for MIx’s openings in this summer’s Undergraduate Research Opportunities Program.

"We’re helping push the boundaries of what’s possible and explore the frontiers of technology, but do it in a way that is publishable," says MIx Head Research Scientist A.J. Perez ’13, MEng ’14, PhD ’23. “More broadly, we want to unlock as much support for students and researchers at MIT as possible to work on problems that we know matter to defense agencies.”

Early wins

Some of MIx’s most impactful research so far has come in partnership with startups. For example, MIx helped the startup Picogrid secure a small business grant from the U.S. Air Force to build an early wildfire detection system. As part of the grant, MIT students built a computer vision model for Picogrid’s devices that can detect smoke in the sky, proving the technical feasibility of the system and describing a promising new pathway in the field of machine learning.

In another recent project with the MIT alumni-founded startup Nominal, MIT students helped improve and automate post-flight data analysis for the U.S. Air Force’s Test Pilot School.

MIx’s work connecting MIT’s innovators and the wider innovation ecosystem with defense agencies has already begun to bear fruit, and many members of MIx believe early collaborations are a sign of things to come.

“We haven’t even scratched the surface of the potential for MIx,” says Karaman. “This could be the start of something much bigger.”

Major Setback for Intermediary Liability in Brazil: Risks and Blind Spots

EFF: Updates - Tue, 06/24/2025 - 11:33am

This is the third post of a series about internet intermediary liability in Brazil. Our first post gives an overview of Brazil's current internet intermediary liability regime, set out in a law known as "Marco Civil da Internet," the context of its approval in 2014, and the beginning of the Supreme Court's judgment of such regime in November 2024. Our second post provides a bigger picture of the Brazilian context underlying the court's analysis and its most likely final decision. 

The court’s examination of Marco Civil’s Article 19 began with Justice Dias Toffoli in November last year. We explained here the cases under trial, the reach of the Supreme Court’s decision, and Article 19’s background related to Marco Civil’s approval in 2014. We also highlighted some aspects and risks of the vote of Justice Dias Toffoli, who considered the intermediary liability regime established in Article 19 unconstitutional.

Most of the justices have agreed to find this regime at least partially unconstitutional, but differ on the specifics. Relevant elements of their votes include: 

  • Notice-and-takedown is likely to become the general rule for platforms' liability for third-party content (based on Article 21 of Marco Civil). Justices still have to settle whether this applies to internet applications in general or if some distinctions are relevant, for example, applying only to those that curate or recommend content. Another open question refers to the type of content subject to liability under this rule: votes pointed to unlawful content/acts, manifestly criminal or clearly unlawful content, or opted to focus on crimes. Some justices didn’t explicitly qualify the nature of the restricted content under this rule.   

  • If partially valid, the need for a previous judicial order to hold intermediaries liable for user posts (Article 19 of Marco Civil) remains in force for certain types of content (or certain types of internet applications). For some justices, Article 19 should be the liability regime in the case of crimes against honor, such as defamation. Justice Luís Roberto Barroso also considered this rule should apply for any unlawful acts under civil law. Justice Cristiano Zanin has a different approach. For him, Article 19 should prevail for internet applications that don’t curate, recommend or boost content (what he called “neutral” applications) or when there’s reasonable doubt about whether the content is unlawful.

  • Platforms are considered liable for ads and boosted content that they deliver to users. This was the position held by most of the votes so far. Justices did so either by presuming platforms’ knowledge of the paid content they distribute, holding them strictly liable for paid posts, or by considering the delivery of paid content as platforms’ own act (rather than “third-party” conduct). Justice Dias Toffoli went further, including also non-paid recommended content. Some justices extended this regime to content posted by inauthentic or fake accounts, or when the non-identification of accounts hinders holding the content authors liable for their posts.   

  • Monitoring duty of specific types of harmful and/or criminal content. Most concerning is that different votes establish some kind of active monitoring and likely automated restriction duty for a list of contents, subject to internet applications' liability. Justices have either recognized a “monitoring duty” or considered platforms liable for these types of content regardless of a previous notification. Justices Luís Roberto Barroso, Cristiano Zanin, and Flávio Dino adopt a less problematic systemic flaw approach, by which applications’ liability would not derive from each piece of content individually, but from an analysis of whether platforms employ the proper means to tackle these types of content. The list of contents also varies. In most of the cases they are restricted to criminal offenses, such as crimes against the democratic state, racism, and crimes against children and adolescents; yet they may also include vaguer terms, like “any violence against women,” as in Justice Dias Toffoli’s vote. 

  • Complementary or procedural duties. Justices have also voted to establish complementary or procedural duties. These include providing a notification system that is easily accessible to users, a due process mechanism where users can appeal against content restrictions, and the release of periodic transparency reports. Justice Alexandre de Moraes also specifically mentioned algorithmic transparency measures. 

  • Oversight. Justices also discussed which entity or oversight model should be used to monitor compliance until Congress approves a specific regulation. They raised different possibilities, including the National Council of Justice, the Attorney General’s Office, the National Data Protection Authority, a self-regulatory body, or a multistakeholder entity with government, companies, and civil society participation. 

Three other justices have yet to present their votes to complete the judgment. As we pointed out, the ruling will both decide the individual cases that entered the Supreme Court through appeals and the “general repercussion” issues underlying these individual cases. To address such general repercussion issues, the Supreme Court approves a thesis that orients lower court decisions in similar cases. The final thesis will reflect the majority of the court's agreements around the topics we outlined above. 

Justice Alexandre de Moraes argued that the final thesis should equate the liability regime of social media and private messaging applications to the one applied to traditional media outlets. This disregards important differences between the two: even if social media platforms curate content, that curation involves a massive volume of third-party posts, mainly organized through algorithms. Although such curation reflects business choices, it does not equate to media outlets that directly create or individually purchase specific content from approved independent producers. This is even more complicated with messaging applications, seriously endangering privacy and end-to-end encryption. 

Justice André Mendonça was the only one so far to preserve the full application of Article 19. His proposed thesis highlighted the necessity of safeguarding privacy, data protection, and the secrecy of communications in messaging applications, among other aspects. It also indicated that judicial takedown orders must provide specific reasoning and be made available to platforms, even if issued within a sealed proceeding. The platform must also have the ability to appeal the takedown order. These are all important points the final ruling should endorse. 

Risks and Blind Spots 

We have stressed the many problems entangled with broad notice-and-takedown mandates and expanded content monitoring obligations. Extensively relying on AI-based content moderation and tying it to intermediary liability for user content will likely exacerbate the detrimental effects of these systems’ limitations and flaws. The perils and concerns that grounded Article 19's approval remain valid and should have led the court to preserve its regime.  

However, given the judgment’s current stage, there are still some minimum safeguards that justices should consider or reinforce to reduce harm.  

It’s crucial to put in place guardrails against the abuse and weaponization of notification mechanisms. At a minimum, platforms shouldn’t be liable following an extrajudicial notification when there’s reasonable doubt concerning the content’s lawfulness. In addition, notification procedures should ensure that notices are sufficiently precise and properly substantiated, indicating the content’s specific location (e.g., URL) and why the notifier considers it to be illegal. Internet applications must also provide reasoned justification and adequate appeal mechanisms for those who face content restrictions.  

On the other hand, holding intermediaries liable for individual pieces of user content regardless of notification, by massively relying on AI-based content flagging, is a recipe for over-censorship. Adopting a systemic flaw approach could minimally mitigate this problem. Moreover, justices should clearly set apart private messaging applications, as mandated content-based restrictions would erode secure and end-to-end encrypted implementations. 

Finally, we should note that justices generally didn’t distinguish large internet applications from other providers when detailing liability regimes and duties in their votes. This is one major blind spot, as it could significantly impact the feasibility of decentralized alternatives to Big Tech’s business models, entrenching platform concentration. Similarly, despite criticism of platforms’ business interests in monetizing and capturing user attention, court debates mainly failed to address the pervasive surveillance infrastructure lying underneath Big Tech’s power and abuses.   

Indeed, while justices have called out Big Tech’s enormous power over the online flow of information – over what’s heard and seen, and by whom – the consequences of this decision can actually deepen this powerful position. 

It’s worth recalling a line from Aaron Swartz in the film “The Internet’s Own Boy,” comparing broadcasting and the internet. He said: “[…] what you see now is not a question of who gets access to the airwaves, it’s a question of who gets control over the ways you find people.” As he puts it, today’s challenge is less about who gets to speak and more about who gets to be heard.  

There’s an undeniable source of power in operating the inner rules and structures by which the information flows within a platform with global reach and millions of users. The crucial interventions must aim at this source of power, putting a stop to behavioral surveillance ads, breaking Big Tech’s gatekeeper dominance, and redistributing the information flow.  

That’s not to say that we shouldn’t care about how each platform organizes its online environment. We should, and we do. The EU Digital Services Act, for example, established rules in this sense, leaving the traditional liability regime largely intact. Rather than leveraging platforms as users’ speech watchdogs by potentially holding intermediaries liable for each piece of user content, platform accountability efforts should broadly look at platforms’ processes and business choices. Otherwise, we will end up focusing on monitoring users instead of targeting platforms’ abuses. 

Major Setback for Intermediary Liability in Brazil: How Did We Get Here?

EFF: Updates - Tue, 06/24/2025 - 11:13am

This is the second post of a series about intermediary liability in Brazil. Our first post gives an overview of Brazil's current intermediary liability regime, the context of its approval in 2014, and the beginning of the Supreme Court's analysis of such regime in November 2024. Our third post provides an outlook on justices' votes up until June 23, underscoring risks, mitigation measures, and blind spots of their potential decision.

The Brazilian Supreme Court has formed a majority to overturn the country’s current online intermediary liability regime. With eight out of eleven justices having presented their opinions, the court has reached enough votes to mostly remove the need for a previous judicial order demanding content takedown to hold digital platforms liable for user posts, which is currently the general rule.  

The judgment relates to Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet,” Law n. 12.965/2014), wherein internet applications can only be held liable for third-party content if they fail to comply with a judicial decision ordering its removal. Article 19 aligns with the Manila Principles and reflects the important understanding that holding platforms liable for user content without a judicial analysis creates strong incentives for enforcement overreach and over-censorship of protected speech.  

Nonetheless, while Justice André Mendonça voted to preserve Article 19’s application, four other justices stated it should prevail only in specific cases, mainly for crimes against honor (such as defamation). The remaining three justices considered that Article 19 offers insufficient protection to constitutional guarantees, such as the integral protection of children and teenagers.  

The judgment will resume on June 25th, with the three final justices completing the analysis by the plenary of the court. Whereas Article 19’s partial unconstitutionality (or its interpretation “in accordance with” the Constitution) seems to be the position the majority of the court will take, the details of each vote vary, indicating important agreements still to sew up and critical tweaks to make.   

As we previously noted, the outcome of this ruling could seriously undermine free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expanded notice-and-takedown mandates. This trend could negatively shape developments globally in other courts, parliaments, or with respect to executive powers. Sadly, the votes so far have aggravated these concerns.  

But before we get to them, let's look at some circumstances underlying the Supreme Court's analysis. 

2014 vs. 2025: The Brazilian Techlash After Marco Civil's Approval 

How did Article 19 end up (mostly) overturned a decade after Marco Civil’s much-celebrated approval in Brazil back in 2014?   

In addition to the broader techlash following the impacts of an increasing concentration of power in the digital realm, developments in Brazil have leveraged a harsher approach towards internet intermediaries. Marco Civil became a scapegoat, especially Article 19, within regulatory approaches that largely diminished the importance of the free expression concerns that informed its approval. Rather than viewing the provision as a milestone to be complemented with new legislation, this context has reinforced the view that Article 19 should be left behind. 

The tougher approach to internet intermediaries gained steam after former President Jair Bolsonaro’s election in 2018 and throughout the legislative debates around draft bill 2630, also known as the “Fake News bill.”  

Specifically, though not exhaustively, concerns around the spread of disinformation, online-fueled discrimination, and political violence, as well as threats to election integrity, constitute an important piece of this scenario. This includes the use of social media by the far right amid escalating acts seeking to undermine the integrity of elections and ultimately overthrow the legitimately elected President Luiz Inácio Lula da Silva in January 2023. Investigations later unveiled that related plans included killing the new president, the vice president, and Justice Alexandre de Moraes.  

Concerns over children and adolescents’ rights and safety are another part of the underlying context. Among others, a wave of violent threats and actual attacks in schools in early 2023 was bolstered by online content. Social media challenges also led to injuries and deaths of young people.  

Finally, the political reactions to Big Tech’s alignment with far-right politicians and feuds with Brazilian authorities complete this puzzle. It includes reactions to Meta’s policy changes in January 2025 and the Trump administration’s decision to restrict visas for foreign officials on the grounds that they limit free speech online. This decision is viewed as an offensive against Brazil's Supreme Court by U.S. authorities in alliance with Bolsonaro’s supporters, including his son, who now lives in the U.S.

Changes in the tech landscape, including concerns about the attention-driven information flow, alongside geopolitical tensions, all fed into the Brazilian Supreme Court’s examination of Article 19. Hurdles in the legislative debate of draft bill 2630 turned attention to the internet intermediary liability cases pending in the Supreme Court as the main vehicles for providing “some” response. Yet the scope of such cases (explained here) determined the most likely outcome. Because they focus on assessing platform liability for user content and whether it involves a duty to monitor, these issues became the main vectors for analysis and potential change. Alternative approaches, such as improving transparency, ensuring due process, and fostering platform accountability through different measures, like risk assessments, were mainly sidelined.  

Read our third post in this series to learn more about the analysis of the Supreme Court so far and its risks and blind spots. 

Here’s a Subliminal Channel You Haven’t Considered Before

Schneier on Security - Tue, 06/24/2025 - 7:09am

Scientists can manipulate air bubbles trapped in ice to encode messages.

GOP budget would slash wind and solar subsidies

ClimateWire News - Tue, 06/24/2025 - 7:05am
Tax credits for clean energy have previously enjoyed bipartisan support.

GOP attorneys general want legal immunity for fossil fuel industry

ClimateWire News - Tue, 06/24/2025 - 7:04am
Red states are urging the Trump administration to take steps to quash lawsuits that seek to hold the oil and gas industry accountable for climate change.

Saudis, US drive strife inside global climate science body

ClimateWire News - Tue, 06/24/2025 - 7:04am
The proposal for a Saudi Aramco oil company staffer to become an author of a key science report is denounced as “political capture.”

Regulation of industrial carbon emissions surged in past year

ClimateWire News - Tue, 06/24/2025 - 7:00am
A new World Bank report says 40 percent of global industrial emissions are now regulated through carbon taxes or carbon markets.

Digital tool tracks impact of heat, pollution on California’s Latino communities

ClimateWire News - Tue, 06/24/2025 - 6:59am
The dashboard was launched Tuesday by UCLA’s Latino Policy and Politics Institute.

Postal Service EV fleet back on Congress’ hit list

ClimateWire News - Tue, 06/24/2025 - 6:58am
Republicans have proposed selling the Postal Service's electric vehicles. The issue may come up during a hearing Tuesday.

‘Getting especially ugly’: Industry analyst sees uncertain future for US carmakers

ClimateWire News - Tue, 06/24/2025 - 6:58am
Edmunds' Ivan Drury is trying to make sense of an American auto market in constant flux.

Japan boosts effort to curb methane leaks from LNG supply chains

ClimateWire News - Tue, 06/24/2025 - 6:57am
The announcement was made after a three-day energy summit in Tokyo where government officials urged energy importers to secure gas past 2050.

Scientists stumble upon way to cut cow dung methane emissions

ClimateWire News - Tue, 06/24/2025 - 6:57am
Two local scientists began testing the addition of polyferric sulfate in an attempt to recycle the water in cow dung lagoons and made a startling observation.

EU climate boss fought Commission plan to nix greenwashing rules

ClimateWire News - Tue, 06/24/2025 - 6:56am
The vice president of the EU executive pressured the environment commissioner over several days to preserve the law.

Greenpeace joins anti-Bezos protest in Venice about wedding, tax breaks

ClimateWire News - Tue, 06/24/2025 - 6:55am
Activists argue Jeff Bezos' wedding exemplifies broader failures in municipal governance, particularly the prioritization of tourism over resident needs.

Protect young secondary forests for optimum carbon removal

Nature Climate Change - Tue, 06/24/2025 - 12:00am

Nature Climate Change, Published online: 24 June 2025; doi:10.1038/s41558-025-02355-5

The authors generate ~1 km² growth curves for aboveground live carbon in regrowing forests, globally. They show that maximum carbon removal rates can vary by 200-fold spatially and with age, with the greatest rates estimated at about 30 ± 12 years, highlighting the role of secondary forests in carbon cycling.

Copyright Cases Should Not Threaten Chatbot Users’ Privacy

EFF: Updates - Mon, 06/23/2025 - 10:07pm

Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. states, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy.

This isn’t a new concept. Putting users in control of their data is a fundamental piece of privacy protection. Nineteen states, the European Union, and numerous other countries already protect the right to delete under their privacy laws. These rules exist for good reasons: retained data can be sold or given away, breached by hackers, disclosed to law enforcement, or even used to manipulate a user’s choices through online behavioral advertising.

While appropriately tailored orders to preserve evidence are common in litigation, that’s not what happened here. The court disregarded the privacy rights of millions of ChatGPT users without any reasonable basis to believe it would yield evidence. The court granted the order based on unsupported assertions that users who delete their data are probably copyright infringers looking to “cover their tracks.” This is simply false, and it sets a dangerous precedent for cases against generative AI developers and other companies that have vast stores of user information. Unless courts limit orders to information that is actually relevant and useful, they will needlessly violate the privacy rights of millions of users.

OpenAI is challenging this order. EFF urges the court to lift the order and correct its mistakes.  

The NO FAKES Act Has Changed – and It’s So Much Worse

EFF: Updates - Mon, 06/23/2025 - 3:39pm

A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.

The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.

Take Action

Tell Congress to Say No to NO FAKES

The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.

The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”

This bill would be a disaster for internet speech and innovation.

Targeting Tools

The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, by anyone who owns the rights in that individual’s image, or by law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for making unauthorized images, or have only limited commercial uses other than that—but those limits will offer cold comfort to developers, given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rights-holders the veto power over innovation they’ve long sought in the copyright wars, based on the same tech panics. 

Takedown Notices and Filter Mandate

The first version of NO FAKES set up a notice-and-takedown system patterned on the DMCA, with even fewer safeguards. The new version expands it to cover more service providers and requires those providers not only to take down targeted materials (or tools) but to keep them from being uploaded in the future. In other words, adopt broad filters or lose the safe harbor.

Filters are already a huge problem when it comes to copyright, and at least in that context all a filter should be doing is flagging an upload for human review if it appears to be a whole copy of a work. The reality is that these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag things for infringement based on mere seconds of a match, and they frequently do not take into account context that would make the use authorized by law.

But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.

The bill does contain carve outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.

Threats to Anonymous Speech

As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.

We've already seen abuse of a similar system in action. In copyright cases, those unhappy with criticism made against them get such subpoenas to silence critics. Often the criticism includes the complainant's own words as evidence—an ur-example of fair use. But the subpoena is issued anyway and, unless the service is incredibly on the ball, the user can be unmasked.

Not only does this chill further speech; the unmasking itself can harm users, whether reputationally or in their personal lives.

Threats to Innovation

Most of us are very unhappy with the state of Big Tech. Not only are we increasingly forced to use the tech giants’ services, but the quality of those services is actively degrading. By increasing the sheer amount of infrastructure a new service would need to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.

Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity.  For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?

This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. But if Congress is really worried about privacy harms, it should at least wait to see the effects of that law before enacting a new one. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.

NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.

Take Action

Tell Congress to Say No to NO FAKES
