EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Google’s Advanced Protection Arrives on Android: Should You Use It?

Mon, 06/16/2025 - 4:33pm

With this week’s release of Android 16, Google added a new security feature to Android called Advanced Protection. At-risk people—like journalists, activists, or politicians—should consider turning it on. Here’s what it does, and how to decide whether it’s a good fit for your security needs.

To clear up some confusing naming schemes at the start: Advanced Protection is an extension of Google’s Advanced Protection Program, which protects your Google account from phishing and harmful downloads, and is not to be confused with Apple’s Advanced Data Protection, which enables end-to-end encryption for most data in iCloud. Instead, Google’s Advanced Protection is more comparable to the iPhone’s Lockdown Mode, Apple’s solution for protecting high-risk people from specific types of digital threats on Apple devices.

Advanced Protection for Android is meant to provide stronger security by enabling certain features that aren’t on by default, disabling the ability to turn off features that are enabled by default, and adding new security features. Put together, this suite of features is designed to isolate data where possible and reduce the chances of interacting with insecure websites and unknown individuals.

For example, when it comes to enabling existing features, Advanced Protection turns on Android’s “theft detection” features (designed to protect against in-person thefts), forces Chrome to use HTTPS for all website connections (a feature we’d like to see expand to everything on the phone), enables scam and spam protection features in Google Messages, and disables 2G (which helps prevent your phone from connecting to some Cell Site Simulators). You could go in and enable each of these individually in the Settings app, but having everything turned on with one tap is much easier to do.

Advanced Protection also prevents you from disabling certain core security features that are enabled by default, like Google Play Protect (Android’s built-in malware protection) and Android Safe Browsing (which safeguards against malicious websites).

But Advanced Protection also adds some new features. Once turned on, the “Inactivity reboot” feature restarts your device if it’s locked for 72 hours, returning it to a fully encrypted state that can only be unlocked with your PIN or biometrics. It also turns on “USB Protection,” which restricts any new USB connection to charging only while the device is locked, and prevents your device from auto-reconnecting to unsecured Wi-Fi networks.

As with all things Android, some of these features are limited to select devices or to phones made by certain manufacturers. Memory Tagging Extension (MTE), which attempts to mitigate memory vulnerabilities by blocking unauthorized access, debuted on Pixel 8 devices in 2023 and is only now showing up on other phones. This segmentation of features makes it a little difficult to know exactly what your device is protecting against if you’re not using a Pixel phone.

Some of the new features, like the ability to generate security logs that you can then share with security professionals in case your device is ever compromised, along with the aforementioned insecure network reconnect and USB protection features, won’t launch until later this year.

It’s also worth considering that enabling Advanced Protection may impact how you use your device. For example, Advanced Protection disables the JavaScript optimizer in Chrome, which may break some websites, and since Advanced Protection blocks unknown apps, you won’t be able to side-load apps. There’s also the chance that some of the call screening and scam detection features may misfire and flag legitimate calls.

How to Turn on Advanced Protection

Advanced Protection is easy to turn on and off, so there’s no harm in giving it a try. Advanced Protection was introduced with Android 16, so you may need to update your phone, or wait a little longer for your device manufacturer to support the update if it doesn’t already. Once you’re updated, to turn it on:

  • Open the Settings app.
  • Tap Security and Privacy > Advanced Protection, and enable the option next to “Device Protection.” 
  • If you haven’t already done so, now is a good time to consider enabling Advanced Protection for your Google account as well, though you will need to enroll a security key or a passkey to use this feature.

We welcome these features on Android, as well as the simplicity of its approach to enabling several pre-existing security and privacy features all at once. While there is no panacea for every security threat, this is a baseline that improves security on Android for at-risk individuals without drastically altering day-to-day use, which is a win for everyone. We hope to see Google continue to push new improvements to this feature, and to see more phone manufacturers support Advanced Protection where they don’t already.

EFF to NJ Supreme Court: Prosecutors Must Disclose Details Regarding FRT Used to Identify Defendant

Mon, 06/16/2025 - 3:56pm

This post was written by EFF legal intern Alexa Chavara.

Black box technology has no place in the criminal legal system. That’s why we’ve once again filed an amicus brief arguing that both the defendant and the public have a right to information regarding face recognition technology (FRT) that was used during an investigation to identify a criminal defendant.

Back in June 2023, we filed an amicus brief along with Electronic Privacy Information Center (EPIC) and the National Association of Criminal Defense Lawyers (NACDL) in State of New Jersey v. Arteaga. We argued that information regarding the face recognition technology used to identify the defendant should be disclosed due to the fraught process of a face recognition search and the many ways that inaccuracies manifest in the use of the technology. The New Jersey appellate court agreed, holding that state prosecutors must turn over detailed information to the defendant about the FRT used, including how it works, its source code, and its error rate. The court held that this ensures the defendant’s due process rights with the ability to examine the information, scrutinize its reliability, and build a defense.

Last month, partnering with the same organizations, we filed another amicus brief in favor of transparency regarding FRT in the criminal system, this time in the New Jersey Supreme Court in State of New Jersey v. Miles.

In Miles, New Jersey law enforcement used FRT to identify Mr. Miles as a suspect in a criminal investigation. The defendant, represented by the same public defender as in Arteaga, moved for discovery of information about the FRT used, relying on Arteaga. The trial court granted this request for discovery, and the appellate court affirmed. The State then appealed to the New Jersey Supreme Court, where the issue is before the Court for the first time.

As explained in our amicus brief, disclosure is necessary to ensure criminal prosecutions are based on accurate evidence. Every search using face recognition technology presents a unique risk of error, depending on various factors, from the specific FRT system used and the databases searched to the quality of the photograph and the demographics of the individual. Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for faces of people of color, especially Black women, as well as trans and nonbinary people.

Moreover, these searches often determine the course of investigation, reinforcing errors and resulting in numerous wrongful arrests, most often of Black folks. Discovery is the last chance to correct harm from misidentification and to allow the defendant to understand the evidence against them.

Furthermore, the public, including independent experts, has the right to examine the technology used in criminal proceedings. Under the First Amendment and its more expansive New Jersey Constitution corollary, the public’s right to access criminal judicial proceedings includes filings in pretrial proceedings, like the information being sought here. That access provides the public meaningful oversight of the criminal justice system and increases confidence in judicial outcomes, which is especially significant considering the documented risks and shortcomings of FRT.

Protecting Minors Online Must Not Come at the Cost of Privacy and Free Expression

Mon, 06/16/2025 - 11:52am

The European Commission has taken an important step toward protecting minors online by releasing draft guidelines under Article 28 of the Digital Services Act (DSA). EFF recently submitted feedback to the Commission’s Targeted Consultation, emphasizing a critical point: Online safety for young people must not come at the expense of privacy, free expression, and equitable access to digital spaces.

We support the Commission’s commitment to proportionality, rights-based protections, and its efforts to include young voices in shaping these guidelines. But we remain deeply concerned by the growing reliance on invasive age assurance and verification technologies—tools that too often lead to surveillance, discrimination, and censorship.

Age verification systems typically depend on government-issued ID or biometric data, posing significant risks to privacy and shutting out millions of people without formal documentation. Age estimation methods fare no better: they’re inaccurate, especially for marginalized groups, and often rely on sensitive behavioral or biometric data. Meanwhile, vague mandates to protect against “unrealistic beauty standards” or “potentially risky content” threaten to overblock legitimate expression, disproportionately harming vulnerable users, including LGBTQ+ youth.

By placing disproportionate emphasis on age assurance as a necessary tool to safeguard minors, the guidelines do not address the root causes of the risks encountered by all users, including minors; instead, they merely treat the symptoms.

Safety matters—but so do privacy, access to information, and the fundamental rights of all users. We urge the Commission to avoid endorsing disproportionate, one-size-fits-all technical solutions. Instead, we recommend user-empowering approaches: Strong default privacy settings, transparency in recommender systems, and robust user control over the content they see and share.

The DSA presents an opportunity to protect minors while upholding digital rights. We hope the final guidelines reflect that balance.

Read more about digital identity and the future of age verification in Europe here.

A New Digital Dawn for Syrian Tech Users

Thu, 06/12/2025 - 11:19am

For several decades, U.S. sanctions on Syria have not only restricted trade and financial transactions, they’ve also severely limited Syrians’ access to digital technology. From software development tools to basic cloud services, Syrians were locked out of the global internet economy—stifling innovation, education, and entrepreneurship.

EFF has for many years pushed for sanctions exemptions for technology in Syria, as well as in Sudan, Iran, and Cuba. While civil society had early wins in securing general licenses for Iran and Sudan allowing the export of communications technologies, the conflict in Syria that began in 2011 made loosening of sanctions a pipe dream.

But recent changes to U.S. policy could mark the beginning of a shift. In a quiet yet significant move, the U.S. government has eased sanctions on Syria. On May 23, the Treasury Department issued General License 25, effectively allowing technology companies to provide services to Syrians. This decision could have an immediate and positive impact on the lives of millions of Syrian internet users—especially those working in the tech and education sectors.

A Legacy of Digital Isolation

For years, Syrians have found themselves barred from accessing even the most basic tools. U.S. sanctions meant that companies like Google, Apple, Microsoft, and Amazon—either by law or by cautious decisions taken to avoid potential penalties—restricted access to many of their services. Developers couldn’t access GitHub repositories or use Google Cloud; students couldn’t download software for virtual classrooms; and entrepreneurs struggled to build startups without access to payment gateways or secure infrastructure.

Such restrictions can put users in harm’s way; for instance, not being able to access the Google Play store from inside the country means that Syrians can’t easily download secure versions of everyday tools like Signal or WhatsApp, thus potentially subjecting their communications to surveillance.

These restrictions also compounded the difficulties of war, economic collapse, and internal censorship. Even when Syrian tech workers could connect with global communities, their participation was hampered by legal gray zones and technical blocks.

What the Sanctions Relief Changes

Under General License 25, companies will now be able to provide services to Syria that have never officially been available. While it may take time for companies to catch up with any regulatory changes, it is our hope that Syrians will soon be able to access and make use of technologies that will enable them to more freely communicate and rebuild.

For Syrian developers, the impact could be transformative. Restored access to platforms like GitHub, AWS, and Google Cloud means the ability to build, test, and deploy apps without the need for VPNs or workarounds. It opens the door to participation in global hackathons, remote work, and open-source communities—channels that are often lifelines for those in conflict zones. Students and educators stand to benefit, too. With sanctions eased, educational tools and platforms that were previously unavailable could soon be accessible. Entrepreneurs may also finally gain access to secure communications, e-commerce platforms, and the broader digital infrastructure needed to start and scale businesses. These developments could help jumpstart local economies.

Despite the good news, challenges remain. Major tech companies have historically been slow to respond to sanctions relief, often erring on the side of over-compliance to avoid liability. Many of the financial and logistical barriers—such as payment processing, unreliable internet, and ongoing conflict—will not disappear overnight.

Moreover, the lifting of sanctions is not a blanket permission slip; it’s a cautious opening. Any future geopolitical shifts or changes in U.S. foreign policy could once again cut off access, creating an uncertain digital future for Syrians.

Nevertheless, by removing barriers imposed by sanctions, the U.S. is taking a step toward recognizing that access to technology is not a luxury, but a necessity—even in sanctioned or conflict-ridden countries.

For Syrian users, the lifting of tech sanctions is more than a bureaucratic change—it’s a door, long closed, beginning to open. And for the international tech community, it’s an opportunity to re-engage, responsibly and thoughtfully, with a population that has been cut off from essential services for too long.

EFFecting Change: Pride in Digital Freedom

Wed, 06/11/2025 - 8:06pm

Join us for our next EFFecting Change livestream this Thursday! We're talking about emerging laws and platform policies that affect the digital privacy and free expression rights of the LGBT+ community, and how this echoes the experience of marginalized people across the world.

EFFecting Change Livestream Series:
Pride in Digital Freedom
Thursday, June 12th
4:00 PM - 5:00 PM Pacific - Check Local Time
This event is LIVE and FREE!

Join our panel featuring EFF Senior Staff Technologist Daly Barnett, EFF Legislative Activist Rindala Alajaji, Chosen Family Law Center Senior Legal Director Andy Izenson, and Woodhull Freedom Foundation Chief Operations Officer Mandy Salley while they discuss what is happening and what should change to protect digital freedom.

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Congress Can Act Now to Protect Reproductive Health Data

Wed, 06/11/2025 - 6:58pm

State, federal, and international regulators are increasingly concerned about the harms they believe the internet and new technology are causing to users of all categories. Lawmakers are currently considering many proposals that are intended to provide protections to the most vulnerable among us. Too often, however, those proposals do not carefully consider the likely unintended consequences, or even whether the law will actually reduce the harms it’s supposed to target. That’s why EFF supports Rep. Sara Jacobs’ newly reintroduced “My Body, My Data” Act, which will protect the privacy and safety of people seeking reproductive health care, while maintaining important constitutional protections and avoiding any erosion of end-to-end encryption.

Privacy fears should never stand in the way of healthcare. That's why this common-sense bill will require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone requests.

These restrictions apply to companies that collect personal information related to a person’s reproductive or sexual health. That includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature, or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services.

We are proud to join the Center for Democracy and Technology, Electronic Privacy Information Center, National Partnership for Women & Families, Planned Parenthood Federation of America, Reproductive Freedom for All, Physicians for Reproductive Health, National Women’s Law Center, National Abortion Federation, Catholics for Choice, National Council of Jewish Women, Power to Decide, United for Reproductive & Gender Equity, Indivisible, Guttmacher, National Network of Abortion Funds, and All* Above All in support of this bill.

In addition to the restrictions on company data processing, this bill also provides people with necessary rights to access and delete their reproductive health information. Companies must also publish a privacy policy, so that everyone can understand what information companies process and why. It also ensures that companies are held to public promises they make about data protection and gives the Federal Trade Commission the authority to hold them to account if they break those promises. 

The bill also lets people take on companies that violate their privacy with a strong private right of action. Empowering people to bring their own lawsuits not only places more control in the individual's hands, but also ensures that companies will not take these regulations lightly. 

Finally, while Rep. Jacobs' bill establishes an important national privacy foundation for everyone, it also leaves room for states to pass stronger or complementary laws to protect the data privacy of those seeking reproductive health care. 

We thank Rep. Jacobs and Sens. Mazie Hirono and Ron Wyden for taking up this important bill and using it as an opportunity not only to protect those seeking reproductive health care, but also to highlight why data privacy is an important element of reproductive justice.

Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON 33

Tue, 06/10/2025 - 9:17pm

Hacker Summer Camp is almost here... and with it comes the Third Annual EFF Benefit Poker Tournament at DEF CON 33 hosted by security expert Tarah Wheeler.

Please join us at the same place and time as last year: Friday, August 8th, at high noon at the Horseshoe Poker Room. The fees haven’t changed; it’s still $250 to register plus $100 the day of the tournament with unlimited rebuys. (AND your registration donation covers your EFF membership for the year.) 

Tarah Wheeler—EFF board member and resident poker expert—has been working hard on the tournament since last year! We will have Lintile as emcee this year, and there will be bug bounties! When you take someone out of the tournament, they will give you a pin. Prizes—and major bragging rights—go to the player with the most bounty pins. Be sure to register today and see Lintile in action!

Did we mention there will be Celebrity Bounties? Knock out Wendy Nather, Chris “WeldPond” Wysopal, or Jake “MalwareJake” Williams and get neat EFF swag and the respect of your peers! Plus, as always, knock out Tarah’s dad Mike, and she donates $250 to EFF in your name!

Register Now

Find Full Event Details and Registration

Have a friend that might be interested but not sure how to play? Have you played some poker before but could use a refresher? Join poker pro Mike Wheeler (Tarah’s dad) and celebrities for a free poker clinic from 11:00 am-11:45 am just before the tournament. Mike will show you the rules, strategy, table behavior, and general Vegas slang at the poker table. Even if you know poker pretty well, come a bit early and help out.

Register today and reserve your deck. Be sure to invite your friends to join you!

 

Oppose STOP CSAM: Protecting Kids Shouldn’t Mean Breaking the Tools That Keep Us Safe

Tue, 06/10/2025 - 7:08pm

A Senate bill re-introduced this week threatens security and free speech on the internet. EFF urges Congress to reject the STOP CSAM Act of 2025 (S. 1829), which would undermine services offering end-to-end encryption and force internet companies to take down lawful user content.   

TAKE ACTION

Tell Congress Not to Outlaw Encrypted Apps

As in the version introduced last Congress, S. 1829 purports to limit the online spread of child sexual abuse material (CSAM), also known as child pornography. CSAM is already highly illegal. Existing law already requires online service providers who have actual knowledge of “apparent” CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC). NCMEC then forwards actionable reports to law enforcement agencies for investigation. 

S. 1829 goes much further than current law and threatens to punish any service that works to keep its users secure, including those that do their best to eliminate and report CSAM. The bill applies to “interactive computer services,” which broadly includes private messaging and email apps, social media platforms, cloud storage providers, and many other internet intermediaries and online service providers. 

The Bill Threatens End-to-End Encryption

The bill makes it a crime to intentionally “host or store child pornography” or knowingly “promote or facilitate” the sexual exploitation of children. The bill also opens the door for civil lawsuits against providers for the intentional, knowing or even reckless “promotion or facilitation” of conduct relating to child exploitation, the “hosting or storing of child pornography,” or for “making child pornography available to any person.”  

The terms “promote” and “facilitate” are broad, and civil liability may be imposed based on a low “recklessness” state-of-mind standard. This means a court could find an app or website liable for hosting CSAM even if the app or website did not know it was hosting CSAM, including because the provider employed end-to-end encryption and could not view the content its users uploaded.

Creating new criminal and civil claims against providers based on broad terms and low standards will undermine digital security for all internet users. Because the law already prohibits the distribution of CSAM, the bill’s broad terms could be interpreted as reaching more passive conduct, like merely providing an encrypted app.  

Due to the nature of their services, encrypted communications providers who receive a notice of CSAM may be deemed to have “knowledge” under the criminal law even if they cannot verify and act on that notice. And there is little doubt that plaintiffs’ lawyers will (wrongly) argue that merely providing an encrypted service that can be used to store any image—not necessarily CSAM—recklessly facilitates the sharing of illegal content.  

Affirmative Defense Is Expensive and Insufficient 

While the bill includes an affirmative defense that a provider can raise if it is “technologically impossible” to remove the CSAM without “compromising encryption,” it is not sufficient to protect our security. Online services that offer encryption shouldn’t have to face the impossible task of proving a negative in order to avoid lawsuits over content they can’t see or control. 

First, by making this protection an affirmative defense, providers must still defend against litigation, with significant costs to their business. Not every platform will have the resources to fight these threats in court, especially newcomers that compete with entrenched giants like Meta and Google. Encrypted platforms should not have to rely on prosecutorial discretion or favorable court rulings after protracted litigation. Instead, specific exemptions for encrypted providers should be addressed in the text of the bill.  

Second, although technologies like client-side scanning break encryption, members of Congress have misleadingly claimed otherwise. Plaintiffs are likely to argue that providers who do not use these techniques are acting recklessly, leading many apps and websites to scan all of the content on their platforms and remove any content that a state court could find, even wrongfully, is CSAM.

TAKE ACTION

Tell Congress Not to Outlaw Encrypted Apps

The Bill Threatens Free Speech by Creating a New Exception to Section 230 

The bill allows a new type of lawsuit to be filed against internet platforms, accusing them of “facilitating” child sexual exploitation based on the speech of others. It does this by creating an exception to Section 230, the foundational law of the internet and online speech. Section 230 provides partial immunity to internet intermediaries when sued over content posted by their users. Without that protection, platforms are much more likely to aggressively monitor and censor users.

Section 230 creates the legal breathing room for internet intermediaries to create online spaces for people to freely communicate around the world, with low barriers to entry. However, creating a new exception that exposes providers to more lawsuits will cause them to limit that legal exposure. Online services will censor more and more user content and accounts, with minimal regard as to whether that content is in fact legal. Some platforms may even be forced to shut down or may not even get off the ground in the first place, for fear of being swept up in a flood of litigation and claims around alleged CSAM. On balance, this harms all internet users who rely on intermediaries to connect with their communities and the world at large. 

Despite Changes, A.B. 412 Still Harms Small Developers

Tue, 06/10/2025 - 6:07pm

California lawmakers are continuing to promote a bill that will reinforce the power of giant AI companies by burying small AI companies and non-commercial developers in red tape, copyright demands, and, potentially, lawsuits. After several amendments, the bill hasn’t improved much, and in some ways has actually gotten worse. If A.B. 412 is passed, it will make California’s economy less innovative and less competitive.

The Bill Threatens Small Tech Companies

A.B. 412 masquerades as a transparency bill, but it’s actually a government-mandated “reading list” that will allow rights holders to file a new type of lawsuit in state court, even as the federal courts continue to assess whether and how federal copyright law applies to the development of generative AI technologies. 

The bill would require developers—even two-person startups—to keep lists of training materials that are “registered, pre-registered or indexed” with the U.S. Copyright Office, and help rights holders create digital “fingerprints” of those works—a technical task with no established standards and no realistic path for small teams to follow. Even if it were limited to registered copyrighted material, that’s a monumental task, as we explained in March when we examined the earlier text of A.B. 412.

The bill’s amendments have made compliance even harder, since it now requires technologists to go beyond copyrighted material and somehow identify “pre-registered” copyrights. The amended bill also has new requirements that demand technologists document and keep track of when they look at works that aren’t copyrighted but are subject to exclusive rights, such as pre-1972 sound recordings—rights that, not coincidentally, are primarily controlled by large entertainment companies. 

The penalties for noncompliance are steep—up to $1,000 per day per violation—putting small developers at enormous financial risk even for accidental lapses.

The goal of this list is clear: for big content companies to more easily file lawsuits against software developers, big and small. And for most AI developers, the burden will be crushing. Under A.B. 412, a two-person startup building an open-source chatbot, or an indie developer fine-tuning a language model for disability access, would face the same compliance burdens as Google or Meta. 

Reading and Analyzing The Open Web Is Not a Crime 

It’s critical to remember that AI training is very likely protected by fair use under U.S. copyright law—a point that’s still being worked out in the courts. The idea that we should preempt that process with sweeping state regulation is not just premature; it’s dangerous.

It’s also worth noting that copyright is governed by federal law. Federal courts are already working to define the boundaries of fair use and copyright in the AI context—the California legislature should let them do their job. A.B. 412 tries to create a state-level regulatory scheme in an area that belongs in federal hands—a risky legal overreach that could further complicate an already unsettled policy space.

A.B. 412 is a solution in search of a problem. The courthouse doors are far from closed to content owners who want to dispute the use of their copyrighted works. Multiple high-profile lawsuits over the copyright status of AI training works are working their way through trial and appellate courts right now.

Scope Creep

Rather than narrowing its focus to make compliance more realistic, the latest amendments to A.B. 412 actually expand the scope of covered works. The bill now demands documentation of obscure categories of content like pre-1972 sound recordings. These recordings have rights that are often murky, and largely controlled by major media companies.

The bill also adds “preregistered” and indexed works to its coverage. Preregistration, designed to help entertainment companies punish unauthorized copying even before commercial release, expands the universe of content that developers must track—without offering any meaningful help to small creators. 

A Moat Serving Big Tech

Ironically, the companies that will benefit most from A.B. 412 are the very same large tech firms that lawmakers often claim they want to regulate. Big companies can hire teams of lawyers and compliance officers to handle these requirements. Small developers? They’re more likely to shut down, sell out, or never enter the field in the first place.

This bill doesn’t create a fairer marketplace. It builds a regulatory moat around the incumbents, locking out new competitors and ensuring that only a handful of companies have the resources to develop advanced AI systems. Truly innovative technology often comes from unknown or small companies, but A.B. 412 threatens to turn California—and anyone who does business there—into a fortress where only the biggest players survive.

A Lopsided Bill 

A.B. 412 is becoming an increasingly extreme and one-sided piece of legislation. It’s a maximalist wishlist for legacy rights-holders, delivered at the expense of small developers and the public. The result will be less competition, less innovation, and fewer choices for consumers—not more protection for creators.

This new version does close a few loopholes, and expands the period for AI developers to respond to copyright demands from 7 days to 30 days. But it seriously fails to close others: for instance, the exemption for noncommercial development applies only to work done “exclusively for noncommercial academic or governmental” institutions. That still leaves a huge window to sue hobbyists and independent researchers who don’t have university or government jobs. 

While the bill nominally exempts developers who use only public or developer-owned data, that’s a carve-out with no practical value. Like a search engine, nearly every meaningful AI system relies on mixed sources—and developers can’t realistically track the copyright status of them all.

At its core, A.B. 412 is a flawed bill that would harm the whole U.S. tech ecosystem. Lawmakers should be advancing policies that protect privacy, promote competition, and ensure that innovation benefits the public—not just a handful of entrenched interests.

If you’re a California resident, now is the time to speak out. Tell your legislators that A.B. 412 will hurt small companies, help big tech, and lock California’s economy in the past.

35 Years for Your Freedom Online

Tue, 06/10/2025 - 3:04am

Once upon a time we were promised flying cars and jetpacks. Yet we've arrived at a more complicated timeline where rights advocates can find themselves defending our hard-earned freedoms more often than shooting for the moon. In tough times, it's important to remember that your vision for the future can be just as valuable as the work you do now.

Thirty-five years ago, a small group of folks saw the coming digital future and banded together to ensure that technology would empower people, not oppress them—and EFF was born. While the dangers of corporate and state forces grew alongside the internet, EFF and supporters like you faithfully rose to the occasion. Will you help celebrate EFF’s 35th anniversary and donate in support of digital freedom?

Give today

Protect Online Privacy & Free Expression

Together we’ve won many fights for encryption, free speech, innovation, and privacy online. Yet it’s plain to see that we must keep advocating for technology users whether that’s in the courts, before lawmakers, educating the public, or creating privacy-enhancing tools. EFF members make it possible—you can lend a hand and get some great perks!

Summer Swag Is Here

We love making stuff for EFF’s members each year. It’s our way of saying thanks for supporting the mission for your rights online, and I hope it’s your way of starting a conversation about internet freedom with people in your life.

Celebrate EFF's 35th Anniversary in the digital rights movement with this EFF35 Cityscape member t-shirt by Hugh D’Andrade! EFF has a not-so-secret weapon that keeps us in the fight even when the odds are against us: we never lose sight of our vision for a better future. Choose a roomy Classic Fit Crewneck or a soft Slim Fit V-Neck.

And enjoy Lovelace-Klimtian vibes on EFF’s new Motherboard Hooded Sweatshirt by Shirin Mori. Gold details and orange poppies pop on lush forest green. Don't lose the forest for the trees—keep fighting for a world where tech supports people irl.

Join the Sustaining Donor Challenge (it’s easy)

You'll get a numbered EFF35 Challenge Coin when you become a monthly or annual Sustaining Donor by July 10. It’s that simple.

If you're already a Sustaining Donor—THANKS! You too can get an EFF 35th Anniversary Challenge Coin when you upgrade your donation. Just increase your monthly or annual gift and let us know by emailing upgrade@eff.org. Get started at eff.org/recurring or go to your PayPal account if you used one.

Support internet freedom with a no-fuss automated recurring donation! Over 30% of EFF members have joined as Sustaining Donors to defend digital rights (and get some great swag every year). Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.

With your help, EFF is here to stay.

Join EFF

Protect Online Privacy & Free Expression

NYC Lets AI Gamble With Child Welfare

Mon, 06/09/2025 - 5:36pm

The Markup revealed in its reporting last month that New York City’s Administration for Children’s Services (ACS) has been quietly deploying an algorithmic tool to categorize families as “high risk.” Using a grab-bag of factors like neighborhood and mother’s age, this AI tool can put families under intensified scrutiny without proper justification or oversight.

ACS knocking on your door is a nightmare for any parent, with the risk that any mistake could break up your family and send your children to the foster care system. Putting a family under such scrutiny shouldn’t be taken lightly, and shouldn’t be a testing ground for automated decision-making by the government.

This “AI” tool, developed internally by ACS’s Office of Research Analytics, scores families for “risk” using 279 variables and subjects those deemed highest-risk to intensified scrutiny. The lack of transparency, accountability, or due process protections demonstrates that ACS has learned nothing from the failures of similar products in the realm of child services.

The algorithm operates in complete secrecy, and the harms from this opaque “AI theater” are not theoretical. The 279 variables are derived solely from cases back in 2013 and 2014 in which children were seriously harmed. It is unclear how many cases were analyzed, what kind of auditing and testing (if any) was conducted, and whether including data from other years would have altered the scoring.

What we do know is disturbing: Black families in NYC face ACS investigations at seven times the rate of white families and ACS staff has admitted that the agency is more punitive towards Black families, with parents and advocates calling its practices “predatory.” It is likely that the algorithm effectively automates and amplifies this discrimination.

Despite the disturbing lack of transparency and accountability, ACS has used this system to subject families it ranks as “highest risk” to additional scrutiny, including possible home visits, calls to teachers and family, or consultations with outside experts. But those families, their attorneys, and even caseworkers don’t know when and why the system flags a case, making it difficult to challenge the circumstances or process that leads to this intensified scrutiny.

This is not the only instance in which the use of AI tools in the child services system has run into systemic bias. Back in 2022, the Associated Press reported that Carnegie Mellon researchers found that from August 2016 to May 2018, Allegheny County in Pennsylvania used an algorithmic tool that flagged 32.5% of Black children for “mandatory” investigation compared to just 20.8% of white children, all while social workers disagreed with the algorithm’s risk scores about one-third of the time.

The Allegheny system operates with the same toxic combination of secrecy and bias now plaguing NYC. Families and their attorneys can never know their algorithmic scores, making it impossible to challenge decisions that could destroy their lives. When a judge asked to see a family’s score in court, the county resisted, claiming it didn't want to influence legal proceedings with algorithmic numbers, which suggests that the scores are too unreliable for judicial scrutiny yet acceptable for targeting families.

Elsewhere, these biased systems have been successfully challenged. The developers of the Allegheny tool had already had their product rejected in New Zealand, where researchers correctly identified that the tool would likely result in more Māori families being tagged for investigation. Meanwhile, California spent $195,273 developing a similar tool before abandoning it in 2019, due in part to concerns about racial equity.

Governmental deployment of automated and algorithmic decision-making not only perpetuates social inequalities, but also removes mechanisms for accountability when agencies make mistakes. The state should not be using these tools for rights-determining decisions, and any other uses must be subject to vigorous scrutiny and independent auditing to ensure the public’s trust in the government’s actions.

Criminalizing Masks at Protests Is Wrong

Mon, 06/09/2025 - 4:37pm

There has been a crescendo of states attempting to criminalize the wearing of face coverings while attending protests. Now the President has demanded, in the context of ongoing protests in Los Angeles: “ARREST THE PEOPLE IN FACE MASKS, NOW!”

But the truth is: whether you are afraid of catching an airborne illness from your fellow protestors, or you are concerned about reprisals from police or others for expressing your political opinions in public, you should have the right to wear a mask. Attempts to criminalize masks at protests fly in the face of a right to privacy.

Anonymity is a fundamental human right.

In terms of public health, wearing a mask while in a crowd can be a valuable tool to prevent the spread of communicable illnesses. This can be essential for people with compromised immune systems who still want to exercise their First Amendment-protected right to protest.

Moreover, wearing a mask is a perfectly legitimate surveillance self-defense practice during a protest. There has been a massive proliferation of surveillance camera networks, face recognition technology, and databases of personal information. There is also a long history of law enforcement harassing and surveilling people for publicly criticizing or opposing law enforcement practices and other government policies. What’s more, non-governmental actors may try to identify protesters in order to retaliate against them, for example, by limiting their employment opportunities.

All of this may chill our willingness to speak publicly or attend a protest for a cause we believe in. Many people would be less willing to attend a rally or march if they knew that a drone or helicopter, equipped with a camera, would take repeated passes over the crowd, and that police would later use face recognition to scan everyone’s faces and create a list of protest attendees. This would make many people rightfully concerned about surveillance and harassment from law enforcement.

Anonymity is a fundamental human right. EFF has long advocated for anonymity online. We’ve also supported low-tech methods to protect our anonymity from high-tech snooping in public places; for example, we’ve supported legislation to allow car owners to use license plate covers when their cars are parked to reduce their exposure to ALPRs.

A word of caution. No surveillance self-defense technique is perfect. Technology companies are trying to develop ways to use face recognition technology to identify people wearing masks. But if somebody wants to hide their face to try to avoid government scrutiny, the government should not punish them.

While members of the public have a right to wear a mask when they protest, law enforcement officials should not wear a mask when they arrest protesters and others. An elementary principle of police accountability is to require uniformed officers to identify themselves to the public; this discourages officer misconduct, and facilitates accountability if an officer violates the law. This is one reason EFF has long supported the First Amendment right to record on-duty police, including ICE officers.

For these reasons, EFF believes it is wrong for state legislatures, and now federal law enforcement, to try to criminalize or punish mask wearing at protests. It is especially wrong in moments like the present, when the government is taking extreme measures to crack down on the civil liberties of protesters.

Privacy Victory! Judge Grants Preliminary Injunction in OPM/DOGE Lawsuit

Mon, 06/09/2025 - 3:28pm
Court to Decide Scope of Injunction Later This Week

NEW YORK–In a victory for personal privacy, a New York federal district court judge today granted a preliminary injunction in a lawsuit challenging the U.S. Office of Personnel Management’s (OPM) disclosure of records to DOGE and its agents.

Judge Denise L. Cote of the U.S. District Court for the Southern District of New York found that OPM violated the Privacy Act and bypassed its established cybersecurity practices, in violation of the Administrative Procedure Act. The court will decide the scope of the injunction later this week. The plaintiffs have asked the court to halt DOGE agents’ access to OPM records and for DOGE and its agents to delete any records that have already been disclosed. OPM’s databases hold highly sensitive personal information about tens of millions of federal employees, retirees, and job applicants.

“The plaintiffs have shown that the defendants disclosed OPM records to individuals who had no legal right of access to those records,” Cote found. “In doing so, the defendants violated the Privacy Act and departed from cybersecurity standards that they are obligated to follow. This was a breach of law and of trust. Tens of millions of Americans depend on the Government to safeguard records that reveal their most private and sensitive affairs.”

The Electronic Frontier Foundation (EFF), Lex Lumina LLP, Democracy Defenders Fund, and The Chandra Law Firm requested the injunction as part of their ongoing lawsuit against OPM and DOGE on behalf of two labor unions and individual current and former government workers across the country. The lawsuit’s union plaintiffs are the American Federation of Government Employees AFL-CIO and the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1 AFL-CIO.

The lawsuit argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to DOGE agents in violation of the Administrative Procedure Act and the federal Privacy Act of 1974, a watershed anti-surveillance statute that prevents the federal government from abusing our personal information. In addition to seeking to permanently halt the disclosure of further OPM data to DOGE, the lawsuit asks for the deletion of any data previously disclosed by OPM to DOGE.

The federal government is the nation’s largest employer, and the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, social security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs; nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records.

OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure. 

With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. This law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President. 

A number of courts have already found that DOGE’s activities at other agencies likely violate the law, including at the Social Security Administration and the Treasury Department.

For the preliminary injunction: https://www.eff.org/document/afge-v-opm-opinion-and-order-granting-preliminary-injunction
For the complaint: https://www.eff.org/document/afge-v-opm-complaint
For more about the case: https://www.eff.org/cases/american-federation-government-employees-v-us-office-personnel-management

Contacts:
Electronic Frontier Foundation: press@eff.org
Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com

Victory! Austin Organizers Cancel City's Flock ALPR Contract

Fri, 06/06/2025 - 6:38pm

Austin organizers turned out to rebuke the city’s misguided contract with Flock Safety—and won. This successful pushback from the community means that at the end of the month, Austin police will no longer be able to use the surveillance network of automated license plate readers (ALPRs) across the city.

Two years ago, Austin City Council approved this controversial contract, despite strong local opposition. We knew then that these AI-driven surveillance systems weren’t just creepy; they are prone to misuse and mistakes that have a real human toll.

In the years since, this concern has materialized time and time again, and the risks have now heightened with the potential of the data being used against immigrants and people seeking trans or reproductive healthcare. Most recently, Texas authorities were implicated in a 404 Media report on the use of these cameras to target abortion seekers.

Just a few days before the scheduled vote, an audit of the Austin Police Department program revealed that over 20% of ALPR database searches lacked proper documentation or justification, in violation of department policy. The audit also found that contract language allowed for data retention beyond council-mandated limits and for potential sharing with outside agencies.

Fortunately, more than 30 community groups, including Electronic Frontier Alliance member EFF-Austin, joined forces to successfully prevent contract renewal.

EFF-Austin Executive Director Kevin Welch told us that, "Today's victory in Austin is a tribute to what happens when a coalition of activist groups come together in common cause and stand in solidarity against the expansion of the surveillance state.” He went on to say, “But the fight is not over. While the Flock contract has been discontinued, Austin still makes use of ALPRs via its contract with Axon, and [the] council may attempt to bring this technology back [...] That being said, real progress in educating elected officials on the dangers of these technologies has been made.” 

This win in a city as large as Austin lends momentum to the larger trend across the country where local communities are pushing back against ALPR surveillance. EFF continues to stand with these local efforts, and encourages other organizers to reach out at organizing [at] eff.org in the fight against local surveillance.

Speaking to this trend, Kevin added, “As late as Monday, it didn't look like we had the votes to make this victory happen. While these are dark times, there are still lights burning in the dark, and through collective action, we can burn bright."

EFF to Department of Homeland Security: No Social Media Surveillance of Immigrants

Fri, 06/06/2025 - 4:51pm

EFF submitted comments to the Department of Homeland Security (DHS) and its subcomponent U.S. Citizenship and Immigration Services (USCIS), urging them to abandon a proposal to collect social media identifiers on forms for immigration benefits. This collection would mark yet a further expansion of the government’s efforts to subject immigrants to social media surveillance, invading their privacy and chilling their free speech and associational rights for fear of being denied key immigration benefits.

Specifically, the proposed rule would require applicants to disclose their social media identifiers on nine immigration forms, including applications for permanent residency and naturalization, impacting more than 3.5 million people annually. USCIS’s purported reason for this collection is to assist with identity verification, as well as vetting and national security screening, to comply with Executive Order 14161. USCIS separately announced that it would look for “antisemitic activity” on social media as grounds for denying immigration benefits, which appears to be related to the proposed rule, although it is not expressly included in the rule.

Additionally, a day after the proposed rule was published, Axios reported that the State Department, the Department of Justice, and DHS confirmed a joint collaboration called “Catch and Revoke,” using AI tools to review student visa holders’ social media accounts for speech related to “pro-Hamas” sentiment or “antisemitic activity.”

If the proposed rule sounds familiar, it’s because this is not the first time the government has proposed the collection of social media identifiers to monitor noncitizens. In 2019, for example, the State Department implemented a policy requiring visa and visa waiver applicants to the United States to disclose the identifiers they used on some 20 social media platforms over the last five years—affecting over 14.7 million people annually. EFF joined a large contingent of civil and human rights organizations in objecting to that collection. That policy is now the subject of ongoing litigation in Doc Society v. Blinken, a case brought by two documentary film organizations, who argue that the rule affects the expressive and associational rights of their members by impeding their ability to collaborate and engage with filmmakers around the world. EFF filed two amicus briefs in that case.

What distinguishes this proposed rule from the State Department’s existing program is that most, if not all, of the noncitizens who would be affected currently legally reside in the United States, allowing them to benefit from constitutional protections.

In our comments, we explained that surveillance of even public-facing social media can implicate privacy interests by aggregating a wealth of information about both an applicant for immigration benefits and the people in their networks, including U.S. citizens. This is because of the quantity and quality of information available on social media, and because of its inherently interconnected nature.

We also argued that the proposed rule appears to allow for the collection and consideration of First Amendment-protected speech, including core political speech, and anonymous and pseudonymous speech. This inevitably leads to a chilling effect because immigration benefits applicants will have to choose between potentially forgoing key benefits or self-censoring to avoid government scrutiny. That is, to help ensure that a naturalized citizenship application is not rejected, for example, an applicant may avoid speaking out on social media about American foreign policy or expressing views about other political topics that may be considered controversial by the federal government—even when other Americans are free to do so.

We urge DHS and USCIS to abandon this dangerous proposal.

EFF to Court: Young People Have First Amendment Rights

Fri, 06/06/2025 - 12:39pm

Utah cannot stifle young people’s First Amendment rights to use social media to speak about politics, create art, discuss religion, or to hear from other users discussing those topics, EFF argued in a brief filed this week.

EFF filed the brief in NetChoice v. Brown, a constitutional challenge to the Utah Minor Protection in Social Media Act. The law prohibits young people from speaking to anyone on social media outside of the users with whom they are connected or those users’ connections. It also requires social media services to make young people’s accounts invisible to anyone outside of that same subgroup of users. The law requires parents to consent before minors can change those default restrictions.

To implement these restrictions, the law requires a social media service to verify every user’s age so that it knows whether to apply those speech-restricting settings.

The law therefore burdens the First Amendment rights of both young people and adults, the friend-of-the-court brief argued. The ACLU, Freedom to Read Foundation, LGBT Technology Institute, TechFreedom, and Woodhull Freedom Foundation joined EFF on the brief.

Utah, like many states across the country, has sought to significantly restrict young people's ability to use social media. But "Minors enjoy the same First Amendment right as adults to access and engage in protected speech on social media," the brief argues. As the brief details, minors use social media to express political opinions, create art, practice religion, and find community.

Utah cannot impose such a severe restriction on minors’ ability to speak and to hear from others on social media without violating the First Amendment. “Utah has effectively blocked minors from being able to speak to their communities and the larger world, frustrating the full exercise of their First Amendment rights,” the brief argues.

Moreover, the law “also violates the First Amendment rights of all social media users—minors and adults alike—because it requires every user to prove their age, and compromise their anonymity and privacy, before using social media.”

Requiring internet users to provide their ID or other proof of their age could block people from accessing lawful speech if they don’t have the right form of ID, the brief argues. And requiring users to identify themselves infringes on people’s right to be anonymous online. That may deter people from joining certain social media services or speaking on certain topics, as people often rely on anonymity to avoid retaliation for their speech.

Finally, requiring users to provide sensitive personal information increases their risk of future privacy and security invasions, the brief argues.

Keeping the Web Up Under the Weight of AI Crawlers

Thu, 06/05/2025 - 7:13pm

If you run a site on the open web, chances are you've noticed a big increase in traffic over the past few months, whether or not your site has been getting more human visitors. You're not alone: operators everywhere have observed a drastic increase in automated traffic (bots), and in most cases they attribute much or all of this new traffic to AI companies.

Background

AI systems, in particular Large Language Models (LLMs) and generative AI (genAI), rely on compiling as much information from relevant sources (e.g., "texts written in English" or "photographs") as possible in order to build a functional and persuasive model that users will later interact with. While AI companies in part distinguish themselves by what data their models are trained on, possibly the greatest source of information, one freely available to all of us, is the open web.

To gather up all that data, companies and researchers use automated programs called scrapers (sometimes referred to by the more general term "bots") to "crawl" over the links between webpages, saving the kinds of information they're tasked with collecting as they go. Scrapers are tools with a long, and often beneficial, history: services like search engines, the Internet Archive, and all kinds of scientific research rely on them.

When scrapers are not deployed thoughtfully, however, they can contribute to higher hosting costs, lower performance, and even site outages, particularly when site operators face so many of them operating at the same time. In the long run, all this may lead some sites to shut down rather than continue bearing the brunt.

For-profit AI companies must ensure they do not poison the well of the open web they rely on in a short-sighted rush for training data.

Bots: Read the Room

There are existing best practices that those who use scrapers should follow. When bots and their operators ignore these guideposts, they degrade performance, in the worst case take a site down for all users, and signal to site operators (sometimes explicitly) that the bots' access can or should be cut off. Some companies appear to follow these practices most of the time, but we see increasing reports and evidence of new bots that don't.

First, scrapers should follow the instructions given in a site's robots.txt file, whether those instructions are to back off to a certain crawl rate, exclude certain paths, or not crawl the site at all (see the sketch below these three practices).

Second, bots should send their requests with a clearly labeled User Agent string that indicates their operator, their purpose, and a means of contact.

Third, those running scrapers should provide a process for site operators to request back-offs, rate caps, and exclusions, and to report problematic behavior, via the contact information or response forms linked in the User Agent string.
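To make the first two of these practices concrete, here is a minimal sketch of a polite fetch in Python, using only the standard library's urllib.robotparser and urllib.request. The crawler name, contact URL, and target site are hypothetical placeholders, not a real bot:

```python
import time
import urllib.request
import urllib.robotparser

# Hypothetical crawler identity: a name, a version, and a contact URL.
USER_AGENT = "ExampleBot/1.0 (+https://example.org/bot-info)"
SITE = "https://example.com"

# Fetch and parse the site's robots.txt before crawling anything.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()

def polite_fetch(path: str) -> bytes | None:
    url = f"{SITE}{path}"
    # Honor Disallow rules aimed at this bot (or at all bots).
    if not robots.can_fetch(USER_AGENT, url):
        return None
    # Honor a Crawl-delay directive if the site sets one.
    delay = robots.crawl_delay(USER_AGENT)
    if delay:
        time.sleep(delay)
    # Send a clearly labeled User Agent so operators can identify
    # the bot and reach its operator.
    request = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        return response.read()
```

Honoring can_fetch() and crawl_delay() covers the robots.txt directives most sites rely on, and the labeled User Agent string gives operators a way to identify the bot and contact its owner.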

Mitigations for Site Operators

Of course, if you're running a website dealing with a flood of crawling traffic, waiting for those bots to change their behavior for the better might not be realistic. Here are a few suggested, if imperfect, mitigations based in part on our own sometimes frustrating experiences.

First, use a caching layer. In most cases a Content Delivery Network (CDN) or an "edge platform" (essentially a newer iteration of a CDN) can provide this for you, and some services offer a free tier for non-commercial users. There are also a number of great projects if you prefer to self-host. Some of the tools we've used for caching include varnish, memcached, and redis.

Second, convert dynamic pages to static content where you can, to prevent resource-intensive database reads. In some cases this may reduce the need for caching.

Third, use targeted rate limiting to slow down bots without taking your whole site down. But know this can get difficult when scrapers try to disguise themselves with misleading User Agent strings or by spreading a fleet of crawlers out across many IP addresses.
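To illustrate what targeted rate limiting can look like, here is a minimal per-client token bucket sketch in Python. The bucket size and refill rate are hypothetical, and in practice you would more likely use your web server's or CDN's built-in limiter (nginx's limit_req, for example) than roll your own:

```python
import time

# Hypothetical limits: allow bursts of up to 10 requests,
# refilled at 1 token per second.
BUCKET_SIZE = 10.0
REFILL_PER_SEC = 1.0

# One token bucket per client key (for example, an IP address):
# key -> (tokens remaining, timestamp of last update)
_buckets: dict[str, tuple[float, float]] = {}

def allow_request(client_key: str) -> bool:
    """Return True to serve the request, False to answer with a 429."""
    now = time.monotonic()
    tokens, last = _buckets.get(client_key, (BUCKET_SIZE, now))
    # Refill tokens in proportion to the time elapsed since this
    # client's previous request, capped at the bucket size.
    tokens = min(BUCKET_SIZE, tokens + (now - last) * REFILL_PER_SEC)
    if tokens >= 1.0:
        _buckets[client_key] = (tokens - 1.0, now)
        return True
    _buckets[client_key] = (tokens, now)
    return False
```

Note that keying the bucket on an IP address alone is exactly what a fleet of crawlers spread across many addresses defeats, so treat this as one layer among several.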

Other mitigations such as client-side validation (e.g. CAPTCHAs or proof-of-work) and fingerprinting carry privacy and usability trade-offs, and we warn against deploying them without careful forethought.

Where Do We Go From Here?

To reiterate, whatever one's opinion of these particular AI tools, scraping itself is not the problem. Automated access is a fundamental technique of archivists, computer scientists, and everyday users that we hope is here to stay, as long as it can be done non-destructively. However, we realize that not all implementers will follow our suggestions for bots above, and that our mitigations are both technically demanding and incomplete.

Because we see so many bots operating for the same purpose at the same time, there seems to be an opportunity to provide these automated data consumers with tailored data providers, removing the need for every AI company to scrape every website, seemingly every day.

And on the operators' end, we hope to see more web-hosting and framework technology that is built with an awareness of these issues from day one, perhaps building in responses like just-in-time static content generation or dedicated endpoints for crawlers.
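As a hypothetical sketch of that second idea, a server could recognize declared crawlers by their User Agent string and hand them a pre-generated static snapshot rather than rendering pages from the database. The bot names, port, and snapshot location below are assumptions, and the example uses only Python's standard library:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

# Hypothetical User Agent substrings of declared, well-labeled crawlers.
KNOWN_BOTS = ("ExampleBot", "OtherCrawler")
SNAPSHOT_DIR = Path("static_snapshot")  # pre-generated static pages

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        if any(bot in user_agent for bot in KNOWN_BOTS):
            # Serve crawlers the cheap, pre-rendered snapshot.
            body = (SNAPSHOT_DIR / "index.html").read_bytes()
        else:
            # Normal (possibly dynamic) rendering path for human visitors.
            body = render_dynamic_page()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def render_dynamic_page() -> bytes:
    # Placeholder for the expensive database-backed rendering path.
    return b"<html><body>dynamic content</body></html>"

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```

A production version would serve per-path snapshots and regenerate them just in time, but the routing idea is the same.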

EFF to the FTC: DMCA Section 1201 Creates Anti-Competitive Regulatory Barriers

Thu, 06/05/2025 - 6:33pm

As part of a multi-pronged effort toward deregulation, the Federal Trade Commission has asked the public to identify any and all "anti-competitive" regulations. Working with our friends at Authors Alliance, EFF answered, calling attention to a set of anti-competitive regulations that many don't recognize as such: the triennial exemptions to Section 1201 of the Digital Millennium Copyright Act, and the cumbersome process on which they depend.

Copyright grants exclusive rights to creators, but only as a means to serve the broader public interest. Fair use and other limitations play a critical role in that service by ensuring that the public can engage in commentary, research, education, innovation, and repair without unjustified restriction. Section 1201 effectively forbids fair uses where those uses require circumventing a software lock (a.k.a. a technological protection measure) on a copyrighted work.

Congress realized that Section 1201 had this effect, so it adopted a safety valve—a triennial process by which the Library of Congress could grant exemptions. Under the current rulemaking framework, however, this intended safety valve functions more like a chokepoint. Individuals and organizations seeking an exemption to engage in lawful fair use must navigate a burdensome, time-consuming administrative maze. The existing procedural and regulatory barriers ensure that the rulemaking process—and Section 1201 itself—thwarts, rather than serves, the public interest.

The FTC does not, of course, control Congress or the Library of Congress. But we hope its investigation and any resulting report on anti-competitive regulations will recognize the negative effects of Section 1201 and that the triennial rulemaking process has failed to be the check Congress intended. Our comments urge the FTC to recommend that Congress repeal or reform Section 1201. At a minimum, the FTC should advocate for fundamental revisions to the Library of Congress’s next triennial rulemaking process, set for 2026, so that copyright law can once again fulfill its purpose: to support—rather than thwart—competitive and independent innovation.

You can find the full comments here.

The Dangers of Consolidating All Government Information

Thu, 06/05/2025 - 1:15pm

The Trump administration has been heavily invested in consolidating all of the government's information into a single searchable, or perhaps AI-queryable, super database. The compiling of all of this information is being done with the dubious justification of efficiency and modernization. However, in many cases this information was originally siloed for important reasons: to protect your privacy, to prevent different branches of government from using sensitive data to punish or harass you, and to preserve the trust in and legitimacy of important civic institutions.

Attempts to Centralize All the Government’s Information About You

This process of consolidation has taken several forms. The purported Department of Government Efficiency (DOGE) has been seeking access to the data and computer systems of dozens of government agencies. According to one report, as of April 2025 this access had given DOGE hundreds of pieces of personal information about people living in the United States: everything from financial and tax information to health and healthcare information and even computer IP addresses. EFF is currently engaged in a lawsuit against the U.S. Office of Personnel Management (OPM) and DOGE for disclosing personal information about government employees to people who don't need it, in violation of the Privacy Act of 1974.

Another key maneuver in centralizing government information has been to steamroll the protections that were in place to keep this information away from agencies that don't need, or could abuse, it. This has been done by ignoring the law, as the Trump administration did when it ordered the IRS to make tax information available for the purposes of immigration enforcement. It has also been done through the creation of new (and questionable) executive mandates that all executive branch information be made available to the White House or any other agency. Specifically, this has been attempted with the March 20, 2025 Executive Order, "Stopping Waste, Fraud, and Abuse by Eliminating Information Silos," which mandates that the federal government, as well as all 50 state governments, allow other agencies "full and prompt access to all unclassified agency records, data, software systems, and information technology systems." But executive orders can't override privacy laws passed by Congress.

Not only is the Trump administration trying to consolidate all of this data institutionally and statutorily, it is also trying to do so technologically. A new report revealed that the administration has contracted Palantir, the surveillance and security data-analytics firm, to fuse data from multiple agencies, including the Department of Homeland Security and Health and Human Services.

Why it Matters and What Can Go Wrong 

The consolidation of government records means more government power that can be abused. Different government agencies necessarily collect information to provide essential services or collect taxes. The danger comes when the government begins pooling that data and using it for reasons unrelated to the purposes for which it was collected.

Imagine, for instance, a government employee being denied health-related public services or support because of information gathered about them by the agency that handles HR records. Or imagine a person's research topics, as described in their federal grant applications, being used to weigh whether that person should be allowed to renew a passport.

Marginalized groups are most vulnerable to this kind of abuse; tax records, for example, could be used to locate individuals for immigration enforcement. Government records could also be weaponized against people who receive food subsidies, apply for student loans, or take government jobs.

Congress recognized these dangers 50 years ago when it passed the Privacy Act to put strict limits on the government’s use of large databases. At that time, trust in the government eroded after revelations about White House enemies’ lists, misuse of existing government personality profiles, and surveillance of opposition political groups.

There's another important issue at stake: the future of federal and state governments that actually have the information and capacity to help people. The more people learn to distrust the government, because they worry that the information they give certain agencies may later be used to hurt them, the less likely they will be to participate or to seek the help they need. And the fewer people who engage with these agencies, the less likely the agencies themselves are to survive. Trust is a key part of any relationship between the governed and the government, and when that trust is abused or jettisoned, the long-term harms are irreparable.

EFF, like dozens of other organizations, will continue to fight to ensure personal records held by the government are only used and disclosed as needed and only for the purpose they were collected, as federal law demands. 

Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management

Judges Stand With Law Firms (and EFF) Against Trump’s Executive Orders

Thu, 06/05/2025 - 11:00am

“Pernicious.”

“Unprecedented... cringe-worthy.”

“Egregious.”

“Shocking.”

These are just some of the words that federal judges used in recent weeks to describe President Trump’s politically motivated and vindictive executive orders targeting law firms that have employed people or represented clients or causes he doesn’t like. 

But our favorite word by far is “unconstitutional.” 

EFF was one of the very first legal organizations to publicly come out in support of Perkins Coie when it became the first law firm to challenge the legality of President Trump’s executive order targeting it. Since then, EFF has joined four amicus briefs in support of targeted law firms, and in all four cases, judges from the U.S. District Court for the District of Columbia have indicated they’re having none of it. Three have issued permanent injunctions deeming the executive orders null and void, and the fourth seems to be headed in that same direction. 

Trump issued his EO against Perkins Coie on March 6. In a May 2 opinion finding the order unconstitutional and issuing a permanent injunction, Senior Judge Beryl A. Howell wrote:  

“By its terms, this Order stigmatizes and penalizes a particular law firm and its employees—from its partners to its associate attorneys, secretaries, and mailroom attendants—due to the Firm’s representation, both in the past and currently, of clients pursuing claims and taking positions with which the current President disagrees, as well as the Firm’s own speech,” Howell wrote. “In a cringe-worthy twist on the theatrical phrase ‘Let’s kill all the lawyers,’ EO 14230 takes the approach of ‘Let’s kill the lawyers I don’t like,’ sending the clear message: lawyers must stick to the party line, or else.” 

“Using the powers of the federal government to target lawyers for their representation of clients and avowed progressive employment policies in an overt attempt to suppress and punish certain viewpoints, … is contrary to the Constitution, which requires that the government respond to dissenting or unpopular speech or ideas with ‘tolerance, not coercion.’” 

 Trump issued a similar EO against Jenner & Block on March 25. In a May 23 opinion also finding the order unconstitutional and issuing a permanent injunction, Senior Judge John D. Bates wrote: 

“This order—which takes aim at the global law firm Jenner & Block—makes no bones about why it chose its target: it picked Jenner because of the causes Jenner champions, the clients Jenner represents, and a lawyer Jenner once employed. Going after law firms in this way is doubly violative of the Constitution. Most obviously, retaliating against firms for the views embodied in their legal work—and thereby seeking to muzzle them going forward—violates the First Amendment’s central command that government may not ‘use the power of the State to punish or suppress disfavored expression.’ Nat’l Rifle Ass’n of Am. v. Vullo, 602 U.S. 175, 188 (2024). More subtle but perhaps more pernicious is the message the order sends to the lawyers whose unalloyed advocacy protects against governmental viewpoint becoming government-imposed orthodoxy. This order, like the others, seeks to chill legal representation the administration doesn’t like, thereby insulating the Executive Branch from the judicial check fundamental to the separation of powers. It thus violates the Constitution and the Court will enjoin its operation in full.” 

 Trump issued his EO targeting WilmerHale on March 27. In a May 27 opinion finding that order unconstitutional, Senior Judge Richard J. Leon wrote: 

“The cornerstone of the American system of justice is an independent judiciary and an independent bar willing to tackle unpopular cases, however daunting. The Founding Fathers knew this! Accordingly, they took pains to enshrine in the Constitution certain rights that would serve as the foundation for that independence. Little wonder that in the nearly 250 years since the Constitution was adopted no Executive Order has been issued challenging these fundamental rights. Now, however, several Executive Orders have been issued directly challenging these rights and that independence. One of these Orders is the subject of this case. For the reasons set forth below, I have concluded that this Order must be struck down in its entirety as unconstitutional. Indeed, to rule otherwise would be unfaithful to the judgment and vision of the Founding Fathers!” 

“Taken together, the provisions constitute a staggering punishment for the firm’s protected speech! The Order is intended to, and does in fact, impede the firm’s ability to effectively represent its clients!” 

“Even if the Court found that each section could be grounded in Executive power, the directives set out in each section clearly exceed that power! The President, by issuing the Order, is wielding his authority to punish a law firm for engaging in litigation conduct the President personally disfavors. Thus, to the extent the President does have the power to limit access to federal buildings, suspend and revoke security clearances, dictate federal hiring, and manage federal contracts, the Order surpasses that authority and in fact usurps the Judiciary’s authority to resolve cases and sanction parties that come before the courts!” 

The fourth case in which EFF filed a brief involved Trump’s April 9 EO against Susman Godfrey. In that case, Judge Loren L. AliKhan is still considering whether to issue a permanent injunction, but on April 15 gave a fiery ruling from the bench in granting a temporary restraining order against the EO’s enforcement. 

“The executive order is based on a personal vendetta against a particular firm, and frankly, I think the framers of our Constitution would see this as a shocking abuse of power,” AliKhan said, as quoted by Courthouse News Service. "The government cannot hold lawyers hostage to force them to agree with it, allowing the government to coerce private business, law firms and lawyers solely on the basis of their view is antithetical to our constitutional republic and hampers this court, and every court’s, ability to adjudicate these cases.” 

And, as quoted by the New York Times: “Law firms across the country are entering into agreements with the government out of fear that they will be targeted next and that coercion is plain and simple. And while I wish other firms were not capitulating as readily, I admire firms like Susman for standing up and challenging it when it does threaten the very existence of their business. … The government has sought to use its immense power to dictate the positions that law firms may and may not take. The executive order seeks to control who law firms are allowed to represent. This immensely oppressive power threatens the very foundations of legal representation in our country.” 

As we wrote when we began filing amicus briefs in these cases, an independent legal profession is a cornerstone of democracy and the rule of law. As a nonprofit legal organization that frequently sues the federal government, EFF understands the value of this bedrock principle and how it, and First Amendment rights more broadly, are threatened by President Trump's executive orders. It is especially important that the whole legal profession speak out against these actions, particularly in light of the silence or capitulation of a few large law firms.

We’re glad the courts agree.
