EFF: Updates
Fighting to Keep Bad Patents in Check: 2025 in Review
A functioning patent system depends on one basic principle: bad patents must be challengeable. In 2025, that principle was repeatedly tested—by Congress, by the U.S. Patent and Trademark Office (USPTO), and by a small number of large patent owners determined to weaken public challenges.
Two damaging bills, PERA and PREVAIL, were reintroduced in Congress. At the same time, USPTO attempted a sweeping rollback of inter partes review (IPR), one of the most important mechanisms for challenging wrongly granted patents.
EFF pushed back—on Capitol Hill, inside the Patent Office, and alongside thousands of supporters who made their voices impossible to ignore.
Congress Weighed Bills That Would Undo Core Safeguards

The Patent Eligibility Restoration Act, or PERA, would overturn the Supreme Court’s Alice and Myriad decisions—reviving patents on abstract software ideas, and even allowing patents on isolated human genes. PREVAIL, introduced by the same main sponsors in Congress, would seriously weaken the IPR process by raising the burden of proof, limiting who can file challenges, forcing petitioners to surrender court defenses, and giving patent owners new ways to rewrite their claims mid-review.
Together, these bills would have dismantled much of the progress made over the last decade.
We reminded Congress that abstract software patents—like those we’ve seen on online photo contests, upselling prompts, matchmaking, and scavenger hunts—are exactly the kind of junk claims patent trolls use to threaten creators and small developers. We also pointed out that if PREVAIL had been law in 2013, EFF could not have brought the IPR that crushed the so-called “podcasting patent.”
EFF’s supporters amplified our message, sending thousands of messages to Congress urging lawmakers to reject these bills. The result: neither bill advanced to the full committee. The effort to rewrite patent law behind closed doors stalled out once public debate caught up with it.
Patent Office Shifts To An “Era of No”

Congress’ push from the outside was stymied, at least for now. Unfortunately, what may prove far more effective is the push from within by new USPTO leadership, which is working to dismantle systems and safeguards that protect the public from the worst patents.
Early in the year, the Patent Office signaled it would once again lean more heavily on procedural denials, reviving an approach that allowed patent challenges to be thrown out basically whenever there was an ongoing court case involving the same patent. But the most consequential move came later: a sweeping proposal unveiled in October that would make IPR nearly unusable for those who need it most.
2025 also marked a sharp practical shift inside the agency. Newly appointed USPTO Director John Squires took personal control of IPR institution decisions, and rejected all 34 of the first IPR petitions that came across his desk. As one leading patent blog put it, an “era of no” has been ushered in at the Patent Office.
The October Rulemaking: Making Bad Patents Untouchable

The USPTO’s proposed rule changes would:
- Force defendants to surrender their court defenses if they use IPR—an intense burden for anyone actually facing a lawsuit.
- Make patents effectively unchallengeable after a single prior dispute, even if that challenge was limited, incomplete, or years out of date.
- Block IPR entirely if a district court case is projected to move faster than the Patent Trial and Appeal Board (PTAB).
These changes wouldn’t “balance” the system as USPTO claims—they would make bad patents effectively untouchable. Patent trolls and aggressive licensors would be insulated, while the public would face higher costs and fewer options to fight back.
We sounded the alarm on these proposed rules and asked supporters to register their opposition. More than 4,000 of you did—thank you! Overall, more than 11,000 comments were submitted. An analysis of the comments shows that stakeholders and the public overwhelmingly oppose the proposal, with 97% of comments weighing in against it.
In those comments, small business owners described being hit with vague patents they could never afford to fight in court. Developers and open-source contributors explained that IPR is often the only realistic check on bad software patents. Leading academics, patient-advocacy groups, and major tech-community institutions echoed the same point: you cannot issue hundreds of thousands of patents a year and then block one of the only mechanisms that corrects the mistakes.
The Linux Foundation warned that the rules “would effectively remove IPRs as a viable mechanism” for developers.
GitHub emphasized the increased risk and litigation cost for open-source communities.
Twenty-two patent law professors called the proposal unlawful and harmful to innovation.
Patients for Affordable Drugs detailed the real-world impact of striking invalid pharmaceutical patents, showing that drug prices can plummet once junk patents are removed.
Heading Into 2026

The USPTO now faces thousands of substantive comments. Whether the agency backs off or tries to push ahead, EFF will stay engaged. Congress may also revisit PERA, PREVAIL, or similar proposals next year. Some patent owners will continue to push for rules that shield low-quality patents from any meaningful review.
But 2025 proved something important: When people understand how patent abuse affects developers, small businesses, patients, and creators, they show up—and when they do, their actions can shape what happens next.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
The Breachies 2025: The Worst, Weirdest, Most Impactful Data Breaches of the Year
Another year has come and gone, and with it, thousands of data breaches that affect millions of people. The question these days is less “Is my information in a data breach this year?” and more “How many data breaches had my information in them this year?”
Some data breaches are more noteworthy than others. Where one might affect a small number of people and include little useful information, like a name or email address, others might include data ranging from a potential medical diagnosis to specific location information. To catalog and talk about the most egregious of these breaches, we created the Breachies, a series of tongue-in-cheek awards.
In most cases, if these companies practiced a privacy-first approach and focused on data minimization—collecting and storing only what they absolutely need to provide the services they promise—many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can, store it for as long as possible, and inevitably, at some point, someone pokes in and steals that data. Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and unwanted spam. Breaches have become such a common occurrence that it’s easy to lose track of which ones affect you and just assume your information is out there somewhere. Still, a few steps can help protect your information.
With that, let’s get to the awards.
The Winners
- The Say Something Without Saying Anything Award: Mixpanel
- The We Still Told You So Award: Discord
- The Tea for Two Award: Tea Dating Advice and TeaOnHer
- The Just Stop Using Tracking Tech Award: Blue Shield of California
- The Hacker's Hall Pass Award: PowerSchool
- The Worst. Customer. Service. Ever. Award: TransUnion
- The Annual Microsoft Screwed Up Again Award: Microsoft
- The I Didn’t Even Know You Had My Information Award: Gravy Analytics
- The Keeping Up With My Cybertruck Award: TeslaMate
- The Disorder in the Courts Award: PACER
- The Only Stalkers Allowed Award: Catwatchful
- The Why We’re Still Stuck on Unique Passwords Award: Plex
- The Uh, Yes, Actually, I Have Been Pwned Award: Troy Hunt’s Mailing List
- (Dis)honorable Mentions
The Say Something Without Saying Anything Award: Mixpanel
We’ve long warned that apps delivering your personal information to third parties—even ones that aren’t the ad networks directly driving surveillance capitalism—present risks and a salient target for hackers. The more widespread your data, the more places attackers can go to find it. Mixpanel, a data analytics company that collects information on users of any app incorporating its SDK, suffered a major breach in November of this year. The service has been used by a wide array of companies, including the Ring Doorbell App, which we reported back in 2020 was delivering a trove of information to Mixpanel, and PornHub, which, despite not having worked with the company since 2021, had its historical record of paying subscribers breached.
There’s a lot we still don’t know about this data breach, in large part because the announcement about it is so opaque, leaving reporters with unanswered questions about how many people were affected, whether the hackers demanded a ransom, and whether Mixpanel employee accounts followed standard security best practices. One thing is clear, though: the breach was enough for OpenAI to drop Mixpanel as a provider, disclosing critical details about the breach in a blog post that Mixpanel’s own announcement conveniently failed to mention.
The worst part is that, because Mixpanel is a data analytics company whose libraries are included in a broad range of apps, we can surmise that the vast majority of people affected by this breach have no direct relationship with the company, and likely didn’t even know their devices were delivering data to it. These people deserve better than vague statements from companies that profit off of (and apparently insufficiently secure) their data.
The We Still Told You So Award: Discord

Last year, AU10TIX won our first We Told You So Award because, as we predicted in 2023, age verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites a user visits. Like clockwork, they did. It was our first We Told You So Breachies award, but we knew it wouldn’t be the last.
Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy.
Nonetheless, this year’s winner of The We Still Told You So Breachies Award is the messaging app Discord—once known mainly for gaming communities, it now hosts more than 200 million monthly active users and is widely used to host fandom and community channels.
In September of this year, much of Discord’s age verification data was breached — including users’ real names, selfies, ID documents, email and physical addresses, phone numbers, IP addresses, and other contact details or messages provided to customer support. In some cases, “limited billing information” was also accessed—including payment type, the last four digits of credit card numbers, and purchase histories.
Technically, it wasn’t Discord itself that was hacked. Rather, its third-party customer support provider—a company called Zendesk—was compromised, allowing attackers to access Discord’s user data. Either way, it’s Discord users who felt the impact.
The Tea for Two Award: Tea Dating Advice and TeaOnHer

Speaking of age verification, Tea, the dating safety app for women, had a pretty horrible year for data breaches. The app allows users to anonymously share reviews and safety information about their dates with men—helping keep others safe by noting red flags they saw during their date.
Since Tea is aimed at women’s safety and dating advice, the app asks new users to upload a selfie or photo ID to verify their identity and gender to create an account. That’s some pretty sensitive information that the app is asking you to trust it with! Back in July, it was reported that 72,000 images had been leaked from the app, including 13,000 images of photo IDs and 59,000 selfies. These photos were found via an exposed database hosted on Google’s mobile app development platform, Firebase. And if that isn’t bad enough, just a week later a second breach exposed private messages between users, including messages with phone numbers, abortion planning, and discussions about cheating partners. This breach included more than 1.1 million messages from early 2023 all the way to mid-2025, just before the breach was reported. Tea released a statement shortly after, temporarily disabling the chat feature.
But wait, there’s more. A completely different app based on the same idea, but for men, also suffered a data breach. TeaOnHer failed to protect similar sensitive data. In August, TechCrunch discovered that user information — including emails, usernames, and yes, those photo IDs and selfies — was accessible through a publicly available web address. Even worse? TechCrunch also found the email address and password the app’s creator uses to access the admin page.
Breaches like this are one of the reasons that EFF shouts from the rooftops against laws that mandate user verification with an ID or selfie. Every company that collects this information becomes a target for data breaches — and if a breach happens, you can’t just change your face.
The Just Stop Using Tracking Tech Award: Blue Shield of California

Another year, another data breach caused by online tracking tools.
In April, Blue Shield of California revealed that it had shared 4.7 million people’s health data with Google by misconfiguring Google Analytics on its website. The data, which may have been used for targeted advertising, included: people’s names, insurance plan details, medical service providers, and patient financial responsibility. The health insurance company shared this information with Google for nearly three years before realizing its mistake.
If this data breach sounds familiar, it’s because it is: last year’s Just Stop Using Tracking Tech award also went to a healthcare company that leaked patient data through tracking code on its website. Tracking tools remain alarmingly common on healthcare websites, even after years of incidents like this one. These tools are marketed as harmless analytics or marketing solutions, but can expose people’s sensitive data to advertisers and data brokers.
EFF’s free Privacy Badger extension can block online trackers, but you shouldn’t need an extension to stop companies from harvesting and monetizing your medical data. We need a strong, federal privacy law and ban on online behavioral advertising to eliminate the incentives driving companies to keep surveilling us online.
The Hacker's Hall Pass Award: PowerSchool

In December 2024, PowerSchool, the largest provider of student information systems in the U.S., gave hackers access to sensitive student data. The breach compromised personal information of over 60 million students and teachers, including Social Security numbers, medical records, grades, and special education data. Hackers exploited PowerSchool’s weak security—namely, stolen credentials to their internal customer support portal—and gained unfettered access to sensitive data stored by school districts across the country.
PowerSchool failed to implement basic security measures like multi-factor authentication, and the breach affected districts nationwide. In Texas alone, over 880,000 individuals’ data was exposed, prompting the state's attorney general to file a lawsuit, accusing PowerSchool of misleading its customers about security practices. Memphis-Shelby County Schools also filed suit, seeking damages for the breach and the cost of recovery.
While PowerSchool paid hackers an undisclosed sum to prevent data from being published, the company’s failure to protect its users’ data raises serious concerns about the security of K-12 educational systems. Adding to the saga, a Massachusetts student, Matthew Lane, pleaded guilty in October to hacking and extorting PowerSchool for $2.85 million in Bitcoin. Lane faces up to 17 years in prison for cyber extortion and aggravated identity theft, a reminder that not all hackers are faceless shadowy figures — sometimes they’re just a college kid.
The Worst. Customer. Service. Ever. Award: TransUnion

Credit reporting giant TransUnion had to notify its customers this year that a hack nabbed the personal information of 4.4 million people. How'd the attackers get in? According to a letter filed with the Maine Attorney General's office obtained by TechCrunch, the problem was a “third-party application serving our U.S. consumer support operations.” That's probably not the kind of support they were looking for.
TransUnion said in a Texas filing that attackers swept up “customers’ names, dates of birth, and Social Security numbers” in the breach, though it was quick to point out in public statements that the hackers did not access credit reports or “core credit data.” While it certainly could have been worse, this breach highlights the many ways that hackers can get their hands on information. Coming in through third-parties, companies that provide software or other services to businesses, is like using an unguarded side door, rather than checking in at the front desk. Companies, particularly those who keep sensitive personal information, should be sure to lock down customer information at all the entry points. After all, their decisions about who they do business with ultimately carry consequences for all of their customers — who have no say in the matter.
The Annual Microsoft Screwed Up Again Award: Microsoft

Microsoft is a company nobody feels neutral about, especially in the infosec world. The myriad software vulnerabilities in Windows, Office, and other Microsoft products over the decades have been a source of frustration—and of great financial reward—for both attackers and defenders. Yet still, as the saying goes, “nobody ever got fired for buying from Microsoft.” But perhaps the times, they are a-changin’.
In July 2025, it was revealed that a zero-day security vulnerability in Microsoft’s flagship file sharing and collaboration software, SharePoint, had led to the compromise of over 400 organizations, including major corporations and sensitive government agencies such as the National Nuclear Security Administration (NNSA), the federal agency responsible for maintaining and developing the U.S. stockpile of nuclear weapons. The attack was attributed to three different Chinese government-linked hacking groups. Amazingly, days after the vulnerability was first reported, there were still thousands of vulnerable self-hosted SharePoint servers online.
Zero-days happen to tech companies, large and small. It’s nearly impossible to write even moderately complex software that is bug- and exploit-free, and Microsoft can’t exactly be blamed for having a zero-day in their code. But when one company is the source of so many zero-days, consistently, for so many years, one must start wondering whether they should put all their eggs (or data) into a basket that company made. Perhaps if Microsoft’s monopolistic practices had been reined in back in the 1990s, we wouldn’t be in a position today where SharePoint is the de facto file sharing software for so many major organizations. And maybe, just maybe, this is further evidence that tech monopolies and centralization of data aren’t just bad for consumer rights, civil liberties, and the economy—but also for cybersecurity.
The Silver Globe Award: Flat Earth Sun, Moon & Zodiac

Look, we’ll keep this one short: in October of last year, researchers found security issues in the flat-earther app Flat Earth, Sun, Moon, & Clock. In March of 2025, that breach was confirmed. What’s most notable, aside from the breach including a surprising amount of information—gender, names, email addresses, and dates of birth—is that it also included users’ location info, including latitude and longitude. Huh, interesting.
The I Didn’t Even Know You Had My Information Award: Gravy Analytics

In January, hackers claimed they stole millions of people’s location history from a company that never should’ve had it in the first place: location data broker Gravy Analytics. The data included timestamped location coordinates tied to advertising IDs, which can reveal exceptionally sensitive information. In fact, researchers who reviewed the leaked data found it could be used to identify military personnel and gay people in countries where homosexuality is illegal.
The breach of this sensitive data is bad, but Gravy Analytics’s business model of regularly harvesting and selling it is even worse. Despite the fact that most people have never heard of them, Gravy Analytics has managed to collect location information from a billion phones a day. The company has sold this data to other data brokers, makers of police surveillance tools, and the U.S. government.
How did Gravy Analytics get this location information from people’s phones? The data broker industry is notoriously opaque, but this breach may have revealed some of Gravy Analytics’ sources. The leaked data referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers and religious-focused apps. Many of these app developers said they had no relationship with Gravy Analytics. Instead, expert analysis of the data suggests it was harvested through the advertising ecosystem already connected to most apps. This breach provides further evidence that online behavioral advertising fuels the surveillance industry.
Whether or not they get hacked, location data brokers like Gravy Analytics threaten our privacy and security. Follow EFF’s guide to protecting your location data and help us fight for legislation to dismantle the data broker industry.
The Keeping Up With My Cybertruck Award: TeslaMate

TeslaMate, a tool meant to track Tesla vehicle data (but which is not owned or operated by Tesla itself), has become a cautionary tale about data security. In August, a security researcher found more than 1,300 self-hosted TeslaMate dashboards were exposed online, leaking sensitive information such as vehicle location, speed, charging habits, and even trip details. In essence, your Cybertruck became the star of its own Keeping Up With My Cybertruck reality show, except the audience wasn’t made up of fans interested in your lifestyle, just random people with access to the internet.
TeslaMate describes itself as “that loyal friend who never forgets anything!” — but its lack of proper security measures makes you wish it would. This breach highlights how easily location data can become a tool for harassment or worse, and the growing need for legislation that specifically protects consumer location data. Without stronger regulations around data privacy, sensitive location details like where you live, work, and travel can easily be accessed by malicious actors, leaving consumers with no recourse.
The Disorder in the Courts Award: PACER

Confidentiality is a core principle in the practice of law. But this year a breach of confidentiality came from an unexpected source: a breach of the federal court filing system. In August, Politico reported that hackers infiltrated the Case Management/Electronic Case Files (CM/ECF) system, which uses the same database as PACER, a searchable public database for court records. Of particular concern? The possibility that the attack exposed the names of confidential informants involved in federal cases from multiple court districts. Courts across the country acted quickly to set up new processes to avoid the possibility of further compromises.
The leak followed a similar incident in 2021 and came on the heels of a warning to Congress that the filing system is more than a little creaky. In fact, an IT official from the federal court system told the House Judiciary Committee that both systems are “unsustainable due to cyber risks, and require replacement.”
The Only Stalkers Allowed Award: Catwatchful

Just like last year, a stalkerware company was subject to a data breach that really should prove once and for all that these companies must be stopped. In this case, Catwatchful is an Android spyware company that sells itself as a “child monitoring app.” Like other products in this category, it’s designed to operate covertly while uploading the contents of a victim’s phone, including photos, messages, and location information.
This data breach was particularly harmful, as it included not just the email addresses and passwords of the customers who purchased the app to install on a victim’s phone, but also data from 26,000 victims’ devices, which could include photos, messages, and real-time location data.
This was a tough award to decide on because Catwatchful wasn’t the only stalkerware company that was hit this year. Similar breaches to SpyX, Cocospy, and Spyic were all strong contenders. EFF has worked tirelessly to raise the alarm on this sort of software, and this year worked with AV Comparatives to test the stalkerware detection rate on Android of various major antivirus apps.
The Why We’re Still Stuck on Unique Passwords Award: Plex

Every year, we all get a reminder about why using unique passwords for all our accounts is crucial for protecting our online identities. This time around, the award goes to Plex, which experienced a data breach that included customer emails, usernames, and hashed passwords (a fancy way of saying the passwords were scrambled by an algorithm—though it’s possible they could still be deciphered).
If this all sounds vaguely familiar to you for some reason, that’s because a similar issue also happened to Plex in 2022, affecting 15 million users. Whoops.
This is why it is important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. Here’s how to turn that on for your Plex account.
The Uh, Yes, Actually, I Have Been Pwned Award: Troy Hunt’s Mailing List

Troy Hunt, the person behind Have I Been Pwned? and someone with more experience with data breaches than just about anyone, also proved that anyone can be pwned. In a blog post, he details what happened to his mailing list:
You know when you're really jet lagged and really tired and the cogs in your head are just moving that little bit too slow? That's me right now, and the penny has just dropped that a Mailchimp phish has grabbed my credentials, logged into my account and exported the mailing list for this blog.
And he continues later:
I'm enormously frustrated with myself for having fallen for this, and I apologise to anyone on that list. Obviously, watch out for spam or further phishes and check back here or via the social channels in the nav bar above for more.
The whole blog post is worth a read as a reminder that phishing can get anyone, and we thank Troy Hunt for his feedback on this and the other breaches we included this year.
Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.
There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):
- Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others.
- Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
- Delete old accounts: Sometimes, you’ll get a data breach notification for an account you haven’t used in years. This can be a nice reminder to delete that account, but it’s better to do so before a data breach happens, when possible. Try to make it a habit to go through and delete old accounts once a year or so.
- Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. And if you have kids, you can freeze their credit too—it might sound absurd considering they can’t even open bank accounts, but it protects them as well.
- Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money.
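The first tip above—unique passwords everywhere—is exactly what a password manager automates for you. As a rough illustration (our own sketch, not any particular manager's code), Python's `secrets` module can generate an independent, high-entropy password for every account:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Pick each character with a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account: a breach of one site's password
# database reveals nothing about your logins anywhere else.
vault = {site: generate_password() for site in ["example-a", "example-b"]}
```

The point isn't to roll your own tool—any reputable password manager does this for you—but to show that unique passwords cost nothing to generate, only to remember, and remembering is the manager's job.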
(Dis)honorable Mentions

According to one report, 2025 had already seen 2,563 data breaches by October, which puts the year on track to be one of the worst by the sheer number of breaches.
We did not investigate every one of these 2,500-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachies Award to every company that was breached this year. Still, here are some (dis)honorable mentions we wanted to highlight:
Salesforce, F5, Oracle, WorkComposer, Raw, Stiizy, Ohio Medical Alliance LLC, Hello Cake, Lovense, Kettering Health, LexisNexis, WhatsApp, Nexar, McDonalds, Congressional Budget Office, Doordash, Louis Vuitton, Adidas, Columbia University, Hertz, HCRG Care Group, Lexipol, Color Dating, Workday, Aflac, and Coinbase. And a special nod to last minute entrants Home Depot, 700Credit, and Petco.
What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.
🪪 Age Verification Is Coming for the Internet | EFFector 37.18
The final EFFector of 2025 is here! Just in time to keep you up-to-date on the latest happenings in the fight for privacy and free speech online.
In this latest issue, we're sharing how to spot sneaky ALPR cameras at the U.S. border, covering a host of new resources on age verification laws, and explaining why AI companies need to protect chatbot logs from bulk surveillance.
Prefer to listen in? Check out our audio companion, where EFF Activist Molly Buckley walks through our new resource on age verification laws and how you can fight back. Catch the conversation on YouTube or the Internet Archive.
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
States Take On Tough Tech Policy Battles: 2025 in Review
State legislatures—from Olympia, WA, to Honolulu, HI, to Tallahassee, FL, and everywhere in between—kept EFF’s state legislative team busy throughout 2025.
We saw some great wins and steps forward this year. Washington became the eighth state to enshrine the right to repair. Several states stepped up to protect the privacy of location data, with bills recognizing your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. Other state legislators moved to protect health privacy. And California passed a law making it easier for people to exercise their privacy rights under the state’s consumer data privacy law.
Several states also took up debates around how to legislate and regulate artificial intelligence and its many applications. We’ll continue to work with allies in states including California and Colorado on proposals that address the real harms from some uses of AI, without infringing on the rights of creators and individual users.
We’ve also fought some troubling bills in states across the country this year. In April, Florida introduced a bill that would have created a backdoor for law enforcement to have easy access to messages if minors use encrypted platforms. Thankfully, the Florida legislature did not pass the bill this year. But it should set off serious alarm bells for anyone who cares about digital rights. And it was just one of a growing set of bills from states that, even when well-intentioned, threaten to take a wrecking ball to privacy, expression, and security in the name of protecting young people online.
Take, for example, the burgeoning number of age verification, age gating, age assurance, and age estimation bills. Instead of making the internet safer for children, these laws can incentivize or intersect with existing systems that collect vast amounts of data, forcing all users—regardless of age—to verify their identity just to access basic content or products. South Dakota and Wyoming, for example, are requiring any website that hosts any sexual content to implement age verification measures. But, given the way those laws are written, that definition could sweep in essentially any site that allows user-generated or published content without gating access by age. That could include everyday resources such as social media networks, online retailers, and streaming platforms.
Lawmakers, not satisfied with putting age gates on the internet, are also increasingly going after VPNs (virtual private networks) to prevent anyone from circumventing these new digital walls. VPNs are not foolproof tools—and they shouldn’t be necessary to access legally protected speech—but they should be available to people who want to use them. We will continue to stand against these types of bills, not just for the sake of free expression, but to protect the free flow of information essential to a free society.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Lawmakers Must Listen to Young People Before Regulating Their Internet Access: 2025 in Review
State and federal lawmakers have introduced multiple proposals in 2025 to curtail or outright block children and teenagers from accessing legal content on the internet. These lawmakers argue that internet and social media platforms have an obligation to censor or suppress speech that they consider “harmful” to young people. Unfortunately, in many of these legislative debates, lawmakers are not listening to kids, whose experiences online are overwhelmingly more positive than what lawmakers claim.
Fortunately, EFF has spent the past year trying to make sure that lawmakers hear young people’s voices. We have also been reminding lawmakers that minors, like everyone else, have First Amendment rights to express themselves online.
These rights extend to a young person’s ability to use social media both to speak for themselves and to access the speech of others online. Young people also have the right to control how they access this speech, including through a personalized feed and other organized, digestible formats. Preventing teenagers from accessing the same internet and social media channels that adults use is a clear violation of their right to free expression.
On top of violating minors’ First Amendment rights, these laws also actively harm minors who rely on the internet to find community, find resources to end abuse, or access information about their health. Cutting off internet access acutely harms LGBTQ+ youth and others who lack familial or community support where they live. These laws also empower the state to decide what information is acceptable for all young people, overriding parents’ choices.
Additionally, all of the laws that would attempt to create a “kid friendly” internet and an “adults-only” internet are a threat to everyone, adults included. These mandates encourage the adoption of invasive and dangerous age-verification technology. Beyond being creepy, these systems incentivize more data collection and increase the risk of data breaches and other harms. Requiring everyone online to provide their ID or other proof of their age could block legal adults from accessing lawful speech if they don’t have the right form of ID. Furthermore, this trend infringes on people’s right to be anonymous online and creates a chilling effect that may deter people from joining certain services or speaking on certain topics.
EFF has lobbied against these bills at both the state and federal level, and we have also filed briefs in support of several lawsuits to protect the First Amendment rights of minors. We will continue to advocate for the rights of everyone online—including minors—in the future.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Trends to Watch in the California Legislature
If you’re a Californian, there are a few new state laws that you should know will be going into effect in the new year. EFF has worked hard in Sacramento this session to advance bills that protect privacy, fight surveillance, and promote transparency.
California’s legislature runs in a two-year cycle, meaning that it’s currently halftime for legislators. As we prepare for the next year of the California legislative session in January, it’s a good time to showcase what’s happened so far—and what’s left to do.
Wins Worth Celebrating
In a win for every Californian’s privacy rights, we were happy to support A.B. 566 (Assemblymember Josh Lowenthal). This is a common-sense law that makes California’s main consumer data privacy law, the California Consumer Privacy Act, more user-friendly. It requires that browsers support people’s rights to send opt-out signals, such as the global opt-out in Privacy Badger, to businesses. Managing your privacy as an individual can be a hard job, and EFF wants stronger laws that make it easier for you to do so.
Additionally, we were proud to advance government transparency by supporting A.B. 1524 (Judiciary Committee), which allows members of the public to make copies of public court records using their own devices, such as cell-phone cameras and overhead document scanners, without paying fees.
We also supported two bills that will improve law enforcement accountability at a time when we desperately need it. S.B. 627 (Senator Scott Wiener) prohibits law enforcement officers from wearing masks to avoid accountability (the Trump administration has sued California over this law). We also supported S.B. 524 (Sen. Jesse Arreguín), which requires law enforcement to disclose when a police report was written using artificial intelligence.
On the To-Do List for Next Year
On the flip side, we also stopped some problematic bills from becoming law. This includes S.B. 690 (Sen. Anna Caballero), which we dubbed the Corporate Coverup Act. This bill would have gutted California’s wiretapping statute by allowing businesses to ignore those privacy rights for “any business purpose.” Working with several coalition partners, we were able to keep that bill from moving forward in 2025. We do expect to see it come back in 2026, and are ready to fight back against those corporate business interests.
And, of course, not every fight ended in victory. There are still many areas where we have work left to do. California Governor Gavin Newsom vetoed a bill we supported, S.B. 7, which would have given workers in California greater transparency into how their employers use artificial intelligence and was sponsored by the California Federation of Labor Unions. S.B. 7 was vetoed in response to concerns from companies including Uber and Lyft, but we expect to continue working with the labor community on the ways AI affects the workplace in 2026.
Trends of Note
California continued a troubling years-long trend of lawmakers pushing problematic proposals that would require every internet user to verify their age to access information—often by relying on privacy-invasive methods to do so. Earlier this year EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online. We continue to raise these concerns, and would welcome working with any lawmaker in California on a better solution.
We also continue to keep a close eye on government data sharing. On this front, there is some good news. Several of the bills we supported this year sought to place needed safeguards on the ways various government agencies in California share data. These include: A.B. 82 (Asm. Chris Ward) and S.B. 497 (Wiener), which would add privacy protections to data collected by the state about those who may be receiving gender-affirming or reproductive health care; A.B. 1303 (Asm. Avelino Valencia), which prohibits warrantless data sharing from California’s low-income broadband program to immigration and other government officials; and S.B. 635 (Sen. Maria Elena Durazo), which places similar limits on data collected from sidewalk vendors.
We are also heartened to see California correct course on broad government data sharing. Last session, we opposed A.B. 518 (Asm. Buffy Wicks), which let state agencies ignore existing state privacy law to allow broader information sharing about people eligible for CalFresh—the state’s federally funded food assistance program. As we’ve seen, the federal government has since sought data from food assistance programs to use for other purposes. We were happy to have instead supported A.B. 593 this year, also authored by Asm. Wicks—which reversed course on that data sharing.
We hope to see this attention to the harms of careless government data sharing continue. EFF’s sponsored bill this year, A.B. 1337, would update and extend vital privacy safeguards present at the state agency level to counties and cities. These local entities today collect enormous amounts of data and administer programs that weren’t contemplated when the original law was written in 1977. That information should be held to strong privacy standards.
We’ve been fortunate to work with Asm. Chris Ward, who is also the chair of the LGBTQ Caucus in the legislature, on that bill. The bill stalled in the Senate Judiciary Committee during the 2025 legislative session, but we plan to bring it back in the next session with a renewed sense of urgency.
Age Verification Threats Across the Globe: 2025 in Review
Age verification mandates won’t magically keep young people safer online, but that has not stopped governments around the world from spending this year implementing, or attempting to introduce, legislation requiring all online users to verify their ages before accessing the digital space.
The UK’s misguided approach to protecting young people online grabbed many of the headlines due to the reckless and chaotic rollout of the country’s Online Safety Act, but the UK was not alone: courts in France ruled that porn websites can check users’ ages; the European Commission pushed forward with plans to test its age-verification app; and Australia’s ban on under-16s accessing social media was recently implemented.
Through this wave of age verification bills, politicians are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most.
In response, we’ve spent this year urging governments to pause these legislative initiatives and instead protect everyone’s right to speak and access information online. Here are three ways we pushed back against these bills in 2025:
Social Media Bans for Young People
Banning a certain user group changes nothing about a platform’s problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And because young people will always find ways to circumvent age restrictions, those who do will be left without any protections or age-appropriate experiences.
Yet Australia’s government recently decided to ignore these dangers by rolling out a sweeping regime built around age verification that bans users under 16 from having social media accounts. In this world-first ban, platforms are required to introduce age assurance tools to block under-16s, demonstrate that they have taken “reasonable steps” to deactivate accounts used by under-16s, and prevent any new accounts from being created—or face fines of up to 49.5 million Australian dollars ($32 million USD). The 10 banned platforms—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch, and X—have each said they’ll comply with the legislation, leading to young people losing access to their accounts overnight.
Similarly, the European Commission this year took a first step towards mandatory age verification through its guidelines under Article 28 of the Digital Services Act—measures that could undermine privacy, expression, and participation rights for young people, rights that are fully enshrined in international human rights law. EFF submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: mandatory age verification measures are not the right way to protect minors, and any online safety measure for young people must also safeguard their privacy and security. Unfortunately, the EU Parliament already went a step further, proposing an EU digital minimum age of 16 for access to social media, a move that aligns with European Commission President Ursula von der Leyen’s recent public support for measures inspired by Australia’s model.
Push for Age Assurance on All Users
This year, the UK had a moment—and not a good one. In late July, new rules took effect under the Online Safety Act that now require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.
The UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. As we argued throughout this year, and during the passage of the Online Safety Act, any attempt to protect young people online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. The approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the very young people that it is trying to protect.
We’re seeing these narratives and regulatory initiatives replicated from the UK to U.S. states and other global jurisdictions, and we’ll continue urging politicians not to follow the UK’s lead in passing similar legislation—and to instead explore more holistic approaches to protecting all users online.
Rushed Age Assurance through the EU Digital Wallet
There is not yet a legal obligation to verify users’ ages at the EU level, but policymakers and regulators are already embracing harmful age verification and age assessment measures in the name of reducing online harms.
These demands steer the debate toward identity-based solutions, such as the EU Digital Identity Wallet, which will become available in 2026. This has come with its own realm of privacy and security concerns, such as long-term identifiers (which could enable tracking) and over-exposure of personal information. Even more concerning, instead of waiting for the full launch of the EU Digital Identity Wallet, the Commission rushed out a “mini AV” app this year ahead of schedule, citing an urgent need to address concerns about children and the harms that may come to them online.
However, this proposed solution directly ties national ID to an age verification method. It also invites mission creep: while the focus of the “mini AV” app is for now on verifying age, its release to the public means that the infrastructure to expand ID checks to other purposes is already in place, should governments mandate that expansion in the future.
Without the proper safeguards, this infrastructure could be leveraged inappropriately—all the more reason why lawmakers should explore more holistic approaches to children's safety.
Ways Forward
The internet is an essential resource for young people and adults to access information, explore community, and find themselves. The issue of online safety is not solved through technology alone, and young people deserve a more intentional approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves.
Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians to pursue what is best, not what is easy; in the meantime, we’ll continue fighting for the rights of all users on the internet in 2026.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Defending Access to Abortion Information Online: 2025 in Review
As reproductive rights face growing attacks globally, access to content about reproductive healthcare and abortion online has never been more critical. The internet has essential information on topics like where and how to access care, links to abortion funds, and guidance on ways to navigate potential legal risks. Reproductive rights activists use the internet to organize and build community, and healthcare providers rely on it to distribute accurate information to people in need. And for those living in one of the 20+ states where abortion is banned or heavily restricted, the internet is often the only place to find these potentially life-saving resources.
Nonetheless, both the government and private platforms are increasingly censoring abortion-related speech, at a time when we need it most. Anti-abortion legislators are actively trying to pass laws to limit online speech about abortion, making it harder to share critical resources, discuss legal options, seek safe care, and advocate for reproductive rights. At the same time, social media platforms have increasingly cracked down on abortion-related content, leading to the suppression, shadow-banning, and outright removal of posts and accounts.
As defenders of free expression and access to information online, we have a role to play in understanding where and how this is happening, shining a light on practices that endanger these rights, and taking action to ensure they’re protected. This year, we worked tirelessly to fight censorship of abortion-related information online—whether it originated from the largest social media platforms or the largest state in the U.S.
Exposing Social Media Censorship
At the start of 2025, we launched the #StopCensoringAbortion campaign to collect and spotlight the growing number of stories from users who have had abortion-related content censored by social media platforms. Our goal was to better understand how and why this is happening, raise awareness, and hold the platforms accountable.
Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and influencers around the world, we confirmed what many already suspected: this speech is being removed and restricted by platforms at an alarming rate. Across the submissions we received, we saw a pattern of over-enforcement, lack of transparency, and arbitrary moderation decisions aimed at reproductive health and reproductive justice advocates.
Notably, almost none of the submissions we reviewed actually violated the platforms’ stated policies. The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But the content being removed wasn’t selling medications. Most of the censored posts simply provided factual, educational information—content that’s expressly allowed by Meta.
In a month-long 10-part series, we broke down our findings. We examined the trends we saw, including stories of individuals and organizations who needed to rely on internal connections at Meta to get wrongfully censored posts restored, examples of account suspensions without sufficient warnings, and an exploration of Meta policies and how they are wrongly applied. We provided practical tips for users to protect their posts from being removed, and we called on platforms to adopt steps to ensure transparency, a functional appeals process, more human review of posts, and consistent and fair enforcement of rules.
Social media platforms have a First Amendment right to curate the content on their sites—they can remove whatever content they want—and we recognize that. But companies like Meta claim they care about free speech, and their policies explicitly claim to allow educational information and discussions about abortion. We think they have a duty to live up to those promises. Our #StopCensoringAbortion campaign clearly shows that this isn’t happening and underscores the urgent need for platforms to review and consistently enforce their policies fairly and transparently.
Combating Legislative Attacks on Free Speech
On top of platform censorship, lawmakers are trying to police what people can say and see about abortion online. So in 2025, we also fought against censorship of abortion information on the legislative front.
EFF opposed Texas Senate Bill (S.B.) 2880, which would not only outlaw the sale and distribution of abortion pills, but also make it illegal to “provide information” on how to obtain an abortion-inducing drug. Simply having an online conversation about mifepristone or exchanging emails about it could run afoul of the law.
On top of going after online speakers who create and post content themselves, the bill also targeted social media platforms, websites, email services, messaging apps, and any other “interactive computer service” simply for hosting or making that content available. This was a clear attempt by Texas legislators to keep people from learning about abortion drugs, or even knowing that they exist, by wiping this information from the internet altogether.
We laid out the glaring free-speech issues with S.B. 2880 and explained how dire the consequences would be if it passed. And we asked everyone who cares about free speech to urge lawmakers to oppose this bill, and others like it. Fortunately, these concerns were heard, and the bill never became law.
Our team also spent much of the year fighting dangerous age verification legislation, often touted as “child safety” bills, at both the federal and state level. We raised the alarm on how age verification laws pose significant challenges for users trying to access critical content—including vital information about sexual and reproductive health. By age-gating the internet, these laws could result in websites requiring users to submit identification before accessing information about abortion or reproductive healthcare. This undermines the ability to remain private and anonymous while searching for abortion information online.
Protecting Life-Saving Information Online
Abortion information saves lives, and the internet is a primary (and sometimes only) source where people can access it.
As attacks on abortion information intensify, EFF will continue to fight so that users can post, host, and access abortion-related content without fear of being silenced. We’ll keep pushing for greater accountability from social media platforms and fighting against harmful legislation aimed at censoring these vital resources. The fight is far from over, but we will remain steadfast in ensuring that everyone, regardless of where they live, can access life-saving information and make informed decisions about their health and rights.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
EFF, Open Rights Group, Big Brother Watch, and Index on Censorship Call on UK Government to Repeal Online Safety Act
Since the Online Safety Act took effect in late July, UK internet users have made it very clear to their politicians that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act (OSA) hit over 400,000 signatures.
In the months since, more than 550,000 people have petitioned Parliament to repeal or reform the Online Safety Act, making it one of the largest public expressions of concern about a UK digital law in recent history. The OSA has galvanized swathes of the UK population, and it’s high time for politicians to take that seriously.
Last week, EFF joined Open Rights Group, Big Brother Watch, and Index on Censorship in sending a briefing to UK politicians urging them to listen to their constituents and repeal the Online Safety Act ahead of this week’s Parliamentary petition debate on 15 December.
The legislation is a threat to user privacy, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and effectively blocks millions of people without a personal device or form of ID from accessing the internet. The briefing highlights how, in the months since the OSA came into effect, we have seen the legislation:
- Make it harder for not-for-profits and community groups to run their own websites.
- Result in the wrong types of content being taken down.
- Lead to age-assurance being applied widely to all sorts of content.
Our briefing continues:
“Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.”
The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can coexist.
If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.
Read the briefing in full here.
EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate
The UK Parliament convened earlier this week to debate a petition signed by almost 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.
The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like name, date of birth, nationality, photo, and residency status to verify their right to live and work in the country.
But the case for digital identification has not been made.
As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:
- Mission creep
- Infringements on privacy rights
- Serious security risks
- Reliance on inaccurate and unproven technologies
- Discrimination and exclusion
- The deepening of entrenched power imbalances between the state and the public
Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.
Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.
Read our civil society briefing in full here.
From Speakeasies to DEF CON—Celebrating With EFF Members: 2025 Year In Review
It’s been a great year to be on EFF’s membership team. There's no better feeling than hanging out with your fellow digital freedom supporters and being able to say, “Oh yeah, and we’re suing the government!” We’ve done that a lot this year—and that’s all thanks to people like you.
As a token of appreciation for supporting EFF’s mission to protect privacy and free expression online for all people, we put a lot of care into meeting the members who make our work possible. Whether it’s hosting meetups, traveling to conferences, or finding new and fun ways to explain what we’re fighting for, connecting with you is always a highlight of the job.
EFF Speakeasy Meet Ups
One of my favorite perks we offer for EFF members is exclusive invites for Speakeasy meet ups. It’s a chance for us to meet the very passionate members who fuel our work!
This year, we hosted Speakeasies across the country while making the rounds at conferences. We met supporters in Mesa, AZ during CactusCon; Pasadena, CA during SCALE; Portland, OR during BSidesPDX; New York, NY during HOPE and BSidesNYC; and Seattle, WA during our panel at the University of Washington.
Of course, we also had to host a Speakeasy in our home court—and for the first time it took place in the South Bay Area in Mountain View, CA at Hacker Dojo! There, members of EFF’s D.C. Legislative team spoke about EFF’s legislative efforts and how they’ll shape digital rights for all. We even recorded that conversation for you to watch on YouTube or the Internet Archive.
And we can’t forget about our global community! Our annual online Speakeasy brought together members around the world for a conversation and Q&A with our friends at Women in Security and Privacy (WISP) about online behavioral tracking and the data broker industry. We heard and answered great questions about pushing back on online tracking and what legislative steps we can take to strengthen privacy.
Summer Security Conferences
Say what you will about Vegas—nothing compares to the energy of seeing thousands of EFF supporters during the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. This year over one thousand people signed up to support the digital freedom movement in just that one week.
If you’ve ever seen us at a conference, you know the drill: a table full of EFF staff frantically handing out swag, answering questions, and excitedly saying hi to everyone who stops by and supports our work. This year it was especially fun to see how many people brought their Rayhunter devices.
And of course, it wouldn’t be a trip to Vegas without EFF’s annual DEF CON Poker Tournament. This year 48 supporters and friends played for money, glory, and the future of the web—all with EFF’s very own playing cards. For the first time ever, the jellybean trophy went to the same winner two years in a row!
EFFecting Change Livestream Series
We ramped up our livestream series, EFFecting Change, this year with a total of six livestreams covering topics including the future of social media with guests from Mastodon, Bluesky, and Spill; EFF’s 35th Anniversary and what’s next in the fight for privacy and free speech online; and generative AI, including how to address the risks of the technology while protecting civil liberties and human rights online.
We’ve got more in store for EFFecting Change in 2026, so be sure to stay up-to-date by signing up for updates!
EFF Awards Ceremony
EFF is at the forefront of protecting users from dystopian surveillance and unjust censorship online. But we’re not the only ones doing this work, and we couldn’t do it without other organizations in the space. So, every year we like to award those who are courageously championing the digital rights movement.
This year we gave out three awards: the EFF Award for Defending Digital Freedoms went to Software Freedom Law Center, India, the EFF Award for Protecting Americans’ Data went to Erie Meyer, and the EFF Award for Leading Immigration and Surveillance Litigation went to Just Futures Law. You can watch the EFF Awards here and see photos from the event too!
That doesn’t even cover all of it! We also got to celebrate 35 years of EFF in July with limited-edition challenge coins and all-new member swag—plus a livestream covering EFF’s history and what’s next for us.
Grab EFF's 35th Anniversary t-shirt when you become a member today!
As the new year approaches, I always like to look back on the bright spots—especially the joy of hanging out with this incredible community. The world can feel hectic, but connecting with supporters like you is a reminder of how much good we can build when we work together.
Many thanks to all of the EFF members who joined forces with us this year. If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Thousands Tell the Patent Office: Don’t Hide Bad Patents From Review
A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.
EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. EFF supporters accounted for more than one-third of the 11,442 comments submitted. The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.
The Public Doesn’t Want To Bury Patent Challenges
These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment.
Comments opposing the rulemaking include many small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also include computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy.
The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.
EFF’s Comment To USPTO
In our filing, we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.
Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward.
A Broad Coalition Supports IPR
A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.
Open Source and Developer Communities
The Linux Foundation submitted comments warning that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users who rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work.
Patent Law Scholars
A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.”
Patient Advocates
Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown “cardiovascular medications have fallen 97% in price, cancer drugs dropping 80-98%, and treatments for opioid addiction becom[e] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.”
Small Businesses
Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court.
What Happens Next
The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.
Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters.
We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve the public’s ability to challenge bad patents.
Artificial Intelligence, Copyright, and the Fight for User Rights: 2025 in Review
A tidal wave of copyright lawsuits against AI developers threatens beneficial uses of AI, like creative expression, legal research, and scientific advancement. How courts decide these cases will profoundly shape the future of this technology, including its capabilities, its costs, and whether its evolution will be shaped by the democratizing forces of the open market or the whims of an oligopoly. As these cases finished their trials and moved to appeals courts in 2025, EFF intervened to defend fair use, promote competition, and protect everyone’s rights to build and benefit from this technology.
At the same time, rightsholders stepped up their efforts to control fair uses through everything from state AI laws to technical standards that influence how the web functions. In 2025, EFF fought policies that threaten the open web in the California State Legislature, the Internet Engineering Task Force, and beyond.
Fair Use Still Protects Learning—Even by Machines
Copyright lawsuits against AI developers often follow a similar pattern: plaintiffs argue that use of their works to train the models was infringement, and developers counter that their training is fair use. While legal theories vary, the core issue in many of these cases is whether using copyrighted works to train AI is a fair use.
We think that it is. Courts have long recognized that copying works for analysis, indexing, or search is a classic fair use. That principle doesn’t change because a statistical model is doing the reading. AI training is a legitimate, transformative fair use, not a substitute for the original works.
More importantly, expanding copyright would do more harm than good. While creators have legitimate concerns about AI, broader copyright won’t protect jobs from automation. Overbroad licensing requirements risk entrenching Big Tech’s dominance, shutting out small developers, and undermining fair use protections for researchers and artists. Copyright is a tool that gives the most powerful companies even more control—not a check on Big Tech. And attacking the models and their outputs by attacking training—i.e., “learning” from existing works—is a dangerous move. It threatens a core principle of freedom of expression: that training and learning—by anyone—should not be endangered by restrictive rightsholders.
In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies, but in 2025, things began to speed up.
Some cases have already reached courts of appeal. We advocated for fair use rights and sensible limits on copyright in amicus briefs filed in Doe v. GitHub, Thomson Reuters v. Ross Intelligence, and Bartz v. Anthropic, three early AI copyright appeals that could shape copyright law and influence dozens of other cases. We also filed an amicus brief in Kadrey v. Meta, one of the first decisions on the merits of the fair use defense in an AI copyright case.
How the courts decide the fair use questions in these cases could profoundly shape the future of AI—and whether legacy gatekeepers will have the power to control it. As these cases move forward, EFF will continue to defend your fair use rights.
Protecting the Open Web in the IETF
Rightsholders also tried to make an end-run around fair use by changing the technical standards that shape much of the internet. The IETF, an Internet standards body, has been developing technical standards that pose a major threat to the open web. These proposals would let websites express “preference signals” against certain uses of scraped data—effectively giving them veto power over fair uses like AI training and web search.
Overly restrictive preference signaling threatens a wide range of important uses—from accessibility tools for people with disabilities to research efforts aimed at holding governments accountable. Worse, the IETF is dominated by publishers and tech companies seeking to embed their business models into the infrastructure of the internet. These companies aren’t looking out for the billions of internet users who rely on the open web.
That’s where EFF comes in. We advocated for users’ interests in the IETF, and helped defeat the most dangerous aspects of these proposals—at least for now.
Looking Ahead
The AI copyright battles of 2025 were never just about compensation—they were about control. EFF will continue working in courts, legislatures, and standards bodies to protect creativity and innovation from copyright maximalists.
Why Isn’t Online Age Verification Just Like Showing Your ID In Person?
This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.
One of the most common refrains we hear from age verification proponents is that online ID checks are nothing new. After all, you show your ID at bars and liquor stores all the time, right? And it’s true that many places age-restrict access in-person to various goods and services, such as tobacco, alcohol, firearms, lottery tickets, and even tattoos and body piercings.
But this comparison falls apart under scrutiny. There are fundamental differences between flashing your ID to a bartender and uploading government documents or biometric data to websites and third-party verification companies. Online age-gating is more invasive, affects far more people, and poses serious risks to privacy, security, and free speech that simply don't exist when you buy a six-pack at the corner store.
Online age verification burdens many more people.
Online age restrictions are imposed on many, many more users than in-person ID checks. Because of the sheer scale of the internet, regulations affecting online content sweep in an enormous number of adults and youth alike, forcing them to disclose sensitive personal data just to access lawful speech, information, and services.
Additionally, age restrictions in the physical world affect only a limited number of transactions: those involving a narrow set of age-restricted products or services. Typically this entails a bounded interaction about one specific purchase.
Online age verification laws, on the other hand, target a broad range of internet activities and general purpose platforms and services, including social media sites and app stores. And these laws don’t just wall off specific content deemed harmful to minors (like a bookstore would); they age-gate access to websites wholesale. This is akin to requiring ID every time a customer walks into a convenience store, regardless of whether they want to buy candy or alcohol.
There are significant privacy and security risks that don’t exist offline.
In offline, in-person scenarios, a customer typically provides their physical ID to a cashier or clerk directly. Oftentimes, customers need only flash their ID for a quick visual check, and no personal information is uploaded to the internet, transferred to a third-party vendor, or stored. Online age-gating, on the other hand, forces users to upload—not just momentarily display—sensitive personal information to a website in order to gain access to age-restricted content.
This creates a cascade of privacy and security problems that don’t exist in the physical world. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee it will be handled securely. You have no direct control over who receives and stores your personal data, where it is sent, or how it may be accessed, used, or leaked outside the immediate verification process.
Data submitted online rarely just stays between you and one other party. All online data is transmitted through a host of third-party intermediaries, and almost all websites and services also host a network of dozens of private, third-party trackers managed by data brokers, advertisers, and other companies that are constantly collecting data about your browsing activity. The data is shared with or sold to additional third parties and used to target behavioral advertisements. Age verification tools also often rely on third parties just to complete a transaction: a single instance of ID verification might involve two or three different third-party partners, and age estimation services often work directly with data brokers to offer a complete product. Users’ personal identifying data then circulates among these partners.
All of this increases the likelihood that your data will leak or be misused. Unfortunately, data breaches are an endemic part of modern life, and the sensitive, often immutable, personal data required for age verification is just as susceptible to being breached as any other online data. Age verification companies can be—and already have been—hacked. Once that personal data gets into the wrong hands, victims are vulnerable to targeted attacks both online and off, including fraud and identity theft.
Troublingly, many age verification laws don’t even provide a private right of action to sue a company if personal data is breached or misused. This leaves you without a direct remedy should something bad happen.
Some proponents claim that age estimation is a privacy-preserving alternative to ID-based verification. But age estimation tools still require biometric data collection, often demanding users submit a photo or video of their face to access a site. And again, once submitted, there’s no way for you to verify how that data is processed or stored. Requiring face scans also normalizes pervasive biometric surveillance and creates infrastructure that could easily be repurposed for more invasive tracking. Once we’ve accepted that accessing lawful speech requires submitting our faces for scanning, we’ve crossed a threshold that’s difficult to walk back.
Online age verification creates even bigger barriers to access.
Online age gates create more substantial access barriers than in-person ID checks do. For those concerned about privacy and security, there is no online analog to a quick visual check of your physical ID. Users may be justifiably discouraged from accessing age-gated websites if doing so means uploading personal data and creating a potentially lasting record of their visit to that site.
Given these risks, age verification also imposes barriers to remaining anonymous that don't typically exist in-person. Anonymity can be essential for those wishing to access sensitive, personal, or stigmatized content online. And users have a right to anonymity, which is “an aspect of the freedom of speech protected by the First Amendment.” Even if a law requires data deletion, users must still be confident that every website and online service with access to their data will, in fact, delete it—something that is in no way guaranteed.
In-person ID checks are additionally less likely to wrongfully exclude people due to errors. Online systems that rely on facial scans are often incorrect, especially when applied to users near the legal age of adulthood. These tools are also less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds, for users with disabilities, and for transgender individuals. This leads to discriminatory outcomes and exacerbates harm to already marginalized communities. And while in-person shoppers can speak with a store clerk if issues arise, these online systems often rely on AI models, leaving users who are incorrectly flagged as minors with little recourse to challenge the decision.
In-person interactions may also be less burdensome for adults who don’t have up-to-date ID. An older adult who forgets their ID at home or lacks current identification is not likely to face the same difficulty accessing material in a physical store, since there are usually distinguishing physical differences between young adults and those older than 35. A visual check is often enough. This matters, as a significant portion of the U.S. population does not have access to up-to-date government-issued IDs. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification.
We’re talking about First Amendment-protected speech.It's important not to lose sight of what’s at stake here. The good or service age gated by these laws isn’t alcohol or cigarettes—it’s First Amendment-protected speech. Whether the target is social media platforms or any other online forum for expression, age verification blocks access to constitutionally-protected content.
Access to many of these online services is also necessary to participate in the modern economy. While those without ID may function just fine without being able to purchase luxury products like alcohol or tobacco, requiring ID to participate in basic communication technology significantly hinders people’s ability to engage in economic and social life.
This is why it’s wrong to claim online age verification is equivalent to showing ID at a bar or store. This argument handwaves away genuine harms to privacy and security, dismisses barriers to access that will lock millions out of online spaces, and ignores how these systems threaten free expression. Ignoring these threats won’t protect children, but it will compromise our rights and safety.
Age Verification Is Coming For the Internet. We Built You a Resource Hub to Fight Back.
Age verification laws are proliferating fast across the United States and around the world, creating a dangerous and confusing tangle of rules about what we’re all allowed to see and do online. Though these mandates claim to protect children, in practice they create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.
The term “age verification” is colloquially used to describe a wide range of age assurance technologies, from age verification systems that force you to upload government ID, to age estimation tools that scan your face, to systems that infer your age by making you share personal data. While different laws call for different methods, one thing remains constant: every method out there collects your sensitive, personal information and creates barriers to accessing the internet. We refer to all of these requirements as age verification, age assurance, or age-gating.
If you’re feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you’re not alone. It’s a lot. But understanding how these mandates work and who they harm is critical to keeping yourself and your loved ones safe online. Age verification is lurking around every corner these days, so we must fight back to protect the internet that we know and love.
That’s why today, we’re launching EFF’s Age Verification Resource Hub (EFF.org/Age): a one-stop shop to understand what these laws actually do, what’s at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and yes—safe—internet.
Why Age Verification Mandates Are a Problem
In the U.S., more than half of all states have now passed laws imposing age-verification requirements on online platforms. Congress is considering even more at the federal level, with a recent House hearing weighing nineteen distinct proposals relating to young people’s online safety—some sweeping, some contradictory, and each one more drastic and draconian than the last.
The rest of the world is moving in the same direction. The UK’s Online Safety Act went into effect this summer, Australia’s new law barring access to social media for anyone under 16 goes live today, and a slew of other countries are currently considering similar restrictions.
We all want young people to be safe online. However, age verification is not the silver bullet that lawmakers want you to think it is. In fact, age-gating mandates will do more harm than good—especially for the young people they claim to protect. They undermine the fundamental speech rights of adults and young people alike; create new barriers to accessing vibrant, lawful, even life-saving content; and needlessly jeopardize all internet users’ privacy, anonymity, and security.
If legislators want to meaningfully improve online safety, they should pass a strong, comprehensive federal privacy law instead of building new systems of surveillance, censorship, and exclusion.
What’s Inside the Resource Hub
Our new hub is built to answer the questions we hear from users every day, such as:
- How do age verification laws actually work?
- What’s the difference between age verification, age estimation, age assurance, and all the other confusing technical terms I’m hearing?
- What’s at stake for me, and who else is harmed by these systems?
- How can I keep myself, my family, and my community safe as these laws continue to roll out?
- What can I do to fight back?
- And if not age verification, what else can we do to protect the online safety of our young people?
Head over to EFF.org/Age to explore our explainers, user-friendly guides, technical breakdowns, and advocacy tools—all indexed in the sidebar for easy browsing. And today is just the start, so keep checking back over the next several weeks as we continue to build out the site with new resources and answers to more of your questions on all things age verification.
Join Us: Reddit AMA & EFFecting Change Livestream Events
To celebrate the launch of EFF.org/Age, and to hear directly from you how we can be most helpful in this fight, we’re hosting two exciting events:
1. Reddit AMA on r/privacy
Next week, our team of EFF activists, technologists, and lawyers will be hanging out over on Reddit’s r/privacy subreddit to directly answer your questions on all things age verification. We’re looking forward to connecting with you and hearing how we can help you navigate these changing tides, so come on over to r/privacy on Monday (12/15), Tuesday (12/16), and Wednesday (12/17), and ask us anything!
2. EFFecting Change Livestream Panel: “The Human Cost of Online Age Verification”
Then, on January 15th at 12pm PT, we’re hosting a livestream panel featuring Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; Hana Memon, Software Developer at Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. Join us by RSVPing at https://livestream.eff.org/.
A Resource to Empower Users
Age-verification mandates are reshaping the internet in ways that are invasive, dangerous, and deeply unnecessary. But users are not powerless! We can challenge these laws, protect our digital rights, and build a safer digital world for all internet users, no matter their ages. Our new resource hub is here to help—so explore, share, and join us in the fight for a better internet.
The Best Big Media Merger Is No Merger at All
The state of streaming is... bad. It’s very bad. The first step in watching anything is a web search: “Where can I stream X?” Then you have to scroll past an AI summary with no answers, and then scroll past the sponsored links. After that, you find out that the thing you want to watch was made by a studio that doesn’t exist anymore or doesn’t have a streaming service. So, even though you subscribe to more streaming services than you could actually name, you will have to buy a digital copy to watch. A copy that, despite your paying for it specifically, you do not actually own and that might vanish in a few years.
Then, after you’ve paid to see something multiple times in multiple ways (theater ticket, VHS tape, DVD, etc.), the mega-corporations behind this nightmare will try to get Congress to pass laws to ensure you keep paying them. In the end, this is easier than making a product that works. Or, as someone put it on social media, these companies have forgotten “that their entire existence relies on being slightly more convenient than piracy.”
It’s important to recognize this as we see more and more media mergers. These mergers are not about quality, they’re about control.
In the old days, studios made a TV show. If the show was a hit, they increased how much they charged companies to place ads during the show. And if the show was a hit for long enough, they sold syndication rights to another channel. Then people could discover the show again, and maybe come back to watch it air live. In that model, the goal was to spread access to a program as much as possible to increase viewership and the number of revenue streams.
Now, in the digital age, studios have picked up a Silicon Valley trait: putting all their eggs into the basket of “increasing the number of users.” To do that, they have to create scarcity. There has to be only one destination for the thing you’re looking for, and it has to be their own. And you shouldn’t be able to control the experience at all. They should.
They’ve also moved away from creating buzzy new exclusives to get you to pay them. That requires risk and also, you know, paying creative people to make them. Instead, they’re consolidating.
Media companies keep announcing mergers and acquisitions. They’ve been doing it for a long time, but it’s really ramped up in the last few years. And these mergers are bad for all the obvious reasons. There are the speech and censorship reasons that came to a head in, of all places, late night television. There are the labor issues. There are the concentration of power issues. There is the obvious problem that the fewer studios exist, the fewer chances good art gets to escape Hollywood and make it to our eyes and ears. But when it comes specifically to digital life there are these: consumer experience and ownership.
First, the more content that comes under a single corporation’s control, the more they expect you to come to them for it. And the more they want to charge. And because there is less competition, the less they need to work to make their streaming app usable. They then enforce their hegemony by using the draconian copyright restrictions they’ve lobbied for to cripple smaller competitors, critics, and fair use.
When everything is either Disney or NBCUniversal or Warner Brothers-Discovery-Paramount-CBS and everything is totally siloed, what need will they have to spend money improving any part of their product? Making things is hard; stopping others from proving how bad you are is easy, thanks to how broken copyright law is.
Furthermore, because every company is chasing increasing subscriber numbers instead of multiple revenue streams, they have an interest in preventing you from ever again “owning” a copy of a work. Getting you to re-buy was always sort of part of the business plan, but a) it happened once every couple of years, b) at least it came, in theory, with some new features or enhanced quality, and c) you actually owned the copy you paid for. Now they want you to pay them every month for access to the same copy. And, hey, the price is going to keep going up the fewer options you have. Or you will see more ads. Or start seeing ads where there weren’t any before.
On the one hand, the increasing dependence on direct subscriber numbers does give users back some power. Jimmy Kimmel’s reinstatement by ABC was partly due to the fact that the company was about to announce a price hike for Disney+ and couldn’t afford to lose users over both the new price and the popular outrage at Kimmel’s treatment.
On the other hand, well, there's everything else.
The latest kerfuffle is over the sale of Warner Brothers-Discovery, a company that was already the subject of a sale and merger resulting in the hyphen. Netflix was competing for it against Paramount Skydance, another recently merged media megazord.
Warner Brothers-Discovery accepted a bid from Netflix, enraging Paramount Skydance, which has now launched a hostile takeover.
Now the optimum outcome is for neither of these takeovers to happen. There are already too few players in Hollywood. It does nothing for the health of the industry to allow either merger. A functioning antitrust regime would stop both the sale and the hostile takeover attempt, full stop. But Hollywood and the federal government are frequent collaborators, and the feds have little incentive to stop Hollywood’s behemoths from growing even further, as long as they continue to play their role as propagandists for the American empire.
The promise of the digital era was in part convenience. You never again had to look at TV listings to find out when something would be airing. Virtually unlimited digital storage meant everything would be at your fingertips. But then the corporations went to work to make sure it never happened. And with each and every merger, that promise gets further and further away.
EFF Launches Age Verification Hub as Resource Against Misguided Laws
SAN FRANCISCO—With ill-advised and dangerous age verification laws proliferating across the United States and around the world, creating surveillance and censorship regimes that will be used to harm both youth and adults, the Electronic Frontier Foundation has launched a new resource hub that will sort through the mess and help people fight back.
To mark the hub's launch, EFF will host a Reddit AMA (“Ask Me Anything”) next week and a free livestreamed panel discussion on January 15 highlighting the dangers of these misguided laws.
“These restrictive mandates strike at the foundation of the free and open internet,” said EFF Activist Molly Buckley. “While they are wrapped in the legitimate concern about children's safety, they operate as tools of censorship, used to block people young and old from viewing or sharing information that the government deems ‘harmful’ or ‘offensive.’ They also create surveillance systems that critically undermine online privacy, and chill access to vital online communities and resources. Our new resource hub is a one-stop shop for information that people can use to fight back and redirect lawmakers to things that will actually help young people, like a comprehensive privacy law.”
Half of U.S. states have enacted some sort of online age verification law. At the federal level, a House Energy and Commerce subcommittee last week held a hearing on “Legislative Solutions to Protect Children and Teens Online.” While many of the 19 bills on that hearing’s agenda involve age verification, none would truly protect children and teens. Instead, they threaten to make it harder to access content that can be crucial, even lifesaving, for some kids.
It’s not just in the U.S. Effective this week, a new Australian law requires social media platforms to take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account.
We all want young people to be safe online. However, age verification is not the panacea that regulators and corporations claim it to be; in fact, it could undermine the safety of many.
Age verification laws generally require online services to check, estimate, or verify all users’ ages—often through invasive tools like government ID checks, biometric scans, or other dubious “age estimation” methods—before granting them access to certain online content or services. These methods are often inaccurate and always privacy-invasive, demanding that users hand over sensitive and immutable personal information that links their offline identity to their online activity. Once that valuable data is collected, it can easily be leaked, hacked, or misused.
To truly protect everyone online, including children, EFF advocates for a comprehensive data privacy law.
EFF will host a Reddit AMA on r/privacy from Monday, Dec. 15 at 12 p.m. PT through Wednesday, Dec. 17 at 5 p.m. PT, with EFF attorneys, technologists, and activists answering questions about age verification on all three days.
EFF will host a free livestream panel discussion about age verification at 12 p.m. PT on Thursday, Jan. 15. Panelists will include Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; a representative of Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. RSVP at https://www.eff.org/livestream-age.
For the age verification resource hub: https://www.eff.org/age
For the Reddit AMA: https://www.reddit.com/r/privacy/
For the Jan. 15 livestream: https://www.eff.org/livestream-age
Tags: age verification, age estimation, age gating
Contact: Molly Buckley, Activist, mollybuckley@eff.org
Age Assurance Methods Explained
This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.
EFF is against all mandatory age verification. Not only does it turn the internet into an age-gated cul-de-sac, but it also leaves behind many people who can’t get or don’t have proper and up-to-date documentation. While populations like undocumented immigrants and people experiencing homelessness are more obviously vulnerable groups, these restrictions also impact people with more mundane reasons for not having valid documentation on hand. Perhaps they’ve undergone life changes that impact their status or other information—such as a move, name change, or gender marker change—or perhaps they simply haven’t gotten around to updating their documents. Inconvenient events like these should not be a barrier to going online. People should also reserve the right to opt-out of unreliable technology and shady practices that could endanger their personal information.
But age restriction mandates threaten all of that. Not only do age-gating laws block adults and youth alike from freely accessing services on the web, they also force users to trade their anonymity—a pillar of online expression—for a system in which they are bound to their real-life identities. And this surveillance regime stretches beyond just age restrictions on certain content; much of this infrastructure is also connected to government plans for creating a digital system of proof of identity.
So how does age gating actually work? The age and identity verification industry has devised countless different methods platforms can purchase to—in theory—figure out the ages and/or identities of their users. But in practice, there is no technology available that is entirely privacy-protective, fully accurate, and that guarantees complete coverage of the population. Full stop.
Every system of age verification or age estimation demands that users hand over sensitive and oftentimes immutable personal information that links their offline identity to their online activity, risking their safety and security in the process.
With that said, as we see more of these laws roll out across the U.S. and the rest of the world, it’s important to understand the differences between these technologies so you can better identify the specific risks of each method, and make smart decisions about how you share your own data.
Age Assurance Methods
There are many different technologies being developed, attempted, and deployed to establish user age. In many cases, a single platform will implement a mixture of methods. For example, a user may need to submit both a physical government ID and a face scan as part of a liveness check to establish that they are the person pictured on the physical ID.
Age assurance methods generally fall into three categories:
- Age Attestation
- Age Estimation
- ID-bound Proof
Age Attestation
Sometimes, you’ll be asked to declare your age without any form of verification. One way this might happen is through one-off self-attestation. This type of age attestation has been around for a while; you may have seen it when an alcohol website asks if you’re over 21, or when Steam asks you to input your age to view game content that may not be appropriate for all ages. It’s usually implemented as a pop-up on a website, and the site might ask you for your age every time you enter, or remember it between visits. This sort of attestation provides an indication that the site may not be appropriate for all viewers, but gives users the autonomy and respect to make that decision for themselves.
An alternative proposed approach to declaring your own age, called device-bound age attestation, is to have you set your age on your operating system or on App Stores before you can make purchases or browse the web. This age or age range might then be shared with websites or apps. On an Apple device, that age can be modified after creation, as long as an adult age is chosen. It’s important to separate device-bound age attestation from methods that require age verification or estimation at the device or app store level (common to digital ID solutions and some proposed laws). It’s only attestation if you’re permitted to set your age to whatever you choose without needing to prove anything to your provider or another party—providing flexibility for age declaration outside of mandatory age verification.
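At bottom, both forms of attestation amount to recording an unverified claim. The sketch below is hypothetical (the function name and fields are ours, not any real platform's API) and simply shows that nothing is proven:

```python
# Hypothetical sketch of age attestation: the user declares an age and
# the system records it without verifying anything against documents
# or biometrics. Names and fields here are illustrative only.

def attest_age(stated_age: int, threshold: int = 18) -> dict:
    """Record a self-declared age; no proof is requested or checked."""
    return {
        "over_threshold": stated_age >= threshold,
        # A site or device may remember this between visits, or re-ask
        # every time; either way, it is taken entirely on trust.
        "verified": False,
    }

print(attest_age(21))  # -> {'over_threshold': True, 'verified': False}
```

The point of the sketch is the `"verified": False` field: attestation leaves the decision with the user instead of demanding documents or biometrics.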
Attestation through parental controls
The parental controls found on Apple and Android devices, Windows computers, and video game consoles provide the most flexible way for parents to manage what content their minor children can access. These settings can be applied through the device operating system, third-party applications, or by establishing a child account. Decisions about what content a young person can access are made via consent-driven mechanisms. As the manager, the parent or guardian will see requests and activity from their child, depending on how strict or lax the settings are. This could include requests to install an app, make a purchase on an app store, communicate with a new contact, or browse a particular website. The parent or guardian can then choose whether or not to accept the request and allow the activity.
One survey of 1,000 parents found that parental controls are underutilized, with adoption varying widely from 51% on tablets to 35% on video game consoles. To encourage more parents to use these settings, companies should continue to make them clearer and easier to manage. Parental controls are better suited to accommodating diverse cultural contexts and individual family concerns than a one-size-fits-all government mandate. It’s also safer to use native settings (those provided by the operating system itself) than to rely on third-party parental control applications, which have experienced data breaches and often effectively function as spyware.
Age Estimation
Instead of asking you directly, the system guesses your age based on data it collects about you.
Age estimation through photo and facial analysis
Age estimation by photo or live facial age analysis is when a system uses an image of a face to guess a person’s age.
A poorly designed system might improperly store these facial images or retain them for significant periods, creating a risk of data leakage. Our faces are unique, immutable, and constantly on display. In the hands of an adversary, and cross-referenced to other readily available information about us, this information can expose intimate details about us or lead to biometric tracking.
This technology has also proven fickle and often inaccurate, producing false negatives and false positives, exacerbating racial biases, and leaving biometric data unprotected during the analysis. And because it’s usually conducted with AI models, there often isn’t a way for a user to challenge a decision directly without falling back on more intrusive methods like submitting a government ID.
Age inference based on user data and third party services
Age inference systems typically estimate how old someone is from existing account information, or query other databases, where the account may already have completed age verification, to cross-reference with the information they hold on that account.
Age inference includes, but is not limited to:
- Partnering with data brokers to gather data associated with an email, like utility use or mortgage purchases, or associated with a name, such as transaction history from a credit bureau;
- Other AI model-assisted deployments that infer age based on existing account or web activity, such as account age or “happy birthday” messages;
- Credit card ownership checks.
To determine how old someone is from account information associated with an email address, services often turn to data brokers. This incentivizes even more collection of our data for the sake of age estimation and rewards data brokers for amassing data on people. Regulation of these age inference services also varies with each country’s privacy laws.
ID-bound Proof
ID-bound proofs, methods that use your government-issued ID, are often used as a fallback for failed age estimation. Consequently, any government-issued ID backed verification disproportionately excludes certain demographics from accessing online services. A significant portion of the U.S. population does not have access to government-issued IDs, with millions of adults lacking a valid driver’s license or state-issued ID. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification. In addition, non-U.S. citizens, including undocumented immigrants, face barriers to acquiring government-issued IDs. The exclusionary nature of document-based verification systems is a major concern, as it could prevent entire communities from accessing essential services or engaging in online spaces.
Physical ID uploaded and stored as an image
When an image of a physical ID is required, users are forced to upload—not just momentarily display—sensitive personal information, such as a government-issued ID or biometric identifiers, to third-party services in order to gain access to age-restricted content. This creates significant privacy and security concerns, as users have no direct control over who receives and stores their personal data, where it is sent, and how it may be accessed, used, or leaked outside the immediate verification process.
Requiring users to digitally hand over government-issued identification to verify their age introduces substantial privacy risks. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee that it will be handled securely. The verification process typically involves transmitting this data across multiple intermediaries, which means the risk of a data breach is heightened. The misuse of sensitive personal data, such as government IDs, has been demonstrated in numerous high-profile cases, including the breach of the age verification company AU10TIX, which exposed login credentials for over a year, and the hack of the messaging application Discord. Justifiable privacy and security concerns may chill users from accessing platforms they are lawfully entitled to access.
Device-bound digital ID
Device-bound digital ID is a credential stored locally on your device. It comes in the form of government or privately run wallet applications, like those offered by Apple and Google. Digital IDs are subject to a higher level of security within the Google and Apple wallets (as they should be). This means they are not synced to your account or across services; if you lose the device, a new credential must be issued to its replacement. Websites and services can query your digital ID to reveal only certain information from your ID, like an age range, instead of sharing all of your information. This is called “selective disclosure.”
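The selective disclosure idea can be sketched in a few lines. This is a hypothetical illustration, not any real wallet's API: the full credential stays on the device, and only the answer to one narrow query (here, an over-18 flag) is ever released.

```python
# Hypothetical selective-disclosure sketch: the wallet holds the whole
# credential locally and answers a verifier's query with a single
# derived claim, never the underlying birth date, name, or address.
from datetime import date

CREDENTIAL = {  # stored only on the device in this sketch
    "name": "Jane Doe",
    "birth_date": date(1990, 4, 2),
    "address": "123 Main St",
}

def disclose(query: str) -> dict:
    """Release only an age-range claim; refuse any other attribute."""
    if query != "over_18":
        raise ValueError("wallet refuses to disclose other attributes")
    today = date.today()
    b = CREDENTIAL["birth_date"]
    age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
    return {"over_18": age >= 18}  # the only data that leaves the device
```

Note that the verifier learns one boolean, not the birth date itself; that asymmetry is what distinguishes selective disclosure from uploading an ID image.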
There are many reasons someone may not be able to acquire a digital ID, preventing them from relying on this option. This includes lack of access to a smartphone, sharing devices with another person, or inability to get a physical ID. No universal standards exist governing how ID expiration, name changes, or address updates affect the validity of digital identity credentials. How to handle status changes is left up to the credential issuer.
Asynchronous and Offline Tokens
This is an issued token of some kind that doesn’t necessarily need network access to an external party or service every time you use it to establish your age with a verifier. A common danger in age verification services is the proliferation of third parties and custom solutions, which vary widely in their implementation and security. One proposal to avoid this is to centralize age checks with a trusted service that issues tokens usable to pass age checks in other places. Although this method still requires a user to submit to age verification or estimation once, after passing the initial facial age estimation or ID check, the user is issued a digital token they can present later to show that they’ve previously passed an age check. The most popular proposal, AgeKeys, is similar to passkeys in that the tokens are saved to a device or third-party password store and can then be easily accessed after unlocking with your preferred on-device biometric verification or PIN code.
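A token scheme like this can be sketched as a signed claim that a verifier checks without contacting the issuer on every use. This is our own minimal illustration using a shared-secret HMAC; real proposals such as AgeKeys would instead use public-key credentials bound to a device or password store.

```python
# Minimal sketch of an asynchronous age token (illustrative only):
# an age-check service signs an "over 18" claim once, and verifiers
# holding the key can check it later without re-running the age check.
import hmac, hashlib, json, time

ISSUER_KEY = b"demo-shared-secret"  # hypothetical key for this sketch

def issue_token(over_18: bool, ttl_seconds: int = 86400) -> str:
    """Issued once, after the user passes the initial age check."""
    claim = json.dumps({"over_18": over_18,
                        "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def verify_token(token: str) -> bool:
    """Check the signature and expiry; no call back to the issuer."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(ISSUER_KEY, claim.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    data = json.loads(claim)
    return data["over_18"] and data["exp"] > time.time()
```

The design choice worth noticing is that verification needs only the key and the token, which is what makes the check asynchronous and potentially offline.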
Lessons Learned
The age verification rollouts in the UK and various U.S. states show that age verification widens risk for everyone, inviting scope creep and blocking access to information on the web. Privacy-preserving methods to determine age exist, such as presenting an age threshold instead of your exact birth date, but they have not yet been deployed at scale or stress-tested. That is why policy safeguards around the deployed technology matter just as much, if not more.
Much of the infrastructure around age verification is entangled with other mandates, like the deployment of digital ID. That is why so many digital ID offerings get coupled with age verification as a “benefit” to the holder. In reality, it is more of a plus for the governments that want to deploy mandatory age verification and for the vendors pitching implementations that often bundle multiple methods. Instead of working toward a single path to age-gate the entire web, there should be a diversity of privacy-preserving ways to attest age without locking everyone into one platform or method. Ultimately, offering multiple options avoids further restricting those who can’t use any particular path.
EFF Benefit Poker Tournament at DEF CON 33
In the brand new Planet Hollywood Poker Room, 48 digital rights supporters played No-Limit Texas Hold’Em in the 4th Annual EFF Benefit Poker Tournament at DEF CON, raising $18,395 for EFF.
The tournament was hosted by EFF board member Tarah Wheeler and emceed by lintile, lending his Hacker Jeopardy hosting skills to help EFF for the day.
Every table had two celebrity players with special bounties for the player that knocked them out of the tournament. This year featured Wendy Nather, Chris “WeldPond” Wysopal, Jake “MalwareJake” Williams, Bryson Bort, Kym “KymPossible” Price, Adam Shostack, and Dr. Allan Friedman.
Excellent poker player and teacher Jason Healey, Professor of International Affairs at Columbia University’s School of International and Public Affairs, noted that “the EFF poker tournament is where you find all the hacker royalty in one room.”
The day started with a poker clinic run by Tarah’s father, professional poker player Mike Wheeler. The hour-long clinic helped folks brush up on their casino literacy before playing the big game.
Mike told the story of first teaching Tarah to play poker with jellybeans when she was only four. He then taught poker noobs how to play and when to check, when to fold, and when to go all-in.
After the clinic, lintile roused the crowd to play for real, starting the tournament off by announcing “Shuffle up and deal!”
The first hour saw few players knocked out, but once the blinds began to rise, the field thinned, with a number of celebrity knockouts.
At every knockout, lintile took to the mic to encourage the player to donate to EFF, which allowed them to buy back into the tournament and try their luck another round.
Jay Salzberg knocked out Kym Price to win a l33t crate.
Kim Holt knocked out Mike Wheeler, collecting the bounty on his head posted by Tarah, and winning a $250 donation to EFF in his name. This is the second time Holt has sent Mike home.
Tarah knocked out Adam Shostack, winning a number of fun prizes, including a signed copy of his latest book, Threats: What Every Engineer Should Learn From Star Wars.
Bryson Bort was knocked out by privacy attorney Marcia Hofmann.
Play continued for three hours until only the final table of players remained: Allan Friedman, Luke Hanley, Jason Healey, Kim Holt, Igor Ignatov, Sid, Puneet Thapliyal, Charles Thomas, and Tarah Wheeler herself.
As blinds continued to rise, players went all-in more and more. The most exciting moment was won by Sid, tripling up with TT over QT and A8s, and then only a few hands later knocking out Tarah, who finished 8th.
For the first time, the Jellybean Trophy sat on the final table awaiting the winner. This year, it was a Seattle Space Needle filled with green and blue jellybeans celebrating the lovely Pacific Northwest where Tarah and Mike are from.
The final three players were Allan Friedman, Kim Holt, and Sid. Sid doubled up with KJ over Holt’s A6, and then knocked Holt out with his Q4 beating Holt’s 22.
Friedman and Sid traded blinds until Allan went all in with A6 and Sid called with JT. A jack landed on the flop and Sid won the day!
Sid becomes the first player to win the tournament more than once, taking home the jellybean trophy two years in a row.
It was an exciting afternoon of competition raising over $18,000 to support civil liberties and human rights online. We hope you join us next year as we continue to grow the tournament. Follow Tarah and EFF to make sure we have chips and a chair for you at DEF CON 34.
Be ready for next year’s special benefit poker event: The Digital Rights Attack Lawyers Edition! Our special celebrity guests will all be our favorite digital rights attorneys, including Cindy Cohn, Marcia Hofmann, Kurt Opsahl, and more!
10 (Not So) Hidden Dangers of Age Verification
It’s nearly the end of 2025, and half of U.S. states and the UK now require you to upload your ID or scan your face to watch “sexual content.” A handful of states and Australia now have various requirements to verify your age before you can create a social media account.
Age-verification laws may sound straightforward to some: protect young people online by making everyone prove their age. But in reality, these mandates force users into one of two flawed systems—mandatory ID checks or biometric scans—and both are deeply discriminatory. These proposals burden everyone’s right to speak and access information online, and structurally exclude the very people who rely on the internet most. In short, although these laws are often passed with the intention to protect children from harm, the reality is that they harm both adults and children.
Here’s who gets hurt, and how:
1. Adults Without IDs Get Locked Out
Document-based verification assumes everyone has the right ID, in the right name, at the right address. About 15 million adult U.S. citizens don’t have a driver’s license, and 2.6 million lack any government-issued photo ID at all. Another 34.5 million adults don't have a driver's license or state ID with their current name and address.
- 18% of Black adults don't have a driver's license at all.
- Black and Hispanic Americans are disproportionately less likely to have current licenses.
- Undocumented immigrants often cannot obtain state IDs or driver's licenses.
- People with disabilities are less likely to have current identification.
- Lower-income Americans face greater barriers to maintaining valid IDs.
Some laws allow platforms to ask for financial documents like credit cards or mortgage records instead. But they still overlook the fact that nearly 35% of U.S. adults also don't own homes, and close to 20% of households don't have credit cards. Immigrants, regardless of legal status, may also be unable to obtain credit cards or other financial documentation.
2. Communities of Color Face Higher Error Rates
Platforms that rely on AI-based age-estimation systems often use a webcam selfie to guess users’ ages. But these algorithms don’t work equally well for everyone. Research has consistently shown that they are less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds; that they often misclassify those adults as being under 18; and sometimes take longer to process, creating unequal access to online spaces. This mirrors the well-documented racial bias in facial recognition technologies. The result is that technology’s inherent biases can block people from speaking online or accessing others’ speech.
3. People with Disabilities Face More Barriers
Age-verification mandates most harshly affect people with disabilities. Facial recognition systems routinely fail to recognize faces with physical differences, affecting an estimated 100 million people worldwide who live with facial differences, and “liveness detection” can exclude folks with limited mobility. As these technologies become gatekeepers to online spaces, people with disabilities find themselves increasingly blocked from essential services and platforms with no specified appeals processes that account for disability.
Document-based systems also don't solve this problem—as mentioned earlier, people with disabilities are also less likely to possess current driver's licenses, so document-based age-gating technologies are equally exclusionary.
4. Transgender and Non-Binary People Are Put At Risk
Age-estimation technologies perform worse on transgender individuals and cannot classify non-binary genders at all. For the 43% of transgender Americans who lack identity documents that correctly reflect their name or gender, age verification creates an impossible choice: provide documents with dead names and incorrect gender markers, potentially outing themselves in the process, or lose access to online platforms entirely—a risk that no one should be forced to take just to use social media or access legal content.
5. Anonymity Becomes a Casualty
Age-verification systems are, at their core, surveillance systems. By requiring identity verification to access basic online services, we risk creating an internet where anonymity is a thing of the past. For people who rely on anonymity for safety, this is a serious issue. Domestic abuse survivors need to stay anonymous to hide from abusers who could track them through their online activities. Journalists, activists, and whistleblowers regularly use anonymity to protect sources and organize without facing retaliation or government surveillance. And in countries under authoritarian rule, anonymity is often the only way to access banned resources or share information without being silenced. Age-verification systems that demand government IDs or biometric data would strip away these protections, leaving the most vulnerable exposed.
6. Young People Lose Access to Essential Information
Because state-imposed age-verification rules either block young people from social media or require them to get parental permission before logging on, they can deprive minors of access to important information about their health, sexuality, and gender. Many U.S. states mandate “abstinence only” sexual health education, making the internet a key resource for education and self-discovery. But age-verification laws can end up blocking young people from accessing that critical information. And this isn't just about porn, it’s about sex education, mental health resources, and even important literature. Some states and countries may start going after content they deem “harmful to minors,” which could include anything from books on sexual health to art, history, and even award-winning novels. And let’s be clear: these laws often get used to target anything that challenges certain political or cultural narratives, from diverse educational materials to media that simply includes themes of sexuality or gender diversity. What begins as a “protection” for kids could easily turn into a full-on censorship movement, blocking content that’s actually vital for minors’ development, education, and well-being.
This is also especially harmful to homeschoolers, who rely on the internet for research, online courses, and exams. For many, the internet is central to their education and social lives. The internet is also crucial for homeschoolers' mental health, as many already struggle with isolation. Age-verification laws would restrict access to resources that are essential for their education and well-being.
7. LGBTQ+ Youth Are Denied Vital Lifelines
For many LGBTQ+ young people, especially those with unsupportive or abusive families, the internet can be a lifeline. For young people facing family rejection or violence due to their sexuality or gender identity, social media platforms often provide crucial access to support networks, mental health resources, and communities that affirm their identities. Age verification systems that require parental consent threaten to cut them off from these crucial supports.
When parents must consent to or monitor their children's social media accounts, LGBTQ+ youth who lack family support lose these vital connections. LGBTQ+ youth are also disproportionately likely to be unhoused and lack access to identification or parental consent, further marginalizing them.
8. Youth in Foster Care Systems Are Completely Left Out
Age verification bills that require parental consent fail to account for young people in foster care, particularly those in group homes without legal guardians who can provide consent, or with temporary foster parents who cannot prove guardianship. These systems effectively exclude some of the most vulnerable young people from accessing online platforms and resources they may desperately need.
9. All of Our Personal Data is Put at Risk
An age-verification system also creates acute privacy risks for adults and young people. Requiring users to upload sensitive personal information (like government-issued IDs or biometric data) to verify their age creates serious privacy and security risks. Under these laws, users would not just momentarily display their ID like one does when accessing a liquor store, for example. Instead, they’d submit their ID to third-party companies, raising major concerns over who receives, stores, and controls that data. Once uploaded, this personal information could be exposed, mishandled, or even breached, as we've seen with past data hacks. Age-verification systems are no strangers to being compromised—companies like AU10TIX and platforms like Discord have faced high-profile data breaches, exposing users’ most sensitive information for months or even years.
The more places personal data passes through, the higher the chances of it being misused or stolen. Users are left with little control over their own privacy once they hand over these immutable details, making this approach to age verification a serious risk for identity theft, blackmail, and other privacy violations. Children are already a major target for identity theft, and these mandates perversely increase the risk that they will be harmed.
10. All of Our Free Speech Rights Are Trampled
The internet is today’s public square—the main place where people come together to share ideas, organize, learn, and build community. Even the Supreme Court has recognized that social media platforms are among the most powerful tools ordinary people have to be heard.
Age-verification systems inevitably block some adults from accessing lawful speech and allow some users under 18 to slip through anyway. Because the systems are both over-inclusive (blocking adults) and under-inclusive (failing to block people under 18), they restrict lawful speech in ways that violate the First Amendment.
The Bottom Line
Age-verification mandates create barriers along lines of race, disability, gender identity, sexual orientation, immigration status, and socioeconomic class. While these requirements threaten everyone’s privacy and free-speech rights, they fall heaviest on communities already facing systemic obstacles.
The internet is essential to how people speak, learn, and participate in public life. When access depends on flawed technology or hard-to-obtain documents, we don’t just inconvenience users, we deepen existing inequalities and silence the people who most need these platforms. As outlined, every available method—facial age estimation, document checks, financial records, or parental consent—systematically excludes or harms marginalized people. The real question isn’t whether these systems discriminate, but how extensively.
