EFF: Updates
A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe
A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and impose steep criminal penalties on companies whose chatbots promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.
The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.
EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.
TELL CONGRESS: THE GUARD ACT WON'T KEEP US SAFE
Young People's Access to Legitimate AI Tools Could Be Cut Off Entirely
The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.
The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.
By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just leaves them uninformed and unprepared for adult life.
The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.
All Age Verification Systems Are Dangerous. This Is No Different.
Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.
Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.
EFF has long documented the dangers of age-verification systems:
- They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
- They enable mass surveillance and destroy anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
- They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
- They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.
As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans, government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.
Vagueness + Steep Fines = Censorship. Full Stop.
Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses—including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.
The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.
Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.
Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.
How You Can Help
While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous instrument.
In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is the wrong way to do it.
The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.
Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.
TELL CONGRESS: OPPOSE THE GUARD ACT
Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing
Remember when you thought age verification laws couldn't get any worse? Well, lawmakers in Wisconsin, Michigan, and beyond are about to prove you wrong.
It's unfortunately no longer enough to force websites to check your government-issued ID before you can access certain content, because politicians have now discovered that people are using Virtual Private Networks (VPNs) to protect their privacy and bypass these invasive laws. Their solution? Entirely ban the use of VPNs.
Yes, really.
As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of “protecting children” in A.B. 105/S.B. 130. It’s an age verification bill that requires all websites distributing material that could conceivably be deemed “sexual content” to both implement an age-verification system and block access by users connected via VPN. The bill seeks to broadly expand the definition of materials that are “harmful to minors” beyond the type of speech that states can prohibit minors from accessing—potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.
This follows a notable pattern: As we’ve explained previously, lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature.
Wisconsin’s bill has already passed the State Assembly and is now moving through the Senate. If it becomes law, Wisconsin could become the first state where using a VPN to access certain content is banned. Michigan lawmakers have proposed similar legislation that has not advanced, but that would, among other things, force internet providers to actively monitor and block VPN connections. And in the UK, officials are calling VPNs "a loophole that needs closing."
This is actually happening. And it's going to be a disaster for everyone.
Here's Why This Is A Terrible Idea
VPNs mask your real location by routing your internet traffic through a server somewhere else. When you visit a website through a VPN, that website only sees the VPN server's IP address, not your actual location. It's like sending a letter through a P.O. box so the recipient doesn't know where you really live.
So when Wisconsin demands that websites "block VPN users from Wisconsin," it's asking for something that's technically impossible. Websites have no way to tell whether a VPN connection is coming from Milwaukee, Michigan, or Mumbai. The technology just doesn't work that way.
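To make the point concrete, here is a minimal sketch (in Python with the Flask web framework, purely for illustration and not drawn from the bill or any real site) of the only origin information a website actually receives from a visitor: the IP address on the other end of the connection. For a VPN user, that address belongs to the VPN's exit server, which may sit in another state or country entirely.

```python
# Minimal sketch, assuming Flask is installed (pip install flask).
# It shows the only "location" signal a website gets from a visitor:
# the peer IP address of the connection. Through a VPN, that address
# is the VPN exit server's, not the visitor's real location.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def whoami():
    # Direct visit: the visitor's ISP-assigned address.
    # Via VPN: the VPN server's address, wherever that server happens to be.
    peer_ip = request.remote_addr
    return f"All this site can see is: {peer_ip}\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Geolocating that address tells the site roughly where the VPN server is, not whether the person behind it is sitting in Wisconsin, which is why "block VPN users from Wisconsin" is not an enforceable instruction.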
Websites subject to this proposed law are left with this choice: either cease operation in Wisconsin, or block all VPN users, everywhere, just to avoid legal liability in the state. One state's terrible law is attempting to break VPN access for the entire internet, and the unintended consequences of this provision could far outweigh any theoretical benefit.
Almost Everyone Uses VPNs
Let's talk about who lawmakers are hurting with these bills, because it sure isn't just people trying to watch porn without handing over their driver's license.
- Businesses run on VPNs. Every company with remote employees uses VPNs. Every business traveler connecting through sketchy hotel Wi-Fi needs one. Companies use VPNs to protect client and employee data, secure internal communications, and prevent cyberattacks.
- Students need VPNs for school. Universities require students to use VPNs to access research databases, course materials, and library resources. These aren't optional, and many professors literally assign work that can only be accessed through the school VPN. The University of Wisconsin-Madison’s WiscVPN, for example, “allows UW–Madison faculty, staff and students to access University resources even when they are using a commercial Internet Service Provider (ISP).”
- Vulnerable people rely on VPNs for safety. Domestic abuse survivors use VPNs to hide their location from their abusers. Journalists use them to protect their sources. Activists use them to organize without government surveillance. LGBTQ+ people in hostile environments—both in the US and around the world—use them to access health resources, support groups, and community. For people living under censorship regimes, VPNs are often their only connection to vital resources and information their governments have banned.
- Regular people just want privacy. Maybe you don't want every website you visit tracking your location and selling that data to advertisers. Maybe you don't want your internet service provider (ISP) building a complete profile of your browsing history. Maybe you just think it's creepy that corporations know everywhere you go online. VPNs can protect everyday users from everyday tracking and surveillance.
Here's what happens if VPNs get blocked: everyone has to verify their age by submitting government IDs, biometric data, or credit card information directly to websites, stripped of the extra layer of privacy protection a VPN provides.
We already know how this story ends. Companies get hacked. Data gets breached. And suddenly your real name is attached to the websites you visited, stored in some poorly-secured database waiting for the inevitable leak. This has already happened, and it will happen again: it's not a matter of if but when. And when it does, the repercussions will be huge.
Forcing people to give up their privacy to access legal content is the exact opposite of good policy. It's surveillance dressed up as safety.
"Harmful to Minors" Is Not a Catch-AllHere's another fun feature of these laws: they're trying to broaden the definition of “harmful to minors” to sweep in a host of speech that is protected for both young people and adults.
Historically, states can prohibit people under 18 years old from accessing sexual materials that adults can access under the First Amendment. But the definition of what constitutes “harmful to minors” is narrow — it generally requires that the materials have almost no social value to minors and that they, taken as a whole, appeal to minors’ “prurient sexual interests.”
Wisconsin's bill defines “harmful to minors” much more broadly. It applies to materials that merely describe sex or feature descriptions/depictions of human anatomy. This definition would likely encompass a wide range of literature, music, television, and films that are protected under the First Amendment for both adults and young people, not to mention basic scientific and medical content.
Additionally, the bill’s definition would apply to any website where more than one-third of the site’s material is "harmful to minors." Given the breadth of the definition and its one-third trigger, we anticipate that Wisconsin could argue that the law applies to most social media websites. And it’s not hard to imagine, as these topics become politicized, Wisconsin claiming it applies to websites containing LGBTQ+ health resources, basic sexual education resources, and reproductive healthcare information.
The breadth of the bill’s definition isn't a bug; it's a feature. It gives the state vast discretion to decide which speech is “harmful” to young people and what counts as "appropriate" and what doesn't. History shows us those decisions most often harm marginalized communities.
It Won’t Even Work
Let's say Wisconsin somehow manages to pass this law. Here's what will actually happen:
People who want to bypass it will use non-commercial VPNs, open proxies, or cheap virtual private servers that the law doesn't cover. They'll find workarounds within hours. The internet always routes around censorship.
Even in a fantasy world where every website successfully blocked all commercial VPNs, people would just make their own. You can route traffic through cloud services like AWS or DigitalOcean, tunnel through someone else's home internet connection, use open proxies, or spin up a cheap server for less than a dollar.
Meanwhile, everyone else (businesses, students, journalists, abuse survivors, regular people who just want privacy) will have their VPN access degraded or cut off entirely. The law will accomplish nothing except making the internet less safe and less private for users.
Nonetheless, as we’ve mentioned previously, while VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. Like the larger age verification legislation they are a part of, VPN-blocking provisions simply don't work. They harm millions of people and they set a terrifying precedent for government control of the internet. More fundamentally, legislators need to recognize that age verification laws themselves are the problem. They don't work, they violate privacy, they're trivially easy to circumvent, and they create far more harm than they prevent.
A False Dilemma
People have (predictably) turned to VPNs to protect their privacy as they watched age verification mandates proliferate around the world. Instead of taking this as a sign that maybe mass surveillance isn't popular, lawmakers have decided the real problem is that these privacy tools exist at all, and are now trying to ban them.
Let's be clear: lawmakers need to abandon this entire approach.
The answer to "how do we keep kids safe online" isn't "destroy everyone's privacy." It's not "force people to hand over their IDs to access legal content." And it's certainly not "ban access to the tools that protect journalists, activists, and abuse survivors.”
If lawmakers genuinely care about young people's well-being, they should invest in education, support parents with better tools, and address the actual root causes of harm online. What they shouldn't do is wage war on privacy itself. Attacks on VPNs are attacks on digital privacy and digital freedom. And this battle is being fought by people who clearly have no idea how any of this technology actually works.
If you live in Wisconsin, reach out to your state senator and urge them to kill A.B. 105/S.B. 130. Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.
🔔 Ring's Face Scan Plan | EFFector 37.16
Cozy up next to the fireplace and we'll catch you up on the latest digital rights news with EFF's EFFector newsletter.
In our latest issue, we’re exposing surveillance logs that reveal racist policing; explaining the harms of Google’s plan for Android app gatekeeping; and continuing our new series, Gate Crashing, exploring how the internet empowers people to take nontraditional paths into the traditional worlds of journalism, creativity, and criticism.
Prefer to listen in? Check out our audio companion, where EFF Staff Attorney Mario Trujillo explains why Ring's upcoming facial recognition tool could violate the privacy rights of millions of people. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.16 - 🔔 RING'S FACE SCAN PLAN
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Washington Court Rules That Data Captured on Flock Safety Cameras Are Public Records
A Washington state trial court has shot down local municipalities’ effort to keep automated license plate reader (ALPR) data secret.
The Skagit County Superior Court in Washington rejected the attempt to block the public’s right to access data gathered by Flock Safety cameras, protecting access to information under the Washington Public Records Act (PRA). Importantly, the ruling from the court makes it clear that this access is protected even when a Washington city uses Flock Safety, a third-party vendor, to conduct surveillance and store personal data on behalf of a government agency.
"The Flock images generated by the Flock cameras...are public records," the court wrote in its ruling. "Flock camera images are created and used to further a governmental purpose. The Flock images created by the cameras located in Stanwood and Sedro-Woolley were paid for by Stanwood and Sedro Wooley [sic] and were generated for the benefit of Stanwood and Sedro-Woolley."
The cities’ move to exempt the records from disclosure was a dangerous attempt to deny transparency and reflects another problem with the massive amount of data that police departments collect through Flock cameras and store on Flock servers: the wiggle room cities seek when public data is hosted on a private company’s server.
Flock Safety's main product is ALPRs, camera systems installed throughout communities to track all drivers all the time. Privacy activists and journalists across the country recently have used public records requests to obtain data from the system, revealing a variety of controversial uses. This has included agencies accessing data for immigration enforcement and to investigate an abortion, the latter of which may have violated Washington law. A recent report from the University of Washington found that some cities in the state are also sharing the ALPR data from their Flock Safety systems with federal immigration agents.
In this case, a member of the public filed a records request in April with a Flock customer, the City of Stanwood, for all footage recorded during a one-hour period in March. Shortly afterward, Stanwood and another Flock user, the City of Sedro-Woolley, asked the local court to rule that this data is not a public record, asserting that “data generated by Flock [automated license plate reader cameras (ALPRs)] and stored in the Flock cloud system are not public records unless and until a public agency extracts and downloads that data."
If a government agency is conducting mass surveillance, EFF supports individuals’ access to data collected specifically on them, at the very least. And to address legitimate privacy concerns, governments can and should redact personal information in these records while still disclosing information about how the systems work and the data that they capture.
This isn’t what these Washington cities offered, though. They tried a few different arguments against releasing any information at all.
The contract between the City of Sedro-Woolley and Flock Safety clearly states that "As between Flock and Customer, all right, title and interest in the Customer Data, belong to and are retained solely by Customer,” and “Customer Data” is defined as "the data, media, and content provided by Customer through the Services. For the avoidance of doubt, the Customer Data will include the Footage." Other Flock-using police departments across the country have also relied on similar contract language to insist that footage captured by Flock cameras belongs to the jurisdiction in question.
The contract language notwithstanding, officials in Washington attempted to restrict public access by claiming that video footage stored on Flock’s servers is not yet a public record, and that fulfilling requests for that footage would constitute the generation of a new record. Under this argument, any information that was gathered but not otherwise accessed by law enforcement, including thousands of images taken every day by the agency’s 14 Flock ALPR cameras, had nothing to do with government business and should not be subject to records requests. The cities shut off their Flock cameras while the litigation was ongoing.
If the court had ruled in favor of the cities’ claim, police could move to store all their data — from their surveillance equipment and otherwise — on private company servers and claim that it's no longer accessible to the public.
The cities threw another reason for withholding information at the wall to see if it would stick. They claimed that even if the court found that data collected on Flock cameras are in fact public records, the cities should still be able to block the release of the requested one hour of footage, either because all of the images captured by Flock cameras are sensitive investigation material or because they should be treated the same way as footage from automated traffic safety cameras.
EFF is particularly opposed to this line of reasoning. In 2017, the California Supreme Court sided with EFF and ACLU in a case arguing that “the license plate data of millions of law-abiding drivers, collected indiscriminately by police across the state, are not ‘investigative records’ that law enforcement can keep secret.”
Notably, when Stanwood Police Chief Jason Toner made his pitch to the City Council to procure the Flock cameras in April 2024, he was adamant that the ALPRs would not be the same as traffic cameras. “Flock Safety Cameras are not ‘red light’ traffic cameras nor are they facial recognition cameras,” Chief Toner wrote at the time, adding that the system would be a “force multiplier” for the department.
If the court had gone along with this part of the argument, cities could have claimed that the mass surveillance conducted using ALPRs is part of undefined mass investigations, shielding from the public huge amounts of information gathered without warrants or reason.
The cities seemed to be setting up contradictory arguments. Maybe the footage captured by the cities’ Flock cameras belongs to the city — or maybe it doesn’t until the city accesses it. Maybe the data collected by the cities’ taxpayer-funded cameras are unrelated to government business and should be inaccessible to the public — or maybe it’s all related to government business and, specifically, to sensitive investigations, presumably of every single vehicle that goes by the cameras.
The requester, Jose Rodriguez, still won’t be getting his records, despite the court’s positive ruling.
“The cities both allowed the records to be automatically deleted after I submitted my records requests and while they decided to have their legal council review my request. So they no longer have the records and can not provide them to me even though they were declared to be public records,” Rodriguez told 404 Media — another possible violation of that state’s public records laws.
Flock Safety and its ALPR system have come under increased scrutiny in the last few months, as the public has become aware of illegal and widespread sharing of information.
The system was used by the Johnson County Sheriff’s Office to track someone across the country who’d self-administered an abortion in Texas. Flock repeatedly claimed that this was inaccurate reporting, but materials recently obtained by EFF have affirmed that Johnson County was investigating that individual as part of a fetal death investigation, conducted at the request of her former abusive partner. They were not looking for her as part of a missing person search, as Flock said.
In Illinois, the Secretary of State conducted an audit of Flock use within the state and found that the Flock Safety system was facilitating Customs and Border Protection access, in violation of state law. And in California, the Attorney General recently sued the City of El Cajon for using Flock to illegally share information across state lines.
Police departments are increasingly relying on third-party vendors for surveillance equipment and storage for the terabytes of information they’re gathering. Refusing the public access to this information undermines public records laws and the assurances the public has received when police departments set these powerful spying tools loose in their streets. While it’s great that these records remain public in Washington, communities around the country must be swift to reject similar attempts at blocking public access.
EFFecting Change: This Title Was Written by a Human
Generative AI is like a Rorschach test for anxieties about technology, be they privacy, replacement of workers, bias and discrimination, surveillance, or intellectual property. Our panelists discuss how to address complex questions and risks in AI while protecting civil liberties and human rights online.
Join EFF Director of Policy and Advocacy Katharine Trendacosta, EFF Staff Attorney Tori Noble, Berkeley Center for Law & Technology Co-Director Pam Samuelson, and Icarus Salon Artist Şerife Wong for a live discussion with Q&A.
EFFecting Change Livestream Series: This Title Was Written by a Human
Thursday, November 13th (New Date!)
10:00 AM - 11:00 AM Pacific
This event is LIVE and FREE!
Accessibility
This event will be live-captioned and recorded. EFF is committed to improving accessibility for our events. If you have any accessibility questions regarding the event, please contact events@eff.org.
Event Expectations
EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.
Upcoming Events
Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates. If you have a friend or colleague who might be interested, please join the fight for your digital rights by forwarding this link: eff.org/EFFectingChange. Thank you for helping EFF spread the word about privacy and free expression online.
Recording
We hope you and your friends can join us live! If you can't make it, we’ll post the recording afterward on YouTube and the Internet Archive!
EFF Teams Up With AV Comparatives to Test Android Stalkerware Detection by Major Antivirus Apps
EFF has, for many years, raised the alarm about the proliferation of stalkerware—commercially-available apps designed to be installed covertly on another person’s device and exfiltrate data from that device without their knowledge. In particular, we have urged the makers of anti-virus products for Android phones to improve their detection of stalkerware and call it out explicitly to users when it is found. In 2020 and 2021, AV Comparatives ran tests to see how well the most popular anti-virus products detected stalkerware from many different vendors. The results were mixed, with some high-scoring companies and others that had alarmingly low detection rates. Since malware detection is an endless game of cat and mouse between anti-virus companies and malware developers, we felt that the time was right to take a more up-to-date snapshot of how well the anti-virus companies are performing. We’ve teamed up with the researchers at AV Comparatives to test the most popular anti-virus products for Android to see how well they detect the most popular stalkerware products in 2025.
Here is what we found:
Stalkerware detection is still a mixed bag. Notably, Malwarebytes detected 100% of the stalkerware products we tested for. ESET, Bitdefender, McAfee, and Kaspersky detected all but one sample. This is a marked improvement over the 2021 test, which also found only one app with a 100% detection rate (G Data), but in which the next-best performing products had detection rates of just 80-85%. Google Play Protect and Trend Micro had the lowest detection rates in the 2025 test, at 53% and 59% respectively. The poor performance of Google Play Protect is unsurprising: because it is the default anti-virus solution on so many Android phones, some stalkerware includes specific instructions to disable detection by Google Play Protect as part of the installation process.
There are fewer stalkerware products out there. In 2020 and 2021, AV Comparatives tested 20 unique stalkerware products from different vendors. In 2025, we tested 17. We found that many stalkerware apps are essentially variations on the same underlying product and that the number of unique underlying products appears to have decreased in recent years. We cannot be certain about the cause of this decline, but we speculate that increased attention from regulators may be a factor. The popularity of small, cheap, Bluetooth-enabled physical trackers such as Apple AirTags and Tiles as an alternative method of location-tracking may also be undercutting the stalkerware market.
We hope that these tests will help survivors of domestic abuse and others who are concerned about stalkerware on their Android devices make informed choices about their anti-virus apps. We also hope that exposing the gaps that these products have in stalkerware detection will renew interest in this problem at anti-virus companies.
You can find the full results of the test here (PDF).
The Legal Case Against Ring’s Face Recognition Feature
Amazon Ring’s upcoming face recognition tool has the potential to violate the privacy rights of millions of people and could result in Amazon breaking state biometric privacy laws.
Ring plans to introduce a feature to its home surveillance cameras called “Familiar Faces,” to identify specific people who come into view of the camera. When turned on, the feature will scan the faces of all people who approach the camera to try and find a match with a list of pre-saved faces. This will include many people who have not consented to a face scan, including friends and family, political canvassers, postal workers, delivery drivers, children selling cookies, or maybe even some people passing on the sidewalk.
Many biometric privacy laws across the country are clear: Companies need your affirmative consent before running face recognition on you. In at least one state, ordinary people, with the help of attorneys, can challenge Amazon’s data collection. Where that is not possible, state privacy regulators should step in.
Sen. Ed Markey (D-Mass.) has already called on Amazon to abandon its plans and sent the company a list of questions. Ring spokesperson Emma Daniels answered written questions posed by EFF, which can be viewed here.
What is Ring’s “Familiar Faces”?
Amazon describes “Familiar Faces” as a tool that “intelligently recognizes familiar people.” It says this tool will provide camera owners with “personalized context of who is detected, eliminating guesswork and making it effortless to find and review important moments involving specific familiar people.” Amazon plans to release the feature in December.
The feature will allow camera owners to tag particular people so Ring cameras can automatically recognize them in the future. In order for Amazon to recognize particular people, it will need to perform face recognition on every person that steps in front of the camera. Even if a camera owner does not tag a particular face, Amazon says it may retain that biometric information for up to six months. Amazon said it does not currently use the biometric data for “model training or algorithmic purposes.”
In order to biometrically identify you, a company typically will take your image and extract a faceprint by taking tiny measurements of your face and converting that into a series of numbers that is saved for later. When you step in front of a camera again, the company takes a new faceprint and compares it to a list of previous prints to find a match. Other forms of biometric tracking can be done with a scan of your fingertip, eyeball, or even your particular gait.
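As a rough illustration of that matching step, here is a minimal sketch in Python. It is not Ring's actual pipeline; the vector size, threshold, and labels are hypothetical, and it assumes some face-embedding model has already converted each face image into a fixed-length vector of numbers (the "faceprint"). All it does is compare a new print against a saved list.

```python
# Illustrative sketch only -- not Amazon's or Ring's implementation.
# Faceprints are assumed to be numeric vectors produced by some
# (hypothetical) face-embedding model.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two faceprint vectors; closer to 1.0 means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(new_print, saved_prints, threshold=0.8):
    """Return the label of the closest saved faceprint above the threshold, else None."""
    best_label, best_score = None, threshold
    for label, saved in saved_prints.items():
        score = cosine_similarity(new_print, saved)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Example: compare a new visitor's print against two saved, tagged prints.
# (Random vectors stand in for real embeddings, so this will usually print None.)
saved = {"neighbor": np.random.rand(128), "delivery driver": np.random.rand(128)}
print(match_face(np.random.rand(128), saved))
```

The privacy-relevant point is that this comparison can only happen after a faceprint has been extracted from every person who walks into view, whether or not they ever consented.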
Amazon has told reporters that the feature will be off by default and that it would be unavailable in certain jurisdictions with the most active biometric privacy enforcement—including the states of Illinois and Texas, and the city of Portland, Oregon. The company would not promise that this feature will remain off by default in the future.
Why is This a Privacy Problem?
Your biometric data, such as your faceprint, are some of the most sensitive pieces of data that a company can collect. Associated risks include mass surveillance, data breach, and discrimination.
Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. Ring’s close partnership with police amplifies that threat. For example, in a city dense with face recognition cameras, the entirety of a person’s movements could be tracked with the click of a button, or all people could be identified at a particular location. A recent and unrelated private-public partnership in New Orleans unfortunately shows that mass surveillance through face recognition is not some far flung concern.
Amazon has already announced a related tool called “search party” that can identify and track lost dogs using neighbors’ cameras. A tool like this could be repurposed for law enforcement to track people. At least for now, Amazon says it does not have the technical capability to comply with law enforcement demands for a list of all cameras in which a person has been identified. It does, however, comply with other law enforcement demands.
In addition, data breaches are a perpetual concern with any data collection. Biometrics magnify that risk because your face cannot be reset, unlike a password or credit card number. Amazon says it processes and stores biometrics collected by Ring cameras on its own servers, and that it uses comprehensive security measures to protect the data.
Face recognition has also been shown to have higher error rates with certain groups—most prominently with dark-skinned women. Similar technology has also been used to make questionable guesses about a person’s emotions, age, and gender.
Will Ring’s “Familiar Faces” Violate State Biometric Laws?
Any Ring collection of biometric information in states that require opt-in consent poses huge legal risk for the company. Amazon already told reporters that the feature will not be available in Illinois and Texas—strongly suggesting its feature could not survive legal scrutiny there. The company said it is also avoiding Portland, Oregon, which has a biometric privacy law that similar companies have avoided.
Its “Familiar Faces” feature will necessarily require its cameras to collect a faceprint from every person who comes into view of an enabled camera, to try and find a match. It is impossible for Amazon to obtain consent from everyone—especially people who do not own Ring cameras. It appears that Amazon will try to unload some consent requirements onto individual camera owners themselves. Amazon says it will provide in-app messages to customers, reminding them to comply with applicable laws. But Amazon—as a company itself collecting, processing, and storing this biometric data—could have its own consent obligations under numerous laws.
Lawsuits against similar features highlight Amazon’s legal risks. In Texas, Google paid $1.375 billion to settle a lawsuit that alleged, among other things, that Google’s Nest cameras "indiscriminately capture the face geometry of any Texan who happens to come into view, including non-users." In Illinois, Facebook paid $650 million and shut down its face recognition tools that automatically scanned Facebook photos—even the faces of non-Facebook users—in order to identify people to recommend tagging. Later, Meta paid another $1.4 billion to settle a similar suit in Texas.
Many states aside from Illinois and Texas now protect biometric data. Washington passed a biometric privacy law in 2017, though the state has never enforced it. In 2023, the state passed an even stronger law protecting biometric privacy, which allows individuals to sue on their own behalf. And at least 16 states have recently passed comprehensive privacy laws that often require companies to obtain opt-in consent for the collection of sensitive data, which typically includes biometric data. For example, in Colorado, a company that jointly with others determines the purpose and means of processing biometric data must obtain consent. Maryland goes further: such companies are essentially prohibited from collecting or processing biometric data from bystanders.
Many of these comprehensive laws have numerous loopholes and can only be enforced by state regulators—a glaring weakness facilitated in part by Amazon lobbyists.
Nonetheless, Ring’s new feature provides regulators a clear opportunity to step up to investigate, protect people’s privacy, and test the strength of their laws.
Application Gatekeeping: An Ever-Expanding Pathway to Internet Censorship
It’s not news that Apple and Google use their app stores to shape what apps you can and cannot have on many of your devices. What is new is more governments—including the U.S. government—using legal and extralegal tools to lean on these gatekeepers in order to assert that same control. And rather than resisting, the gatekeepers are making it easier than ever.
Apple’s decision to take down the ICEBlock app at least partially in response to threats from the U.S. government—with Google rapidly and voluntarily following suit—was bad enough. But it pales in comparison with Google’s new program, set to launch worldwide next year, requiring developers to register with the company in order to have their apps installable on Android-certified devices—including paying a fee and providing personal information backed by government-issued identification. Google claims the new program “is an extra layer of security that deters bad actors and makes it harder for them to spread harm,” but the registration requirements are barely tied to app effectiveness or security. Why, one wonders, does Google need to see your driver’s license to evaluate whether your app is safe? Why, one also wonders, does Google want to create a database of virtually every Android app developer in the world?
F-Droid, a free and open-source repository for Android apps, has been sounding the alarm. As they’ve explained in an open letter, Google’s central registration system will be devastating for the Android developer community. Many mobile apps are created, improved, and distributed by volunteers, researchers, and/or small teams with limited financial resources. Others are created by developers who do not use the name attached to any government-issued identification. Others may have good reason to fear handing over their personal information to Google, or any other third party. Those communities are likely to drop out of developing for Android altogether, depriving all Android users of valuable tools.
Google’s promise that it’s “working on” a program for “students and hobbyists” that may have different requirements falls far short of what is necessary to alleviate these concerns.
The point here is not that all the apps are necessarily perfect or even safe. The point is that when you set up a gate, you invite authorities to use it to block things they don’t like. And when you build a database, you invite governments (and private parties) to try to get access to that database. If you build it, they will come.
Imagine you have developed a virtual private network (VPN) and corresponding Android mobile app that helps dissidents, journalists, and ordinary humans avoid corporate and government surveillance. In some countries, distributing that app could invite legal threats and even prosecution. Developers in those areas should not have to trust that Google would not hand over their personal information in response to a government demand just because they want their app to be installable by all Android users. By the same token, technologists who work on Android apps for reporting ICE misdeeds should not have to worry that Google will hand over their personal information to, say, the U.S. Department of Homeland Security.
Our tech infrastructure’s substantial dependence on just a few platforms is already creating new opportunities for those platforms to be weaponized to serve all kinds of disturbing purposes, from policing to censorship. In this context, it’s more important than ever to support technologies which decentralize and democratize our shared digital commons. A centralized global registration system for Android will inevitably chill this work.
Not coincidentally, the registration system Google announced would also help cement Google’s outsized competitive power, giving the company an additional window—if it needed one, given the company’s already massive surveillance capabilities—into what apps are being developed, by whom, and how they are being distributed. It’s more than ironic that Google’s announcement came at the same time the company is fighting a court order (in the Epic Games v. Google lawsuit) that will require it to stop punishing developers who distribute their apps through app stores that compete with Google’s own. It’s easy to see how a new registration requirement for developers, potentially enforced by technical measures on billions of Android certified mobile devices, could give Google a new lever for maintaining its app store monopoly.
EFF has signed on to F-Droid’s open letter. If you care about taking back control of tech, you should too.
EFF Stands With Tunisian Media Collective Nawaat
When the independent Tunisian online media collective Nawaat announced that the government had suspended its activities for one month, the news landed like a punch in the gut for anyone who remembers what the Arab uprisings promised: dignity, democracy, and a free press.
But Tunisia’s October 31 suspension of Nawaat—delivered quietly, without formal notice, and justified under Decree-Law 2011-88—is not just a bureaucratic decision. It’s a warning shot aimed at the very idea of independent civic life.
The silencing of a revolutionary media outlet
Nawaat’s statement, published last week, recounts how the group discovered the suspension: not through any official communication, but by finding the order slipped under its office door. The move came despite Nawaat’s documented compliance with all the legal requirements under Decree 88, the 2011 law that once symbolized post-revolutionary openness for associations.
The Decree, once seen as a safeguard for civic freedom, is now being weaponized as a tool of control. Nawaat’s team describes the action as part of a broader campaign of harassment: tax audits, financial investigations, and administrative interrogations that together amount to an attempt to “stifle all media resistance to the dictatorship.”
For those who have followed Tunisia’s post-2019 trajectory, the move feels chillingly familiar. Since President Kais Saied consolidated power in 2021, civil society organizations, journalists, and independent voices have faced escalating repression. Amnesty International has documented arrests of reporters, the use of counter-terrorism laws against critics, and the closure of NGOs. And now, the government has found in Decree 88 a convenient veneer of legality to achieve what old regimes did by force.
Adopted in the hopeful aftermath of the revolution, Decree-Law 2011-88 was designed to protect the right to association. It allowed citizens to form organizations without prior approval and receive funding freely—a radical departure from the Ben Ali era’s suffocating controls.
But laws are only as democratic as the institutions that enforce them. Over the years, Tunisian authorities have chipped away at these protections. Administrative notifications, once procedural, have become tools for sanction. Financial transparency requirements have turned into pretexts for selective punishment.
When a government can suspend an association that has complied with every rule, the rule of law itself becomes a performance.
Bureaucratic authoritarianism
What’s happening in Tunisia is not an isolated episode. Across the region, governments have refined the art of silencing dissent without firing a shot. Whether through Egypt’s NGO Law, Morocco’s press code, or Algeria’s foreign-funding restrictions, the outcome is the same: fewer independent outlets, and fewer critical voices.
These are the tools of bureaucratic authoritarianism: the punishment is quiet, plausible, and difficult to contest. A one-month suspension might sound minor, but for a small newsroom like Nawaat—which operates with limited funding and constant political pressure—it can mean disrupted investigations, delayed publications, and lost trust from readers and sources alike.
A decade of resistance
To understand why Nawaat matters, remember where it began. Founded in 2004 under Zine El Abidine Ben Ali’s dictatorship, Nawaat became a rare space for citizen journalism and digital dissent. During the 2011 uprising, its reporting and documentation helped the world witness Tunisia’s revolution.
Over the past two decades, Nawaat has earned international recognition, including an EFF Pioneer Award in 2011, for its commitment to free expression and technological empowerment. It’s not just a media outlet; it’s a living archive of Tunisia’s struggle for dignity and rights.
That legacy is precisely what makes it threatening to the current regime. Nawaat represents a continuity of civic resistance that authoritarianism cannot easily erase.
The cost of silence
Administrative suspensions like this one are designed to send a message: You can be shut down at any time. They impose psychological costs that are harder to quantify than arrests or raids. Journalists start to self-censor. Donors hesitate to renew grants. The public, fatigued by uncertainty, tunes out.
But the real tragedy lies in what this means for Tunisians’ right to know. Nawaat’s reporting on corruption, surveillance, and state violence fills the gaps left by state-aligned media. Silencing it deprives citizens of access to truth and accountability.
As Nawaat’s statement puts it:
“This arbitrary decision aims to silence free voices and stifle all media resistance to the dictatorship.”
The government’s ability to pause a media outlet, even temporarily, sets a precedent that could be replicated across Tunisia’s civic sphere. If Nawaat can be silenced today, so can any association tomorrow.
So what can be done? Nawaat has pledged to challenge the suspension in court, but litigation alone won’t fix a system where independence is eroding from within. What’s needed is sustained, visible, and international solidarity.
Tunisia’s government may succeed in pausing Nawaat’s operations for a month. But it cannot erase the two decades of documentation, dissent, and hope the outlet represents. Nor can it silence the networks of journalists, technologists, and readers who know what is at stake.
EFF has long argued that the right to free expression is inseparable from the right to digital freedom. Nawaat’s suspension shows how easily administrative and legal tools can become weapons against both. When states combine surveillance, regulatory control, and economic pressure, they don’t need to block websites or jail reporters outright—they simply tighten the screws until free expression becomes impossible.
That’s why what happens in Tunisia matters far beyond its borders. It’s a test of whether the ideals of 2011 still mean anything in 2025.
And Nawaat, for its part, has made its position clear:
“We will continue to defend our independence and our principles. We will not be silenced.”
What EFF Needs in a New Executive Director
By Gigi Sohn, Chair, EFF Board of Directors
With the impending departure of longtime, renowned, and beloved Executive Director Cindy Cohn, EFF and leadership advisory firm Russell Reynolds Associates have developed a profile for her successor. While Cindy is irreplaceable, we hope that everyone who knows and loves EFF will help us find our next leader.
First and foremost, we are looking for someone who’ll meet this pivotal moment in EFF’s history. As authoritarian surveillance creeps around the globe and society grapples with debates over AI and other tech, EFF needs a forward-looking, strategic, and collaborative executive director to bring fresh eyes and new ideas while building on our past successes.
The San Francisco-based executive director, who reports to our board of directors, will have responsibility for all administrative, financial, development, and programmatic activities at EFF. They will lead a dedicated team of legal, technical, and advocacy professionals, steward EFF’s strong organizational culture, and ensure long-term organizational sustainability and impact. That means being:
- Our visionary — partnering with the board and staff to define and advance a courageous, forward-looking strategic vision for EFF; leading development, prioritization, and execution of a comprehensive strategic plan that balances proactive agenda-setting with responsive action; and ensuring clarity of mission and purpose, aligning organizational priorities and resources for maximum impact.
- Our face and voice — serving as a compelling, credible public voice and thought leader for EFF’s mission and work, amplifying the expertise of staff and engaging diverse audiences including media, policymakers, and the broader public, while also building and nurturing partnerships and coalitions across the technology, legal, advocacy, and philanthropic sectors.
- Our chief money manager — stewarding relationships with individual donors, foundations, and key supporters; developing and implementing strategies to diversify and grow EFF’s revenue streams, including membership, grassroots, institutional, and major gifts; and ensuring financial discipline, transparency, and sustainability in partnership with the board and executive team.
- Our fearless leader — fostering a positive, inclusive, high-performing, and accountable culture that honors EFF’s activist DNA while supporting professional growth, partnering with unionized staff, and maintaining a collaborative, constructive relationship with the staff union.
It’ll take a special someone to lead us with courage, vision, personal integrity, and deep understanding of EFF’s unique role at the intersection of law and technology. For more details — including the compensation range and how to apply — click here for the full position specification. And if you know someone who you believe fits the bill, all nominations (strictly confidential, of course) are welcome at eff@russellreynolds.com.
License Plate Surveillance Logs Reveal Racist Policing Against Romani People
More than 80 law enforcement agencies across the United States have used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety automated license plate reader (ALPR) network, according to audit logs obtained and analyzed by the Electronic Frontier Foundation.
When police run a search through the Flock Safety network, which links thousands of ALPR systems, they are prompted to leave a reason and/or case number for the search. Between June 2024 and October 2025, cops performed hundreds of searches for license plates using terms such as "roma" and "g*psy," and in many instances, without any mention of a suspected crime. Other uses include "g*psy vehicle," "g*psy group," "possible g*psy," "roma traveler" and "g*psy ruse," perpetuating systemic harm by demeaning individuals based on their race or ethnicity.
These queries were run through thousands of police departments' systems—and it appears that none of these agencies flagged the searches as inappropriate.
These searches are, by definition, racist.
Word Choices and Flock Searches
We are using the terms "Roma" and “Romani people” as umbrella terms, recognizing that they represent different but related groups. Since 2020, the U.S. federal government has officially recognized "Anti-Roma Racism" as including behaviors such as "stereotyping Roma as persons who engage in criminal behavior" and using the slur "g*psy." According to the U.S. Department of State, this language “leads to the treatment of Roma as an alleged alien group and associates them with a series of pejorative stereotypes and distorted images that represent a specific form of racism.”
Nevertheless, police officers have run hundreds of searches for license plates using the terms "roma" and "g*psy." (Unlike the police ALPR queries we’ve uncovered, we substitute an asterisk for the Y to avoid repeating this racist slur). In many cases, these terms have been used on their own, with no mention of crime. In other cases, the terms have been used in contexts like "g*psy scam" and "roma burglary," when ethnicity should have no relevance to how a crime is investigated or prosecuted.
A “g*psy scam” and a “roma burglary” do not exist in criminal law as offenses separate from any other type of fraud or burglary. Several agencies contacted by EFF have since acknowledged the inappropriate use and described efforts to address the issue internally.
"The use of the term does not reflect the values or expected practices of our department," a representative of the Palos Heights (IL) Police Department wrote to EFF after being confronted with two dozen searches involving the term "g*psy." "We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language."
Of course, the broader issue is that allowing "g*psy" or "Roma" as a reason for a search isn't just offensive; it implies the criminalization of an ethnic group. In fact, the Grand Prairie Police Department in Texas searched for "g*psy" six times while using Flock's "Convoy" feature, which allows an agency to identify vehicles traveling together—in essence targeting an entire traveling community of Roma without specifying a crime.
At the bottom of this post is a list of agencies and the terms they used when searching the Flock system.
Anti-Roma Racism in an Age of Surveillance
Racism against Romani people has been a problem for centuries, with one of its most horrific manifestations during the Holocaust, when the Third Reich and its allies perpetrated genocide by murdering hundreds of thousands of Romani people and sterilizing thousands more. Despite efforts by the UN and EU to combat anti-Roma discrimination, this form of racism persists. As scholars Margareta Matache and Mary T. Bassett explain, it is perpetuated by modern American policing practices:
In recent years, police departments have set up task forces specialised in “G*psy crimes”, appointed “G*psy crime” detectives, and organised police training courses on “G*psy criminality”. The National Association of Bunco Investigators (NABI), an organisation of law enforcement professionals focusing on “non-traditional organised crime”, has even created a database of individuals arrested or suspected of criminal activity, which clearly marked those who were Roma.
Thus, it is no surprise that a 2020 Harvard University survey of Romani Americans found that 4 out of 10 respondents reported being subjected to racial profiling by police. This demonstrates the ongoing challenges they face due to systemic racism and biased policing.
Notably, many police agencies using surveillance technologies like ALPRs have adopted some sort of basic policy against biased policing or the use of these systems to target people based on race or ethnicity. But even when such policies are in place, an agency’s failure to enforce them allows these discriminatory practices to persist. These searches were also run through the systems of thousands of other police departments that may have their own policies and state laws that prohibit bias-based policing—yet none of those agencies appeared to have flagged the searches as inappropriate.
The Flock search data in question here shows that surveillance technology exacerbates racism, and that even well-meaning policies to address bias can quickly fall apart without proper oversight and accountability.
Cops In Their Own Words
EFF reached out to a sample of the police departments that ran these searches. Here are five representative responses we received from police departments in Illinois, California, and Virginia. They do not inspire confidence.
1. Lake County Sheriff's Office, IL
In June 2025, the Lake County Sheriff's Office ran three searches for a dark-colored pick-up truck, using the reason: "G*PSY Scam." The searches covered 1,233 networks, representing 14,467 different ALPR devices.
In response to EFF, a sheriff's representative wrote via email:
“Thank you for reaching out and for bringing this to our attention. We certainly understand your concern regarding the use of that terminology, which we do not condone or support, and we want to assure you that we are looking into the matter.
Any sort of discriminatory practice is strictly prohibited at our organization. If you have the time to take a look at our commitment to the community and our strong relationship with the community, I firmly believe you will see discrimination is not tolerated and is quite frankly repudiated by those serving in our organization.
We appreciate you bringing this to our attention so we can look further into this and address it.”
2. Sacramento Police Department, CA
In May 2025, the Sacramento Police Department ran six searches using the term "g*psy." The searches covered 468 networks, representing 12,885 different ALPR devices.
In response to EFF, a police representative wrote:
“Thank you again for reaching out. We looked into the searches you mentioned and were able to confirm the entries. We’ve since reminded the team to be mindful about how they document investigative reasons. The entry reflected an investigative lead, not a disparaging reference.
We appreciate the chance to clarify.”
3. Palos Heights Police Department, IL
In September 2024, the Palos Heights Police Department ran more than two dozen searches using terms such as "g*psy vehicle," "g*psy scam" and "g*psy concrete vehicle." Most searches hit roughly 1,000 networks.
In response to EFF, a police representative said the searches were related to a single criminal investigation into a vehicle involved in a "suspicious circumstance/fraudulent contracting incident" and that the activity is "not indicative of a general search based on racial or ethnic profiling." However, the agency acknowledged the language was inappropriate:
“The use of the term does not reflect the values or expected practices of our department. We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.
We appreciate your outreach on this matter and the opportunity to provide clarification.”
4. Irvine Police Department, CA
In February and May 2025, the Irvine Police Department ran eight searches using the term "roma" in the reason field. The searches covered 1,420 networks, representing 29,364 different ALPR devices.
In a call with EFF, an IPD representative explained that the cases were related to a series of organized thefts. However, they acknowledged the issue, saying, "I think it's an opportunity for our agency to look at those entries and to use a case number or use a different term."
5. Fairfax County Police Department, VA
Between December 2024 and April 2025, the Fairfax County Police Department ran more than 150 searches involving terms such as "g*psy case" and "roma crew burglaries." Fairfax County PD continued to defend its use of this language.
In response to EFF, a police representative wrote:
“Thank you for your inquiry. When conducting searches in investigative databases, our detectives must use the exact case identifiers, terms, or names connected to a criminal investigation in order to properly retrieve information. These entries reflect terminology already tied to specific cases and investigative files from other agencies, not a bias or judgment about any group of people. The use of such identifiers does not reflect bias or discrimination and is not inconsistent with our Bias-Based Policing policy within our Human Relations General Order.”
A National Trend
Roma individuals and families are not the only ones being systematically and discriminatorily targeted by ALPR surveillance technologies. For example, Flock audit logs show agencies ran 400 more searches using terms targeting Traveller communities more generally, with a specific focus on Irish Travellers, often without any mention of a crime.
Across the country, these tools are enabling and amplifying racial profiling by embedding longstanding policing biases into surveillance technologies. For example, data from Oak Park, IL, show that 84% of drivers stopped in Flock-related traffic incidents were Black—despite Black people making up only 19% of the local population. ALPR systems are far from being neutral tools for public safety and are increasingly being used to fuel discriminatory policing practices against historically marginalized people.
The racially coded language in Flock's logs mirrors long-standing patterns of discriminatory policing. Terms like "furtive movements," "suspicious behavior," and "high crime area" have always been cited by police to try to justify stops and searches of Black, Latine, and Native communities. These phrases might not appear in official logs because they're embedded earlier in enforcement—in the traffic stop without clear cause, the undocumented stop-and-frisk, the intelligence bulletin flagging entire neighborhoods as suspect. They function invisibly until a body-worn camera, court filing, or audit brings them to light. Flock's network didn’t create racial profiling; it industrialized it, turning deeply encoded and vague language into scalable surveillance that can search thousands of cameras across state lines.
The Path Forward
U.S. Sen. Ron Wyden, D-OR, recently recommended that local governments reevaluate their decisions to install Flock Safety in their communities. We agree, but we also understand that sometimes elected officials need to see the abuse with their own eyes first.
We know which agencies ran these racist searches, and they should be held accountable. But we also know that the vast majority of Flock Safety's clients—thousands of police and sheriffs—also allowed those racist searches to run through their Flock Safety systems unchallenged.
Elected officials must act decisively to address the racist policing enabled by Flock's infrastructure. First, they should demand a complete audit of all ALPR searches conducted in their jurisdiction and a review of search logs to determine (a) whether their police agencies participated in discriminatory policing and (b) what safeguards, if any, exist to prevent such abuse. Second, officials should institute immediate restrictions on data-sharing through Flock's nationwide network. As demonstrated by California law, for example, police agencies should not be able to share their ALPR data with federal authorities or out-of-state agencies, thus eliminating a vehicle for discriminatory searches spreading across state lines.
Ultimately, elected officials must terminate Flock Safety contracts entirely. The evidence is now clear: audit logs and internal policies alone cannot prevent a surveillance system from becoming a tool for racist policing. The fundamental architecture of Flock—thousands of cameras feeding into a nationwide searchable network—makes discrimination inevitable when enforcement mechanisms fail.
As Sen. Wyden astutely explained, "local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”
Table Overview and Notes
The following table compiles terms used by agencies to describe the reasons for searching the Flock Safety ALPR database. In a small number of cases, we removed additional information such as case numbers, specific incident details, and officers' names that were present in the reason field.
We removed one agency from the list due to the agency indicating that the word was a person's name and not a reference to Romani people.
In general, we did not include searches that used the term "Romanian," although many of those may also be indicative of anti-Roma bias. We also did not include uses of "traveler" or “Traveller” that lacked a clear ethnic modifier; however, we believe many of those searches are likely relevant.
A text-based version of the spreadsheet is available here.
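If your local government obtains its own Flock audit logs, the kind of review described above can begin with a simple keyword filter over the "reason" field. Below is a minimal sketch in Python; the file name and column names are hypothetical placeholders rather than Flock's actual export format, and any matches would still need human review for context.

```python
import csv
import re

# Hypothetical column names; a real audit-log export may differ.
REASON_FIELD = "reason"
AGENCY_FIELD = "agency"

# The "g.psy" pattern matches the slur without spelling it out here, and the
# word boundaries keep terms like "Romanian" from matching, mirroring the
# exclusions described in the notes above.
FLAGGED_TERMS = re.compile(r"\b(roma|g.psy)\b", re.IGNORECASE)

def flag_searches(path):
    """Yield (agency, reason) pairs for search reasons that use a flagged term."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            reason = row.get(REASON_FIELD) or ""
            if FLAGGED_TERMS.search(reason):
                yield row.get(AGENCY_FIELD, "unknown"), reason

if __name__ == "__main__":
    for agency, reason in flag_searches("flock_audit_log.csv"):
        print(f"{agency}\t{reason}")
```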
Once Again, Chat Control Flails After Strong Public Pressure
The European Union Council pushed for a dangerous plan to scan encrypted messages, and once again, people around the world loudly called out the risks, leading the current Danish presidency to withdraw the plan.
EFF has strongly opposed Chat Control since it was first introduced in 2022. The zombie proposal comes back time and time again, and time and time again, it’s been shot down because there’s no public support. The fight is delayed, but not over.
It’s time for lawmakers to stop attempting to compromise encryption under the guise of public safety. Instead of making minor tweaks and resubmitting this proposal over and over, the EU Council should accept that any sort of client-side scanning of devices undermines encryption, and move on to developing real solutions that don’t violate the human rights of people around the world.
As long as lawmakers continue to misunderstand the way encryption technology works, there is no way forward with message-scanning proposals, not in the EU or anywhere else. This sort of surveillance is not just an overreach; it’s an attack on fundamental human rights.
The coming EU presidencies should abandon these attempts and work on finding a solution that protects people’s privacy and security.
The Department of Defense Wants Less Proof its Software Works
When Congress eventually reopens, the 2026 National Defense Authorization Act (NDAA) will be moving toward a vote. This gives us a chance to see the priorities of the Secretary of Defense and his Congressional allies when it comes to the military—and one of those priorities is buying technology, especially AI, with less of an obligation to prove it’s effective and worth the money the government will be paying for it.
As reported by Lawfare, “This year’s defense policy bill—the National Defense Authorization Act (NDAA)—would roll back data disclosures that help the department understand the real costs of what they are buying, and testing requirements that establish whether what contractors promise is technically feasible or even suited to its needs.” This change comes amid a push from the Secretary of Defense to “Maximize Lethality” by acquiring modern software “at a speed and scale for our Warfighter.” The Senate Armed Services Committee has also expressed interest in making “significant reforms to modernize the Pentagon's budgeting and acquisition operations...to improve efficiency, unleash innovation, and modernize the budget process.”
The 2026 NDAA itself says that the “Secretary of Defense shall prioritize alternative acquisition mechanisms to accelerate development and production” of technology, including an expedited “software acquisition pathway”—a special part of the U.S. code that, if this version of the NDAA passes, will transfer powers to the Secretary of Defense to streamline the buying process so that new technology, or updates to existing technology, can be made operational “in a period of not more than one year from the time the process is initiated…” It also makes sure the new technology “shall not be subjected to” some of the traditional levers of oversight.
All of this signals one thing: speed over due diligence. In a commercial technology landscape where companies are repeatedly found to be overselling or even deceiving people about their product’s technical capabilities—or where police departments are constantly grappling with the reality that expensive technology may not be effective at providing the solutions they’re after—it’s important that the government agency with the most expansive budget has time to test the efficacy and cost-efficiency of new technology. It’s easy for the military or police departments to listen to a tech company’s marketing department and believe their well-rehearsed sales pitch, but Congress should make sure that public money is being used wisely and in a way that is consistent with both civil liberties and human rights.
The military and those who support its preferred budget should think twice about cutting corners before buying and deploying new technology. The Department of Defense’s posturing does not elicit confidence that the technologically-focused military of tomorrow will be equipped in a way that is effective, efficient, or transparent.
Age Verification, Estimation, Assurance, Oh My! A Guide to the Terminology
If you've been following the wave of age-gating laws sweeping across the country and the globe, you've probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we're talking about your rights, your privacy, your data, and who gets to access information online.
So let's clear up the confusion. Here's your guide to the terminology that's shaping these laws, and why you should care about the distinctions.
Age Gating: “No Kids Allowed”
Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. It simply refers to the fact that a restriction exists. Think of it as the concept of “you must be this old to enter” without getting into the details of how they’re checking.
Age Assurance: The Umbrella Term
Think of age assurance as the catch-all category. It covers any method an online service uses to figure out how old you are with some level of confidence. That's intentionally vague, because age assurance includes everything from the most basic check-the-box systems to full-blown government ID scanning.
Age assurance is the big tent that contains all the other terms we're about to discuss below. When a company or lawmaker talks about "age assurance," they're not being specific about how they're determining your age—just that they're trying to. For decades, the internet operated on a “self-attestation” system where you checked a box saying you were 18, and that was it. These new age-verification laws are specifically designed to replace that system. When lawmakers say they want "robust age assurance," what they really mean is "we don't trust self-attestation anymore, so now you need to prove your age beyond just swearing to it."
Age Estimation: Letting the Algorithm Decide
Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you.
This might include:
- Analyzing your face through a video selfie or photo
- Examining your voice
- Looking at your online behavior—what you watch, what you like, what you post
- Checking your existing profile data
Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, and an algorithm analyzes your face and spits out an estimated age range. Sounds convenient, right?
Here's the problem, “estimation” is exactly that: it’s a guess. And it is inherently imprecise. Age estimation is notoriously unreliable, especially for teenagers—the exact group these laws claim to protect. An algorithm might tell a website you're somewhere between 15 and 19 years old. That's not helpful when the cutoff is 18, and what's at stake is a young person's constitutional rights.
And it gets worse. These systems consistently fail for certain groups:
- People of color are routinely misidentified (even Yoti's own research admits higher error rates for darker skin tones)
- Trans and nonbinary people are frequently misclassified
- People with disabilities that affect their appearance fall outside the algorithm's training parameters
- Anyone who doesn't fit the algorithmic "norm" gets flagged
When estimation fails (and it often does), users get kicked to the next level: actual verification. Which brings us to…
Age Verification: “Show Me Your Papers”
Age verification is the most invasive option. This is where you have to prove your exact age or date of birth, rather than, for example, prove that you have crossed some age threshold (like 18, 21, or 65). EFF generally refers to most age gates and mandates on young people’s access to online information as “age verification,” as most of them typically require you to submit hard identifiers like:
- Government-issued ID (driver's license, passport, state ID)
- Credit card information
- Utility bills or other documents
- Biometric data
This is what a lot of new state laws are actually requiring, even when they use softer language like "age assurance." Age verification doesn't just confirm you're over 18; it reveals your full identity. Your name, address, date of birth, photo—everything.
Here's the critical thing to understand: age verification is really identity verification. You're not just proving you're old enough—you're proving exactly who you are. And that data has to be stored, transmitted, and protected by every website that collects it.
We already know how that story ends. Data breaches are inevitable. And when a database containing your government ID tied to your adult content browsing history gets hacked—and it will—the consequences can be devastating.
Why This Confusion Matters
Politicians and tech companies love using these terms interchangeably because it obscures what they're actually proposing. A law that requires "age assurance" sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it's not moderate at all—it's mass surveillance. Similarly, when Instagram says it's using "age estimation" to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver's license instead, the privacy promise evaporates.
Here's the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don't know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don't know that verification systems have error rates. They don't even seem to understand that the terms they're using mean different things. The fact that their terminology is all over the place—using "age assurance," "age verification," and "age estimation" interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.
Language matters because it shapes how we think about these systems. "Assurance" sounds gentle. "Verification" sounds official. "Estimation" sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it's your privacy, your data, and your ability to access the internet without constant identity checks. Don't let fuzzy language disguise what these systems really do.
❤️ Let's Sue the Government! | EFFector 37.15
There are no tricks in EFF's EFFector newsletter, just treats to keep you up-to-date on the latest in the fight for digital privacy and free expression.
In our latest issue, we're explaining a new lawsuit to stop the U.S. government's viewpoint-based surveillance of online speech; sharing even more tips to protect your privacy; and celebrating a victory for transparency around AI police reports.
Prefer to listen in? Check out our audio companion, where EFF Staff Attorney Lisa Femia explains why EFF is suing to stop the Trump administration's ideological social media surveillance program. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.15 - ❤️ LET'S SUE THE GOVERNMENT!
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Science Must Decentralize
Knowledge production doesn’t happen in a vacuum. Every great scientific breakthrough is built on prior work, and an ongoing exchange with peers in the field. That’s why we need to address the threat of major publishers and platforms having an improper influence on how scientific knowledge is accessed—or outright suppressed.
In the digital age, the collaborative and often community-governed effort of scholarly research has gone global and unlocked unprecedented potential to improve our understanding and quality of life. That is, if we let it. Publishers continue to monopolize access to life-saving research and increase the burden on researchers through article processing charges and a pyramid of volunteer labor. This exploitation makes a mockery of open inquiry and makes the denial of access a serious human rights issue.
While alternatives like Diamond Open Access are promising, crashing through publishing gatekeepers isn’t enough. Large intermediary platforms are capturing other aspects of the research process—inserting themselves between researchers, and between researchers and the published works they rely on—through platformization.
Funneling scholars into a few major platforms isn’t just annoying; it’s corrosive to privacy and intellectual freedom. Enshittification has come for research infrastructure, turning everyday tools into avenues for surveillance. Most professors are now worried their research is being scrutinized by academic bossware, forcing them to chase arbitrary metrics that don’t always reflect research quality. As researchers play this numbers game, a growing threat of surveillance in scholarly publishing gives these measures a menacing tilt, chilling the publication of and access to targeted research areas. These risks spike in the midst of governmental campaigns to muzzle scientific knowledge, buttressed by a scourge of platform censorship on corporate social media.
The only antidote to this ‘platformization’ is Open Science and decentralization. The infrastructure we rely on must be built in the open, on interoperable standards, and be hostile to corporate (or governmental) takeovers. Universities and the science community are well situated to lead this fight. As we’ve seen in EFF’s Tor University Challenge, promoting access to knowledge and public interest infrastructure is aligned with the core values of higher education.
Using social media as an example, universities have a strong interest in promoting the work being done at their campuses far and wide. This is where traditional platforms fall short: algorithms typically prioritize paid content, downrank off-site links, and promote sensational claims to drive engagement. When users are free from enshittification and can themselves control the platform’s algorithms, as they can on platforms like Bluesky, scientists get more engagement and find interactions are more useful.
Institutions play a pivotal role in encouraging the adoption of these alternatives, ranging from leveraging existing IT support to assist with account use and verification, all the way to shouldering some of the hosting with Mastodon instances and/or Bluesky PDS for official accounts. This support is good for the research, good for the university, and makes our systems of science more resilient to attacks on science and the instability of digital monocultures.
This subtle influence of intermediaries can also appear in other tools relied on by researchers, but a number of open alternatives and interoperable tools have been developed for everything from citation management and data hosting to online chat among collaborators. Individual scholars and research teams can implement these tools today, but real change depends on institutions investing in tech that puts community before shareholders.
When infrastructure is too centralized, gatekeepers gain new powers to capture, enshittify, and censor. The result is a system that becomes less useful, less stable, and more costly to access. Science thrives on sharing and access equity, and its future depends on a global and democratic revolt against predatory centralized platforms.
EFF is proud to celebrate Open Access Week.
Joint Statement on the UN Cybercrime Convention: EFF and Global Partners Urge Governments Not to Sign
Today, EFF joined a coalition of civil society organizations in urging UN Member States not to sign the UN Convention Against Cybercrime. For those that move forward despite these warnings, we urge them to take immediate and concrete steps to limit the human rights harms this Convention will unleash. These harms are likely to be severe and will be extremely difficult to prevent in practice.
The Convention obligates states to establish broad electronic surveillance powers to investigate and cooperate on a wide range of crimes—including those unrelated to information and communication systems—without adequate human rights safeguards. It requires governments to collect, obtain, preserve, and share electronic evidence with foreign authorities for any “serious crime”—defined as an offense punishable under domestic law by at least four years’ imprisonment (or a higher penalty).
In many countries, merely speaking freely, expressing a nonconforming sexual orientation or gender identity, or protesting peacefully can constitute a serious criminal offense under the Convention’s definition. People have faced lengthy prison terms, or even worse abuses like torture, for criticizing their governments on social media, raising a rainbow flag, or criticizing a monarch.
In today’s digital era, nearly every message or call generates granular metadata—revealing who communicates with whom, when, and from where—that routinely traverses national borders through global networks. The UN cybercrime convention, as currently written, risks enabling states to leverage its expansive cross-border data-access and cooperation mechanisms to obtain such information for political surveillance—abusing the Convention’s mechanisms to monitor critics, pressure their families, and target marginalized communities abroad.
As abusive governments increasingly rely on questionable tactics to extend their reach beyond their borders—targeting dissidents, activists, and journalists worldwide—the UN Cybercrime Convention risks becoming a vehicle for globalizing repression, enabling an unprecedented multilateral infrastructure for digital surveillance that allows states to access and exchange data across borders in ways that make political monitoring and targeting difficult to detect or challenge.
EFF has long sounded the alarm over the UN Cybercrime Treaty’s sweeping powers of cross-border cooperation and its alarming lack of human-rights safeguards. As the Convention opens for signature on October 25–26, 2025 in Hanoi, Vietnam—a country repeatedly condemned by international rights groups for jailing critics and suppressing online speech—the stakes for global digital freedom have never been higher.
The Convention’s many flaws cannot easily be mitigated because it fundamentally lacks a mechanism for suspending states that systematically fail to respect human rights or the rule of law. States must refuse to sign or ratify the Convention.
Read our full letter here.
When AI and Secure Chat Meet, Users Deserve Strong Controls Over How They Interact
Both Google and Apple are cramming new AI features into their phones and other devices, and neither company has offered clear ways to control which apps those AI systems can access. Recent issues around WhatsApp on both Android and iPhone demonstrate how these interactions can go sideways, risking the exposure of chat conversations beyond what you intend. Users deserve better controls and clearer documentation around what these AI features can access.
After diving into how Google Gemini and Apple Intelligence (and in some cases Siri) currently work, we didn’t always find clear answers to questions about how data is stored, who has access, and what it can be used for.
At a high level, when you compose a message with these tools, the companies can usually see the contents of those messages and receive at least a temporary copy of the text on their servers.
When receiving messages, things get trickier. When you use an AI like Gemini or a feature like Apple Intelligence to summarize or read notifications, we believe companies should be doing that content processing on-device. But poor documentation and weak guardrails have led us deep into documentation rabbit holes that still leave the privacy practices less clear than we’d like.
We’ll dig into the specifics below as well as potential solutions we’d like to see Apple, Google, and other device-makers implement, but first things first, here’s what you can do right now to control access:
Control AI Access to Secure Chat on Android and iOS
Here are some steps you can take to control access if you want nothing to do with the device-level AI features' integration and don’t want to risk accidentally sharing the text of a message outside of the app you’re using.
How to Check and Limit What Gemini Can Access
If you’re using Gemini on your Android phone, it’s a good time to review your settings to ensure things are set up how you want. Here’s how to check each of the relevant settings:
- Disable Gemini Apps Activity: Gemini Apps Activity is a history Google stores of all your interactions with Gemini. It’s enabled by default. To disable it, open Gemini (depending on your phone model, you may or may not even have the Google Gemini app installed. If you don’t have it installed, you don’t really need to worry about any of this). Tap your profile picture > Gemini Apps Activity, then change the toggle to either “Turn off,” or “Turn off and delete activity” if you want to delete previous conversations. If the option reads “Turn on,” then Gemini Apps Activity is already turned off.
- Control app and notification access: You can control which apps Gemini can access by tapping your profile picture > Apps, then scrolling down and disabling the toggle next to any apps you do not want Gemini to access. If you do not want Gemini to potentially access the content that appears in notifications, open the Settings app and revoke notification access from the Google app.
- Delete the Gemini app: Depending on your phone model, you might be able to delete the Gemini app and revert to using Google Assistant instead. You can do so by long-pressing the Gemini app and selecting the option to delete.
Similarly, there are a few things you can do to clamp down on what Apple Intelligence and Siri can do:
- Disable the “Use with Siri Requests” option: If you want to continue using Siri, but don’t want to accidentally use it to send messages through secure messaging apps, like WhatsApp, then you can disable that feature by opening Settings > Apps > [app name], and disabling “Use with Siri Requests,” which turns off the ability to compose messages with Siri and send them through that app.
- Disable Apple Intelligence entirely: Apple Intelligence is an all-or-nothing setting on iPhones, so if you want to avoid any potential issues, your only option is to turn it off completely. To do so, open Settings > Apple Intelligence & Siri, and disable “Apple Intelligence” (you will only see this option if your device supports Apple Intelligence; if it doesn’t, the menu will only be for “Siri”). You can also disable certain features, like “writing tools,” using Screen Time restrictions. Siri can’t be universally turned off in the same way, though you can turn off the options under “Talk to Siri” to make it so you can’t speak to it.
For more information about cutting off AI access at different levels in other apps, this Consumer Reports article covers other platforms and services.
Why It Matters: Sending Messages Has Different Privacy Concerns than Receiving Them
Let’s start with a look at how Google and Apple integrate their AI systems into message composition, using WhatsApp as an example.
Google Gemini and WhatsApp
On Android, you can optionally link WhatsApp and Gemini together so you can then initiate various actions for sending messages from the Gemini app, like “Call Mom on WhatsApp” or “Text Jason on WhatsApp that we need to cancel our secret meeting, but make it a haiku.” This feature raised red flags for users concerned about privacy.
By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are stored forever, subject to human review, and are used to train Google’s products. So, unless you change it, when you use Gemini to compose and send a message in WhatsApp, the message you composed is visible to Google.
If you turn the activity off, interactions are still stored for 72 hours. Google’s documentation claims that even though messages are stored, those conversations aren't reviewed or used to improve Google machine learning technologies, though that appears to be an internal policy choice with no technical limits preventing Google from accessing those messages.
The simplicity of invoking Gemini to compose and send a message may lead to a false sense of privacy. Notably, other secure messaging apps, like Signal, do not offer this Gemini integration.
For comparison’s sake, let’s see how this works with Apple devices.
Siri and WhatsApp
The closest comparison to this process on iOS is to use Siri, which, it is claimed, will eventually be part of Apple Intelligence. Currently, Apple’s AI message composition tools are not available for third-party apps like Signal and WhatsApp.
According to its privacy policy, when you dictate a message through Siri to send to WhatsApp (or anywhere else), the message, including metadata like the recipient phone number and other identifiers, is sent to Apple’s servers. This was confirmed by researchers to include the text of messages sent to WhatsApp. When you use Siri to compose a WhatsApp message, the message gets routed to both Apple and WhatsApp. Apple claims it does not store this transcript unless you’ve opted into “Improve Siri and Dictation.” WhatsApp defers to Apple’s support for data handling concerns. This is similar to how Google handles speech-to-text prompts.
In response to that research, Apple said this was expected behavior with an app that uses SiriKit—the extension that allows third-party apps to integrate with Siri—like WhatsApp does.
Both Siri and Apple Intelligence can sometimes run locally on-device, and other times need to rely on Apple-managed cloud servers to complete requests. Apple Intelligence can use the company’s Private Cloud Compute, but Siri doesn’t have a similar feature.
The ambiguity around where data goes makes it overly difficult to decide whether you are comfortable with the sort of privacy trade-off that using features like Siri or Apple Intelligence might entail.
How Receiving Messages Works
Sending encrypted messages is just one half of the privacy puzzle. What happens on the receiving end matters too.
Google Gemini
By default, the Gemini app doesn’t have access to the text inside secure messaging apps or to notifications. But you can grant access to notifications using the Utilities app. Utilities can read, summarize, and reply to notifications, including in WhatsApp and Signal (it can also read notifications aloud through headphones).
We could not find anything in Google’s Utilities documentation that clarifies what information is collected, stored, or sent to Google from these notifications. When we reached out to Google, the company responded that it “builds technical data protections that safeguard user data, uses data responsibly, and provides users with tools to control their Gemini experience.” That means Google has no technical limitation around accessing the text from notifications if you’ve enabled the feature in the Utilities app, which could open up any notifications routed through the Utilities app to the Gemini app to be accessed internally or by third parties. Google needs to make its data handling explicit in its public documentation.
If you use encrypted communications apps and have granted access to notifications, then it is worth considering disabling that feature or controlling what’s visible in your notifications on an app-level.
Apple Intelligence
Apple is clearer about how it handles this sort of notification access.
Siri can read and reply to messages with the “Announce Notifications” feature. With this enabled, Siri can read notifications out loud on select headphones or via CarPlay. In a press release, Apple states, “When a user talks or types to Siri, their request is processed on device whenever possible. For example, when a user asks Siri to read unread messages… the processing is done on the user’s device. The contents of the messages aren’t transmitted to Apple servers, because that isn’t necessary to fulfill the request.”
Apple Intelligence can summarize notifications from any app that you’ve enabled notifications on. Apple is clear that these summaries are generated on your device, “when Apple Intelligence provides you with preview summaries of your emails, messages, and notifications, these summaries are generated by on-device models.” This means there should be no risk that the text of notifications from apps like WhatsApp or Signal get sent to Apple’s servers just to summarize them.
New AI Features Must Come With Strong User Controls
As device-makers cram more AI features into their devices, it becomes all the more necessary for us to have clear and simple controls over what personal data those features can access on our devices. If users do not have control over when a text leaves a device for any sort of AI processing—whether that’s to a “private” cloud or not—it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications.
Per-app AI Permissions
Google, Apple, and other device makers should add a device-level AI permission to their phones, just as they do for other potentially invasive privacy features like location sharing. You should be able to tell the operating system’s AI not to access an app, even if that comes at the “cost” of missing out on some features. The setting should be straightforward and easy to understand in ways the Gemini and Apple Intelligence controls currently are not.
Device-makers should offer an “on-device only” mode for those interested in using some features without having to try to figure out what happens on device or on the cloud. Samsung offers this, and both Google and Apple would benefit from a similar option.
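As a purely conceptual sketch of what we mean (hypothetical names and defaults, not any existing Android or iOS API), a per-app AI permission paired with an "on-device only" tier might look something like this:

```python
from enum import Enum

class AIAccess(Enum):
    DENIED = "denied"              # the assistant may not touch this app's data
    ON_DEVICE_ONLY = "on_device"   # processing allowed, but never leaves the phone
    CLOUD_ALLOWED = "cloud"        # the user has opted in to cloud processing

# A user-controlled policy, set per app in system settings (hypothetical values).
policy = {
    "org.signal": AIAccess.DENIED,
    "com.whatsapp": AIAccess.ON_DEVICE_ONLY,
    "com.example.notes": AIAccess.CLOUD_ALLOWED,
}

def may_process(app_id: str, needs_cloud: bool) -> bool:
    """Return True if the system AI may process content from this app."""
    access = policy.get(app_id, AIAccess.DENIED)  # default-deny unknown apps
    if access is AIAccess.DENIED:
        return False
    if access is AIAccess.ON_DEVICE_ONLY:
        return not needs_cloud
    return True

print(may_process("com.whatsapp", needs_cloud=True))   # False: must stay on device
print(may_process("com.whatsapp", needs_cloud=False))  # True
```

The point of the sketch is the shape of the control: per-app, default-deny, and visible to the user, rather than an all-or-nothing toggle buried across multiple settings screens.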
Both Google and Apple should improve their documentation about how these features interact with various apps. Apple doesn’t seem to clarify notification processing privacy anywhere outside of a press release, and we couldn’t find anything about Google’s Utilities privacy at all. We appreciate tools like Gemini Apps Activity as a way to audit what the company collects, but vague information like “Prompted a Communications query” is only useful if there’s an explanation somewhere about what that means.
The current user options are not enough. It’s clear that the AI features device-makers add come with significant confusion about their privacy implications, and it’s time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered to both users and developers who want real, transparent safeguards over how a company accesses their private data and communications.
Civil Disobedience of Copyright Keeps Science Going
Creating and sharing knowledge are defining traits of humankind, yet copyright law has grown so restrictive that it can require acts of civil disobedience to ensure that students and scholars have the books they need and to preserve swaths of culture from being lost forever.
Reputable research generally follows a familiar pattern: Scientific articles are written by scholars based on their research—often with public funding. Those articles are then peer-reviewed by other scholars in their fields and revisions are made according to those comments. Afterwards, most large publishers expect to be given the copyright on the article as a condition of packaging it up and selling it back to the institutions that employ the academics who did the research and to the public at large. Because research is valuable and because copyright is a monopoly on disseminating the articles in question, these publishers can charge exorbitant fees that place a strain even on wealthy universities and are simply out of reach for the general public or universities with limited budgets, such as those in the global south. The result is a global human rights problem.
This model is broken, yet science goes on thanks to widespread civil disobedience of the copyright regime that locks up the knowledge created by researchers. Some turn to social media to ask that a colleague with access share articles they need (despite copyright’s prohibitions on sharing). Certainly, at least some such sharing is protected fair use, but scholars should not have to seek a legal opinion or risk legal threats from publishers to share the collective knowledge they generate.
Even more useful, though on shakier legal ground, are so-called “shadow archives” and aggregators such as SciHub, Library Genesis (LibGen), Z-Library, or Anna’s Archive. These are the culmination of efforts from volunteers dedicated to defending science.
SciHub alone handles tens of millions of requests for scientific articles each year and remains operational despite adverse court rulings, thanks both to being based in Russia and to the community of academics who see it as an ethical response to the high access barriers that publishers impose and who provide it with their log-on credentials so it can retrieve requested articles. SciHub and LibGen are continuations of samizdat, the Soviet-era practice of disobeying state censorship in the interests of learning and free speech.
Unless publishing gatekeepers adopt drastically more equitable practices and become partners in disseminating knowledge, they will continue to lose ground to open access alternatives, legal or otherwise.
EFF is proud to celebrate Open Access Week.
EFF Backs Constitutional Challenge to Ecuador’s Intelligence Law That Undermines Human Rights
In early September, EFF submitted an amicus brief to Ecuador’s Constitutional Court supporting a constitutional challenge filed by Ecuadorian NGOs, including INREDH and LaLibre. The case challenges the constitutionality of the Ley Orgánica de Inteligencia (LOI) and its implementing regulation, the General Regulation of the LOI.
EFF’s amicus brief argues that the LOI enables disproportionate surveillance and secrecy that undermine constitutional and Inter-American human rights standards. EFF urges the Constitutional Court to declare the LOI and its regulation unconstitutional in their entirety.
More specifically, our submission notes that:
“The LOI presents a structural flaw that undermines compliance with the principles of legality, legitimate purpose, suitability, necessity, and proportionality; it inverts the rule and the exception, with serious harm to rights enshrined constitutionally and under the Convention; and it prioritizes indeterminate state interests, in contravention of the ultimate aim of intelligence activities and state action, namely the protection of individuals, their rights, and freedoms.”
Core Legal Problems Identified
Vague and Overbroad Definitions
The LOI contains key terms like “national security,” “integral security of the State,” “threats,” and “risks” that are left either undefined or so broadly framed that they could mean almost anything. This vagueness grants intelligence agencies wide, unchecked discretion, and falls short of the standard of legal certainty required under the American Convention on Human Rights (CADH).
Secrecy and Lack of Transparency
The LOI makes secrecy the rule rather than the exception, reversing the Inter-American principle of maximum disclosure, which holds that access to information should be the norm and secrecy a narrowly justified exception. The law establishes a classification system—“restricted,” “secret,” and “top secret”—for intelligence and counterintelligence information, but without clear, verifiable parameters to guide its application on a case-by-case basis. As a result, all information produced by the governing body (ente rector) of the National Intelligence System is classified as secret by default. Moreover, intelligence budgets and spending are insulated from meaningful public oversight, concentrated under a single authority, and ultimately destroyed, leaving no mechanism for accountability.
Weak or Nonexistent Oversight Mechanisms
The LOI leaves intelligence agencies to regulate themselves, with almost no external scrutiny. Civilian oversight is minimal, limited to occasional, closed-door briefings before a parliamentary commission that lacks real access to information or decision-making power. This structure offers no guarantee of independent or judicial supervision and instead fosters an environment where intelligence operations can proceed without transparency or accountability.
Intrusive Powers Without Judicial Authorization
The LOI allows access to communications, databases, and personal data without prior judicial order, which enables the mass surveillance of electronic communications, metadata, and databases across public and private entities—including telecommunication operators. This directly contradicts rulings of the Inter-American Court of Human Rights, which establish that any restriction of the right to privacy must be necessary, proportionate, and subject to independent oversight. It also runs counter to CAJAR vs. Colombia, which affirms that intrusive surveillance requires prior judicial authorization.
International Human Rights Standards Applied
Our amicus curiae draws on the CAJAR vs. Colombia judgment, which set strict standards for intelligence activities. Crucially, Ecuador’s LOI falls short of all these tests: it doesn’t constitute an adequate legal basis for limiting rights; contravenes necessary and proportionate principles; fails to ensure robust controls and safeguards, like prior judicial authorization and solid civilian oversight; and completely disregards related data protection guarantees and data subjects’ rights.
At its core, the LOI structurally prioritizes vague notions of “state interest” over the protection of human rights and fundamental freedoms. It legalizes secrecy, unchecked surveillance, and the impunity of intelligence agencies. For these reasons, we urge Ecuador’s Constitutional Court to declare the LOI and its regulations unconstitutional, as they violate both the Ecuadorian Constitution and the American Convention on Human Rights (CADH).
Read our full amicus brief here to learn more about how Ecuador’s intelligence framework undermines privacy, transparency, and the human rights protected under Inter-American human rights law.
