EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Ola Bini Faces Ecuadorian Prosecutors Seeking to Overturn Acquittal of Cybercrime Charge

Mon, 04/01/2024 - 12:21pm

Ola Bini, the software developer acquitted last year of cybercrime charges in a unanimous verdict in Ecuador, was back in court last week in Quito as prosecutors, using the same evidence that helped clear him, asked an appeals court to overturn the decision with bogus allegations of unauthorized access to a telecommunications system.

Armed with a grainy image of a telnet session—which the lower court already ruled was not proof of criminal activity—and testimony of an expert witness who never had access to the devices and systems involved in the alleged intrusion, prosecutors presented the theory that, by connecting to a router, Bini gained partial unauthorized access in an attempt to break into a system provided by Ecuador’s national telecommunications company (CNT) to the presidency’s contingency center.

If this all sounds familiar, that’s because it is. In an unfounded criminal case plagued by irregularities, delays, and due process violations, Ecuadorian prosecutors have for the last five years sought to prove Bini violated the law by allegedly accessing an information system without authorization.

Bini, who resides in Ecuador, was arrested at the Quito airport in 2019 without being told why. He first learned about the charges from a TV news report depicting him as a criminal trying to destabilize the country. He spent 70 days in jail and cannot leave Ecuador or use his bank accounts.

Bini prevailed in a trial last year before a three-judge panel. The core evidence the Prosecutor’s Office and CNT’s lawyer presented to support the accusation of unauthorized access to a computer, telematic, or telecommunications system was a printed image of a telnet session allegedly taken from Bini’s mobile phone.

The image shows the user requesting a telnet connection to an open server using their computer’s command line. The open server warns that unauthorized access is prohibited and asks for a username. No username is entered. The connection then times out and closes. Rather than demonstrating that Bini intruded into the Ecuadorian telephone network system, it shows the trail of someone who paid a visit to a publicly accessible server—and then politely obeyed the server's warnings about usage and access.
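
For readers unfamiliar with telnet, here is a minimal, hypothetical sketch (in Python) of the kind of interaction the image appears to depict: a client opens a connection, the server prints a warning banner and prompts for a username, nothing is entered, and the session simply times out or is closed. The host address, port, and timeout below are illustrative placeholders, not details taken from the case evidence.

```python
# Illustrative sketch only: a hypothetical reconstruction of the kind of
# interaction described above. The address is a documentation placeholder
# (TEST-NET), not the CNT router or any real system from the case.
import socket

HOST = "192.0.2.1"  # hypothetical router address
PORT = 23           # standard telnet port

with socket.create_connection((HOST, PORT), timeout=30) as conn:
    banner = conn.recv(4096).decode(errors="replace")
    print(banner)   # the server prints a warning that unauthorized access
                    # is prohibited, then prompts for a username
    # No username is ever sent; the client simply waits.
    try:
        conn.recv(4096)     # typically returns b"" once the server closes
                            # the idle session
    except socket.timeout:
        pass                # or the read simply times out
```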

Bini’s acquittal was a major victory for him and the work of security researchers. By assessing the evidence presented, the court concluded that both the Prosecutor’s Office and CNT failed to demonstrate a crime had occurred. There was no evidence that unauthorized access had ever happened, nor anything to sustain the malicious intent that article 234 of Ecuador’s Penal Code requires to characterize the offense of unauthorized access.

The court emphasized the necessity of proper evidence to prove that an alleged computer crime occurred and found that the image of a telnet session presented in Bini’s case is not fit for this purpose. The court explained that graphical representations, which can be altered, do not constitute evidence of cybercrime, since an image cannot verify whether the commands shown in it were actually executed. Building on technical experts' testimonies, the court said that what does not emerge from, or cannot be verified by, digital forensics is not proper digital evidence.

Prosecutors appealed the verdict and are back in court using the same image that didn’t prove any crime was committed. At the March 26 hearing, prosecutors said their expert witness’s analysis of the telnet image shows there was connectivity to the router. The witness compared it to entering the yard of someone’s property to see if the gate to the property is open or closed. Entering the yard is analogous to connecting to the router, the witness said.

Actually, no. Our interpretation of the image, which was leaked to the media before Bini’s trial, is that it’s the internet equivalent of seeing an open gate, walking up to it, seeing a “NO TRESPASSING” sign, and walking away. If this image proves anything, it is that no unauthorized access happened.

Yet no expert analysis was conducted on the systems allegedly affected. The expert witness’s testimony was based on his analysis of a CNT report—he didn’t have access to the CNT router to verify its configuration. He didn’t digitally validate whether what was shown in the report actually happened, and he was never asked to verify the existence of an IP address owned or managed by CNT.

That’s not the only problem with the appeal proceedings. Deciding the appeal is a panel of three judges, two of whom ruled to keep Bini in detention after his arrest in 2019 because there were allegedly sufficient elements to establish a suspicion against him. The detention was later considered illegal and arbitrary because of a lack of such elements. Bini filed a lawsuit against the Ecuadorian state, including the two judges, for violating his rights. Bini’s defense team has sought to remove these two judges from the appeals case, but his requests were denied.

The appeals court panel is expected to issue a final ruling in the coming days.  

U.S. Supreme Court Does Not Go Far Enough in Determining When Government Officials Are Barred from Censoring Critics on Social Media

Fri, 03/29/2024 - 5:45pm

After several years of litigation across the federal appellate courts, the U.S. Supreme Court in a unanimous opinion has finally crafted a test that lower courts can use to determine whether a government official engaged in “state action” such that censoring individuals on the official’s social media page—even if also used for personal purposes—would violate the First Amendment.

The case, Lindke v. Freed, came out of the Sixth Circuit and involves a city manager, while a companion case called O'Connor-Ratcliff v. Garnier came out of the Ninth Circuit and involves public school board members.

A Two-Part Test

The First Amendment prohibits the government from censoring individuals’ speech in public forums based on the viewpoints those individuals express. In the age of social media, where people in government positions use public-facing social media for personal, campaign, and official government purposes, it can be unclear whether the interactive parts (e.g., the comments section) of a social media page operated by someone who works in government amount to a government-controlled public forum subject to the First Amendment’s prohibition on viewpoint discrimination. Another way of stating the issue is whether a government official who uses a social media account for personal purposes is engaging in state action when they also use the account to speak about government business.

As the Supreme Court states in the Lindke opinion, “Sometimes … the line between private conduct and state action is difficult to draw,” and the question is especially difficult “in a case involving a state or local official who routinely interacts with the public.”

The Supreme Court announced a fact-intensive test to determine if a government official’s speech on social media counts as state action under the First Amendment. The test includes two required elements:

  • the official “possessed actual authority to speak” on the government’s behalf, and
  • the official “purported to exercise that authority when he spoke on social media.”

Although the court’s opinion isn’t as generous to internet users as we had asked for in our amicus brief, it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

This issue has been percolating in the courts since at least 2016. Perhaps most famously, the Knight First Amendment Institute at Columbia University and others sued then-president Donald Trump for blocking many of the plaintiffs on Twitter. In that case, the U.S. Court of Appeals for the Second Circuit affirmed a district court’s holding that President Trump’s practice of blocking critics from his Twitter account violated the First Amendment. EFF has also represented PETA in two cases against Texas A&M University.

Element One: Does the official possess actual authority to speak on the government’s behalf?

There is some ambiguity as to what specific authority the Supreme Court believes the government official must have. The opinion is unclear whether the authority is simply the general authority to speak officially on behalf of the public entity, or instead the specific authority to speak officially on social media. On the latter framing, the opinion, for example, discusses the authority “to post city updates and register citizen concerns,” and the authority “to speak for the [government]” that includes “the authority to do so on social media….” The broader authority to generally speak on behalf of the government would be easier to prove for plaintiffs and should always include any authority to speak on social media.

Element One Should Be Interpreted Broadly

We will urge the lower courts to interpret the first element broadly. As we emphasized in our amicus brief, social media is so widely used by government agencies and officials at all levels that a government official’s authority generally to speak on behalf of the public entity they work for must include the right to use social media to do so. Any other result does not reflect the reality we live in.

Moreover, plaintiffs who are being censored on social media are not typically commenting on the social media pages of low-level government employees, say, the clerk at the county tax assessor’s office, whose authority to speak publicly on behalf of their agency may be questionable. Plaintiffs are instead commenting on the social media pages of people in leadership positions, who are often agency heads or in elected positions and who surely should have the general authority to speak for the government.

“At the same time,” the Supreme Court cautions, “courts must not rely on ‘excessively broad job descriptions’ to conclude that a government employee is authorized to speak” on behalf of the government. But under what circumstances would a court conclude that a government official in a leadership position does not have such authority? We hope these circumstances are few and far between for the sake of plaintiffs seeking to vindicate their First Amendment rights.

When Does the Use of a New Communications Technology Become So “Well Settled” That It May Fairly Be Considered Part of a Government Official’s Public Duties?

If, on the other hand, the lower courts interpret the first element narrowly and require plaintiffs to provide evidence that the government official who censored them had authority to speak on behalf of the agency on social media specifically, this will be more difficult to prove.

One helpful aspect of the court’s opinion is that the government official’s authority to speak (however that’s defined) need not be written explicitly in their job description. This is in contrast to what the Sixth Circuit had, essentially, held. The authority to speak on behalf of the government, instead, may be based on “persistent,” “permanent,” and “well settled” “custom or usage.”  

We remain concerned, however, that if there is a narrower requirement that the authority must be to speak on behalf of the government via a particular communications technology—in this case, social media—then at what point does the use of a new technology become so “well settled” for government officials that it is fair to conclude that it is within their public duties?

Fortunately, the case law on which the Supreme Court relies does not require an extended period of time for a government practice to be deemed a legally sufficient “custom or usage.” It would not make sense to require an ages-old custom and usage of social media when the widespread use of social media within the general populace is only a decade and a half old. Ultimately, we will urge lower courts to avoid this problem and broadly interpret element one.

Government Officials May Be Free to Censor If They Speak About Government Business Outside Their Immediate Purview

Another problematic aspect of the Supreme Court’s opinion within element one is the additional requirement that “[t]he alleged censorship must be connected to speech on a matter within [the government official’s] bailiwick.”

The court explains:

For example, imagine that [the city manager] posted a list of local restaurants with health-code violations and deleted snarky comments made by other users. If public health is not within the portfolio of the city manager, then neither the post nor the deletions would be traceable to [his] state authority—because he had none.

But the average constituent may not make such a distinction—nor should they. They would simply see a government official talking about an issue generally within the government’s area of responsibility. Yet under this interpretation, the city manager would be within his rights to delete the comments, as the constituent could not prove that the issue was within that particular government official’s purview, and they would thus fail to meet element one.

Element Two: Did the official purport to exercise government authority when speaking on social media?

Plaintiffs Are Limited in How a Social Media Account’s “Appearance and Function” Inform the State Action Analysis

In our brief, we argued for a functional test, where state action would be found if a government official were using their social media account in furtherance of their public duties, even if they also used that account for personal purposes. This was essentially the standard that the Ninth Circuit adopted, which included looking at, in the words of the Supreme Court, “whether the account’s appearance and content look official.” The Supreme Court’s two-element test is more cumbersome for plaintiffs. But the upside is that the court agrees that a social media account’s “appearance and function” is relevant, even if only with respect to element two.

Reality of Government Officials Using Both Personal and Official Accounts in Furtherance of Their Public Duties Is Ignored

Another problematic aspect of the Supreme Court’s discussion of element two is that a government official’s social media page would amount to state action if the page is the “only” place where content related to government business is located. The court provides an example: “a mayor would engage in state action if he hosted a city council meeting online by streaming it only on his personal Facebook page” and it wasn’t also available on the city’s official website. The court further discusses a new city ordinance that “is not available elsewhere,” except on the official’s personal social media page. By contrast, if “the mayor merely repeats or shares otherwise available information … it is far less likely that he is purporting to exercise the power of his office.”

This limitation is divorced from reality and will hamstring plaintiffs seeking to vindicate their First Amendment rights. As we showed extensively in our brief (see Section I.B.), government officials regularly use both official office accounts and “personal” accounts for the same official purposes, by posting the same content and soliciting constituent feedback—and constituents often do not understand the difference.

Constituent confusion is particularly salient when government officials continue to use “personal” campaign accounts after they enter office. The court’s conclusion that a government official “might post job-related information for any number of personal reasons, from a desire to raise public awareness to promoting his prospects for reelection” is thus highly problematic. The court is correct that government officials have their own First Amendment right to speak as private citizens online. However, their constituents should not be subject to censorship when a campaign account functions the same as a clearly official government account.

An Upside: Supreme Court Denounces the Blocking of Users Even on Mixed-Use Social Media Accounts

One very good aspect of the Supreme Court’s opinion is that if the censorship amounted to the blocking of a plaintiff from engaging with the government official’s social media page as a whole, then the plaintiff must merely show that the government official “had engaged in state action with respect to any post on which [the plaintiff] wished to comment.”  

The court further explains:

The bluntness of Facebook’s blocking tool highlights the cost of a “mixed use” social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability.

We are pleased with this language and hope it discourages government officials from engaging in the most egregious of censorship practices.

The Supreme Court also makes the point that if the censorship was the deletion of a plaintiff’s individual comments under a government official’s posts, then those posts must each be analyzed under the court’s new test to determine whether a particular post was official action and whether the interactive spaces that accompany it are government forums. As the court states, “it is crucial for the plaintiff to show that the official is purporting to exercise state authority in specific posts.” This is in contrast to the Sixth Circuit, which held, “When analyzing social-media activity, we look to a page or account as a whole, not each individual post.”

The Supreme Court’s new test for state action unfortunately puts a thumb on the scale in favor of government officials who wish to censor constituents who engage with them on social media. However, the test does chart a path forward on this issue and should be workable if lower courts apply the test with an eye toward maximizing constituents’ First Amendment rights online.

Restricting Flipper is a Zero Accountability Approach to Security: Canadian Government Response to Car Hacking

Thu, 03/28/2024 - 11:30pm

On February 8, François-Philippe Champagne, the Canadian Minister of Innovation, Science and Industry, announced Canada would ban devices used in keyless car theft. The only device mentioned by name was the Flipper Zero—the multitool device that can be used to test, explore, and debug different wireless protocols such as RFID, NFC, infrared, and Bluetooth.


While it is useful as a penetration testing device, the Flipper Zero is impractical for car theft compared to other, more specialized devices. It’s possible that social media hype around the Flipper Zero has led people to believe the device offers easier hacking opportunities for car thieves*. But government officials are also consuming that hype, and it leads to policies that don’t secure systems but rather impede important research that exposes vulnerabilities the industry should fix. Even with Canada walking back its original statement outright banning the devices, restricting devices and sales to “move forward with measures to restrict the use of such devices to legitimate actors only” is troublesome for security researchers.

This is not the first government seeking to limit access to Flipper Zero, and we have explained before why this approach is not only harmful to security researchers but also leaves the general population more vulnerable to attacks. Security researchers may not have the specialized tools car thieves use at their disposal, so more general tools come in handy for catching and protecting against vulnerabilities. Broad purpose devices such as the Flipper have a wide range of uses: penetration testing to facilitate hardening of a home network or organizational infrastructure, hardware research, security research, protocol development, use by radio hobbyists, and many more. Restricting access to these devices will hamper development of strong, secure technologies.

When Brazil’s national telecoms regulator Anatel refused to certify the Flipper Zero and as a result prevented the national postal service from delivering the devices, they were responding to media hype. With a display and controls reminiscent of portable video game consoles, the compact form-factor and range of hardware (including an infrared transceiver, RFID reader/emulator, SDR and Bluetooth LE module) made the device an easy target to demonize. While conjuring imagery of point-and-click car theft was easy, citing examples of this actually occurring proved impossible. Over a year later, you’d be hard-pressed to find a single instance of a car being stolen with the device. The number of cars stolen with the Flipper seems to amount to, well, zero (pun intended). It is the same media hype and pure speculation that has led Canadian regulators to err in their judgment to ban these devices.

Still worse, law enforcement in other countries have signaled their own intentions to place owners of the device under greater scrutiny. The Brisbane Times quotes police in Queensland, Australia: “We’re aware it can be used for criminal means, so if you’re caught with this device we’ll be asking some serious questions about why you have this device and what you are using it for.” We assume other tools with similar capabilities, as well as Swiss Army Knives and Sharpie markers, all of which “can be used for criminal means,” will not face this same level of scrutiny. Just owning this device, whether as a hobbyist or professional—or even just as a curious customer—should not make one the subject of overzealous police suspicions.

It wasn’t too long ago that proficiency with the command line was seen as a dangerous skill that warranted intervention by authorities. And just as with those fears of decades past, the small grain of truth embedded in the hype and fears gives it an outsized power. Can the command line be used to do bad things? Of course. Can the Flipper Zero assist criminal activity? Yes. Can it be used to steal cars? Not nearly as well as many other (and better, from the criminals’ perspective) tools. Does that mean it should be banned, and that those with this device should be placed under criminal suspicion? Absolutely not.

We hope Canada wises up to this logic, and comes to view the device as just one of many in the toolbox that can be used for good or evil, but mostly for good.

*Though concerns have been raised about Flipper Devices' connection to the Russian state apparatus, no unexpected data has been observed escaping to Flipper Devices' servers, and much of the dedicated security and pen-testing hardware which hasn't been banned also suffers from similar problems.

EFF Asks Oregon Supreme Court Not to Limit Fourth Amendment Rights Based on Terms of Service

Wed, 03/27/2024 - 8:26pm

This post was drafted by EFF legal intern Alissa Johnson.

EFF signed on to an amicus brief drafted by the National Association of Criminal Defense Lawyers earlier this month petitioning the Oregon Supreme Court to review State v. Simons, a case involving law enforcement surveillance of over a year’s worth of private internet activity. We ask that the Court join the Ninth Circuit in recognizing that people have a reasonable expectation of privacy in their browsing histories, and that checking a box to access public Wi-Fi does not waive Fourth Amendment rights.

Mr. Simons was convicted of downloading child pornography after police warrantlessly captured his browsing history on an A&W restaurant’s public Wi-Fi network, which he accessed from his home across the street. The network was not password-protected but did require users to agree to an acceptable use policy, which noted that while web activity would not be actively monitored under normal circumstances, A&W “may cooperate with legal authorities.” A private consultant hired by the restaurant noticed a device on the network accessing child pornography sites and turned over logs of all of the device’s unencrypted internet activity, both illegal and benign, to law enforcement.

The Court of Appeals asserted that Mr. Simons had no reasonable expectation of privacy in his browsing history on A&W’s free Wi-Fi network. We disagree.

Browsing history reveals some of the most sensitive personal information that exists—the very privacies of life that the Fourth Amendment was designed to protect. It can allow police to uncover political and religious affiliation, medical history, sexual orientation, or immigration status, among other personal details. Internet users know how much of their private information is exposed through browsing data, take steps to protect it, and expect it to remain private.

Courts have also recognized that browsing history offers an extraordinarily detailed picture of someone’s private life. In Riley v. California, the Supreme Court cited browsing history as an example of the deeply private information that can be found on a cell phone. The Ninth Circuit went a step further in holding that people have a reasonable expectation of privacy in their browsing histories.

People’s expectation of privacy in browsing history doesn’t disappear when tapping “I Agree” on a long scroll of Terms of Service to access public Wi-Fi. Private businesses monitoring internet activity to protect their commercial interests does not license the government to sidestep a warrant requirement, or otherwise waive constitutional rights.

The price of participation in public society cannot be the loss of Fourth Amendment rights to be free of unreasonable government infringement on our privacy. As the Supreme Court noted in Carpenter v. United States, “A person does not surrender all Fourth Amendment protection by venturing into the public sphere.” People cannot negotiate the terms under which they use public Wi-Fi, and in practicality have no choice but to accept the terms dictated by the network provider.

The Oregon Court of Appeals’ assertion that access to public Wi-Fi is convenient but not necessary for participation in modern life ignores well-documented inequalities in internet access across race and class. Fourth Amendment rights are for everyone, not just those with private residences and a Wi-Fi budget.

Allowing private businesses’ Terms of Service to dictate our constitutional rights threatens to make a “crazy quilt” of the Fourth Amendment, as the U.S. Supreme Court pointed out in Smith v. Maryland. Pinning constitutional protection to the contractual provisions of private parties is absurd and impracticable. Almost all of us rely on Wi-Fi outside of our homes, and that access should be protected against government surveillance.

We hope that the Oregon Supreme Court accepts Mr. Simons’ petition for review to address the important constitutional questions at stake in this case.

Meta Oversight Board’s Latest Policy Opinion a Step in the Right Direction

Tue, 03/26/2024 - 3:11pm

EFF welcomes the latest and long-awaited policy advisory opinion from Meta’s Oversight Board calling on the company to end its blanket ban on the use of the Arabic-language term “shaheed” when referring to individuals listed under Meta’s policy on dangerous organizations and individuals, and calls on Meta to fully implement the Board’s recommendations.

Since the Meta Oversight Board was created in 2020 as an appellate body designed to review select contested content moderation decisions made by Meta, we’ve watched with interest as the Board has considered a diverse set of cases and issued expert opinions aimed at reshaping Meta’s policies. While our views on the Board's efficacy in creating long-term policy change have been mixed, we have been happy to see the Board issue policy recommendations that seek to maximize free expression on Meta properties.

The policy advisory opinion, issued Tuesday, addresses posts referring to individuals as “shaheed,” an Arabic term that closely (though not exactly) translates to “martyr,” when those individuals have previously been designated by Meta as “dangerous” under its dangerous organizations and individuals policy. The Board found that Meta’s approach to moderating content that uses the term to refer to individuals designated under the company’s policy on “dangerous organizations and individuals”—a policy that covers both government-proscribed organizations and others selected by the company—substantially and disproportionately restricts free expression.

The Oversight Board first issued a call for comment in early 2023, and in April of last year, EFF partnered with the European Center for Not-for-Profit Law (ECNL) to submit comment for the Board’s consideration. In our joint comment, we wrote:

The automated removal of words such as ‘shaheed’ fail to meet the criteria for restricting users’ right to freedom of expression. They not only lack necessity and proportionality and operate on shaky legal grounds (if at all), but they also fail to ensure access to remedy and violate Arabic-speaking users’ right to non-discrimination.

In addition to finding that Meta’s current approach to moderating such content restricts free expression, the Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”

We couldn’t agree more. We have long been concerned about the impact of corporate policies and government regulations designed to limit violent extremist content on human rights and evidentiary content, as well as journalism and art. We have worked directly with companies and with multi-stakeholder initiatives such as the Global Internet Forum to Counter Terrorism, Tech Against Terrorism, and the Christchurch Call to ensure that freedom of expression remains a core part of policymaking.

In its policy recommendation, the Board acknowledges the importance of Meta’s ability to take action to ensure its platforms are not used to incite violence or recruit people to engage in violence, and that the term “shaheed” is sometimes used by extremists “to praise or glorify people who have died while committing violent terrorist acts.” However, the Board also emphasizes that Meta’s response to such threats must be guided by respect for all human rights, including freedom of expression. Notably, the Board’s opinion echoes our previous demands for policy changes, as well as those of the Stop Silencing Palestine campaign initiated by nineteen digital and human rights organizations, including EFF.

We call on Meta to implement the Board’s recommendations and ensure that future policies and practices respect freedom of expression.

Speaking Freely: Robert Ssempala

Tue, 03/26/2024 - 2:07pm

*This interview has been edited for length and clarity. 

Robert Ssempala is a longtime press freedom and social justice advocate. He serves as Executive Director at Human Rights Network for Journalists-Uganda, a network of journalists in Uganda working towards enhancing the promotion, protection, and respect of human rights through defending and building the capacities of journalists, to effectively exercise their constitutional rights and fundamental freedoms for collective campaigning through the media. Under his leadership, his organization has supported hundreds of journalists who have been assaulted, imprisoned, and targeted in the course of their work. 

 York: What does free speech or free expression mean to you?

 It means being able to give one’s opinions and ideas freely, without fear of reprisals or of facing criminal sanctions, and without being concerned about how another feels about those ideas or opinions. Sometimes even if it’s offensive, it’s one’s opinion. For me, it’s entirely about how one wants to express themselves—it’s all about having the liberty to speak freely.

 York: What are the qualities that make you passionate about free expression?

 For me, it is the light for everyone when they’re able to give their ideas and opinions. It is having a sense of liberty to have an idea. I am very passionate about listening to ideas, about everyone getting to speak what they feel is right. The qualities that make me passionate about it are that, first, I’m from a media background. So, during that time I learned that we are going to receive the people’s ideas and opinions, disseminate them to the wider public, and there will be feedback from the public about what has come out from one side to the other. And that quality is so dear to my heart. And second, it is a sense of freedom that is expressed at all levels, in any part of the country or the world, being the people’s eyes and ears, especially at their critical times of need.

 York: I want to ask you more about Uganda. Can you give us a short overview of what the situation for speech is like in the country right now?

 The climate in Uganda is partly free and partly not free, depending on the nature of the issues at hand. Those that touch civil and political rights are very highly restricted and it has attracted so many reprisals for those that seek to express themselves that way. I work for the Human Rights Network for Journalists-Uganda (HRNJ-Uganda) which is a non-governmental media rights organization, so we monitor and document annually the incidents, trends, and patterns touching freedom of expression and journalists’ rights. Most of the cases that we have received, documented, and worked on are stemming from civil and political rights. We receive less of those that touch economic, social, and cultural rights. So depending on where you’re standing, those media houses and journalists that are critically independent and venture into investigative practices are highly targeted. They have been attacked physically, their gadgets have been confiscated and sometimes even damaged deliberately. Some have lost their jobs under duress because a majority of media ownership in this country is by the political class or lean toward the ruling political party. As such, they want to be seen to be supportive of the regime, so they kind of tighten the noose on all freedom of expression spaces within media houses and prevail over their journalists. This by any measure has led to heightened self-censorship.

 But also, those journalists that seem to take critical lines are targeted. Some are even blacklisted. We can say that from the looks of things that times around political campaigns and elections are the tightest for freedom of expression in this country, and most cases have been reported around such times. We normally have elections every five years. So every three years after an election electioneering starts. And that’s when we see a lot of restrictions coming from the government through its regulation bodies like the Uganda Communications Commission, which is the communications regulator in my country. Also from the Media Council of Uganda, which was put in place by an act of Parliament to oversee the practices of media. And from the police or security apparatus in this country. So it’s a very fragile environment within which to practice. The journalists operate under immense fear and there are very high levels of censorship. The law has increasingly been used to criminalize free speech. That’s how I’d describe the current environment.

 York: I understand that the Computer Misuse Act as well as cybercrime legislation have been used to target journalists. Have you or any of your clients experienced censorship through abuse of computer crime laws?

 We have a very Draconian law called the Computer Misuse Amendment Act. It was amended just last year to make it even worse. It has been now the walking stick of the proponents of the regime that don’t want to be subjected to public scrutiny, that don’t want to be held accountable politically in their offices. So abuses of public trust and power of their offices are hidden under the Computer Misuse Amendment Act. And most journalists, most editors, most managers have been, from time to time, interrogated at the Criminal Investigations Directorate of the police over what they have written about the powerful personalities especially in the political class – sometimes even from the business class – but mainly it’s from the political class. So it is used to insulate the powerful from being held accountable. Sadly, most of these cases are politically motivated. Most of them have not even ended up in courts of law, but have been used to open up charges against the media practitioners who have, from time to time, kept reporting and answering to the police for a long time without being presented to court or that are presented at a time when they realize that the journalists in question are becoming a bit unruly. So these laws are used to contain the journalists.

 Since most of the stories that have been at the highlight of the regime have been factual, they have not had reason to run to Court, but the effect of this is very counterproductive to the journalists’ independence, to their ability to concentrate on more stories – because they’re always thinking about these cases pending before them. Also, media houses now become very fearful and learn how to behave to not be in many cases of that nature. So the Computer Misuse Act, criminal defamation, and now the most recent one, the Anti-Homosexuality Act (AHA) – which was passed by Parliament with very drastic clauses – are clawback legislation for press freedom in Uganda. The AHA in itself fundamentally affected the practice of journalism. The legislation falls short of drawing a clear distinction between what amounts to promotion or education [with regards to sharing material related to homosexuality]. Yet one of the crucial roles of the media is to educate the population about many things, but here, it’s not clear when the media is promoting and when it is educating. So it wants to slap a blackout completely on discussing LGBTQI+ issues in the country. So, this law is very ambiguous and therefore susceptible to abuse at the expense of freedom of speech.

 And it also introduces very drastic sanctions. For instance, if one writes about homosexuality their media operating license is revoked for ten years. And I’m sure no media house can stand up again after ten years of closure and can still breathe life. Also, the AHA generalizes the practice of an individual journalist. If, for instance, one of your journalists writes something that the law looks at as against it, the entire media house license is revoked for ten years, but also you’re imprisoned for five years – you as the writer. In addition, you receive a hefty fine of the equivalent of 1 billion Uganda shillings, that’s about 250,000 euros. Which is really too much for any media house operating in Uganda.

 So that alone has created a lot of fear to discuss these issues, even when the law was passed in such a rushed manner with total disregard for the input of key stakeholders like the media, among others. As a media rights organization, we had looked at the draft bill and we were planning to make a presentation before the Parliamentary Committee. But within a week they closed all public hearings, which limited the space for engagement. Within a few days the law had been written, presented again, and then assented to by the President. No wonder it’s being challenged in the Constitutional Court. This is the second time actually that such a law has been challenged. Of course, there are many other laws, like the Anti-Terrorism Act, which has not clearly defined the role of a journalist who speaks to a person who engages in subversive activities as terrorism. The law presupposes that before interviewing a person or before hosting them in your shows, you must have done a lot of background checks to make sure they have not engaged in such terrorism acts. So if you do not, the law here presses criminal liability on the talk show host for promoting and abetting terrorism. And if there’s a conviction, the ultimate punishment is being sentenced to death. So these couple of laws are really used to curtail freedom of expression.

 York: Wow, that’s incredible. I understand how this impacts media houses, but what would you say the impact is on ordinary citizens or individual activists, for example?

 Under the Computer Misuse Amendment Act, the amended Act is restrictive and inhibitive to freedom of expression in regards to citizen journalism. It introduces such stringent conditions, like, if I’m going to record a video of you, say that I’m a journalist, citizen journalist or an activist who is not working for a media house, I must seek your permission before I record you in case you’re committing a crime. The law presupposes that I have no right to record you and later on disseminate the video without your explicit permission. Notably, the law is silent on the nature of the admissible permission, whether it is an email, SMS, WhatsApp, voice note, written note, etc. Also, the law presupposes that before I send you such a video, I must seek for your permission as the intended recipient of the said message. For instance, if I send you an email and you think you don’t need it, you can open a case against me for sending you unsolicited information. Unsolicited information – that’s the word that’s used.

 So the law is so amorphous in this nature that it completely closes out the liberty of a free society where citizens can engage in discussions, dialogues, or give opinions or ideas. For instance, I could be a very successful farmer, and I think the public could benefit from my farming practices, and I record a lot of what I do and I disseminate those videos. Somebody who receives this, wherever they are, can run to court and use this amended Computer Misuse Act to open up charges against me. And the fines are also very hefty compared to the crimes that the law talks about. So it is so evident that the law is killing citizen journalism, dissent, and activism at all levels. The law does not seem to cater to a free society where the individual citizens can express themselves at any one time, can criticize their leaders, and can hold them accountable. In the presence of this law, we do not have a society that can hold anyone accountable or that can keep the powerful in check. So the spirit of the law is bad. The powerful fence themselves off from the ordinary citizens that are out there watching and not able to track their progress of things or raise red flags through the different social media platforms. But we have tried to challenge this law. There is a group of us, 13 individual activists and CSOs that have gone to the Constitutional Court to say, “this law is counterproductive to freedom of expression, democracy, rule of law and a free society.” We believe that the court will agree with us given its key function of promoting human rights, good governance, democracy, and the rule of law.

 York: That was my next question- I was going to ask how are people fighting against these laws?

 People are very active in terms of pushing back and to that extent we have many petitions that are in court. For instance, the Computer Misuse Amendment is being challenged. We had the Anti-pornographic Act of 2014 which was so amorphous in its nature that it didn’t clearly define what actually amounts to pornography. For instance, if I went around people in a swimming pool in their swimming trunks and took photos and carried those in the newspaper or on TV, that would be promoting pornography. So that was counterproductive to journalism so we went to court. And, fortunately, a court ruled in our favor. So the citizens are really up in arms to fight back because that’s the only way we can have civic engagements that are not restricted through a litany of such laws. There has been civic participation and engagement through mass media, dialogues with key actors, among others. However, many fear to speak out due to fear of reprisals, having seen the closure of media houses, the arrest and detention of activists and journalists, and the use of administrative sanctions to curtail free expression.

 York: Are there ways in which international groups and activists can stand in solidarity with those of you who are fighting back against these laws?

 There’s a lot of backlash on organizations, especially local ones, that tend to work a lot with international organizations. The government seems to be so threatened by the international eye as compared to local eyes, because recently it banned the UN Human Rights Office. They had to wind up business and leave the country. Also, the offices of the Democratic Governance Facility (DGF), which was a basket of embassies and the EU that were the biggest funding entity for the civil society. And actually for the government, too, because they were empowering citizens, you know, empowering the demand side to heighten its demand for services from the supply side. The government said no and they had to wind up their offices and leave. This has severely crippled the work of civil society, media, and, generally, governance.

The UN played an important role before they left and we now have that gap. Yet this comes at a time when our national Uganda Human Rights Commission is at its weakest due to a number of structural challenges characterizing it. The current leadership of the Commission is always up in arms against the political opposition for accusing government of committing human rights excesses against its members. So we do our best to work with international organizations through sharing our voices. We have an African Hub, like the African IFEX, where the members try to replicate voices from here. In that nature we do try a lot, but it’s not very easy for them to come here and do their practices. Just like you will realize a lot of foreign correspondents, foreign journalists, who work in Uganda are highly restricted. It’s a tug of war to have their licenses renewed. Because it’s politically handled. It was taken away from the professional body of the Media Council of Uganda to the Media Centre of Uganda, which is a government mouthpiece.  So for the critical foreign correspondents their licenses are rarely renewed. When it comes to election times most of them are blocked from even coming here to cover the elections. The international media development bodies can help to build capacities of our media development organizations, facilitate research, provide legal aid support, and engage the government on the excesses of the security forces and some emergency responses for victims, among others.

 York: Is there anything that I didn’t ask that you’d like to share with our readers?

 One thing I want to add is about trying to have an international focus on Uganda in the build-up to elections. There’s a lot of havoc that happens to the citizens, but most importantly, to the activists and human rights defenders. Either cultural activists or media activists—a lot happens. And most of these things are not captured well because it is prior to the peak of campaigns or there is fear by the local media of capturing such situations. So by the time we get international attention, sometimes the damage is really irreparable and a lot has happened. As opposed to if there was that international focus from the world. To me, that should really be captured because it would mitigate a lot that has happened.

 

Podcast Episode: About Face (Recognition)

Tue, 03/26/2024 - 3:05am

Is your face truly your own, or is it a commodity to be sold, a weapon to be used against you? A company called Clearview AI has scraped the internet to gather (without consent) 30 billion images to support a tool that lets users identify people by picture alone. Though it’s primarily used by law enforcement, should we have to worry that the eavesdropper at the next restaurant table, or the creep who’s bothering you in the bar, or the protestor outside the abortion clinic can surreptitiously snap a pic of you, upload it, and use it to identify you, where you live and work, your social media accounts, and more?

[Embedded audio player: this episode is served from simplecast.com.]

(You can also find this episode on the Internet Archive and on YouTube.)

New York Times reporter Kashmir Hill has been writing about the intersection of privacy and technology for well over a decade; her book about Clearview AI’s rise and practices was published last fall. She speaks with EFF’s Cindy Cohn and Jason Kelley about how face recognition technology’s rapid evolution may have outpaced ethics and regulations, and where we might go from here. 

In this episode, you’ll learn about: 

  • The difficulty of anticipating how information that you freely share might be used against you as technology advances. 
  • How the all-consuming pursuit of “technical sweetness” — the alluring sensation of neatly and functionally solving a puzzle — can blind tech developers to the implications of that tech’s use. 
  • The racial biases that were built into many face recognition technologies.  
  • How one state's 2008 law has effectively curbed how face recognition technology is used there, perhaps creating a model for other states or Congress to follow. 

Kashmir Hill is a New York Times tech reporter who writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to our privacy. Her book, “Your Face Belongs To Us” (2023), details how Clearview AI gave facial recognition to law enforcement, billionaires, and businesses, threatening to end privacy as we know it. She joined The Times in 2019 after having worked at Gizmodo Media Group, Fusion, Forbes Magazine and Above the Law. Her writing has appeared in The New Yorker and The Washington Post. She has degrees from Duke University and New York University, where she studied journalism. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here

Transcript

KASHMIR HILL
Madison Square Garden, the big events venue in New York City, installed facial recognition technology in 2018, originally to address security threats. You know, people they were worried about who'd been violent in the stadium before. Or perhaps the Taylor Swift model of, you know, known stalkers, wanting to identify them if they're trying to come into concerts.

But then in the last year, they realized, well, we've got this system set up. This is a great way to keep out our enemies, people that the owner, James Dolan, doesn't like, namely lawyers who work at firms that have sued him and cost him a lot of money.

And I saw this, I actually went to a Rangers game with a banned lawyer and it's, you know, thousands of people streaming into Madison Square Garden. We walk through the door, put our bags down on the security belt, and by the time we go to pick them up, a security guard has approached us and told her she's not welcome in.

And yeah, once you have these systems of surveillance set up, it goes from security threats to just keeping track of people that annoy you. And so that is the challenge of how do we control how these things get used?

CINDY COHN
That's Kashmir Hill. She's a tech reporter for the New York Times, and she's been writing about the intersection of privacy and technology for well over a decade.

She's even worked with EFF on several projects, including security research into pregnancy tracking apps. But most recently, her work has been around facial recognition and the company Clearview AI.

Last fall, she published a book about Clearview called Your Face Belongs to Us. It's about the rise of facial recognition technology. It’s also about a company that was willing to step way over the line. A line that even the tech giants abided by. And it did so in order to create a facial search engine of millions of innocent people to sell to law enforcement.

I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to make our digital lives BETTER. At EFF we spend a lot of time envisioning the ways things can go wrong — and jumping into action to help when things DO go wrong online. But with this show, we're trying to give ourselves a vision of what it means to get it right.

JASON KELLEY
It's easy to talk about facial recognition as leading towards this sci-fi dystopia, but many of us use it in benign - and even helpful - ways every day. Maybe you just used it to unlock your phone before you hit play on this podcast episode.

Most of our listeners probably know that there's a significant difference between the data that's on your phone and the data that Clearview used, which was pulled from the internet, often from places that people didn't expect. Since Kash has written several hundred pages about what Clearview did, we wanted to start with a quick explanation.

KASHMIR HILL
Clearview AI scraped billions of photos from the internet -

JASON KELLEY
Billions with a B. Sorry to interrupt you, just to make sure people hear that.

KASHMIR HILL
Billions of photos from the public internet and social media sites like Facebook, Instagram, Venmo, LinkedIn. At the time I first wrote about them in January 2020, they had 3 billion faces in their database.

They now have 30 billion and they say that they're adding something like 75 million images every day. So a lot of faces, all collected without anyone's consent and, you know, they have paired that with a powerful facial recognition algorithm so that you can take a photo of somebody, you know, upload it to Clearview AI and it will return the other places on the internet where that face appears along with a link to the website where it appears.

So it's a way of finding out who someone is. You know, what their name is, where they live, who their friends are, finding their social media profiles, and even finding photos that they may not know are on the internet, where their name is not linked to the photo but their face is there.

JASON KELLEY
Wow. Obviously that's terrifying, but is there an example you might have of a way that this affects the everyday person? Could you talk about that a little bit?

KASHMIR HILL
Yeah, so with a tool like this, um, you know, if you were out at a restaurant, say, and you're having a juicy conversation, whether about your friends or about your work, and it kind of catches the attention of somebody sitting nearby, you assume you're anonymous. With a tool like this, they could take a photo of you, upload it, find out who you are, where you work, and all of a sudden understand the context of the conversation. You know, a person walking out of an abortion clinic, if there's protesters outside, they can take a photo of that person. Now they know who they are and the health services they may have gotten.

I mean, there's all kinds of different ways. You know, you go to a bar and you're talking to somebody. They're a little creepy. You never want to talk to them again. But they take your picture. They find out your name. They look up your social media profiles. They know who you are.
On the other side, you know, I do hear about people who think about this in a positive context, who are using tools like this to research people they meet on dating sites, finding out if they are who they say they are, you know, looking up their photos.

It's complicated, facial recognition technology. There are positive uses, there are negative uses. And right now we're trying to figure out what place this technology should have in our lives and, and how authorities should be able to use it.

CINDY COHN
Yeah, I think Jason's, like, ‘this is creepy’ is very widely shared, I think, by a lot of people. But you know the name of this is How to Fix the Internet. I would love to hear your thinking about how facial recognition might play a role in our lives if we get it right. Like, what would it look like if we had the kinds of law and policy and technological protections that would turn this tool into something that we would all be pretty psyched about on the main rather than, you know, worried about on the main.

KASHMIR HILL
Yeah, I mean, so some activists feel that facial recognition technology should be banned altogether. Evan Greer at Fight for the Future, you know, compares it to nuclear weapons, arguing that there are just too many possible downsides, that it's not worth the benefits and it should be banned altogether. I kind of don't think that's likely to happen, just because I have talked to so many police officers who really appreciate facial recognition technology, think it's a very powerful tool that, when used correctly, can be such an important part of their tool set. I just don't see them giving it up.

But when I look at what's happening right now, you have these companies, not just Clearview AI, but PimEyes, FaceCheck.ID. There are public face search engines that exist now. While Clearview is limited to police use, these are on the internet. Some are even free, some require a subscription. And right now in the U.S., we don't have much of a legal infrastructure, certainly at the national level, about whether they can do that or not. But there's been a very different approach in Europe, where they say that citizens shouldn't be included in these databases without their consent. And, you know, after I revealed the existence of Clearview AI, privacy regulators in Europe, in Canada, in Australia, investigated Clearview AI and said that what it had done was illegal, that they needed people's consent to put them in the databases.

So that's one way to handle facial recognition technology: you can't just throw everybody's faces into a database and make them searchable, you need to get permission first. And I think that is one effective way of handling it. Privacy regulators, actually inspired by Clearview AI, issued a warning to other AI companies saying, hey, just because all this information is public on the internet, it doesn't mean that you're entitled to it. There can still be a personal interest in the data, and you may violate our privacy laws by collecting this information.

We haven't really taken that approach in the U.S. as much, with the exception of Illinois, which has this really strong law that's relevant to facial recognition technology. When we have gotten privacy laws at the state level, they say you have the right to get out of the databases. So in California, for example, you can go to Clearview AI and say, hey, I want to see my file. And if you don't like what they have on you, you can ask them to delete you. So that's a very different approach, to try to give people some rights over their face. And California also requires that companies say how many of these requests they get per year. And so I looked, and in the last two years fewer than a thousand Californians have asked to delete themselves from Clearview's database. And, you know, California's population is very much bigger than that, I think, you know, 34 million people or so, and so I'm not sure how effective those laws are at protecting people at large.

CINDY COHN
Here’s what I hear from that. Our world where we get it right is one where we have a strong legal infrastructure protecting our privacy. But it’s also one where if the police want something, it doesn’t mean that they get it. It’s a world where control of our faces and faceprints rests with us, and any use needs to have our permission. That’s the Illinois law called BIPA, the Biometric Information Privacy Act, or the approach of the foreign regulators you mention.
It also means that a company like Venmo cannot just put our faces onto the public internet, and a company like Clearview cannot just copy them. Neither can happen without our affirmative permission.

I think of technologies like this as needing to have good answers to two questions. Number one, who is the technology serving - who benefits if the technology gets it right? And number two, who is harmed if the technology DOESN’T get it right?

For police use of facial recognition, the answers to both of these questions are bad. Regular people don’t benefit from the police having their faces in what has been called a perpetual line-up. And if the technology doesn’t work, people can pay a very heavy price of being wrongly arrested - as you document in your book, Kash.

But for facial recognition technology allowing me to unlock my phone and manipulate apps like digital credit cards, I benefit by having an easy way to lock and use my phone. And if the technology doesn’t work, I just use my password, so it’s not catastrophic. But how does that compare to your view of a fixed facial recognition world, Kash?

KASHMIR HILL
Well, I'm not a policymaker. I am a journalist. So I kind of see my job as: here's what has happened, here's how we got here, and here's how different people are dealing with it and trying to solve it. One thing that's interesting to me, you brought up Venmo, is that Venmo was one of the very first places that Hoan Ton-That, the kind of technical creator of Clearview AI, talked about getting faces from.

And this was interesting to me as a privacy reporter because I very much remembered this criticism that the privacy community had for Venmo that, you know, when you've signed up for the social payment site, they made everything public by default, all of your transactions, like who you were sending money to.

And there was such a big pushback saying, hey, you know, people don't realize that you're making this public by default. They don't realize that the whole world can see this. They don't understand how that could come back to be used against them. And, you know, some of the initial uses were people sending each other Venmo transactions with things like syringes and cannabis leaves in the notes, and how that got used in criminal trials.

But what was interesting with Clearview is that Venmo actually had this iPhone on their homepage at Venmo.com, and they would show real transactions that were happening on the network. And it included people's profile photos and a link to their profile. So Hoan Ton-That sent this scraper to Venmo.com, and he would just hit it every few seconds and pull down the photos and the links to the profiles, and he got, you know, millions of faces this way. And he says he remembered that the privacy people were kind of annoyed about Venmo making everything public, and he said it took them years to change it, though.

JASON KELLEY
We were very upset about this.

CINDY COHN
Yeah, we had them on a little list called Fix It Already in 2019. It wasn't little, it was actually quite long, for kind of major privacy and other problems in tech companies. And the Venmo one was on there, right, in 2019, I think, was when we launched it. In 2021 they fixed it, but right in between there was when all that scraping happened.

KASHMIR HILL
And Venmo is certainly not alone in terms of forcing everyone to make their profile photos public, you know, Facebook did that as well. But it was interesting that when I exposed Clearview AI and said, you know, here are some of the companies that they scraped from, Venmo and also Facebook, LinkedIn and Google sent Clearview cease and desist letters and said, hey, you know, you violated our terms of service in collecting this data, we want you to delete it. And people often ask, well, then what happened after that? And as far as I know, Clearview did not change its practices. And these companies never did anything else beyond the cease and desist letters.

You know, they didn't sue Clearview. Um, and so it's clear that the companies alone are not going to be protecting our data. They've pushed us to be more public, and now that is kind of coming full circle in a way that I don't think people were expecting when they put their photos on the internet.

CINDY COHN
I think we should start from the source, which is, why are they gathering all these faces in the first place, the companies? Why are they urging you to put your face next to your financial transactions? There's no need for your face to be next to a financial transaction, even in social media and other kinds of situations, there's no need for it to be public. People are getting disempowered because there's a lack of privacy protection to begin with, and the companies are taking advantage of that, and then turning around and pretending like they're upset about scraping, which I think is all they did with the Clearview thing.

Like, there are problems all the way down here. But from our perspective, the answer isn't to make scraping, which is already often over-limited, even more limited. The answer is to try to give people back control over these images.

KASHMIR HILL
And I get it, I mean, I know why Venmo wants photos. I mean, when I use Venmo and I'm paying someone for the first time, I want to see that this is the face of the person I know before I send it to, you know, @happy, you know, nappy on Venmo. So it's part of the trust, but it does seem like you could have a different architecture. So it doesn't necessarily mean that you're showing your face to the entire, you know, world. Maybe you could just be showing it to the people that you're doing transactions with.

JASON KELLEY
What we were pushing Venmo to do was what you mentioned: make it NOT public by default. And what I think is interesting about that campaign is that at the time, we were worried about one thing, you know, the ability to sort of comb through these financial transactions and get information from people. We weren't worried about, or at least I don't think we talked much about, the public photos being available. And it's interesting to me that there are so many ways that public defaults and privacy settings can impact people that we don't even know about yet, right?

KASHMIR HILL
I do think this is one of the biggest challenges for people trying to protect their privacy: it's so hard to anticipate how information that you, you know, kind of freely give at one point might be used against you or weaponized in the future as technology improves.

And so I do think that's really challenging. And I don't think that most people, when they were kind of freely putting photos of their face on the internet, were anticipating that the internet would be reorganized to be searchable by face.

So that's where I think regulating the use of the information can be very powerful. It's kind of protecting people from the mistakes they've made in the past.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our conversation with Kashmir Hill.

CINDY COHN
So a supporter asked a question that I'm curious about too. You dove deep into the people who built these systems, not just the Clearview people, but people before them. And what did you find? Are these, like, Dr. Evil, evil geniuses who intended to, you know, build a dystopia? Or are they, you know, good folks trying to do good things who either didn't see the consequences of what they were working on or were surprised at the consequences of what they were building?

KASHMIR HILL
The book is about Clearview AI, but it's also about all the people that kind of worked to realize facial recognition technology over many decades.
The government was trying to get computers to be able to recognize human faces in Silicon Valley before it was even called Silicon Valley. The CIA was, you know, funding early engineers there to try to do it with those huge computers which, you know, in the early 1960s weren't able to do it very well.

But I kind of, like, went back and asked people that were working on this for so many years, when it was very clunky and it did not work very well, you know, were you thinking about what you were working towards? A kind of world in which everybody is easily tracked by face, easily recognizable by face. And it was just interesting. I mean, these people working on it in the ‘70s, ‘80s, ‘90s, they just said it was impossible to imagine that, because the computers were so bad at it, and we just never really thought that we'd ever reach this place where we are now, where basically computers are better at facial recognition than humans.

And so this was really striking to me, and I think this happens a lot, where people are working on a technology and they just want to solve that puzzle, you know, complete that technical challenge, and they're not thinking through the implications of what happens if they're successful. A philosopher of science I talked to, Heather Douglas, called this "technical sweetness."

CINDY COHN
I love that term.

KASHMIR HILL
This kind of motivation where it's just like, I need to solve this. The kind of Jurassic Park dilemma, where it's like, it'd be really cool if we brought the dinosaurs back.

So that was striking to me, and of all these people that were working on this, I don't think any of them saw something like Clearview AI coming. And when I first heard about Clearview, this startup that had scraped the entire internet and kind of made it searchable by face, I was thinking there must be some, you know, technological mastermind here who was able to do this before the big companies, the Facebooks, the Googles. How did they do it first?

And what I would come to figure out is that, you know, what they did was more of an ethical breakthrough than a technological breakthrough. Companies like Google and Facebook had developed this internally, and shockingly, you know, for these companies that have released many kind of unprecedented products, they decided facial recognition technology like this was too much, and they held it back and decided not to release it.

And so Clearview AI was just willing to do what other companies hadn't been willing to do. Which I thought was interesting and part of why I wrote the book is, you know, who are these people and why did they do this? And honestly, they did have, in the early days, some troubling ideas about how to use facial recognition technology.

So one of the first deployments of Clearview AI, before it was called Clearview AI, was at the DeploraBall, this kind of inaugural event around Trump becoming president. They were using it because it was going to be this gathering of all these people who had supported Trump, the kind of MAGA crowd, of which some of the Clearview AI founders were part. And they were worried about being infiltrated by Antifa, which I think is how they pronounce it, and so they wanted to run a background check on ticket buyers and find out whether any of them were from the far left.

And apparently this smartchecker worked, and they identified two people who were trying to get in who shouldn't have. And I found out about this because they included it in a PowerPoint presentation that they had developed for the Hungarian government. They were trying to pitch Hungary on their product as a means of border control. And so the idea was that you could use this background check product, this facial recognition technology, to keep out people you didn't want coming into the country.

And they said that they had fine tuned it so it would work on people that worked with the Open Society Foundations and George Soros because they knew that Hungary's leader, Viktor Orban, was not a fan of the Soros crowd.

And so for me, it just seemed kind of alarming that you would use it to identify essentially political dissidents, democracy activists and advocates, that that was where their minds went for their product when it was very early, basically still at the prototype stage.

CINDY COHN
I think that it's important to recognize that these tools, like many technologies, are dual use tools, right, and we have to think really hard about how they can be used and create laws and policies around them, because I'm not sure that you can use some kind of technological means to make sure only good guys use this tool to do good things and that bad guys don't.

JASON KELLEY
One of the things that you mentioned about sort of government research into facial recognition reminds me that shortly after you put out your first story on Clearview in January of 2020, I think, we put out a website called Who Has Your Face, which we'd been doing research for, for, I don't know, four to six months or something before that. It was specifically trying to let people know which government entities had access to your, let's say, DMV photo or your passport photo for facial recognition purposes. And that's one of the great examples, I think, of how, sort of like Venmo, you put information somewhere that's, even in this case, required by law, and you don't ever expect that the FBI would be able to run facial recognition on that picture based on, like, a surveillance photo, for example.

KASHMIR HILL
So it makes me think of two things. One is, you know, as part of the book I was looking back at the history of the U.S. thinking about facial recognition technology and setting up guardrails, or, for the most part, NOT setting up guardrails.

And there was this hearing about it more than a decade ago. I think actually Jen Lynch from the EFF testified at it. And it was like 10 years ago when facial recognition technology was first getting kind of good enough to get deployed. And the FBI was starting to build a facial recognition database and police departments were starting to use these kind of early apps.

It troubles me, just knowing the bias problems that facial recognition technology had at that time, that they were kind of actively using it. But lawmakers were concerned, and they were asking questions about whose photo is going to go in here. And the government representatives who were there, law enforcement, at the time they said, we're only using criminal mugshots.

You know, we're not interested in the goings about of normal Americans. We just want to be able to recognize the faces of people that we know have already had encounters with the law, and we want to be able to keep track of those people. And it was interesting to me because in the years to come, that would change. You know, they started pulling in state driver's license photos in some places, and it ended up not just being criminals that were being tracked, or not always even criminals, just people who've had encounters with law enforcement where they ended up with a mugshot taken.

But that is the kind of frog boiling of, well, we'll just start out with some of these photos, and then, you know, we'll add in some state driver's license photos, and then we'll start using a company called Clearview AI that's scraped the entire internet, you know, everybody on the planet, in this facial recognition database.

So it just speaks to this challenge of controlling it, you know, this kind of surveillance creep where once you start setting up the system, you just want to pull in more and more data, and you want to surveil people in more and more ways.

CINDY COHN
And you tell some wonderful stories or actually horrific stories in the book about people who were misidentified. And the answer from the technologists is, well, we just need more data then. Right? We need everybody's driver's licenses, not just mugshots. And then that way we eliminate the bias that comes from just using mugshots. Or you tell a story that I often talk about, which is, I believe the Chinese government was having a hard time with its facial recognition, recognizing black faces, and they made some deals in Africa to just wholesale get a bunch of black faces so they could train up on it.

And, you know, to us, talking about bias in a way that doesn't really talk about comprehensive privacy reform, and instead talks only about bias, ends up in this technological world in which the solution is to put more people's faces into the system.

And we see this with all sorts of other biometrics where there's bias issues with the training data or the initial data.

KASHMIR HILL
Yeah. So this is something, so bias has been a huge problem with facial recognition technology for a long time. And really a big part of the problem was that they were not getting diverse training databases. And, you know, a lot of the people that were working on facial recognition technology were white people, white men, and they would make sure that it worked well on them and the other people they worked with.

And so we had, you know, technologies that just did not work as well on other people. One of those early facial recognition technology companies I talked to was in business, you know, in 2000, 2001, and its technology was actually used at the Super Bowl in Tampa in 2001 to secretly scan the faces of football fans looking for pickpockets and ticket scalpers.

That company told me that they had to pull out of a project in South Africa because they found the technology just did not work on people who had darker skin. But the activist community has brought a lot of attention to this issue that there is this problem with bias and the facial recognition vendors have heard it and they have addressed it by creating more diverse training sets.

And so now they are training their algorithms to work on different groups, and the technology has improved a lot. It really has been addressed, and these algorithms don't have those same kinds of issues anymore.

Despite that, you know, the handful of wrongful arrests that I've covered, where people are arrested for the crime of looking like someone else, they've all involved people who are black. One woman so far, a woman who was eight months pregnant, arrested for carjacking and robbery on a Thursday morning while she was getting her two kids ready for school.

And so, you know, even if you fix the bias problem in the algorithms, you're still going to have the issue of, well, who is this technology deployed on? Who is this used to police? And so yeah, I think it'll still be a problem. And then there's just these bigger questions of the civil liberty questions that still need to be addressed. You know, do we want police using facial recognition technology? And if so, what should the limitations be?

CINDY COHN
I think, you know, for us in thinking about this, the central issue is who's in charge of the system and who bears the cost if it's wrong. The consequences of a bad match are much more significant than just, oh gosh, the cops for a second thought I was the wrong person. That's not actually how this plays out in people's lives.

KASHMIR HILL
I don't think most people who haven't been arrested before realize how traumatic the whole experience can be. You know, I talk about Robert Williams in the book who was arrested after he got home from work, in front of all of his neighbors, in front of his wife and his two young daughters, spent the night in jail, you know, was charged, had to hire a lawyer to defend him.

Same thing with Porcha Woodruff, the woman who was pregnant, taken to jail, charged, even though the woman they were looking for had committed the crime the month before and was not visibly pregnant. I mean, it was so clear they had the wrong person. And yet she had to hire a lawyer, fight the charges, and she wound up in the hospital after being detained all day because she was so stressed out and dehydrated.

And so yeah, when you have people that are relying too heavily on the facial recognition technology and not doing proper investigations, this can have a very harmful effect on, on individual people's lives.

CINDY COHN
Yeah, I mean, one of my hopes is that, you know, those of us who are involved in tech, trying to get privacy laws passed and other kinds of things passed, can have some knock-on effects on trying to make the criminal justice system better. We shouldn't just be coming in and talking about the technological piece, right?

Because it's all a part of a system that itself needs reform. And so I think it's important that we recognize, um, that as well and not just try to extricate the technological piece from the rest of the system. And that's why I think EFF's come to the position that governmental use of this is so problematic that it's difficult to imagine a world in which it's fixed.

KASHMIR HILL
In terms of talking about laws that have been effective, we alluded to it earlier, but Illinois passed this law in 2008, the Biometric Information Privacy Act, a rare law that moved faster than the technology.

And it says if you, as a company, want to use somebody's biometrics, like their face print or their fingerprint or their voice print, you need to get their consent, or you'll be fined. And so Madison Square Garden is using facial recognition technology to keep out security threats and lawyers at all of its New York City venues: the Beacon Theater, Radio City Music Hall, Madison Square Garden.

The company also has a theater in Chicago, but they cannot use facial recognition technology to keep out lawyers there because they would need to get their consent to use their biometrics that way. So it is an example of a law that has been quite effective at kind of controlling how the technology is used, maybe keeping it from being used in a way that people find troubling.

CINDY COHN
I think that's a really important point. I think sometimes people in technology despair that law can really ever do anything, and they think technological solutions are the only ones that really work. And, um, I think it's important to point out that, like, that's not always true. And the other point that you make in your book about this that I really appreciate is the Wiretap Act, right?

Like, the reason that a lot of the stuff that we're seeing is visual and not voice, you can do voice prints too, just like you can do face prints, but we don't see that.

And the reason we don't see that is because we actually have very strong federal and state laws around wiretapping that prevent the collection of this kind of information except in certain circumstances. Now, I would like to see those circumstances expanded, but it still exists. And I think that, you know, kind of recognizing where, you know, that we do have legal structures that have provided us some protection, even as we work to make them better, is kind of an important thing for people who kind of swim in tech to recognize.

KASHMIR HILL
"Laws work" is one of the themes of the book.

CINDY COHN
Thank you so much, Kash, for joining us. It was really fun to talk about this important topic.

KASHMIR HILL
Thanks for having me on. It's great. I really appreciate the work that EFF does and just talking to you all for so many stories. So thank you.

JASON KELLEY
That was a really fun conversation because I loved that book. The story is extremely interesting and I really enjoyed being able to talk to her about the specific issues that sort of we see in this story, which I know we can apply to all kinds of other stories and technical developments and technological advancements that we're thinking about all the time at EFF.

CINDY COHN
Yeah, I think that it's great to have somebody like Kashmir dive deep into something that we spend a lot of time talking about at EFF and, you know, not just facial recognition, but artificial intelligence and machine learning systems more broadly, and really give us the, the history of it and the story behind it so that we can ground our thinking in more reality. And, you know, it ends up being a rollicking good story.

JASON KELLEY
Yeah, I mean, what surprised me is that I think most of us saw that facial recognition sort of exploded really quickly, but it didn't, actually. A lot of what she writes in the book is about the history of its development, and, you know, we could have been thinking about how to resolve the potential issues with facial recognition decades ago, but no one sort of expected that this would blow up in the way that it did until it kind of did.

And I really thought it was interesting that her explanation of how it blew up so fast wasn't really a technical development as much as an ethical one.

CINDY COHN
Yeah, I love that perspective, right?

JASON KELLEY
I mean, it’s a terrible thing, but it is helpful to think about, right?

CINDY COHN
Yeah, and it reminds me again of the thing that we talk about a lot, which is Larry Lessig's articulation of the four ways that you can control behavior online: there's markets, there's laws, there's norms, and there's architecture. In this system, you know, we had norms that were driven over.

The thing that Clearview did, she says, wasn't a technical breakthrough, it was an ethical breakthrough. I think it points the way towards, you know, where you might need laws.
There's also an architecture piece though. You know, if Venmo hadn't set up its system so that everybody's faces were easily made public and scrapable, you know, that architectural decision could have had a pretty big impact on how vast this company was able to scale and where they could look.

So we've got an architecture piece, we've got a norms piece, we've got a lack of laws piece. It's very clear that a comprehensive privacy law would have been very helpful here.

And then there's the other piece about markets, right? You know, when you're selling into the law enforcement market, which is where Clearview finally found purchase, that's an extremely powerful market. And it ends up distorting the other ones.

JASON KELLEY
Exactly.

CINDY COHN
Once law enforcement decides they want something... I mean, when I asked Kash, you know, like, what do you think about ideas about banning facial recognition, she said, well, I think law enforcement really likes it, and so I don't think it'll be banned. And what that tells us is that this particular market can trump all the other pieces, and I think we see that in a lot of the work we do at EFF as well.

You know, we need to carve out a better space such that we can actually say no to law enforcement, rather than, well, if law enforcement wants it, then we're done. And I think that's really shown by this story.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode, you heard Kalte Ohren by Alex featuring starfrosch and Jerry Spoon.

And Drops of H2O (The Filtered Water Treatment) by J.Lang featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

No KOSA, No TikTok Ban | EFFector 36.4

Mon, 03/25/2024 - 1:32pm

Want to hear about the latest news in digital rights? Well, you're in luck! EFFector 36.4 is out now and covers the latest topics, including our stance on the unconstitutional TikTok ban (spoiler: it's bad), a victory helping Indybay resist an unlawful search warrant and gag order, and thought-provoking comments we got from thousands of young people regarding the Kids Online Safety Act.

You can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFector 36.4 | No KOSA, No TikTok Ban

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks

Fri, 03/22/2024 - 7:10pm

This post was written by Rachel Hochhauser, an EFF legal intern

We’ve written multiple times about the inaccurate and dangerous “gunshot detection” tool, ShotSpotter. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product.

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed “maybe 14 or 15” year-old child in his backyard. Three officers approached the boy’s house, with one asking “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

ShotSpotter is the largest company which produces and distributes audio gunshot detection for U.S. cities and police departments. Currently, it is used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. The information is then forwarded to humans who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring), and whether to deploy officers to the scene.
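
To make that pipeline concrete, here is a minimal, purely illustrative Python sketch. Every name and number in it is hypothetical; this is not ShotSpotter's code, only the shape of the sensor-score, human-review, dispatch flow described above.

from dataclasses import dataclass

@dataclass
class AcousticEvent:
    sensor_id: str
    score: float  # classifier confidence that the sound was gunfire, 0.0 to 1.0

GUNFIRE_THRESHOLD = 0.8  # hypothetical cutoff; vendors do not publish real values

def machine_flags(event: AcousticEvent) -> bool:
    # Stage 1: the acoustic classifier flags candidate gunshots.
    return event.score >= GUNFIRE_THRESHOLD

def human_confirms(event: AcousticEvent, reviewer_says_gunfire: bool) -> bool:
    # Stage 2: a human reviewer decides whether to treat it as gunfire.
    # This is the step meant to catch fireworks or a car backfiring.
    return reviewer_says_gunfire

def handle(event: AcousticEvent, reviewer_says_gunfire: bool) -> str:
    if not machine_flags(event):
        return "ignored"
    if not human_confirms(event, reviewer_says_gunfire):
        return "dismissed by reviewer"
    return "officers dispatched"

print(handle(AcousticEvent("sensor-12", 0.91), reviewer_says_gunfire=True))   # officers dispatched
print(handle(AcousticEvent("sensor-12", 0.91), reviewer_says_gunfire=False))  # dismissed by reviewer

The failure mode described in this post does not live in that control flow itself but in its inputs: an overconfident score and a rushed review still end in "officers dispatched."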

ShotSpotter claims that its technology is “97% accurate,” a figure produced by the marketing department and not engineers. The recent Chicago shooting shows this is not accurate. Indeed, a 2021 study in Chicago found that, in a period of 21 months, ShotSpotter resulted in police acting on dead-end reports over 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and only resulted in arrest for 1% of proven shootings, according to a recent CBS report. The technology is predominantly used in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to ShotSpotter arrive at the scenes expecting gunfire and are on edge and therefore more likely to draw their firearms.

Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk capturing and leaking private conversations. In People v. Johnson in California, a court held such recordings from ShotSpotter to be admissible evidence.

In February, Chicago’s Mayor announced that the city would not be renewing its contract with ShotSpotter. Many other cities have cancelled, or are considering cancelling, use of the tool.

This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.

Cops Running DNA-Manufactured Faces Through Face Recognition is Tornado of Bad Ideas

Fri, 03/22/2024 - 11:52am

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California employed the new practice of taking a DNA sample from a crime scene, running it through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement agencies have also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of the suspect’s face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers paired with 3D scans of their faces. It is currently the only company offering phenotyping, and only in concert with a forensic genetic genealogy investigation. The process is yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people)  will create a “most likely match” for that face.

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms on communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing. 

There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective’s request violating Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not give an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of being a suspect for a crime they didn’t commit. In one case from Canada, Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested, and how many more reckless schemes like this one need to be perpetrated, before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

EFF and 34 Civil Society Organizations Call on Ghana’s President to Reject the Anti-LGBTQ+ Bill 

Fri, 03/22/2024 - 8:42am

MPs in Ghana’s Parliament voted to pass the country’s draconian ‘Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill’ on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law. 

EFF has joined 34 civil society organizations to demand that President Akufo-Addo veto the Family Values Bill.

The legislation criminalizes being LGBTQ+ or an ally of LGBTQ+ people, and also imposes custodial sentences for users and social media companies in punishment for vague, ill-defined offenses like promoting “change in public opinion of prohibited acts” on social media. This would effectively ban all speech and activity online and offline that even remotely supports LGBTQ+ rights.

The letter concludes:

“We also call on you to reaffirm Ghana’s obligation to prevent acts that violate and undermine LGBTQ+ people’s fundamental human rights, including the rights to life, to information, to free association, and to freedom of expression.”

Read the full letter here.

Disinformation and Elections: EFF and ARTICLE 19 Submit Key Recommendations to EU Commission

Thu, 03/21/2024 - 2:35pm
Global Elections and Platform Responsibility

This year is a major one for elections around the world, with pivotal races in the U.S., the UK, the European Union, Russia, and India, to name just a few. Social media platforms play a crucial role in democratic engagement by enabling users to participate in public discourse and by providing access to information, especially as public figures increasingly engage with voters directly. Unfortunately, elections also attract a sometimes dangerous amount of disinformation, filling users' news feeds with ads touting conspiracy theories about candidates, false news stories about stolen elections, and so on.

Online election disinformation and misinformation can have real world consequences in the U.S. and all over the world. The EU Commission and other regulators are therefore formulating measures platforms could take to address disinformation related to elections. 

Given their dominance over the online information space, providers of Very Large Online Platforms (VLOPs), as sites with over 45 million users in the EU are called, have unique power to influence outcomes.  Platforms are driven by economic incentives that may not align with democratic values, and that disconnect  may be embedded in the design of their systems. For example, features like engagement-driven recommender systems may prioritize and amplify disinformation, divisive content, and incitement to violence. That effect, combined with a significant lack of transparency and targeting techniques, can too easily undermine free, fair, and well-informed electoral processes.

Digital Services Act and EU Commission Guidelines

The EU Digital Services Act (DSA) contains a set of sweeping regulations about online-content governance and responsibility for digital services that make X, Facebook, and other platforms subject in many ways to the European Commission and national authorities. It focuses on content moderation processes on platforms, limits targeted ads, and enhances transparency for users. However, the DSA also grants considerable power to authorities to flag content and investigate anonymous users - powers that they may be tempted to misuse with elections looming. The DSA also obliges VLOPs to assess and mitigate systemic risks, but it is unclear what those obligations mean in practice. Much will depend on how social media platforms interpret their obligations under the DSA, and how European Union authorities enforce the regulation.

We therefore support the initiative by the EU Commission to gather views about what measures the Commission should call on platforms to take to mitigate specific risks linked to disinformation and electoral processes.

Together with ARTICLE 19, we have submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommend that the guidelines prioritize best practices, instead of policing speech. Furthermore, DSA risk assessment and mitigation compliance evaluations should focus primarily on ensuring respect for fundamental rights. 

We further argue against using watermarking of AI content to curb disinformation, and caution against the draft guidelines’ broadly phrased recommendation that platforms should exchange information with national authorities. Any such exchanges should take care to respect human rights, beginning with a transparent process.  We also recommend that the guidelines pay particular attention to attacks against minority groups or online harassment and abuse of female candidates, lest such attacks further silence those parts of the population who are already often denied a voice.

EFF and ARTICLE 19 Submission: https://www.eff.org/document/joint-submission-euelections

EFF Seeks Greater Public Access to Patent Lawsuit Filed in Texas

Wed, 03/20/2024 - 3:26pm

You’re not supposed to be able to litigate in secret in the U.S. That’s especially true in a patent case dealing with technology that most internet users rely on every day.

 Unfortunately, that’s exactly what’s happening in a case called Entropic Communications, LLC v. Charter Communications, Inc. The parties have made so much of their dispute secret that it is hard to tell how the patents owned by Entropic might affect the Data Over Cable Service Interface Specifications (DOCSIS) standard, a key technical standard that ensures cable customers can access the internet.

In Entropic, both sides are experienced litigants who should know that this type of sealing is improper. Unfortunately, overbroad secrecy is common in patent litigation, particularly in cases filed in the U.S. District Court for the Eastern District of Texas.

EFF has sought to ensure public access to lawsuits in this district for years. In 2016, EFF intervened in another patent case in this very district, arguing that the heavy sealing by a patent owner called Blue Spike violated the public’s First Amendment and common law rights. A judge ordered the case unsealed.

As Entropic shows, however, parties still believe they can shut down the public’s access to presumptively public legal disputes. This secrecy has to stop. That’s why EFF, represented by the Science, Health & Information Clinic at Columbia Law School, filed a motion today seeking to intervene in the case and unseal a variety of legal briefs and evidence submitted in the case. EFF’s motion argues that the legal issues in the case and their potential implications for the DOCSIS standard are a matter of public concern and asks the district court judge hearing the case to provide greater public access.

Protective Orders Cannot Override The Public’s First Amendment Rights

As EFF’s motion describes, the parties appear to have agreed to keep much of their filings secret via what is known as a protective order. These court orders are common in litigation and prevent the parties from disclosing information that they obtain from one another during the fact-gathering phase of a case. Importantly, protective orders set the rules for information exchanged between the parties, not what is filed on a public court docket.

The parties in Entropic, however, are claiming that the protective order permits them to keep secret both legal arguments made in briefs filed with the court as well as evidence submitted with those filings. EFF’s motion argues that this contention is incorrect as a matter of law because the parties cannot use their agreement to abrogate the public’s First Amendment and common law rights to access court records. More generally, relying on protective orders to limit public access is problematic because parties in litigation often have little interest or incentive to make their filings public.

Unfortunately, parties in patent litigation too often seek to seal a variety of information that should be public. EFF continues to push back on these claims. In addition to our work in Texas, we have also intervened in a California patent case, where we also won an important transparency ruling. The court in that case prevented Uniloc, a company that had filed hundreds of patent lawsuits, from keeping the public in the dark as to its licensing activities.

That is why part of EFF’s motion asks the court to clarify that parties litigating in the Texas district court cannot rely on a protective order for secrecy and that they must instead seek permission from the court and justify any claim that material should be filed under seal.

On top of clarifying that the parties’ protective orders cannot frustrate the public’s right to access federal court records, we hope the motion in Entropic helps shed light on the claims and defenses at issue in this case, which are themselves a matter of public concern. The DOCSIS standard is used in virtually all cable internet modems around the world, so the claims made by Entropic may have broader consequences for anyone who connects to the internet via a cable modem.

It’s also impossible to tell if Entropic might want to sue more cable modem makers. So far, Entropic has sued five big cable modem vendors—Charter, Cox, Comcast, DISH TV, and DirecTV—in more than a dozen separate cases. EFF is hopeful that the records will shed light on how broadly Entropic believes its patents can reach cable modem technology.

EFF is extremely grateful that Columbia Law School’s Science, Health & Information Clinic could represent us in this case. We especially thank the student attorneys who worked on the filing, including Sean Hong, Gloria Yi, Hiba Ismail, and Stephanie Lim, and the clinic’s director, Christopher Morten.

Related Cases: Entropic Communications, LLC v. Charter Communications, Inc.

The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies

Wed, 03/20/2024 - 10:36am

There has been a tremendous amount of hand wringing and nervousness about how so-called artificial intelligence might end up destroying the world. The fretting has only gotten worse as a result of a U.S. State Department-commissioned report on the security risk of weaponized AI.

Whether these messages come from popular films like WarGames or The Terminator, reports that in digital simulations AI supposedly favors the nuclear option more than it should, or the idea that AI could assess nuclear threats quicker than humans—all of these scenarios have one thing in common: they end with nukes (almost) being launched because a computer either had the ability to pull the trigger or convinced humans to do so by simulating an imminent nuclear threat. The purported risk of AI comes not just from yielding “control” to computers, but also from the ability of advanced algorithmic systems to breach cybersecurity measures or to manipulate and socially engineer people with realistic voice, text, images, video, or digital impersonations.

But there is one easy way to avoid a lot of this and prevent a self-inflicted doomsday: don’t give computers the capability to launch devastating weapons. This means denying algorithms ultimate decision-making powers, but it also means building in protocols and safeguards so that some kind of generative AI cannot be used to impersonate or simulate the orders capable of launching attacks. It’s really simple, and we’re by far not the only (or the first) people to suggest the radical idea that we just not integrate computer decision making into many important decisions–from deciding a person’s freedom to launching first or retaliatory strikes with nuclear weapons.
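
One way to read that safeguard as a concrete design rule, sketched below in Python with entirely hypothetical names (an illustration of the principle, not any real command-and-control system): the automated component may only recommend, and nothing executes without an authorization a human produces out of band, which generated text or audio cannot fabricate.

import hmac
import hashlib
import os

# Hypothetical sketch: the operator key lives only with human operators and is
# never available to the model, so model-generated output cannot forge an order.
OPERATOR_KEY = os.environ.get("OPERATOR_KEY", "changeme").encode()

def human_authorization(order_id: str, signature: str) -> bool:
    # Valid only if a human signed this specific order with the operator key.
    expected = hmac.new(OPERATOR_KEY, order_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def execute_order(order_id: str, signature: str, model_recommendation: str) -> str:
    # The model's recommendation is advisory; it can never trigger execution.
    if not human_authorization(order_id, signature):
        return "refused: no valid human authorization"
    return f"order {order_id} executed (model advice was: {model_recommendation})"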


First, let’s define terms. To start, I am using "Artificial Intelligence" purely for expediency and because it is the term most commonly used by vendors and government agencies to describe automated algorithmic decision making despite the fact that it is a problematic term that shields human agency from criticism. What we are talking about here is an algorithmic system, fed a tremendous amount of historical or hypothetical information, that leverages probability and context in order to choose what outcomes are expected based on the data it has been fed. It’s how training algorithmic chatbots on posts from social media resulted in the chatbot regurgitating the racist rhetoric it was trained on. It’s also how predictive policing algorithms reaffirm racially biased policing by sending police to neighborhoods where the police already patrol and where they make a majority of their arrests. From the vantage of the data it looks as if that is the only neighborhood with crime because police don’t typically arrest people in other neighborhoods. As AI expert and technologist Joy Buolamwini has said, "With the adoption of AI systems, at first I thought we were looking at a mirror, but now I believe we're looking into a kaleidoscope of distortion... Because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made."
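
That feedback loop is easy to see in a toy sketch. All of the numbers and names below are invented for illustration and reproduce no real vendor's model: patrols are sent wherever past arrests were recorded, and new arrests can only be recorded where patrols go, so the initial skew only deepens.

arrest_history = {"neighborhood_A": 50, "neighborhood_B": 5}  # biased starting record

def allocate_patrols(history):
    # A naive "predictive" rule: patrol wherever the data says the crime is.
    return max(history, key=history.get)

for day in range(30):
    patrolled = allocate_patrols(arrest_history)
    # Assume, hypothetically, identical underlying offense rates in both
    # neighborhoods: arrests still only get recorded where officers are sent.
    arrest_history[patrolled] += 3

print(arrest_history)
# {'neighborhood_A': 140, 'neighborhood_B': 5} -- neighborhood_B looks "crime-free"
# only because it was never patrolled, and the data keeps confirming the bias.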

Military Tactics Shouldn’t Drive AI Use

As EFF wrote in 2018, “Militaries must make sure they don't buy into the machine learning hype while missing the warning label. There's much to be done with machine learning, but plenty of reasons to keep it away from things like target selection, fire control, and most command, control, and intelligence (C2I) roles in the near future, and perhaps beyond that too.” (You can read EFF’s whole 2018 white paper, The Cautious Path to Advantage: How Militaries Should Plan for AI, here.)

Just like in policing, in the military there is a compelling drive (not to mention the marketing from eager companies hoping to get rich off defense contracts) to constantly innovate in order to claim technical superiority. But integrating technology for innovation’s sake alone creates a great risk of unforeseen danger. AI-enhanced targeting is liable to get things wrong. AI can be fooled or tricked. It can be hacked. And giving AI the power to escalate armed conflicts, especially on a global or nuclear scale, might just bring about the much-feared AI apocalypse that can be avoided just by keeping a human finger on the button.


We’ve written before about how necessary it is to ban attempts by police to arm robots (either remote controlled or autonomous) in a domestic context for the same reasons. The idea of so-called autonomy among machines and robots creates a false sense of agency–the idea that only the computer is to blame for falsely targeting the wrong person, or for misreading signs of incoming missiles and launching a nuclear weapon in response–and obscures who is really at fault. Humans put computers in charge of making the decisions, but humans also train the programs which make the decisions.

AI Does What We Tell It To

In the words of linguist Emily Bender, “AI,” and especially its text-based applications, is a “stochastic parrot,” meaning that it echoes back to us the things we taught it, as “determined by random, probabilistic distribution.” In short, we give it the material it learns, it learns it, and then it draws conclusions and makes decisions based on that historical dataset. If you teach an algorithmic model that 9 times out of 10 a nation will launch a retaliatory strike when missiles are fired at it, then the first time that model mistakes a flock of birds for inbound missiles, that is exactly what it will do.
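To see how thin that statistical logic is, here is a toy sketch of a majority-vote "policy" trained on the hypothetical retaliation data above. The labels, numbers, and function names are invented for illustration and bear no relation to any real weapons system:

    # A model that has seen "incoming missiles" answered with retaliation 9 times out
    # of 10 will pick the majority action for anything labeled "incoming missiles" --
    # including a flock of birds that an upstream sensor misclassifies.
    from collections import Counter

    history = [("incoming_missiles", "retaliate")] * 9 + [("incoming_missiles", "hold")]

    def learned_policy(observation):
        actions = [action for obs, action in history if obs == observation]
        return Counter(actions).most_common(1)[0][1] if actions else "hold"

    sensor_reading = "incoming_missiles"   # in reality, a flock of birds misread upstream
    print(learned_policy(sensor_reading))  # -> "retaliate"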

To that end, AI scholar Kate Crawford argues, “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes for, AI systems are ultimately designed to serve existing dominant interests.”

AI does what we teach it to do. It mimics the decisions it is taught to make, either through hypotheticals or through historical data. This means that, yet again, we are not powerless in the face of a coming AI doomsday. We teach AI how to operate. We give it control of escalation, weaponry, and military response. We could just not.

Governing AI Doesn’t Mean Making it More Secret–It Means Regulating Use 

The recent report commissioned by the U.S. Department of State on the weaponization of AI included one troubling recommendation: making the inner workings of AI more secret. In order to keep algorithms from being tampered with or manipulated, the full report (as summarized by Time) suggests that a new governmental regulatory agency responsible for AI should criminalize publishing the inner workings of AI, potentially making it punishable by jail time. This means that how AI functions in our daily lives, and how the government uses it, could never be open source and would always live inside a black box where we could never learn the datasets informing its decision making. With so much of our lives already governed by automated decision making, from the criminal justice system to employment, criminalizing the only route for people to learn how those systems are trained seems counterproductive and wrong.

Opening up the inner workings of AI puts more eyes on how a system functions and makes it easier, not harder, to spot manipulation and tampering… not to mention it might mitigate the biases and harms that skewed training datasets create in the first place.

Conclusion

Machine learning and algorithmic systems are useful tools whose potential we are only just beginning to grapple with—but we have to understand what these technologies are and what they are not. They are neither “artificial” nor “intelligent”—they do not represent an alternate and spontaneously occurring way of knowing independent of the human mind. People build these systems and train them to get a desired outcome. Even when outcomes from AI are unexpected, one can usually find their origins somewhere in the data the systems were trained on. Understanding this will go a long way toward responsibly shaping how and when AI is deployed, especially in a defense context, and will hopefully alleviate some of our collective sci-fi panic.

This doesn’t mean that people won’t weaponize AI—some already are, in the form of political disinformation and realistic impersonation. But the solution to that is not to outlaw AI entirely, nor is it handing over the keys to a nuclear arsenal to computers. We need a common sense system that respects innovation, regulates uses rather than the technology itself, and does not let panic, AI boosters, or military tacticians dictate how and when important systems are put under autonomous control.

Lucy Parsons Labs Takes Police Foundation to Court for Open Records Requests

Tue, 03/19/2024 - 6:55pm

The University of Georgia (UGA) School of Law’s First Amendment Clinic has filed an Open Records Request lawsuit to demand public records from the private Atlanta Police Foundation (APF). The lawsuit, filed at the behest of the Atlanta Community Press Collective and Electronic Frontier Alliance-member Lucy Parsons Labs, is seeking records relating to the Atlanta Public Safety Training Center, which activists refer to as Cop City. While the facility will be used for public law enforcement and emergency services agencies, including training on surveillance technologies, the lease is held by the APF.  

The argument is that the Atlanta Police Foundation, as the nonprofit holding the lease for facilities intended for use by government agencies, should be subject to the state Open Records Act with respect to the functions it performs on behalf of law enforcement agencies. Beyond the Atlanta Public Safety Training Center, the APF also manages the Atlanta Police Department’s Video Surveillance Center, which integrates footage from over 16,000 public and privately-held surveillance cameras across the city.

According to UGA School of Law’s First Amendment Clinic, “The Georgia Supreme Court has held that records in the custody of a private entity that relate to services or functions the entity performs for or on behalf of the government are public records under the Georgia Open Records Act.” 

Police foundations frequently operate in this space. They are private, non-profit organizations with boards made up of corporations and law firms that receive monetary or equipment donations that they then gift to their local law enforcement agencies. These gifts often bypass council hearings or other forms of public oversight. 

Lucy Parsons Labs’ Ed Vogel said, “At the core of the struggle over the Atlanta Public Safety Training Center is democratic practice. Decisions regarding this facility should not be made behind closed doors. This lawsuit is just one piece of that. The people have a right to know.” 

You can read the lawsuit here.

Speaking Freely: Maryam Al-Khawaja

Tue, 03/19/2024 - 2:35pm

*This interview has been edited for length and clarity.

Maryam Al-Khawaja is a Bahraini Woman Human Rights Defender who works as a consultant and trainer on Human Rights. She is a leading voice for human rights and political reform in Bahrain and the Gulf region. She has been influential in shaping official responses to human rights atrocities in Bahrain and the Gulf region by leading campaigns and engaging with prominent policymakers around the world.

She played an instrumental role in the pro-democracy protests in Bahrain’s Pearl Roundabout in February 2011. These protests triggered a government response of widespread extrajudicial killings, arrests, and torture, which she documented extensively over social media. Due to her human rights work, she was subjected to assault, threats, defamation campaigns, imprisonment and an unfair trial. She was arrested on illegitimate charges in 2014 and sentenced in absentia to one year in prison. She currently has an outstanding arrest warrant and four pending cases, one of which could carry a life sentence. She serves on the Boards of the International Service for Human Rights, Urgent Action Fund, CIVICUS and the Bahrain Institute for Rights and Democracy. She also previously served as Co-Director at the Gulf Center for Human Rights and Acting President of the Bahrain Centre for Human Rights.

York: Can you introduce yourself and tell us a little about your work? Maybe provide us a brief outline of your history as a free expression advocate going back as far as you’d like.

Maryam: Sure, so my name is Maryam Al-Khawaja. I’m a Bahraini-Danish human rights defender and advocate. I’ve worked in many different spaces around human rights and on many different thematic issues. Of course freedom of expression is an intricate part of nearly any kind of human rights advocacy work. And it’s one of the issues that is critical to the work that we do and critical to the civil society space because it not only affects people who live in dictatorships, but also people who live in democracies or pseudodemocracies. A lot of times there’s not necessarily an agreement around what freedom of expression is or a definition of what falls under the scope of freedom of expression. And also to who and how that applies. So while some things for some people might be considered free expression, for others it might be considered not as free expression and therefore it’s not protected.

I think it’s something that I’ve both experienced having done the work and having taken part in the revolution in Bahrain and watching the difference between how we went from self-censorship prior to the uprising and then how people took to the streets and started saying whatever they wanted. That moment of just breaking down that wall and feeling almost like you could breathe again because you suddenly could express yourself. Not necessarily without fear – because the consequences were still there – but more so that you were doing it anyway, despite the fear. I think that’s one of the strongest memories I have of the importance of speech and that shift that happens even internally because, yes, there’s censorship in Bahrain, but censorship then creates self-censorship for protection and self preservation.

It’s interesting because I then left Bahrain and came to Denmark and I started seeing how, as a Brown, Muslim woman, my right to free expression doesn’t look the same as someone who is White living in Europe. So I also had to learn those intricacies and how that works and how we stand up to that or fight against that. It’s… been a long struggle, to keep it short.

York: That’s a really strong answer and I want to come back to something you said, and that’s that censorship creates self-censorship. I think we both know the moment we’re living in right now, and I’m seeing a lot of self-censorship even from people who typically are very staunch in standing up for freedom of expression. I’m curious, in the past decade, how has the idea that censorship creates self-censorship impacted you and the people around you or the activists that you know?

One part of it is when you’re an advocate and you look how I look – especially when I was wearing the headscarf – you learn very quickly that there are things that people find acceptable coming from you, and things they find not acceptable. There are judgements and stereotypes that are applied to you and therefore what you can and cannot say actually has to also be taken into that context.

Like to give you a small example, one of the things that I faced a lot during my advocacy and my work on Bahrain was I was constantly put in a space where I had to explain or… not justify – because I don’t support the use of violence generally – but I was put in a defensive position of “Why are you as civil society not telling these youth not to use Molotov cocktails on the street of Bahrain?” And I would try to explain that while I don’t justify the use of violence generally, it’s important to understand the context. And to understand that a small group of youth in Bahrain started using Molotov cocktails as a way to defend themselves, to try and get the riot police out of their villages when the riot police would come in in the middle of the night and basically go on a rampage, break into people’s homes, beat people to a pulp, and then take people and disappear them or torture them and so on. And so one of the ways for them to try and fight back was to use Molotov cocktails to at least get the riot police to stop coming into their villages. Of course this was always taken as me justifying violence or me supporting terrorism. Unfortunately, it wasn’t surprising, but it was such a clarifying moment. Then I watched those very same people at the very same media outlets literally put out tutorials on how to make Molotov cocktails for people in Ukraine fighting back against Russia. It’s not surprising because I know that’s how the world works, I know that in the world that we live in and the societies that we live in, my life is not equal to that of others – specific others. I very quickly learned that my work as a person of color – and I don’t really like that term – but as a person of the global majority, it’s my proximity to whiteness that decides my value as a human being. Unfortunately.

So that’s one layer of it. Another layer of it is here in Europe. I live in Copenhagen. I travel in the West quite often. I’ve also seen the difference of how we’re positioned as – especially Muslims with the incredible amounts of Islamophobia especially in Copenhagen – and seeing how politicians can come out and say incredibly Islamophobic and racist things and be written off as freedom of expression. But if someone of the global majority were to do that they would immediately be dubbed as extremist or a radical.

There is this extreme double standard when it comes to what freedom of expression looks like and how it’s implemented. And I’ll end with this example, with the Charlie Hebdo example. There was such a huge international solidarity movement when the attack on Charlie Hebdo happened in France. And obviously the killing that happened, there doesn’t even need to be a conversation around that, of course everyone should condemn that. What I find lacking in the conversation around freedom of expression when it comes to Charlie Hebdo is that Charlie Hebdo targets Muslim minorities that are already under attack, that are already discriminated against, and, in my mind, it actually incites violence against them when it does so. Because they’re already so targeted, because they’re vilified already in the media by politicians and so on. So my approach isn’t to say, “we should start censoring these media publications” or “we should start censoring people from being able to say what they say.” I’m saying that when we’re going to implement rules or understandings around freedom of expression it needs to be implemented equally. It needs to be implemented without double standards. Without picking and choosing who gets to have freedom of expression versus who doesn’t.

York: That’s such a great point. And I’m glad you brought up Charlie Hebdo. Coming back to that, it reminded me about the different governments that we saw, from my perspective, pretending to march for free expression when that happened. We saw a number of states that ranked fairly poorly on press freedom at the time. My recollection is we saw a number of countries that don’t have a great track record on freedom of expression, I think including Russia, the UAE, and Saudi Arabia, take a stance at that time. What that evokes for me is the hypocrisy of various states. We think about censorship as a potent tool for those in power to maintain power and then of course that sort of political posturing is also a very potent tool. So what are your thoughts on that? How does that inform your advocacy?

Like I said, we’ve already seen it throughout Europe and throughout the United States. Right now with the Gaza situation we’re seeing this with even more clarity – and it’s not like it was hidden before, those of us that work in these spaces already knew this – but I think right now it’s just so in-your-face where people are literally getting fired from their jobs and called into HR for liking posts, for posting things basically standing against an ongoing genocide. And I think, again, it brings to the surface the double standard and the hypocrisy that exists within the spaces that talk about freedom of expression. France is actually a great example. Even when we’re talking about Charlie Hebdo; Charlie Hebdo did the cover of the magazine before they were attacked. It was mocking the Rabaa Massacre, which was one of the largest massacres to happen in Egypt in recent history. Regardless of what you think of the Muslim Brotherhood, that was a massacre, it was wrong, it should be condemned. And they poked fun at that. They had this man with a long beard who looked like the Muslim Brotherhood holding up a Quran with bullets going through the Quran and hitting him, saying, “your Quran won’t protect you.” This was considered freedom of expression even though it was mocking a literal massacre that happened in Egypt. Which, in my opinion, the Egyptian regime should be considered as committing terrorist acts for that massacre. And so in some ways that could be considered as supporting terrorism. Just like I consider what is happening to the Palestinians as a form of terrorism. The same thing with Syria and so on.

But, unfortunately, it’s the people who own the discourse that get to decide what phrases and what terminologies can be applied and used where. But the point that I was making about Charlie Hebdo is that not much later after the attack on Charlie Hebdo, there was a 16 year old in France who made a cartoon cover where he mocks the attack on Charlie Hebdo. He basically used the exact same type of cartoon that they had used around the Rabaa massacre. Where there’s a guy from Charlie Hebdo holding up a copy of Charlie Hebdo and being struck by bullets and saying “your magazine doesn’t stop bullets.” And he was arrested! This 16 year old kid does this cartoon – exactly the same as the magazine had done after the massacre – and he was arrested and charged with advocating terrorism. And I think this is one of the clearest examples of how freedom of expression is not implemented on an equal level when it comes to who’s practicing it.

I think it’s the same thing as what we’re seeing right now happening with Palestine. When you look at what’s happening in Germany with the amount of people being arrested [for unauthorized protests] and now we’re even hearing about raids on people’s homes. I’ve spoken to some of my friends in Germany who say that they’re literally trying to hide and get rid of any pro-Palestinian flyers or flags that they have just in case their home gets raided. It’s interesting because quite a few Arabs in Germany now are referring to Germany as Assad’s Germany. Because a lot of what’s happening in Germany right now, to them, is reminiscent of what it was like to live in Syria under Assad. I think that tells you almost everything you need to know about the double standards of how these things are implemented. I think this is where the problem comes in.

You can not talk about free expression and freedom of speech without talking about how it’s related to colonialism. About how it’s related to movements for freedom. About how it’s related to the fact that much of our human rights movements in civil society are currently based on institutionalized human rights – and I’m talking specifically about the West, obviously, because there are a lot of grassroots movements in the global majority countries. But we can not talk about these things without talking about the need and importance of decolonizing our activism.

My thinking right now is very much inspired by Fanon’s The Wretched of the Earth, where he talks about how when colonizers colonized, they didn’t just colonize the country and the institutions and education and all these different things. They even colonized and decided for us and dictated for us how we’re allowed to fight back. How we’re allowed to resist. And I think that’s incredibly true. There’s a very rigid understanding of the space that you’re allowed to exist in or have to exist in to be regarded as a credible human rights activist. Whether it’s for free speech or for any other human right. And so, in my mind, what we need right now is to decolonize our activism. And to step away from that idea that it’s the West that decides for us what “appropriate” or “acceptable” activism actually looks like. And start deciding for ourselves what our activism needs to look like. Because we know now that none of these people that have supported the genocide in Gaza can in any way shape or form try to dictate what humans rights look like or what activism looks like. I’ve seen this over social media over the past period and people have been saying this over and over again that what died in Gaza is that pretense. That the West gets to tell the rest of us what human rights are and what freedoms are and how we should fight for them.

York: Let’s change directions for a moment. What do you think of the control that corporations have over deciding what speech parameters look like right now? 

[Laughs] Where do I start? I think it’s a struggle for a lot of us.

I want to first acknowledge that I have a lot of privileges that other activists don’t. When I left Bahrain in 2011 I already had Danish citizenship. Which meant that I could travel. I already had a strong command of English. Which meant that I could do meetings without the need for a translator. That I could attend and be in certain spaces. And that’s not necessarily the case for so many other activists. And so I do have a lot of privileges that have put me in the position that I am in. And I believe that part of having privileges like that means that I need to use them to be also a loud speaker for others. And to try and make this world a better place, in whatever shape and form that I can. That being said, I think that for many of us even who have had privileges that other activists don’t, it’s been a real struggle to watch the mediums and tools that we have been using for the past, over a decade, as a means of raising pressure, communicating with the world, connecting, and so on, be taken away from us. In ways that we can’t control and in ways that we don’t have a say on. I think that for a lot of – and I know especially for myself – but I think for a lot of activists who really found their voices in 2011 as part of activism especially on platforms like Twitter.

When Elon Musk bought Twitter and decided to remove the verification status from all of us activists who had that for a reason. I remember I received my verification status because of the amount of fake accounts that the Bahraini government was creating at that time to impersonate me to try to discredit me. And also because I was receiving death threats and rape threats and all kinds of threats, over and over again. I received that verification status as an acknowledgement that I need support against those attacks that I was being subjected to. And it was gone overnight. It’s not just about that blue tick. It’s that people don’t see my Tweets the way that they used to. It’s about the fact that my message can’t go as far as it used to go. It’s not just because we no longer show up in people’s feeds, but also because so many people have left the platform because of how problematic it’s become.

In some ways I spent 13 years focused on Twitter, building a following—obviously, my work is so much more than Twitter—but Twitter has been a tool for the work that I do. And really building a following and making sure that people trusted me and the information that I shared and that I was a trusted and credible source of information. Not just on Bahrain, but on all of the different types of work that I do. And then suddenly overnight, at the age of 35, 36 having to recreate that all over again on Instagram. And on TikTok. And the thing is… we’re tired. We’re exhausted. We’re burnt out. We’re not doing well. Almost everyone I know is either depressed or sick or dealing with some form of health issue. Thirteen years after the uprisings we’re not doing well and we’re not okay. What’s happening with Gaza right now is hitting all of us. I think it’s incredibly triggering and hurtful. I think the idea that we now have to make that effort to rebuild platforms to be able to reach people, it’s not just “Oh my god, I don’t have the energy for it.” It’s like someone tore a limb from us and we have to try to regrow that limb. And how do you regrow a limb, right? It’s incredibly painful.

Obviously, it’s nice to have a large following and for people to recognize you and know who you are and so on—and it’s hard work not letting that get to your head—but, for me, losing my voice is not about the follower count or how much people know who I am. It’s the fact that I can no longer get the same kind of attention for my father’s case. I can no longer get the same kind of attention for the hundreds of people who no one knows their names or their faces who are sitting in prison cells in Bahrain who are still being tortured. For the children who are still being arrested for protesting. For Palestine and Bahrain. I can no longer make sure that I’m a loudspeaker so that people know these things are happening.

A lot of people talked about and wrote about the damage that Elon Musk did to Twitter and to that “public square” that we have. Twitter has always had its problems. And Meta has always had its problems. But it was a problem where we at least had a voice. We weren’t always heard and we weren’t always able to influence things, but at least it felt like we had a voice. Now it doesn’t feel like we have a voice. There was a lot of conversation around this, around the taking away of the public square, but there are these intricacies and details that affect us on such a personal level that I don’t think people outside of these circles can really understand or even think about. And how it affects when I need to make noise because my father might die from a heart attack because they’re refusing to give him medical treatment. And I can’t get retweets or I can’t get people to re-post. Or only 100 people are seeing the videos I’m posting on Instagram. It’s not that I care about having that following, it’s about literally being able to save my father’s life. So it takes such a toll on you on a personal level as well. I think that’s the part of the conversation that I think is missing when we talk about these things.

I can’t imagine—but in some ways I can imagine—how it feels for Palestinians right now. To watch their family members, their people being subjected to an ongoing genocide and then have their voices taken away from them, to be subjected to shadowbans, to have their accounts shut down. It’s insult added to injury. You’re already hurting. You’re already in pain. You’re already not doing well. You’re already struggling just to survive another day and the only thing you have is your voice and then even that is taken away from you. I don’t think we can even begin to imagine the kind of damage on mental health and even physical health that that’s going to have in the coming years and in the coming generations because, of course, we pass down our trauma to the people around us as well. 

York: I’m going to take a slight step back and a slight segue because I want to be able to use this interview for you to talk about your father’s case as well. Can you tell us about your father’s case and where it stands today?

My father, Abdulhadi Al-Khawaja, dedicated his entire life to human rights activism. Which is why he spent half his life, if not more than that, in exile. And it’s why he spent the last thirteen years in prison. My father is the only Danish prisoner of conscience in the world today. And I very strongly believe that if my father was not a Brown, Muslim man he would not have spent this long as an EU citizen in a prison cell based on freedom of expression charges. And this is one of those cases where you really get to recognize those double standards. Where Denmark prides itself on being one of the countries that is the biggest protector of freedom of expression. And yet the entire case against my father – and my father was one of the human rights leaders of the uprising in 2011 – and he led the protests and he talked about human rights and freedom and he talked about the importance of us doing things the right way. And I think that’s why he was seen as such a threat.

One of his speeches was about how even if we are able to change the government in Bahrain, we are not going to torture. We’re not going to be like them. We’re going to make sure that people who were perpetrators receive due process and fair trials. He always focused on the importance of people fighting for justice and fighting for change to do things the right way and from a human rights framework. He was arrested very violently from my sister’s home in front of my friends and family. He was beaten unconscious in front of my family. And he repeatedly said as he was being beaten, “I can’t breathe.” And every time I think of what happened with my father I think of Eric Garner as well – where he said over and over again “I can’t breathe” when he was basically killed by the United States police. Then my father was taken away.

Interestingly enough, especially because we’re talking about freedom of expression, my father was charged with terrorism. In Bahrain, the terrorism law is so vague that even the work of a human rights defender can be regarded as terrorism. So even criticizing the police for committing violations can be seen as inciting terrorism. So my father was arrested and tried under the terrorism law, and they said he was trying to overthrow the government. But Human Rights Watch actually dissected the case that was brought against my father and the “evidence” that he was of course forced to sign under torture. He was subjected to very severe psychological and sexual torture for over two months during which he was disappeared as well – held in incommunicado detention. When they did that dissection of the case they found that all of the charges against my father were based on freedom of expression issues. It was all based on things that he had said during the protests around calling for democracy, around calling for representative government, the right to self determination, and more. It’s very much a freedom of expression issue.

What I find horrifying – but also it says a lot about the case against my father and why he’s in prison today – is that one of the first things they did to my dad was they hit him with a hard object on his jaw and they broke his jaw. Even my father says that he feels they did that on purpose because they were hoping that he would never be able to speak again. They broke his jaw in six different places, or four different places. He had to undergo a four hour surgery where they reattached his jaw. They had to use more than twenty metal plates and screws to put his jaw back together. And he, of course, still has chronic pain and issues because of what they did. He was subjected to so much else like electrocutions and more, but that was a very specific intentional first blow that he received when he was arrested. To the face and to the mouth. As punishment, as retaliation, for having used his right to free expression to speak up and criticize the government. I think this tells you pretty much everything you need to know about what the situation of freedom of expression is in Bahrain. But it should also tell you a lot about the EU and the West and how they regard the importance of freedom of expression when the fact that my father is an EU citizen has not actually protected him. And 13 years later he continues to sit in a prison cell serving a life sentence because he practiced his right to free expression and because he practiced his right to freedom of assembly.

Last year, my father decided to do a one-person protest in the prison yard. Both in solidarity with Palestine, but also because of the consistent and systematic denial of adequate medical treatment to prisoners of conscience in Bahrain. Because of that, and because he was again using his right to free expression inside prison, he was denied medical treatment for over a year. And my father had developed a heart condition. So a few months ago his condition started to get really bad, the doctors told us he might have a heart attack or a stroke at any time given that he was being denied access to a cardiologist. So I had to put myself and my freedom at risk. I’m already sentenced to one year in prison in Bahrain, I have four pending cases – basically, going back to Bahrain means that I am very likely to spend the rest of my life in prison, if not be subjected to torture. Which I have been in the past as well. But I decided to try and go back to Bahrain because the Danish government was refusing to step up. The West was refusing to step up. I mean we were asking for the bare minimum, which was access to a cardiologist. So I had to put myself at risk to try and bring attention.

I ended up being denied boarding because there was too much international attention around my trip. So they denied me boarding because they didn’t want international coverage around me being arrested at the Bahrain airport again. I managed to get several very high profile human rights personalities to go with me on the trip. Because of that, and because we were able to raise so much international attention around my dad’s case, they actually ended up taking him to the cardiologist and now he’s on heart medication. But he’s never out of the danger zone, with Bahrain being what it is and because he’s still sitting in a prison cell. We’re still working hard on getting him out, but I think for my dad it’s always about his principles and his values and his ethics. For him, being a human rights defender, being in prison doesn’t mean the end of his activism. And that’s why he’s gone on more than seven hunger strikes in prison, that’s why he’s done multiple one-person protests in the prison yard. For him, his activism is an ongoing thing even from inside his prison cell.

York: That’s an incredible story and I appreciate you sharing it with our readers—your father is incredibly brave. Last question- who is your free speech hero?

Of course my dad, for sure. He always taught us the importance of using our voice not just to speak up for ourselves but for others especially. There’s so many that I’m drawing a blank! I can tell you that my favorite quote is by Edward Snowden. “Saying that you don’t care about the right to privacy because you have nothing to hide is like saying you don’t care about freedom of speech because you have nothing to say.” I think that really brings things to the point.

There’s also an indigenous activist in the US who has been doing such a tremendous job using her voice to bring attention to what’s happening to the indigenous communities in the US. And I know it comes at a cost and it comes at great risk. There’s several Syrian activists and Palestinian activists. Motaz Azaiza and his reporting on what’s happening now in Gaza and the price that he’s paying for it, same thing with Bisan and Plestia. She’s also a Palestinian journalist who’s been reporting on Gaza. There’s just so many free expression heroes. People who have really excelled in understanding how to use their voice to make this world a better place. Those are my heroes. The everyday people who choose to do the right thing when it’s easier not to.

Decoding the California DMV's Mobile Driver's License

Mon, 03/18/2024 - 9:16pm

The State of California is currently rolling out a “mobile driver’s license” (mDL), a form of digital identification that raises significant privacy and equity concerns. This post explains the new smartphone application, explores the risks, and calls on the state and its vendor to focus more on protection of the users. 

What is the California DMV Wallet? 

The California DMV Wallet app came out in app stores last year as a pilot, offering the ability to store and display your mDL on your smartphone, without needing to carry and present a traditional physical document. Several features in this app replicate how we currently present the physical document with key information about our identity—like address, age, birthday, driver class, etc. 

However, other features in the app provide new ways to present the data on your driver’s license. Right now, we only take out our driver’s license occasionally throughout the week. With the app’s QR Code and “add-on” features, however, the incentive to present it far more frequently may grow. This concerns us, given the rise of age verification laws that burden everyone’s access to the internet, and the lack of comprehensive consumer data privacy laws that keep businesses from harvesting and selling identifying information and sensitive personal information.

For now, you can use the California DMV Wallet app with TSA in airports, and with select stores that have opted in to an age verification feature called TruAge. That feature generates a separate QR Code for age verification on age-restricted items in stores, like alcohol and tobacco. This is not simply a one-to-one exchange of going from a physical document to an mDL. Rather, this presents a wider scope of possible usage of mDLs that needs expanded protections for those who use them. While California is not the first state to do this, this app will be used as an example to explain the current landscape.

What’s the QR Code? 

There are two ways to present your information on the mDL: 1) a human readable presentation, or 2) a QR code. 

Scanned with a normal QR code reader, the QR code displays an alphanumeric string of text that starts with “mdoc:”. For example:

“mdoc:owBjMS4wAY...” [shortened for brevity]

This “mobile document” (mdoc) text is defined by the International Organization for Standardization’s ISO/IEC 18013-5. The string of text that follows details driver’s license data that has been signed by the issuer (i.e., the California DMV), encrypted, and encoded. This data sequence includes technical specifications and standards, open and enclosed.

In the digital identity space, including mDLs, the most referenced and utilized standards are the ISO standard above, the American Association of Motor Vehicle Administrators (AAMVA) standard, and the W3C’s Verifiable Credentials (VC). These standards are often not siloed, but rather used together, since they offer directions on data formats, security, and methods of presentation that aren’t completely covered by just one. However, ISO and AAMVA are not open standards and are decided internally. VCs were created for digital credentials generally, not just for mDLs. These standards are relatively new and still need time to mature to address potential gaps.
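As a rough illustration of what sits behind that string, the text after "mdoc:" is typically unpadded base64url-encoded CBOR data under ISO/IEC 18013-5. The sketch below, which assumes that encoding and uses the third-party cbor2 package, shows how a reader might peel back the outer layer; verifying and decrypting the issuer-signed license data inside is a separate, more involved step that is not shown, and the truncated example string above would not decode fully:

    # Hedged sketch: peel back the outer encoding of an "mdoc:" QR payload,
    # assuming it is unpadded base64url-encoded CBOR per ISO/IEC 18013-5.
    import base64
    import cbor2  # third-party CBOR codec: pip install cbor2

    def peel_mdoc_payload(qr_text):
        if not qr_text.startswith("mdoc:"):
            raise ValueError("not an mdoc QR payload")
        encoded = qr_text[len("mdoc:"):]
        encoded += "=" * (-len(encoded) % 4)  # restore base64 padding stripped for the QR
        raw = base64.urlsafe_b64decode(encoded)
        return cbor2.loads(raw)  # outer CBOR structure, e.g. device engagement data

    # peel_mdoc_payload("mdoc:owBjMS4wAY...")  # the shortened string above won't fully decode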

The decrypted data could possibly look like this JSON blob:

         {"family_name":"Doe",
          "given_name":"John",
          "birth_date":"1980-10-10",
          "issue_date":"2020-08-10",
          "expiry_date":"2030-10-30",
          "issuing_country":"US",
          "issuing_authority":"CA DMV",
          "document_number":"I12345678",
          "portrait":"../../../../test/issuance/portrait.b64",
          "driving_privileges":[
            {
               "vehicle_category_code":"A",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            },
            {
               "vehicle_category_code":"B",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            }
          ],
          "un_distinguishing_sign":"USA",
          "weight":70,
          "eye_colour":"hazel",
          "hair_colour":"red",
          "birth_place":"California",
          "resident_address":"2415 1st Avenue",
          "portrait_capture_date":"2020-08-10T12:00:00Z",
          "age_in_years":42,
          "age_birth_year":1980,
          "age_over_18":true,
          "age_over_21":true,
          "issuing_jurisdiction":"US-CA",
          "nationality":"US",
          "resident_city":"Sacramento",
          "resident_state":"California",
          "resident_postal_code":"95818",
          "resident_country": "US"}

Application Approach and Scope Problems 

California decided to contract a vendor to build a wallet app rather than use Google Wallet or Apple Wallet (not to be conflated with Google and Apple Pay). A handful of other states use Google and Apple, perhaps because many people have one or the other. There are concerns about large companies being contracted by the states to deliver mDLs to the public, such as their controlling the public image of digital identity and device compatibility.  

This isn’t the first time a state contracted with a vendor to build a digital credential application without much public input or consensus. For example, New York State contracted with IBM to roll out the Excelsior app during the beginning of COVID-19 vaccination availability. At the time, EFF raised privacy and other concerns about this form of digital proof of vaccination. The state ultimately paid the vendor a staggering $64 million. While initially proprietary, the application later opened to the SMART Health Card standard, which is based on the W3C’s VCs. The app was sunset last year. It’s not clear what effect it had on public health, but it’s good that it wound down as social distancing measures relaxed. The infrastructure should be dismantled, and the persistent data should be discarded. If another health crisis emerges, at least a law in New York now partially protects the privacy of this kind of data. The New York state legislature is currently working on a bill around mDLs after a round-table on their potential pilot. However, the New York DMV has already entered into a $1.75 million contract with the digital identity vendor IDEMIA. It will be a race to see if protections are established prior to pilot deployment.

Scope is also a concern with California’s mDL. The state contracted with Spruce ID to build this app. The company states that its purpose is to empower “organizations to manage the entire lifecycle of digital credentials, such as mobile driver’s licenses, software audit statements, professional certifications, and more.” In the “add-ons” section of the app, TruAge’s age verification QR code is available.  

Another issue is selective disclosure, meaning the technical ability for the identity credential holder to choose which information to disclose to a person or entity asking for information from their credential. This is a long-time promise from enthusiasts of digital identity. The most used example is verification that the credential holder is over 21, without showing anything else about the holder, such as the name and address that appear on the face of their traditional driver’s license. But the California DMV Wallet app lacks options for selective disclosure:

  • The holder has to agree to TruAge’s terms and service and generate a separate TruAge QR Code.  
  • There is already an mDL reader option for age verification for the QR Code of an mDL. 
  • There is no current option for the holder to use selective disclosure for their mDL. But it is planned for future release, according to the California DMV via email. 
  • Lastly, if selective disclosure is coming, this makes the TruAge add-on redundant. 

The over-21 example is only as meaningful as its implementation, including the convenience, privacy, and choice given to the mDL holder.
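To show what selective disclosure means at the data level, here is a deliberately oversimplified sketch: the holder releases only the claims a verifier asks for (here, just the over-21 flag) instead of the whole record. Real mDL selective disclosure operates on individually issuer-signed data elements so the verifier can still check authenticity; this dictionary-filtering example only conveys the idea:

    # Illustration only: selective disclosure as "release just the requested claims."
    def present(claims, requested):
        return {name: claims[name] for name in requested if name in claims}

    mdl_claims = {"family_name": "Doe", "birth_date": "1980-10-10", "age_over_21": True}
    print(present(mdl_claims, ["age_over_21"]))  # {'age_over_21': True}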

TruAge appears to be piloting its product in at least 6 states. With “add-ons”, the scope of the wallet app indicates expansion beyond simply presenting your driver’s license. According to the California DMV’s Office of Public Affairs via email: 

“The DMV is exploring the possibility of offering additional services including disabled person parking placard ID, registration card, vehicle ownership and occupational license in the add-ons in the coming months.”

This clearly displays how the scope of this pilot may expand and how the mDL could eventually be housed within an entire ecosystem of identity documentation. There are privacy-preserving ways to present mDLs, like unlinkable proofs. These mechanisms help prevent a colluding verifier and issuer from establishing whether the holder presented their mDL in different places.

Privacy and Equity First 

At the time of this post, about 325,000 California residents have the pilot app. We urge states to take their time with creating mDLs, and even to wait for more privacy-protective verification methods to mature. Deploying mDLs should prioritize holder control, privacy, and transparency. The speed of these pilots is possibly influenced by other factors, like the push for mDLs from the U.S. Department of Homeland Security.

Digital wallet initiatives like eIDAS in the European Union are forging conversations on what user control mechanisms might look like. These might include, for example, “bringing your own wallet” and using an “open wallet” that is secure, private, interoperable, and portable. 

We also need governance that properly limits law enforcement access to information collected by mDLs, and to other information in the smartphones where holders place their mDLs. Further, we need safeguards against these state-created wallets being wedged into problematic realms like age verification mandates as a condition of accessing the internet. 

We should be speed running privacy and provide better access for all to public services and government-issued documentation. That includes a right to stick with traditional paper or plastic identification, and accommodation of cases where a phone may not be accessible.  

We urge the state to implement selective disclosure and other privacy preserving tools. The app is not required anywhere. It should remain that way no matter how cryptographically secure the system purports to be, or how robust the privacy policies. We also urge all governments to remain transparent and cautious about how they sign on vendors during pilot programs. If a contract takes away the public’s input on future protections, then that is a bad start. If a state builds a pilot without much patience for privacy and public input, then that is also turbulent ground for protecting users going forward.  

Just because digital identity may feel inevitable, doesn’t mean the dangers have to be. 

EFF to California Appellate Court: Reject Trial Judge’s Ruling That Would Penalize Beneficial Features and Tools on Social Media

Mon, 03/18/2024 - 7:22pm

EFF legal intern Jack Beck contributed to this post.

A California trial court recently departed from wide-ranging precedent and held that Snap, Inc., the maker of Snapchat, the popular social media app, had created a “defective” product by including features like disappearing messages, the ability to connect with people through mutual friends, and even the well-known “Stories” feature. We filed an amicus brief in the appeal, Neville v. Snap, Inc., at the California Court of Appeal, and are calling for the reversal of the earlier decision, which jeopardizes protections for online intermediaries and thus the free speech of all internet users.

At issue in the case is Section 230, without which the free and open internet as we know it would not exist. Section 230 provides that online intermediaries are generally not responsible for harmful user-generated content. Rather, responsibility for what a speaker says online falls on the person who spoke.

The plaintiffs are a group of parents whose children overdosed on fentanyl-laced drugs obtained through communications enabled by Snapchat. Even though the harm they suffered was premised on user-generated content—messages between the drug dealers and their children—the plaintiffs argued that Snapchat is a “defective product.” They highlighted various features available to all users on Snapchat, including disappearing messages, arguing that the features facilitate illegal drug deals.

Snap sought to have the case dismissed, arguing that the plaintiffs’ claims were barred by Section 230. The trial court disagreed, narrowly interpreting Section 230 and erroneously holding that the plaintiffs were merely trying to hold the company responsible for its own “independent tortious conduct—independent, that is, of the drug sellers’ posted content.” In so doing, the trial court departed from congressional intent and wide-ranging California and federal court precedent.

In a petition for a writ of mandate, Snap urged the appellate court to correct the lower court’s distortion of Section 230. The petition rightfully contends that the plaintiffs are trying to sidestep Section 230 through creative pleading. The petition argues that Section 230 protects online intermediaries from liability not only for hosting third-party content, but also for crucial editorial decisions like what features and tools to offer content creators and how to display their content.

We made two arguments in our brief supporting Snap’s appeal.

First, we explained that the features the plaintiffs targeted—and which the trial court gave no detailed analysis of—are regular parts of Snapchat’s functionality with numerous legitimate uses. Take Snapchat’s option to have messages disappear after a certain period of time. There are times when the option to make messages disappear can be crucial for protecting someone’s safety—for example, dissidents and journalists operating in repressive regimes, or domestic violence victims reaching out for support. It’s also an important privacy feature for everyday use. Simply put: the ability of users to exert control over who can see their messages and for how long advances internet users’ privacy and security under legitimate circumstances.

Second, we highlighted in our brief that this case is about more than concerned families challenging a big tech company. Our modern communications are mediated by private companies, and so any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate. Should the trial court’s ruling stand, Snapchat and similar platforms will be incentivized to remove features from their online services, resulting in bland and sanitized—and potentially more privacy invasive and less secure—communications platforms. User experience will be degraded as internet platforms are discouraged from creating new features and tools that facilitate speech. Companies seeking to minimize their legal exposure for harmful user-generated content will also drastically increase censorship of their users, and smaller platforms trying to get off the ground will fail to get funding or will be forced to shut down.

There’s no question that what happened in this case was tragic, and people are right to be upset about some elements of how big tech companies operate. But Section 230 is the wrong target. We strongly advocate for Section 230, yet when a tech company does something legitimately irresponsible, the statute still allows for them to be liable—as Snap knows from a lawsuit that put an end to its speed filter.

If the trial court’s decision is upheld, internet platforms would not have a reliable way to limit liability for the services they provide and the content they host. They would face too many lawsuits that cost too much money to defend. They would be unable to operate in their current capacity, and ultimately the internet would cease to exist in its current form. Billions of internet users would lose.

Lawmakers: Ban TikTok to Stop Election Misinformation! Same Lawmakers: Restrict How Government Addresses Election Misinformation!

Fri, 03/15/2024 - 10:12pm

In a case being heard Monday at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal.

They assert that agencies can't even pass on information about websites that state election officials have identified as disinformation, even if they don't request that any action be taken.

Yet just this week the vast majority of those same lawmakers said the government's interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.

On Monday, the Supreme Court will hear oral arguments in Murthy v. Missouri, a case that raises the issue of whether the federal government violates the First Amendment by asking social media platforms to remove or negatively moderate user posts or accounts. In Murthy, the government contends that it can strongly urge social media sites to remove posts without violating the First Amendment, as long as it does not coerce them into doing so under the threat of penalty or other official sanction.

We recognize both the hazards of government involvement in content moderation and the proper role in some situations for the government to share its expertise with the platforms. In our brief in Murthy, we urge the court to adopt a view of coercion that includes indirectly coercive communications designed and reasonably perceived as efforts to replace the platform’s editorial decision-making with the government’s.

And we argue that close cases should go against the government. We also urge the court to recognize that the government may and, in some cases, should appropriately inform platforms of problematic user posts. But it’s the government’s responsibility to make sure that its communications with the platforms are reasonably perceived as being merely informative and not coercive.

In contrast, these same members of Congress signed an amicus brief in Murthy supporting strict limitations on the government’s interactions with social media companies. They argued that the government may hardly communicate at all with social media platforms when it detects problematic posts.

Notably, the specific posts they discuss in their brief include, among other things, posts the U.S. government suspects are foreign election interference. For example, the case includes allegations that the FBI and CISA improperly communicated with social media sites, allegations that boil down to the agencies passing on pertinent information, such as websites that had already been identified by state and local election officials as disinformation. The FBI did not request that any specific action be taken and sought to understand how the sites' terms of service would apply.

As we argued in our amicus brief, these communications don't add up to the government dictating specific editorial changes it wanted. It was providing information useful for sites seeking to combat misinformation. But, following an injunction in Murthy, the government has ceased sharing intelligence about foreign election interference. Without the information, Meta reports its platforms could lack insight into the bigger threat picture needed to enforce its own rules.

The problem of election misinformation on social media also played a prominent role this past week when the U.S. House of Representatives approved a bill that would bar app stores from distributing TikTok as long as it is owned by its current parent company, ByteDance, which is headquartered in Beijing. The bill also empowers the executive branch to identify and similarly ban other apps that are owned by foreign adversaries.

As stated in the House Report that accompanied the so-called "Protecting Americans from Foreign Adversary Controlled Applications Act," the law is needed in part because members of Congress fear the Chinese government “push[es] misinformation, disinformation, and propaganda on the American public” through the platform. Those who supported the bill thus believe that the U.S. can take the drastic step of banning an app for the purposes of preventing the spread of “misinformation and propaganda” to U.S. users. A public report from the Office of the Director of National Intelligence was more specific about the threat, indicating a special concern for information meant to interfere with the November elections and foment societal divisions in the U.S.

Over 30 members of the House who signed the amicus brief in Murthy voted for the TikTok ban. So, many of the same people who supported the U.S. government’s efforts to rid a social media platform of foreign misinformation, also argued that the government’s ability to address the very same content on other social media platforms should be sharply limited.

Admittedly, there are significant differences between the two positions. The government does have greater limits on how it regulates the speech of domestic companies than it does the speech of foreign companies.

But if the true purpose of the bill is to get foreign election misinformation off of social media, the inconsistency in the positions is clear. If ByteDance sells TikTok to domestic owners so that TikTok can stay in business in the U.S., and if the same propaganda appears on the site, is the U.S. now powerless to do anything about it? If so, that would seem to undercut the importance of getting the information away from U.S. users, which is one of the chief purposes of the TikTok ban.

We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.


The SAFE Act to Reauthorize Section 702 is Two Steps Forward, One Step Back

Fri, 03/15/2024 - 4:48pm

Section 702 of the Foreign Intelligence Surveillance Act (FISA) is one of the most insidious and secretive mass surveillance authorities still in operation today. The Security and Freedom Enhancement (SAFE) Act would make some much-needed, long-fought-for reforms, but it does not go nearly far enough to rein in a surveillance law that the federal government has abused time and time again.

You can read the full text of the bill here.

While Section 702 was first sold as a tool necessary to stop foreign terrorists, it has since become clear that the government uses the communications it collects under this law as a domestic intelligence source. The program was intended to collect the communications of people outside the United States, but because we live in an increasingly globalized world, the government retains a massive trove of communications between people overseas and U.S. persons. Now it is this U.S. side of digital conversations that is routinely sifted through by domestic law enforcement agencies—all without a warrant.

The SAFE Act, like other reform bills introduced this Congress, attempts to roll back some of this warrantless surveillance. Despite its glaring flaws and omissions, in a Congress as dysfunctional as this one it might be the best bill that privacy-conscious people and organizations can hope for. For instance, it does not do as much as the Government Surveillance Reform Act, which EFF supported in November 2023. But imposing meaningful checks on the Intelligence Community (IC) is an urgent priority, especially because the IC has been trying to sneak a "clean" reauthorization of Section 702 into government funding bills, and has even sought to have the renewal happen in secret in the hopes of keeping its favorite mass surveillance law intact. The administration is also reportedly planning to seek another year-long extension of the law without any congressional action. All the while, those advocating for renewing Section 702 have toyed with as many talking points as they can—from cybercrime, human trafficking, and drug smuggling to terrorism and even solidarity activism in the United States—to see which issue would scare people enough to allow a clean reauthorization of mass surveillance.

So let’s break down the SAFE Act: what’s good, what’s bad, and what aspects of it might actually cause more harm in the future. 

What’s Good about the SAFE Act

The SAFE Act would do at least two things that reform advocates have pressured Congress to include in any proposed bill to reauthorize Section 702. This speaks to the growing consensus that some reforms are absolutely necessary if this power is to remain operational.

The first and most important reform the bill would make is to require the government to obtain a warrant before accessing the content of communications for people in the United States. Currently, relying on Section 702, the government vacuums up communications from all over the world, and a huge number of those intercepted communications are to or from U.S. persons. Those communications sit in a massive database. Both intelligence agencies and law enforcement have conducted millions of queries of this database for U.S.-based communications—all without a warrant—in order to investigate both national security concerns and run-of-the-mill criminal investigations. The SAFE Act would prohibit “warrantless access to the communications and other information of United States persons and persons located in the United States.” While this is the bare minimum a reform bill should do, it’s an important step. It is crucial to note, however, that this does not stop the IC or law enforcement from querying to see if the government has collected communications from specific individuals under Section 702—it merely stops them from reading those communications without a warrant.

The second major reform the SAFE Act provides is to close the “data broker loophole,” which EFF has been calling attention to for years. As one example, mobile apps often collect user data and sell it to advertisers on the open market. The problem is that law enforcement and intelligence agencies increasingly buy this private user data rather than obtain a warrant for it. This bill would largely prohibit the government from purchasing personal data it would otherwise need a warrant to collect. The provision does include a potentially significant exception for situations where the government cannot exclude Americans’ data from larger “compilations” that include foreigners’ data. This speaks not only to the unfair bifurcation of rights between Americans and everyone else under much of our surveillance law, but also to the risks of allowing any large-scale acquisition from data brokers at all. The SAFE Act would require the government to minimize the collection, search, and use of any Americans’ data in these compilations, but it remains to be seen how effective these limits will be.

What’s Missing from the SAFE Act

The SAFE Act is missing a number of important reforms that we’ve called for—and which the Government Surveillance Reform Act would have addressed. These reforms include ensuring that individuals harmed by warrantless surveillance are able to challenge it in court, both in civil lawsuits like those brought by EFF in the past, and in criminal cases where the government may seek to shield its use of Section 702 from defendants. After nearly 14 years of Section 702 and countless court rulings slamming the courthouse door on such legal challenges, it’s well past time to ensure that those harmed by Section 702 surveillance can have the opportunity to challenge it.

New Problems Potentially Created by the SAFE Act

While there may often be good reason to protect the secrecy of FISA proceedings, unofficial disclosures about these proceedings have, from the very beginning, played an indispensable role in reforming otherwise uncontested abuses of surveillance authorities. From the Bush administration’s warrantless wiretapping program, through the Snowden disclosures, to present-day reporting about FISA applications on the front page of the New York Times, oversight of the intelligence community would be extremely difficult, if not impossible, without these disclosures.

Unfortunately, the SAFE Act contains at least one truly nasty addition to current law: an entirely new crime that makes it a felony to disclose “the existence of an application” for foreign intelligence surveillance or any of the application’s contents. In addition to explicitly adding to the existing penalties in the Espionage Act—itself highly controversial—this new provision seems aimed at discouraging leaks by increasing the potential sentence to eight years in prison. There is no requirement that prosecutors show that the disclosure harmed national security, nor any consideration of the public interest. In the present climate, there’s simply no reason to give prosecutors even more tools like this one to punish whistleblowers who are seen as going through improper channels.

EFF always aims to tell it like it is. This bill has some real improvements, but it’s nowhere near the surveillance reform we all deserve. On the other hand, the IC and its allies in Congress continue to have significant leverage to push fake reform bills, so the SAFE Act may well be the best we’re going to get. Either way, we’re not giving up the fight.  
