EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Dear Web Developers: Thank You, You’re Awesome, and Wow Did That Really Just Happen?

Fri, Apr 18 2014 16:32 -0400

Two days ago, we asked web developers for help.

EFF and Sunlight Foundation published an open call for help testing a tool and populating an open data format that would make it easier for everyday people to contact members of Congress. We already had a prototype, but we needed volunteers to conduct tests on each and every Congressional website.

We expected the project would take about two weeks to complete, but feared it might take a month or longer. We worried that web developers wouldn’t want to spend hours working on a boring, frustrating, often technically complex task.

Instead, volunteers conquered the project in two days.

Within hours of publishing our blog post, we were flooded by offers of support. People from all over the world contacted us, and many immediately jumped in and started contributing. By 2:30 AM the day we launched, 70 people were already hacking on the project and had submitted over 420 commits.

The following morning, we found even more people had gotten involved.  More than a hundred people were helping us write the code after hearing about our project on Hacker News, reddit, and BoingBoing.

Today, we’re declaring victory. Thanks to the hard work of over a hundred volunteers around the globe, we’re incredibly proud to announce the first-ever public domain database for submitting emails to members of Congress.

142 authors helped us build the code. There were over 1,600 commits to the Github repo in the last few days. And we now have pathways for contacting 530 members of Congress.1

We did it. We just made democracy a little more functional.

Why Everyone Should Be Able to Contact Congress

We wanted to build a tool for contacting Congress so that we could ensure that the voices of Internet users would be heard in the halls of Congress. We wanted to feel confident that messages were being delivered when EFF supporters spoke out against bills like SOPA or demanded reform to NSA spying or software patents. We wanted a system that reflected our values—public domain, as secure as possible, and built with free software.

But we didn’t just want to build something for EFF. We wanted to create an open dataset that anybody could use to create similar tools. We wanted to fundamentally make elected officials more accountable to the people by lowering the bar to sending messages to Congress. We hope developers will use the dataset we’ve made for other projects, establishing new ways of interacting with Congress that we might not even have considered.

Today, that dataset exists.

Why People Got Involved

There were a lot of volunteers who worked long hours to finish this tool. Here are some thoughts they shared:

Darrik Mazey, who contributed over 59 commits to the project, said:

"I got involved with this project simply because when you get the opportunity to help an organization that has done so much for digital privacy rights, you don't pass it up. It felt like a chance to do something real to support a cause I strongly believe in, and facilitating communication between the public and their representatives is absolutely necessary for any sort of social improvement."

“It is crucial to support projects to help restore the voice of the public, especially at this moment in history of overwhelming influence of corporate, economic and political elites,” said Moiz Syed, who made 67 commits to the Github repo over the course of two days. "Being a part of this huge collaborative effort, working with people staying up till all hours of the night helping each other, was both an exhilarating and empowering experience."

Lucas Myer, who made 57 commits to the Github repo, said: “The community effort to help with Contact Congress was nothing short of amazing.  I think, like me, a lot of developers see the vital role the EFF serves in defending digital rights and civil liberties. Contributing to Contact Congress was a great opportunity to give something back to the EFF while helping build tools to help people more easily contact their representatives.”

Everyone who made over 55 commits to Github will be recognized on the EFF website under a new page we’re creating for volunteer technologists.

Let’s Do This Again Sometime!

We were completely floored by the outpouring of support we got from developers. In less than two days, we accomplished an enormous project that will benefit EFF and democracy. In fact, the experience has us brainstorming about other volunteer projects that could have a dramatic impact on our digital rights.

Here’s an obvious one: every two years, there’s an election that will necessitate us cleaning up our Contact Congress code. If you want to be on an email list that gets contacted to help out with that and other web development projects, just send an email to rainey@eff.org and ask us to add you to the mailing list. Whenever we have a challenging project that needs tech volunteers, we’ll let you know.

But there are other ways you can stay involved. If you want to help us build a more secure Web, please help us maintain our free browser add-on, HTTPS Everywhere. Take a look.

And if you’re interested in building cool action campaigns that benefit freedom online, consider joining the volunteer team at Taskforce.is. EFF has been teaming up with them for the last several months on technology and advocacy projects, and they could use the help of dedicated, skilled, and passionate developers.

And finally, if you really love working on these projects, you should know that EFF is hiring—we’re looking for a web developer with lovely, edgy front-end design skills and a passion for digital rights. Join us.

Big thanks

A ton of people contributed to this project, more than we can name in this blog post. But we want to extend a special thanks to:

  • Dan Drinkard, Eric Mill, and the rest of the team at the Sunlight Foundation. They labored on this project for months and months, and created both the original code and bookmarklet for this project.
  • Thomas Davis, the sleepless hacker in Australia who single-handedly managed dozens of volunteers through the nights while the US crew slept.
  • Moiz Syed, Darrick Mazey, Lucas Myers, Corey Garnett, Aaron Griffith, Steve Crozier, and everyone else who worked late into the night to pull this project together.
  • Jason Rosenbaum and the rest of the Action Network team, who dove in and helped both test our tools and manage volunteers.
  • David Moore at the Participatory Politics Foundation, whose original project was the inspiration for this project.
  • Bill Budington, our staff technologist, who single-handedly wrote our congress-forms tool and without whom none of this would have happened.

And of course, our deepest thanks go to Sina Khanifar, leader of exhausted web developers everywhere and the organizer of this effort.  If you have a minute, tweet your thanks to Sina: @sinak.

Thanks, everybody. See you next time!

  • 1. The last few members of Congress have buggy forms, and EFF’s staff technologists will be hacking a solution to those in the coming days.


Answers and Questions About Military, Law Enforcement and Intelligence Agency Chatbots

Fri, Apr 18 2014 12:21 -0400

You Asked: Tell us all your secrets.

SGT STAR: Good question. I'm not sure how to answer that. Please try rewording your question. I understand simple questions best.

You Asked: In that case, tell us everything.

SGT STAR: That is a good question, however, I am not positive that I understand what you're asking. Try rephrasing your question. I understand simple questions best.

Sgt. Star is the U.S. Army’s dedicated marketing and recruitment chatbot, and he isn’t going to turn whistleblower any time soon. There’s no use threatening him for answers either—he’s programmed to report that kind of hostility to the Army Criminal Investigation Division.

Last year, EFF began to look at how the government deploys chatbots to interact with and collect information from the public. Sgt. Star was a natural place to start, since he’s almost famous. Serving as the Army’s virtual public spokesperson, each year he guides hundreds of thousands of potential recruits through goarmy.com and fields their questions on Facebook.

(On the Media’s TLDR recorded an informative and entertaining podcast about Sgt. Star, our research and the issues AI chatbots raise—listen here.)

Since Sgt. Star wasn’t going to tell us everything he knows without us breaking it down into a thousand simple questions, we decided to just use the Freedom of Information Act to get it all at once. At first the Army ignored our inquiries, but with a little digging and pressure from the media1, we have been able to piece together a sort of personnel file for Sgt. Star.

We now know everything that Sgt. Star can say publicly as well as some of his usage statistics. We also learned a few things we weren’t supposed to: Before there was Sgt. Star, the FBI and CIA were using the same underlying technology to interact with child predators and terrorism suspects on the Internet. And, in a bizarre twist, the Army claims certain records don't exist because an element of Sgt. Star is “living.”

Everything We Know About Sgt. Star

Chatbots are computer programs that can carry on conversations with human users, often through an instant-message style interface. To put it another way: Sgt. Star is what happens when you take a traditional “FAQ” page and inject it with several million dollars worth of artificial intelligence upgrades.

Sgt. Star’s story dates back to the months after the 9/11 attacks, when the Army was experiencing a 40-percent year-over-year increase in traffic to the chatrooms on its website, goarmy.com.  By the time the U.S. invaded Iraq, analysts predicted that the annual cost to staff the live chatrooms would be as high as $4 million.

A cost-cutting solution presented itself in late 2003 in the form of an artificial intelligence program called ActiveAgent, developed by a Spokane, Washington-based tech firm called Next IT.  After years of trial runs and focus groups, the Army debuted Sgt. Star2 in 2006.

Technology and law scholars, such as Ryan Calo of the University of Washington School of Law and Ian Kerr of the University of Ottawa Faculty of Law, have warned of the threats to privacy posed by bots that combine social manipulation with mass data gathering. As Calo wrote of Sgt. Star in his paper, “Peering HALs: Making Sense of Artificial Intelligence and Privacy”:

As in the context of data mining, a computer equipped with artificial intelligence is capable of engaging thousands of individuals simultaneously, twenty-four hours a day.  But here the agent is able to leverage the power of computers to persuade via carefully orchestrated social tactics known to elicit responses in humans.  In an age of national security and targeted advertising, citizen and consumer information is at an all time premium. Techniques of AI and HCI [Human-Computer Interaction] create the opportunity for institutions to leverage the human tendency to anthropomorphise and other advantages computers hold over humans (ubiquity, diligence, trust, memory, etc.) to facilitate an otherwise impracticable depth and breadth of data collection.

Through a FOIA request, we were able to quantify Sgt. Star’s reach. According to a spreadsheet provided by the Army, Sgt. Star does the work of 55 human recruiters.  Over the last five years, he has answered 10.5 million questions (94 percent of them correctly) in 2.8 million chat sessions. That means, on average, that Sgt. Star is engaging more than 1,550 people each day.

Sgt. Star is currently operated by the Army Marketing and Research Group, a division created in 2012 that operates out of the Pentagon. According to contracting documents, as of 2011, the Army had spent $5 million on the program, plus another couple of million in maintenance contracts3 in the years since.

The Army does not have a privacy policy specific to Sgt. Star, but officials say they are not tracking cookies or retaining personal information beyond IP address traffic to goarmy.com. But Sgt. Star is recording the questions he’s asked, which the Army acknowledged in an interview with TLDR are often more personal than what someone would ask a real recruiter (for example, a potential recruit might be afraid he’ll be seen as “soft” if he asks if he has to shower in front of other people or if there are jobs where he won’t have to be deployed).

The Army also revealed to TLDR that Sgt. Star is not so smart that he can increase his own knowledge base. Instead, the Army and Next IT hold monthly meetings to assess and adjust how Sgt. Star responds to questions.

While Sgt. Star may seem innocuous enough, he provides a window into how other agencies may use the technology for far more invasive purposes. 

Sgt. Star’s Family

You Asked: Are there other government chatbots?

SGT STAR: I'm here to help you navigate through goarmy.com and answer your questions about the Army. I'm not here to discuss politics.

Sgt. Star has a large family of “intelligent virtual assistants” at Next IT. His brother Spike assists incoming students at Gonzaga University. Ann helps health consumers at AETNA. Travelers interact (and flirt) with Jenn at Alaska Airlines, Alex at United Airlines, and Julie at Amtrak. Next IT’s newest addition is Alme, a healthcare AI designed to help physicians interface with patients. But so far, Sgt. Star is the only federal government chatbot acknowledged on Next IT’s website.

Secretly, however, Sgt. Star does have family at law enforcement and intelligence agencies.  According to an inadequately redacted document publicly available on the federal government’s contracting site, FBO.gov, Sgt. Star is built on technology developed for the FBI and CIA more than a decade ago to converse with suspects online. From the document:

LTC Robert Plummer, Director, USAREC PAE, while visiting the Pacific Northwest National Laboratories (PNNL) in late 2003, discovered an application developed by NextIt Corporation of Spokane, WA, that PNNL identified for the FBI AND CIA. The application used chat with an underlying AI component that replicated topical conversations. These agencies were using the application to engage PEDOPHILES AND TERRORISTS online, and it allowed a single agent to monitor 20-30 conversations concurrently.

The bolded text was redacted, but still legible in the document.  At this point we don’t know whether the CIA and FBI are still using these bots.4 That will likely take a much longer FOIA process and, considering the redaction, the agencies may not be willing to give up the information without a fight.  

Some food for thought: Sgt. Star engaged in almost 3 million conversations over the last five years, and those were people who actually wanted to talk to him. How many people could covert CIA and FBI bots converse with over 10 years? What happens to conversations that aren’t relevant to an investigation, and how do the agencies weed out the false positives, such as when a chatbot misinterprets a benign conversation as dangerous?

We also know that Next IT developed an artificial intelligence program called ActiveSentry5, specifically designed to identify and neutralize threats to data security within a computer network. According to promotional materials, ActiveSentry’s origin story is very similar to Sgt. Star’s:

Our solution, ActiveSentry, is evolved from a proven technology we developed that has been deployed by government agencies to monitor Internet Relay Chat (IRC) rooms for such suspicious behavior as potential threats to national security, child exploitation, and drug trafficking.

ActiveSentry is not a chatbot but a kind of patrol guard: it watches keystrokes and conversations, monitors network activity, and looks for employee misbehavior before springing into action. Here’s how Next IT’s founder and CEO described its investigative and evidence-gathering capabilities in a presentation at a homeland security technology showcase:

If bad behavior occurs, ActiveSentry can respond with real-time proactive steps to block the action. It might shut down the computer, shut down the application, alarm a security officer, or give the user a warning. If the security breach is sufficient, ActiveSentry creates a detailed record of everything the user does, stores all the action with date/time stamps, and secures the data for foolproof evidentiary material.

So far, we are unaware of any criminal cases where information gathered by a Next IT bot has been used as evidence. If you do know of such a case, please get in touch with us.

Is Sgt. Star Alive?

You Asked: Are you alive?

SGT STAR: I am a dynamic, intelligent self-service virtual guide, created by the Army, that will help you quickly navigate the GoArmy Web site and retrieve Army-related subject matter, while simultaneously answering questions and directing communication based on interaction with the user.

Generations upon generations of folklorists, philosophers, futurists and science fiction authors have asked the question: at what point does a machine become a living being?

Sorry, we can’t settle that debate. That said, the Army’s FOIA office did use some rather strange language in describing one piece of Sgt. Star’s programming.

In our request, we sought his output script (every possible response in his database) as it stands now, but also for each year since he went live. That way, we could compare how his answers have evolved and grown through military policy changes, such as the end of Don’t Ask Don’t Tell and the ban on women in combat. The Army gave us the 835 current responses, but could not give us the rest. Apparently, the historical scripts don’t exist because the script is a “living body.” The Army’s exact words in describing the spreadsheet of 835 responses:  

This collection is the current running content that appears on the site and represents a “living body” of content that reflects the additions and updates to answers that have evolved since the introduction of the Sgt. Star feature.

Do they mean, “living” like the Constitution is said to be a living document? Or are they using it in the sense of Johnny 5 from Short Circuit?  We plan to file a FOIA appeal to learn more. Likely what they mean is that they don’t maintain older versions of Sgt. Star’s script because they constantly update a single file. If that’s the case, then that indicates poor record-keeping by the Army. If the FBI treats their bots in the same way, that would raise serious questions about the ability of defendants to challenge the reliability of a bot if they are charged with a crime after the bot’s programming has changed.

The Army also refused to provide us with Sgt. Star’s input patterns, i.e. all the phrases and keywords that he is programmed to recognize, claiming that this information is Next IT’s proprietary material and therefore exempt from FOIA. We will address this in our FOIA appeal; we are not seeking Sgt. Star's underlying off-the-shelf ActiveAgent code, but we would like to see the customization details that give Sgt. Star his distinct Army flavor. It's also worth knowing what Sgt. Star is programmed to flag, even if he’s not programmed to actually respond to it.

Everything Sgt. Star Knows

You can view all of Sgt. Star’s current responses here. It's 288 pages.

Sgt. Star has a seemingly exhaustive supply of answers to questions about military service, from opportunities for dentists and veterinarians to whether soldiers are allowed to use umbrellas (only women and under certain conditions). He also has answers that simply exist to deepen his personality, such as his music and film preferences, and information about his Rottweiler, "Chomp." He will also deliver rather in-depth, scientific answers to throwaway questions, including "why is the sky blue?" and "why is grass green?"

For all his character quirks, a user would never mistake Sgt. Star for human—that's just not how he was designed. That can’t necessarily be said for other government bots. Military, law enforcement and intelligence agencies have employed virtual people capable of interacting with and surveilling the public on a massive scale, and every answer raises many, many more questions.

You Asked: Until next time.

SGT STAR: Goodbye.

  • 1. Special thanks to Alex Goldman of On The Media for following up with the Army, and journalist Jason Leopold for firing off a supporting FOIA request.
  • 2. Technically, he is SGT STAR, with STAR as an acronym for “Strong, Trained And Ready,” which Sgt. Star says all soldiers must be before being assigned to a unit or deployed. However, in the development stages, the program used the name “Sgt. Rock.”
  • 3. While the Army does work directly with Next IT, it has also contracted out maintenance to two defense contractors—Nakuuruq Solutions and Truestone Communications—both subsidiaries of a corporation owned by the Iñupiat people of Northwest Alaska.
  • 4. Next IT is no longer the only company offering pedophile-hunting chatbots. In 2004, a British programmer introduced a product he called “NetNannies.” Last year, Spanish researchers announced another AI, called Negobot.
  • 5. ActiveSentry is now marketed by Next IT's affiliate, NextSentry Corporation.
Files: sgt_star_answers_current_-_stateless.pdf, sgt_star_usage_data_-_chat_session_totals.pdf, foia_closing_letter.pdf, ja_redacted.pdf

Armenian Bill Threatens Online Anonymity

Wed, Apr 16 2014 17:05 -0400

In Armenia, online anonymity could be a luxury of the past if a bill that is currently before the Armenian parliament is passed.  The bill would make it illegal for media outlets to publish defamatory content by anonymous or fake sources.  Additionally, under this bill, sites that host libelous comments that are posted anonymously or under a pseudonym would be required to remove such content within 12 hours unless an author is identified.

Edmon Marukyan, one of the bill’s drafters, explained its goal: “You can remain incognito as much as you like. Write your posts, but if they end up in the media, then someone has to bear responsibility.” The bill was thus drafted to hold some party accountable whenever defamatory material is disseminated on public websites. However, targeting media outlets and holding them responsible for this type of commentary greatly infringes upon the right to freedom of expression and association. Marukyan believes that sites “bear responsibility” for users' comments, but said “the purpose of the bill was to clarify liability, not curb expression.” Unfortunately, the bill would most certainly curb expression, stifling the commentary of those who would no longer feel secure posting on a medium that requires them to reveal their true identities.

Holding a public electronic site liable for its users’ commentary is risky, as displayed in a legal analysis of the Armenian bill published in March 2014 by the Organization for Security and Co-operation in Europe (OSCE). The OSCE raises concerns with the bill, mainly criticizing it for its excessively broad scope, vague definitions, and general lack of clarity.  The OSCE proposes that Armenia, though not a member state of the European Union (and thus not legally bound to EU law), look to European law and other directives as a guide for determining whether the bill upholds the right to freedom of expression as outlined by the Universal Declaration of Human Rights.  Legislation that is noted in the OSCE’s legal analysis includes Directive 95/46/EC (Directive on Data Protection), “a reference text, at European level, on the protection of personal data."

Furthermore, the OSCE notes that since Armenia is a member state of the United Nations, it is obligated to uphold the civil and political rights of individuals outlined in the International Covenant on Civil and Political Rights (ICCPR)—an international treaty aimed at preserving the right to freedom of expression, amongst other liberties. Additionally, the legal analysis points to the International Principles on the Application of Human Rights to Communications Surveillance (the 13 Principles) as another guide for the Armenian parliament to use when determining whether or not the proposed bill is consistent with human rights law.  

The OSCE writes that if the bill is passed, it’s “likely to discourage Internet operators from carrying out business in the Republic of Armenia, since the risk of being charged with liability for defamation is apparently doomed to increase.” It would be devastating if online platforms where anonymous users could once post and exercise their basic human right to freedom of expression suddenly became inaccessible.

Stay tuned for updates on the bill and click here to read the Legal Analysis of Draft Amendments to the Civil Code of the Republic of Armenia in its entirety.

Related Issues: Free Speech, Anonymity

In the One-sided Foreign Intelligence Surveillance Court, It's Hard to Get The Whole Story

Wed, Apr 16 2014 14:06 -0400

While most courts in the United States are adversarial—each party presents its side and a jury, or occasionally a judge, makes a decision—in the Foreign Intelligence Surveillance Court (FISC), only the government presents its case to a judge. While typically two opposing sides work under public review to make sure all the facts are brought to light, in the FISC the system relies on a heightened duty of candor for the government. As is illustrated all too well by recent developments in our First Unitarian v. NSA case, this one-sided court system is fundamentally unfair.

In March, after we learned that the government intended to destroy records of Section 215 bulk collection relevant to our NSA cases, we filed for a temporary restraining order in the federal court in San Francisco. We also filed a motion to correct the record with the FISC, since it was a FISC order requiring the destruction of bulk metadata after five years that was at issue.

Following the emergency hearing on our motion, the San Francisco federal court ordered the government to preserve the evidence. On the same day that the federal court issued its order, the FISC issued its own strongly worded order in which it granted our motion and mandated the government to make a filing with the FISC explaining exactly why it had failed to notify the Court about relevant information regarding preservation orders in two related cases, Jewel and Shubert. This omission influenced the FISC's decision on the government's request for relief, and the FISC was not happy about it.

On April 2, the DOJ made its filing. The government's statements in this document deserve close attention because they illustrate in high-definition the failures of the FISC's one-sided system.

The response essentially says that in hindsight, it is clear to the government why the FISC would have wanted to know about the Jewel and Shubert orders. But the government's filings show that it unilaterally decided it was right about its interpretation of the legal theories in these cases. In so doing, it failed to live up to the heightened duty of candor present in ex parte proceedings by failing to inform the FISC that this was disputed. In essence, the government narrowly interpreted the causes of action in the Jewel complaint, excluding the Section 215 surveillance purportedly authorized by the FISC, and thereby narrowing the evidence it would preserve. By making a decision about what facts were relevant, the DOJ attorneys elevated themselves into the role of a judge.

The government apologized to the FISC for its omission, but it also continues to inaccurately portray the controversy over the legal theories in our cases. In fact, the DOJ uses this filing to again present its interpretation of the disagreement over the scope of the cases, failing to mention the various arguments we have made on that issue before Judge White in San Francisco. The DOJ calls our view "recently-expressed," attempting to create the impression that it had no idea that there was any controversy until 2014.  It neglects to mention that we wrote in a 2010 brief that the "government defendants' assertion that 'plaintiffs do not challenge surveillance authorized by the FISA Court' ... misconceives both plaintiffs' complaint and the role of the district court ...."

If this had been a normal court proceeding, each side would present their position in the most favorable light, and the judge would decide who is right. In the FISC, however, this balanced system breaks down. This one-sided system allows for no accountability except in the rare circumstance where the affected parties can raise the issue with the court. Indeed, in most cases, the arguments and the decision are kept secret, and no one can second-guess the government. 

This is why we continue to urge Congress to change the laws governing how FISC operates. At a minimum, significant court decisions must be made public, and a privacy advocate should be a part of the process. These improvements won't bring the same kind of balance that can come with an adversarial system, but could at least deliver a semblance of fairness to the process.

 

Related Issues: NSA Spying
Related Cases: Jewel v. NSA, First Unitarian Church of Los Angeles v. NSA

Dear Web Developers: EFF Needs Your Help

Tue, Apr 15 2014 20:32 -0400
Donate a Few Hours to Help Us Create a Free Software Backend for Contacting Congress, Make the World a Better Place for Digital Rights

UPDATE (4/16/14): We're lowering the thresholds for getting prizes; take a look below.

For years, EFF has been helping concerned technology users contact Congress. The EFF community stopped SOPA, we fought back privacy-invasive cybersecurity proposals, we are championing software patent reform, and now we’re demanding real NSA reform—not a fake fix.

Here's How To Jump In and Help

But we’re at an impasse. Our community has grown significantly in the last few years, and every day we’re confronted with more reasons that users need to be speaking to lawmakers. But no one has a good system for contacting Congress.

Right now, EFF pays a for-profit company using proprietary software so that our friends and members can stop Congress from enacting dumb laws that hurt the Internet.

This rubs us the wrong way. At EFF, we like to practice what we preach, but our third-party action center suffers from proprietary licensing and limited configurability. When we find bugs, we can’t always fix them ourselves or hack around the problem.

It shouldn’t be this way. We shouldn’t have to compromise our principles just so that our friends and members can speak out about important issues. We shouldn’t have to sacrifice security, customizability, or freedom when engaging in political activism.

 We can build something new. And better.

For the last few months, EFF and our partners at the Sunlight Foundation have been working on a way to revolutionize how everyday people contact Congress. The resource we're building with Sunlight is in the public domain, released under CC0, and makes it easy to contact members of Congress using online forms. The new action tool we're creating will be free software, so anyone can hack on and improve it. That means it will be customizable—the community can improve it and hold it to the high level of security that should be the standard for all infrastructure projects and tools for change. And it won’t just be for EFF: anybody can customize this system to contact Congress.

Thanks to our partners at Taskforce.is and the Sunlight Foundation, we’ve got a prototype of the new system ready.

Now, we need your help.

 Calling all techs.

We finished the basic backend for the new contacting Congress tool, but now we need tech volunteers to help us complete the project.

Here’s the challenge: Each member of Congress has a special form that their own constituents can use to contact them. Each form is different: some require a CAPTCHA, some require a title, some require you to choose a topic from a dropdown list. Our new action center will let you connect directly to these Congressional forms for your elected officials whenever you want to submit a letter about an issue you care about. However, we need to write code for each member of Congress’s unique form.

To that end, we need volunteers to conduct tests on the forms of each of the 500+ members of Congress. We created a simple bookmarklet that you can install in your browser, then visit our action center hub and test out different members of Congress. It’s easy to use, and it takes 4-10 minutes to test a Congressional form and make sure it works.

How many volunteers do you need?

We’re looking for between 10 and 30 people who can commit time to this project. We’re hoping to find several people who can work 4-5 hours on this, and then we’re hoping for 10 people who will be willing to spend one or two days on this project.

How technical do I need to be?

You should be comfortable using Github, have basic programming proficiency in at least one language, and have a reasonable grasp of HTML and Javascript. Experience collaborating via IRC is handy, but not critical.

Do more. 

People contact EFF frequently with offers to help. I want to help you, they tell us.  I want to contribute more than just money. What can I do?

This is it. We really need this system to work so that our voices can be heard in the halls of Congress. And we can only be successful if folks like you (yes you) step up and donate a few hours to help us finish this off.

There’s no tool currently available that would do what we want to do using secure, free software. With a system like this in place, EFF’s efficacy in advocating for your rights can increase dramatically.

We can’t do this without the support and engagement of our best supporters. Want to get involved? Email rainey@eff.org.

It’s not hard and we’ll show you how.

We created these instructions (including video) on how to get started.

Most importantly, we’re available on IRC pretty much all the time. If you bump into problems, just let us know and we’ll try to troubleshoot. Find us on #opencongress on irc.freenode.net.

Ready to get involved? Send an email to rainey@eff.org if you want more information or are ready to get involved.

You can also check out the github repo: https://github.com/unitedstates/contact-congress/

We want to show you some love.

The main reason to take part in this is because you want to help EFF and the Sunlight Foundation, and you believe that the world is a better place when everyday people can contact Congress simply and easily.

Nonetheless, we want to shower you with mountains of amazing swag to thank you for your help. 

Here are the prize bundles for volunteers who make:

 15+ commits to the project on Github (lowered from 40+)

  • Our undying gratitude
  • An EFF hat

35+ commits to the project on Github (lowered from 150+)

  • Our undying gratitude
  • 1 year EFF membership -- for yourself, as a gift for a friend, or in memory of someone who inspired you.
  • An EFF hat
  • An EFF sticker pack
  • An EFF shirt

55+ commits to the project on Github (lowered from 300+):

  • Our undying gratitude
  • 1 year EFF membership -- for yourself, as a gift for a friend, or in memory of someone who inspired you.
  • The famous EFF NSA Hoodie
  • An EFF hat
  • An EFF sticker pack
  • An EFF shirt
  • Free entry to any EFF-hosted party (typically, this is our Pioneer Awards and our birthday party, both of which are in San Francisco. Note that the DefCon party is hosted for EFF by someone else, so we cannot guarantee entry to that.)
  • A public profile on the EFF website, under a soon-to-be-created ‘tech volunteers’ section.

We really need you. Please email  rainey@eff.org  to let us know if you can help out.

 



Is the SEC Obtaining Emails Without a Warrant?

Tue, Apr 15 2014 18:48 -0400

Updates to the email privacy law called the Electronic Communications Privacy Act (ECPA) are long overdue. It's common sense that emails and other online private messages (like Twitter direct messages) are protected by the Fourth Amendment. But for a long time, the Department of Justice (DOJ) argued ECPA allowed it to circumvent the Fourth Amendment and access much of your email without a warrant. Thankfully, last year it finally gave up on that stance.

But now it appears that the Securities and Exchange Commission (SEC), the civil agency in charge of protecting investors and ensuring orderly markets, may be doing the same exact thing: it is trying to use ECPA to force service providers to hand over email without a warrant, in direct violation of the Fourth Amendment.

EFF and the Digital Due Process Coalition, a diverse coalition of privacy advocates and major companies, are fighting hard to push a common sense reform to ECPA. The law, passed in the 1980s before the existence of webmail, has been used to argue that emails older than 180 days may be accessed without a warrant based on probable cause. Instead, the agencies send a mere subpoena, which means that the agency does not have to involve a judge or show that the emails will provide evidence of a crime.

Contrary to the position taken by the DOJ, the courts, the public at large, and EFF, the SEC asserted last week that it can obtain emails with simple subpoenas issued under ECPA. The Chair of the SEC, Mary Jo White, tried to reassure Rep. Kevin Yoder that the SEC's "built-in privacy protections" make this OK. Unfortunately, Chair White wouldn't explain what those "privacy protections" actually are. Rep. Yoder, the sponsor of HR 1852, The Email Privacy Act—a bill with over 200 cosponsors that updates ECPA—was rightfully dubious and tried, to no avail, to get the Chair to explain why the SEC thinks it can use ECPA to get around the Fourth Amendment.

Just because your emails are on your computer rather than printed on your desk doesn't mean they should have any less protection. Many other agencies disagree with the SEC's approach and recognize that the Fourth Amendment covers all private communications—whether paper or electronic. It's time for the SEC to update its practices so that it's in line with the courts, public opinion, and other agencies.

It's also time for the White House to send a clear message to all of its executive agencies. Remember, the SEC consists of five presidentially appointed commissioners. Since November, the White House has failed to respond to a White House Petition demanding ECPA reform. The White House must pronounce loud and clear that it supports HR 1852, The Email Privacy Act, and that government agencies like the SEC should not be using ECPA as a run-around to the Fourth Amendment. 

Many courts, including the Sixth Circuit in United States v. Warshak, have already ruled that emails and other private communications are protected by the Fourth Amendment. Congress, through members such as Senators Patrick Leahy and Ron Wyden and Representatives Kevin Yoder, Tom Graves, and Jared Polis, is pushing common sense reforms to ECPA like HR 1852, the Email Privacy Act. The bills are slowly making their way through Congress, but we can speed them up. Tell your Representative now to support HR 1852 so that we don't leave email privacy laws stuck in the 1980s.

 

Related Issues: Privacy

Tea Party, Taxes and Why the Original Patriots Would’ve Revolted Against the Surveillance State

Tue, Apr 15 2014 14:02 -0400

Let’s just imagine we could transport an Internet-connected laptop back to the 1790s, when the United States was in its infancy. The technology would no doubt knock the founders out of their buckle-top boots, but once the original patriots got over the initial shock and novelty (and clearing up Wikipedia controversies, hosting an AMA and boggling over Dogecoin), the sense of marvel would give way to alarm as they realized how electronic communications could be exploited by a tyrant, such as the one from which they just freed themselves.

As America’s first unofficial chief technologist, Benjamin Franklin would be the first to recognize the danger and take to trolling the message boards with his famous sentiment: Those who would trade liberty for safety deserve neither. (And he’d probably troll under a fake handle, using Tor, since the patriots understood that some truths are best told with anonymity.)

Today the Tea Party movement aspires to continue the legacy of the founders by championing the rights guaranteed by the Constitution and Bill of Rights. Never afraid of controversy, Tea Party activists and elected leaders are fighting against mass surveillance in the courts and in the halls of state legislatures and Congress.

Each year on April 15, Americans pay taxes that keep the government running. It’s a time for reflecting upon whether that money is funding a government for the people, or a government that is undermining the people, supposedly for their own good. After a watershed year of newly disclosed information about the National Security Agency, the Tea Party has plenty to protest about.

How the Founders Fought Mass Surveillance

Mass surveillance was not part of the original social contract—the terms of service, if you will—between Americans and their government. Untargeted surveillance is one reason we have an independent country today.

Under the Crown’s rule, English officials used writs of assistance to indiscriminately “enter and go into any house, shop cellar, warehouse, or room or other place and, in case of resistance, to break open doors, chests, trunks, and other package there” in order to find tax evaders. Early patriot writers, such as James Otis Jr. and John Dickinson, railed against these general warrants, and it was this issue, among other oppressive conditions, that inspired the Declaration of Independence and the Fourth Amendment.

James Madison drafted clear language guaranteeing the rights of Americans, and it bears reading again in full:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Centuries later, the principle still applies, whether we’re talking about emails or your mobile phone. As the Tea Party activists at FreedomWorks told us when we consulted them for this post: the Fourth Amendment does not stop at technology’s door.

(For a more in-depth historical review, check out former EFF legal intern David Snyder's essay, "The NSA's 'General Warrants': How the Founding Fathers Fought an 18th Century Version of the President's Illegal Domestic Spying.")

Tea Party vs. Big Brother

The Tea Party movement is closely associated with the right to bear arms, religious rights, and tax freedom. But, as Brian Brady, a prolific Tea Party activist in San Diego County whom we also consulted, said: the movement must embrace the Constitution as a whole. Threats to privacy, he says, are also threats to freedom of speech, religion and association. Property rights mean nothing if the government can search your home or computer without probable cause.

In other words, mass surveillance is a manifestation of big government.

Tea Party activists don’t shy away from confrontations that may put them at odds with other groups (particularly on the left), but no one can deny that on the subject of mass surveillance, the movement is on the frontlines protecting every American’s rights.

TechFreedom and gun-rights groups, such as the CalGuns Foundation and the Franklin Armory (named after Ben), have joined unlikely allies such as Greenpeace and People for the American Way to sue the NSA. Represented by EFF, the plaintiffs argue that collecting phone metadata (your number, who you called, when and for how long you spoke) chills the ability of these groups to associate freely, as guaranteed by the First Amendment as well as the Fourth Amendment. FreedomWorks and Sen. Rand Paul have also filed a class action lawsuit against the NSA on similar grounds.

Conservative attorney and founder of Judicial Watch Larry Klayman was the first plaintiff to challenge the program's unconstitutionality. So far, his lawsuit in Washington, D.C. has been successful. In December, the federal judge in the case wrote, “I cannot imagine a more ‘indiscriminate’ and ‘arbitrary invasion’ than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying it and analyzing it without judicial approval.” 

Tea Party-affiliated lawmakers have also been pushing back against mass surveillance with a variety of bipartisan legislative reforms; Rep. Justin Amash, for example, came within a few votes of cutting the NSA’s telephone metadata program funding with a budget amendment last July. State legislators who align with the Tea Party have also sponsored bills across the country condemning the NSA, from California State Sen. Joel Anderson’s successful resolution calling for an end to the call records program to Michigan Rep. Tom McMillin’s call for the Department of Justice to prosecute Director of National Intelligence James Clapper for misleading Congress.

Tax, Spend and Surveil

Reason magazine has an excellent essay about the IRS and privacy, outlining how the IRS obtains, scours and fails to secure personal data collected from taxpayers, while tax-reform advocate Grover Norquist wrote a worthwhile op-ed in The Daily Caller today about how the IRS exploits the outdated Electronic Communications Privacy Act.  But it’s also important to consider that the taxes the government collects ultimately fund the surveillance state. “No taxation without representation” was the rallying cry of the American revolution, and yet here we are today, with the NSA conducting surveillance without adequate checks and balances. Members of Congress complain that they haven’t been properly briefed on the NSA’s programs, and judicial approval of these programs is conducted by a secret court that only hears the government’s side of the story. On the local level, law enforcement agencies are adopting new surveillance technologies such as automatic license plate readers, facial recognition and Stingrays with little public input or other oversight.

On the whole, maintaining the mass surveillance state is expensive. There are 17 (that’s right, 17) different federal agencies that are part of the “intelligence community,” each of them involved in various, interconnected forms of surveillance. Some would say there is little concrete evidence of how it has made us safer, but there’s plenty of concrete evidence of how much it has cost. The bottom line? We’re paying the government to unreasonably intrude on our lives. The budget for intelligence in 2013 was $52.6 billion. Of that, $10.8 billion went to the NSA. That’s approximately $167 per person in the United States.

For a prime example of the wasteful spending, one need only read Sen. Tom Coburn’s report, “Safety at Any Price,” which outlined the inappropriate spending done under the Department of Homeland Security’s grant program (such as paying for “first responders to attend a HALO Counterterrorism Summit at a California island spa resort featuring a simulated zombie apocalypse”). This followed on the heels of a harsh bipartisan Senate report criticizing the extreme waste at fusion centers around the country. Federal funds were used to purchase big-screen TVs, decked-out SUVs, and miniature cameras. To make matters worse, the report found that fusion centers violated civil liberties and produced little information of any use.

Mass surveillance is a symptom of uncontrolled government overreach. The question is what’s the cure?

Defending Privacy is a Patriotic Duty

While every single person has cause to be alarmed by surveillance, those who criticize government policies have particular reason to be concerned. Those who have new, or not yet popular ideas (or, in the case of the Tea Party, old and popular ideas in resurgence) are often targets of overreaching surveillance. It’s not a partisan issue; it’s a constitutional issue.

Activism is most effective when it happens at the personal, local and national levels, and the Tea Party has proven it knows how to make a ruckus, whether it’s on a personal blog or outside the White House. America needs the Tea Party to keep applying that patriotic passion to NSA reform.

We have also just created a new collection of resources for grassroots activists, including tips on how to organize public events and use the media to spread the word about your issues, as well as a collection of one-page informational sheets that make it easy to explain these issues. And above all, speak out. Help us stop bills that attempt to legalize mass surveillance and join us in demanding real reform.

Stopping mass surveillance—it’s what the first patriots did, and it’s what today’s patriots are doing right now.

 

Related Issues: Privacy, NSA Spying

EFF Supports CafePress Safe Harbor Claim

Tue, Apr 15 2014 10:40 -0400

After seven years of litigation, the basic contours of the Digital Millennium Copyright Act (DMCA) safe harbors should be pretty well established. Unfortunately, a new front may have opened up in a case called Gardner v. CafePress, thanks to a mistaken and dangerous misreading of Section 512.

With the invaluable assistance of Venkat Balasubramani, EFF, joined by the Center for Democracy and Technology, the Computer & Communications Industry Association, and Public Knowledge, has filed an amicus brief in that case. In our brief, we explain our deep concerns about how that recent ruling could have profound consequences for user-generated content sites.

CafePress is a platform that allows users to set up online shops to sell custom physical goods like clothing and stationery. The lawsuit was filed by photographer Steven Gardner, whose wildlife images were included on a user's sales page. CafePress had asked the court to resolve the case as a matter of law (also called summary judgment) because it believed it was clearly protected by the DMCA's safe harbors. The court denied that request, concluding that it could not be sure that CafePress was protected by the DMCA.

Our brief explains why that was a dangerous decision for online speech and innovation.  We focus on two issues in particular: (1) the court’s interpretation of the term “service provider”; and (2) the court’s suggestion that image metadata might qualify as a “standard technical measure” under the DMCA—which would mean CafePress's automated stripping of metadata from photos would jeopardize the availability of safe harbor protections. The court could have resolved these arguments in CafePress’s favor as a matter of law. By forcing the parties to go to trial on these issues, the court may undermine the purpose of the DMCA safe harbors.

On the first point, it appears that the court conflated CafePress’s online and offline activities as a website and as a producer of physical goods, and adopted a cramped definition of “service provider” that has long since been rejected by numerous courts.

On the second point, the court clearly misunderstood the definition of a “standard technical measure.” This point is pretty technical, but it has serious implications because service providers are required to comply with “standard technical measures” in order to enjoy the legal protections of the DMCA safe harbors.

A standard technical measure, in the sense of DMCA § 512(i) is one that is “used by copyright owners to identify or protect copyrighted works” and “has been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process;” is “available to any person on reasonable and nondiscriminatory terms;” and does not “impose substantial costs on service providers or substantial burdens on their systems or networks.”

However, no broad consensus has ever emerged as to any such measure, with respect to metadata or any other technical artifact. In fact, with respect to metadata, industry practices show there is no such consensus: service providers commonly strip metadata from uploaded images. Without a consensus standard, there can be no "technical measure" that a website is required to honor.

And a good thing too. From our brief:

Casting doubt on the practice of removing metadata may also put users at risk. ... Stripping metadata from uploaded images helps protect users’ privacy and security, and should not be discouraged.

But even though there is no broad industry consensus to treat image metadata as a "standard technical measure" for copyright enforcement, the court seems to have made metadata removal a ticket to trial. That's bad news.

Heads up: this case has flown under the radar, but a wrong decision on these points could end up shrinking the effective contours of the DMCA safe harbors. Online service providers have a very strong incentive to stay inside those boundaries: the staggering quantity of user-generated content uploaded, combined with ridiculously large statutory damages and litigation costs, means that any ambiguity poses a serious risk.

Service providers need well-established legal safe harbors, because those safe harbors create the space within which new platforms can develop and thrive. That’s good for user speech, and good for online innovation. We hope the court agrees.

Files: cafepress_amicus_curiae_brief.pdf
Related Issues: Fair Use and Intellectual Property: Defending the Balance, DMCA

Prenda On Appeal: Copyright Troll Tactics Challenged in DC Circuit

Mon, Apr 14 2014 16:36 -0400

The DC Circuit Court of Appeals heard argument today in AF Holdings v. Does 1-1058, one of the few mass copyright cases to reach an appellate court, and the first to specifically raise the fundamental procedural problems that tilt the playing field firmly against the Doe Defendants. The appeal was brought by several internet service providers (Verizon, Comcast, AT&T and affiliates), with amicus support from EFF, the ACLU, the ACLU of the Nation's Capital, Public Citizen, and Public Knowledge. On the other side: notorious copyright troll Prenda Law.

Copyright trolls like Prenda want to be able to sue thousands of people at once in the same court – even if those defendants have no connection to the venue or each other. The troll asks the court to let it quickly collect hundreds of customer names from ISPs. It then shakes those people down for settlements. These Doe defendants have a strong incentive to pay nuisance settlements rather than travel to a distant forum to defend themselves. The copyright troll business model relies on this unbalanced playing field.

In this case, Prenda sued 1058 Does (anonymous defendants identified only by an IP address) in federal district court in the District of Columbia. It then issued subpoenas demanding that ISPs identify the names of these customers. The ISPs objected to this request, arguing that most of the IP addresses were associated with computers located outside of the court's jurisdiction. The ISPs and EFF also showed that Prenda could have used simple geolocation tools to determine the same thing. And we explained that joining together 1000+ subscribers in one lawsuit was fundamentally unfair and improper under the rules governing when defendants can be sued together (known as ‘joinder’).

Unfortunately, the district court did not agree, holding that any consideration of joinder and jurisdiction was "premature." In other words, the court can't consider whether the process is unfair unless and until a Doe comes to the court to raise the issue. By then, of course, it is too late; the subscribers will have already received threatening letters and, in many cases, be reluctant to take on the burden of defending themselves in a far away location.

We believe this ruling was fundamentally wrong. As we've said many times, plaintiffs have every right to go to court to enforce their rights. But they must play by the same litigation rules that everyone else has to follow. To get early discovery, plaintiffs must have a good-faith belief that jurisdiction and joinder are proper. Given the evidence presented to the district court, there is no way Prenda could have formed this good faith belief. So its demand for customer information should have been denied.

The ISPs appealed the district court’s troubling ruling. At the hearing today, the appellate court was particularly interested in the issue of joinder. The court seemed immediately skeptical of the notion of suing 1000 people at once, but wondered if it might be acceptable to join together 20 BitTorrent users who had joined the same swarm to acquire the same work. The ISPs and amici said generally no, because the plaintiff can't know whether a given Doe 1 acquired anything from a given Doe 2 – in other words, they aren't necessarily part of the same "transaction or occurrence." We analogized a BitTorrent swarm to a casino poker table: over the course of a weekend, a week, or a month, players may come and go, adding and subtracting from the pot, but the players on day one are unlikely to be related to the players on day four or day 30.

The ISPs and amici also stressed the issue of burden. While the ISPs were focused on the burden they faced in responding to the subpoenas, EFF directed the court's attention to the fundamental burden on the IP subscribers, noting that the subscribers identified as a result of a subpoena aren't necessarily going to be responsible for any unauthorized activity. An IP address, we explained, only tells you the name on the bill, not who is using the account. In this context, it is crucial that courts attend to the burden on the Does, as well as the ISPs.

The court had a number of questions regarding jurisdiction, and directed many of them to counsel for AF Holdings, Paul Duffy. At root, the court seemed to want to know why AF Holdings had not used geolocation tools to help determine where its targets might be located, and why it had not dropped its effort to pursue many of them when the ISPs explained that the Does just weren't in the court's jurisdiction. Finally, the court had some questions about AF Holdings' litigation tactics, including the shenanigans that have been widely reported elsewhere.

It is difficult to predict how a court will rule based only on a hearing. But we are encouraged that the judges asked important and thoughtful questions, and clearly understood both the context and implications of their decision. Many district courts have now concluded that the copyright troll business model is fundamentally unfair, and have taken steps to ensure the judicial process is not abused to foster a shakedown scheme. Let's hope they will soon be joined by the DC Circuit Court of Appeals.

Related Issues: Fair Use and Intellectual Property: Defending the Balance; Copyright Trolls | Related Cases: AF Holdings v. Does

FBI Plans to Have 52 Million Photos in its NGI Face Recognition Database by Next Year

Mon, Apr 14 2014 11:37 -0400

New documents released by the FBI show that the Bureau is well on its way toward its goal of a fully operational face recognition database by this summer.

EFF received these records in response to our Freedom of Information Act lawsuit for information on Next Generation Identification (NGI)—the FBI’s massive biometric database that may hold records on as much as one third of the U.S. population. The facial recognition component of this database poses real threats to privacy for all Americans.

What is NGI?

NGI builds on the FBI’s legacy fingerprint database—which already contains well over 100 million individual records—and has been designed to include multiple forms of biometric data, including palm prints and iris scans in addition to fingerprints and face recognition data. NGI combines all these forms of data in each individual’s file, linking them to personal and biographic data like name, home address, ID number, immigration status, age, race, etc. This immense database is shared with other federal agencies and with the approximately 18,000 tribal, state and local law enforcement agencies across the United States.

The records we received show that the face recognition component of NGI may include as many as 52 million face images by 2015. By 2012, NGI already contained 13.6 million images representing between 7 and 8 million individuals, and by the middle of 2013, the size of the database had increased to 16 million images. The new records reveal that the database will be capable of processing 55,000 direct photo enrollments daily and of conducting tens of thousands of searches every day.

NGI Will Include Non-Criminal as well as Criminal Photos

One of our biggest concerns about NGI has been the fact that it will include non-criminal as well as criminal face images. We now know that the FBI projects that by 2015, the database will include 4.3 million images taken for non-criminal purposes.

Currently, if you apply for any type of job that requires fingerprinting or a background check, your prints are sent to and stored by the FBI in its civil print database. However, the FBI has never before collected a photograph along with those prints. This is changing with NGI. Now an employer could require you to provide a “mug shot” photo along with your fingerprints. If that’s the case, then the FBI will store both your face print and your fingerprints along with your biographic data.

Until now, the FBI has never linked the criminal and non-criminal fingerprint databases. This has meant that any search of the criminal print database (such as to identify a suspect or a latent print at a crime scene) would not touch the non-criminal database. This will also change with NGI. Now every record—whether criminal or non—will have a “Universal Control Number” (UCN), and every search will be run against all records in the database. This means that even if you have never been arrested for a crime, if your employer requires you to submit a photo as part of your background check, your face image could be searched—and you could be implicated as a criminal suspect—just by virtue of having that image in the non-criminal file.

Many States Are Already Participating in NGI

The records detail the many states and law enforcement agencies the FBI has already been working with to build out its database of images (see map below). By 2012, nearly half of U.S. states had at least expressed an interest in participating in the NGI pilot program, and several of those states had already shared their entire criminal mug shot database with the FBI. The FBI hopes to bring all states online with NGI by this year.

The FBI worked particularly closely with Oregon through a special project called “Face Report Card.” The goal of the project was to determine and provide feedback on the quality of the images that states already have in their databases. Through Face Report Card, examiners reviewed 14,408 of Oregon’s face images and found significant problems with image resolution, lighting, background and interference. Examiners also found that the median resolution of images was “well-below” the recommended resolution of .75 megapixels (in comparison, newer iPhone cameras are capable of 8 megapixel resolution).

FBI Disclaims Responsibility for Accuracy

At such a low resolution, it is hard to imagine that identification will be accurate.1 However, the FBI has disclaimed responsibility for accuracy, stating that “[t]he candidate list is an investigative lead not an identification.”

Because the system is designed to provide a ranked list of candidates, the FBI states NGI never actually makes a “positive identification,” and “therefore, there is no false positive rate.” In fact, the FBI only ensures that “the candidate will be returned in the top 50 candidates” 85 percent of the time “when the true candidate exists in the gallery.”

It is unclear what happens when the “true candidate” does not exist in the gallery—does NGI still return possible matches? Could those people then be subject to criminal investigation for no other reason than that a computer thought their face was mathematically similar to a suspect’s? This doesn’t seem to matter much to the FBI—the Bureau notes that because “this is an investigative search and caveats will be prevalent on the return detailing that the [non-FBI] agency is responsible for determining the identity of the subject, there should be NO legal issues.”

Nearly 1 Million Images Will Come from Unexplained Sources

One of the most curious things to come out of these records is the fact that NGI may include up to 1 million face images in two categories that are not explained anywhere in the documents. According to the FBI, by 2015, NGI may include:

  • 46 million criminal images
  • 4.3 million civil images
  • 215,000 images from the Repository for Individuals of Special Concern (RISC)
  • 750,000 images from a "Special Population Cognizant" (SPC) category
  • 215,000 images from "New Repositories"

However, the FBI does not define either the “Special Population Cognizant” database or the "new repositories" category. This is a problem because we do not know what rules govern these categories, where the data comes from, how the images are gathered, who has access to them, and whose privacy is impacted.

A 2007 FBI document available on the web describes SPC as “a service provided to Other Federal Organizations (OFOs), or other agencies with special needs by agreement with the FBI” and notes that “[t]hese SPC Files can be specific to a particular case or subject set (e.g., gang or terrorist related), or can be generic agency files consisting of employee records.” If these SPC files and the images in the "new repositories" category are assigned a Universal Control Number along with the rest of the NGI records, then these likely non-criminal records would also be subject to invasive criminal searches.

Government Contractor Responsible for NGI Has Built Some of the Largest Face Recognition Databases in the World

The company responsible for building NGI’s facial recognition component—MorphoTrust (formerly L-1 Identity Solutions)—is also the company that has built the face recognition systems used by approximately 35 state DMVs and many commercial businesses.2 MorphoTrust built and maintains the face recognition systems for the Department of State, which has the “largest facial recognition system deployed in the world” with more than 244 million records,3 and for the Department of Defense, which shares its records with the FBI.

The FBI failed to release records discussing whether MorphoTrust uses a standard (likely proprietary) algorithm for its face templates. If it does, it is quite possible that the face templates at each of these disparate agencies could be shared across agencies—raising again the issue that the photograph you thought you were taking just to get a passport or driver’s license is then searched every time the government is investigating a crime. The FBI seems to be leaning in this direction: an FBI employee email notes that the “best requirements for sending an image in the FR system” include “obtain[ing] DMV version of photo whenever possible.”

Why Should We Care About NGI?

There are several reasons to be concerned about this massive expansion of governmental face recognition data collection. First, as noted above, NGI will allow law enforcement at all levels to search non-criminal and criminal face records at the same time. This means you could become a suspect in a criminal case merely because you applied for a job that required you to submit a photo with your background check.

Second, the FBI and Congress have thus far failed to enact meaningful restrictions on what types of data can be submitted to the system, who can access the data, and how the data can be used. For example, although the FBI has said in these documents that it will not allow non-mug shot photos such as images from social networking sites to be saved to the system, there are no legal or even written FBI policy restrictions in place to prevent this from occurring. As we have stated before, the Privacy Impact Assessment for NGI’s face recognition component hasn’t been updated since 2008, well before the current database was even in development. It therefore cannot address all of the privacy issues NGI raises.

Finally, even though the FBI claims that its ranked candidate list prevents the problem of false positives (someone being falsely identified), this is not the case. A system that only purports to provide the true candidate in the top 50 candidates 85 percent of the time will return a lot of images of the wrong people. We know from researchers that the risk of false positives increases as the size of the dataset increases—and, at 52 million images, the FBI’s face recognition database is a very large dataset. This means that many people will be presented as suspects for crimes they didn’t commit. This is not how our system of justice was designed, and it is not a system Americans should tacitly consent to moving towards.

For more on our concerns about the increased role of face recognition in criminal and civil contexts, read Jennifer Lynch’s 2012 Senate Testimony. We will continue to monitor the FBI’s expansion of NGI.

Here are the documents:

FBI NGI Description of Face Recognition Program

FBI NGI Report Card on Oregon Face Recognition Program

FBI NGI Sample Memorandum of Understanding with States

FBI NGI Face Recognition Goals & Objectives

FBI NGI Information on Implementation

FBI Emails re. NGI Face Recognition Program

FBI Emails from Contractors re. NGI

FBI NGI 2011 Face Recognition Operational Prototype Plan

FBI NGI Document Discussing Technical Characteristics of Face Recognition Component

FBI NGI 2010 Face Recognition Trade Study Plan

FBI NGI Document on L-1's Commercial Face Recognition Product

  • 1. In fact, another document notes that “since the trend for the quality of data received by the customer is lower and lower quality, specific research and development plans for low quality submission accuracy improvement is highly desirable.”
  • 2. MorphoTrust’s parent company, Safran Morpho, describes itself as “[t]he world leader in biometric systems” and is largely responsible for implementing India’s Aadhaar project, which, ultimately, will collect biometric data from nearly 1.2 billion people.
  • 3. One could argue that Facebook’s is larger. Facebook states that its users have uploaded more than 250 billion photos. However, Facebook never performs face recognition searches on that entire 250 billion photo database.
Related Issues: Biometrics; Privacy; Transparency | Related Cases: FBI's Next Generation Identification Biometrics Database

404 Day Recap

Fri, Apr 11 2014 20:51 -0400

Friday, April 4th was 404 Day - a day meant to call attention to Internet censorship in public schools and libraries in the United States. This censorship is the result of a well-meaning but misguided law, the Children's Internet Protection Act (CIPA), which ties federal funding for public schools and libraries to requirements to filter child pornography and content that is obscene or "harmful to minors." Unfortunately, bad and secretive filtering technology and over-aggressive filtering implementations result in the filtering of constitutionally-protected speech, among other problems.

The day centered around a digital teach-in for an in-depth discussion of the issues, featuring: Deborah Caldwell-Stone, Director of Intellectual Freedom at the American Library Association; Chris Peterson from MIT's Center for Civic Media and the National Coalition Against Censorship; and Sarah Houghton, blogger and Director of the San Rafael Public Library in Northern California.

[Embedded video: 404 Day digital teach-in, served from youtube-nocookie.com/embed/g_9sgZIVCJY]

They addressed such issues as the cost and efficacy of these filters, the lack of transparency around what is filtered, and how you can ask your librarian to turn them off. The video, above, is a fantastic resource for beginning to understand problems CIPA creates.

Concurrently, a discussion raged on Twitter around the hashtag #404day, as users, including Senator Ron Wyden, asked questions and shared their own experiences with filtering software in libraries and schools.

My mom was a librarian & #404Day rings loud & clear. Ham-handed Internet censorship in schools is a real problem. https://t.co/QcVv4zejjk

— Ron Wyden (@RonWyden) April 4, 2014

Many of those participating in the online discussion talked about the futility of filtering and how they had learned to circumvent filters at an early age, and pointed out that the filters disproportionately affect low-income communities and others who rely on public computer access.

When the screen reads “404 Error – Not Found” we need to recognize that one of the things which is not being found is the values of libraries.
-LibrarianShipwreck

Throughout the day, librarians, researchers, teachers, and even a student blogged about how CIPA hinders their work, stifles speech, and runs counter to the ideals of public libraries. From an explainer about the censorship reporting tool Herdict to the experiences of a researcher unable to access material she needed to the manifesto of a high school librarian who prefers trust and education to blocking, the posts illustrated the personal and social harms of censorship under CIPA.

We're thrilled about the discussion the day engendered and thankful to our partners at the National Coalition Against Censorship and the Center for Civic Media at MIT, the teach-in participants, and all those who joined in blogging or tweeting throughout the day. The next time you get a 404 error at the library, we hope you think about why it's there, ask your librarian whether it's because of filtering, and, if it is, ask to have the filtering turned off.


Australian Attorney General Picks Surveillance Over Fair Use on U.S. Visit

Fri, Apr 11 2014 18:46 -0400

"Australia is ready for, and needs, a fair use exception now." These were the unambiguous words of the Australian Law Reform Commission's report investigating how to modernize the country's copyright laws. Specifically, the Commission called for a fair use doctrine that resembles that of the U.S., with the same four-factor balancing test.

So then you might expect that when George Brandis, Australia's Attorney General, makes his first official trip to the United States—the one he concluded just days ago—he would take the opportunity to meet with American experts on fair use. They could discuss the areas where the law has proven flexible in accommodating unforeseen uses, how the balance between specificity and flexibility is continuously struck, and what he might hope to bring back to his home country.

You might be disappointed to learn, then, that despite the straightforwardness of the Commission's recommendation, Brandis has pointedly refused to explore the idea of fair use in Australia. Though he received the Commission's report in November, he waited until February to publish it—and even then, only alongside his own misguided proposal: that Australia should establish a three-strikes-style graduated response program.

Along those lines, instead of meeting with copyright scholars and fair use experts on this week's trip to Washington, DC, Brandis met with the executive director of the Center for Copyright Information—the organization behind the U.S. graduated response system known as "Six Strikes," or the Copyright Alert System.

In terms of evidence-based policy making, this is a failure. For one thing, he needn't come all the way to the U.S. to find out how graduated response programs work (or don't). Australian copyright scholar Rebecca Giblin has conducted an exhaustive study on the effect of these programs and found "remarkably little evidence" that they were effective in reducing infringement, increasing legitimate markets, or improving access to knowledge and culture.

But more broadly, the fact that the rest of Brandis's agenda consisted of meetings with senior officials at intelligence agencies like the NSA, FBI, and CIA raises major red flags for user privacy. And indeed, politicians in Australia have recently re-introduced mandatory data retention proposals for Internet service providers, after similar proposals suffered defeat just last year. Perhaps unsurprisingly, these proposals have the backing of Attorney General Brandis, who has repeatedly defended NSA spying during parliamentary question time.

Brandis may consider increased surveillance to be a two-for-one special: take some visible action to look strong on national security and at the same time appease the legacy content industries that want to make Internet companies snoop on their users.

Australians should demand better. Reforms that would empower users, like fair use, merit serious consideration. An obsessive devotion to mass surveillance at home and abroad does not.

Related Issues: Fair Use and Intellectual Property: Defending the Balance; The "Six Strikes" Copyright Surveillance Machine; International

EFF Urges Appeals Court to Reconsider Dangerous Copyright Ruling

Fri, Apr 11 2014 15:52 -0400
Decision About “Innocence of Muslims” Video Could Be Disastrous for Free Speech

San Francisco - The Electronic Frontier Foundation (EFF) is urging a federal appeals court to reconsider its decision to order Google to take down the controversial "Innocence of Muslims" video while a copyright lawsuit—based on a claim that the Copyright Office itself has rejected—is pending. As EFF explains, the decision sets a dangerous precedent that could have disastrous consequences for free speech.

"Innocence of Muslims" sparked protests worldwide in the fall of 2012. For a time, its anti-Islamic content was even linked to the violent attack on an American diplomatic compound in Benghazi, Libya, although that was later refuted. An actress named Cindy Lee Garcia, after being tricked into appearing in the film for just five seconds, claimed she held a copyright in that performance. She sued Google for copyright infringement and asked the court to order Google to take the video offline. The district court refused, noting that it could not restrain speech massed on nothing more than a highly debatable copyright claim. On appeal, a three-judge panel of the United States Court of Appeals for the Ninth Circuit agreed that the copyright claim was not strong, but nonetheless ordered Google to take down all copies of the video. It even issued a gag order, preventing Google from talking about the controversial decision for a full week.

"This video is a matter of extreme public concern–the center of a roiling, global debate," EFF Intellectual Property Director Corynne McSherry said. "The injunction in place now means we can still talk about the video–but we can't see what we are actually talking about. While the injunction stretched the First Amendment beyond its intent, the gag order snapped it in half. It delayed the public and the press from discovering this unprecedented copyright decision, and prevented others from challenging the ruling."

In an amicus brief filed today, EFF argues that the full appeals court must reconsider the earlier decision in order to protect free speech in the debate over the film and also to safeguard the future of free expression online.

"This decision means that any number of creative contributors–from actors to makeup artists to set designers–could be entitled to royalties and even control over the distribution of works they were paid to contribute to," said EFF Staff Attorney Nate Cardozo. "Such a rule would stifle creative expression for big studios and amateur filmmakers alike. While we can understand Garcia's desire to distance herself from this film, copyright law is not designed to address the harm she suffered by suppressing the global debate on a matter of public concern."

The American Civil Liberties Union, Public Knowledge, the Center for Democracy and Technology, New Media Rights, the American Library Association, the Association of College and Research Libraries, and the Association of Research Libraries joined EFF in this brief.

For the full amicus brief:

https://www.eff.org/document/garcia-v-google-amicus

For more on Garcia v. Google:

https://www.eff.org/cases/garcia-v-google-inc

Contacts:

Nate Cardozo
   Staff Attorney
   Electronic Frontier Foundation
   nate@eff.org

Corynne McSherry
   Intellectual Property Director
   Electronic Frontier Foundation
   corynne@eff.org



EFF to Present Oral Argument in Copyright 'Troll' Case

Fri, Apr 11 2014 15:21 -0400
Shake Down of BitTorrent Users Abuses Justice System

Washington, DC - The Electronic Frontier Foundation (EFF) will ask a federal appeals court at a hearing on Monday, April 14, to prevent a notorious copyright troll from obtaining the identities of more than 1,000 Internet users.

Speaking on behalf of EFF, the American Civil Liberties Union, the ACLU of the Nation's Capital, Public Citizen and Public Knowledge, EFF Intellectual Property Director Corynne McSherry will urge the Court of Appeals for the District of Columbia to reverse a district court decision that allowed the plaintiff to seek identifying information for thousands of "John Does" without complying with basic procedural rules.

The coalition of public interest groups filed an amicus brief in May 2013 in support of several Internet service providers that are resisting subpoenas for user records. Representatives for those providers will offer the principal argument. However, the court took the unusual step of allowing amici to appear and argue as well.

AF Holdings, the plaintiff in the case, is seeking the identities of individuals that it claims may have illegally downloaded a copyrighted adult film. The case is one of hundreds being pursued around the country that follow the same pattern, which judges have described as "essentially an extortion scheme." A copyright troll looks for IP addresses that may have been used to download films (usually adult films) via BitTorrent, files a single lawsuit against thousands of "John Doe" defendants based on those IP addresses, then seeks to subpoena the ISPs for the contact information of the account holders associated with those IP addresses. The troll then uses that information to contact the account holders and threatens expensive litigation if they do not settle promptly. Faced with the prospect of hiring an attorney and litigating the issue, often in a distant court, most subscribers—including those who may have done nothing wrong—will choose to settle rather than fight.

AF Holdings is linked to Prenda Law, a firm that is facing allegations that it used stolen identities and fictitious signatures on key legal documents and made other false statements to the courts. AF Holdings will have an opportunity to address the court but has so far not designated a representative for the hearing.

WHAT: Oral Argument in AF Holdings v. Does

WHO: Corynne McSherry, Intellectual Property Director, EFF

Benjamin Fox, Partner, Morrison & Foerster LLP, counsel for ISPs

WHERE: U.S. Court of Appeals for the District of Columbia Circuit

625 Indiana Ave NW, Washington, DC 20004

WHEN: Monday, April 14, 2014 9:30 A.M. EDT

For more information on our case, including the amicus brief: https://www.eff.org/cases/af-holdings-v-does

Contacts:

Corynne McSherry
   Intellectual Property Director
   Electronic Frontier Foundation
   corynne@eff.org



Appeals Court Overturns Andrew “weev” Auernheimer Conviction

Fri, Apr 11 2014 14:04 -0400
Important Decision Impacts Constitutional Rights in the Internet Age

San Francisco - A federal appeals court overturned the conviction of Andrew "weev" Auernheimer, the computer researcher who was charged with violating the Computer Fraud and Abuse Act (CFAA) after he exposed a massive security flaw in AT&T's website.

Auernheimer was represented on appeal by the Electronic Frontier Foundation (EFF), Professor Orin Kerr of George Washington University, and attorneys Marcia Hofmann and Tor Ekeland. In an opinion issued this morning by the U.S. Court of Appeals for the Third Circuit, Judge Michael Chagares wrote that the government should not have charged Auernheimer in New Jersey, which had no direct connection to AT&T or Auernheimer.

"We're thrilled that the Third Circuit reversed Mr. Auernheimer's conviction," EFF Staff Attorney Hanni Fakhoury said. "This prosecution presented real threats to security research. Hopefully this decision will reassure that community."

In 2010, Auernheimer's co-defendant, Daniel Spitler, discovered that AT&T had configured its servers to make the email addresses of iPad owners publicly available on the Internet. Spitler wrote a script and collected roughly 114,000 email addresses as a result of the security flaw. Auernheimer then distributed the list of email addresses to media organizations as proof of the vulnerability, ultimately forcing AT&T to acknowledge and fix the security problem.

Federal prosecutors charged Auernheimer and Spitler with identity theft and conspiracy to violate the CFAA in New Jersey federal court. Spitler accepted a plea deal, while Auernheimer unsuccessfully fought the charges in a jury trial. Auernheimer began serving a 41-month prison sentence in March 2013.

On appeal, Auernheimer's defense team argued that accessing a publicly available website does not constitute unauthorized access to a computer under the CFAA. They also argued that Auernheimer should not have been charged in New Jersey. At the time they were obtaining email addresses, Auernheimer was in Arkansas, Spitler was in California and AT&T's servers were in Georgia and Texas.

The court agreed with Auernheimer that charging the case in New Jersey was improper, reversed his conviction, and ordered him released from prison. Although it did not directly address whether accessing information on a publicly available website violates the CFAA, the court suggested that there may have been no CFAA violation, since no code-based restrictions on access had been circumvented.

"Today's decision is important beyond weev's specific case," added Fakhoury. "The court made clear that the location of a criminal defendant remains an important constitutional limitation, even in today's Internet age."

For the opinion: https://www.eff.org/document/appellate-court-opinion

Contact:

Hanni Fakhoury
   Staff Attorney
   Electronic Frontier Foundation
   hanni@eff.org



Warrant Canary Frequently Asked Questions

Thu, Apr 10 2014 15:09 -0400

What is a warrant canary?

A warrant canary is a colloquial term for a regularly published statement that a service provider has not received legal process that it would be prohibited from saying it had received. Once a service provider does receive legal process, the speech prohibition goes into place, and the canary statement is removed.

Warrant canaries are often provided in conjunction with a transparency report, listing the legal process the service provider can publicly say it received over the course of a particular time period. The canary is a reference to the canaries once used to provide warnings in coal mines: the birds would become sick from carbon monoxide poisoning before the miners did, warning of the danger.

How might a warrant canary work in practice?

An ISP might issue a semi-annual transparency report, stating that it had not received any national security letters in the six-month period. NSLs come with a gag, which purports to prevent the recipient from saying it has received one. (While a federal court has ruled that the NSL gag is unconstitutional, that order is currently stayed pending the government’s appeal.) When the ISP issues a subsequent transparency report without that statement, the reader may infer from the silence that the ISP has now received an NSL.

Why would an ISP want to publish a warrant canary?

“Sunlight is said to be the best of disinfectants.” – Justice Louis D. Brandeis.

We are in a time of unprecedented public debate over the government’s powers to secretly obtain information about people. The revelations about the massive NSA bulk surveillance program have raised serious questions about whether these powers are necessary, legal and constitutional.  Secret surveillance violates not only the privacy interests of the account holder, but the speech interests of ISPs who wish to participate in these public debates.

Why should we care about publicizing secret legal process like national security letters?

As part of the reauthorization of the Patriot Act in 2006, Congress directed the DOJ Inspector General to investigate and report on the FBI’s use of NSLs. In three reports issued in 2007, 2008, and 2010, the IG documented the agency’s systematic and extensive misuse of NSLs.

The reports showed that between 2003 and 2006, the FBI’s intelligence violations included improperly authorized NSLs, factual misstatements in the NSLs, improper requests under NSL statutes, and unauthorized information collection through NSLs. The FBI’s improper practices included requests for information based on First Amendment protected activity.

In December 2013, the President’s Review Group on Intelligence and Communications Technologies recommended public reporting—both by the government and NSL recipients—of the number of requests made, the type of information produced, and the number of individuals whose records have been requested.

As discussed below, NSLs are just one type of gagged legal process.  Similar problems persist in other forms of secret process.

Is it legal to publish a warrant canary?

There is no law that prohibits a service provider from reporting all the legal processes that it has not received. The gag order only attaches after the ISP has been served with the gagged legal process. Nor is publishing a warrant canary an obstruction of justice, since the intent is not to harm the judicial process, but rather to engage in a public conversation about the extent of government investigatory powers.

What are some of the gagged legal processes that an ISP might receive?

An ISP may be gagged from stating it has received any one of several types of national security letters, orders from the Foreign Intelligence Surveillance Court (like the Section 215 orders used for the bulk call records program and the Section 702 orders used for the NSA’s PRISM program), or even an ordinary subpoena when accompanied by a gag order pursuant to the Electronic Communications Privacy Act. The government has issued hundreds of thousands of these gagged legal requests, but very few have ever seen the light of day.

What does the government say is permissible for recipients of gagged legal process?

The government allows ISPs to report receipt of gagged legal process in ranges of 1000, starting at 0, for six-month periods.  So if an ISP received 654 NSLs, it could report 0-999.  If the companies choose to report FISC requests and NSL requests combined, they can use ranges of 250, again starting at 0.  For example, Apple reported receiving 0-249 national security requests in the first half of 2013 and AT&T reported 0-999 content FISC orders, 0-999 non-content FISC orders and 2000-2999 NSLs for the same period. 
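To make the arithmetic concrete, here is a minimal Python sketch (our own illustration, not anything the government prescribes) that maps an exact request count onto the government-approved reporting bands described above:

def reporting_band(count, width):
    """Return the government-approved band a request count falls into.

    Bands start at 0 and are `width` wide: 0-999, 1000-1999, ... when
    NSLs are reported alone (width=1000), or 0-249, 250-499, ... when
    FISC orders and NSLs are reported together (width=250).
    """
    low = (count // width) * width
    return "%d-%d" % (low, low + width - 1)

# The example from the text: an ISP that received 654 NSLs reports "0-999".
print(reporting_band(654, 1000))   # -> 0-999
# A hypothetical exact count of 2,500 NSLs would fall in the 2000-2999 band
# that AT&T reported for the first half of 2013.
print(reporting_band(2500, 1000))  # -> 2000-2999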

While the government-approved ranges all start at zero, publication of a range indicates that the ISP has received at least one, as otherwise the ISP would have no obligation to follow the government’s formula. 

In contrast to the government-approved ranges, warrant canaries can be much more specific, making it easier to determine what sort of legal process an ISP has been served with.

What’s the legal theory behind warrant canaries?

The First Amendment protects against compelled speech. For example, a court held that the New Hampshire state government could not require its citizens to have “Live Free or Die” on their license plates. While the government may be able to compel silence through a gag order, it may not be able to compel an ISP to lie by falsely stating that it has not received legal process when in fact it has.

Have courts upheld compelled speech?

Rarely.  In a few instances, the courts have upheld compelled speech in the commercial context, where the government shows that the compelled statements convey important truthful information to consumers.  For example, warnings on cigarette packs are a form of compelled commercial speech that have sometimes been upheld, and sometimes struck down, depending on whether the government shows there is a rational basis for the warning.

Have courts upheld compelled false speech?

No, and the cases on compelled speech have tended to rely on truth as a minimum requirement. For example, Planned Parenthood challenged a requirement that physicians tell patients seeking abortions of an increased risk of suicidal ideation. The court found that Planned Parenthood did not meet its burden of showing that the disclosure was untruthful, misleading, or not relevant to the patient’s decision to have an abortion.

Are there any cases upholding warrant canaries?

Not yet. EFF believes that warrant canaries are legal, and the government should not be able to compel a lie. To borrow a phrase from Winston Churchill, no one can guarantee success in litigation, but only deserve it.

What should an ISP do if the warrant canary is triggered?

If an ISP with a warrant canary receives gagged legal process, it should obtain legal counsel and go to a court for a determination that it cannot be required to publish false information.  While some ISPs may be tempted to engage in civil disobedience, EFF believes that it is better to present the issue to a court, to help establish a precedent. If you run an ISP with a warrant canary and receive gagged legal process, contact info@eff.org if you would like help finding counsel.

How often should an ISP publish the warrant canary?

Various ISPs have published canaries on a wide range of schedules. To allow time to file a case and for the court to rule on the important legal questions, we suggest leaving at least a few months between the end of the period covered and publication of the transparency report.

Who has issued warrant canaries?

A number of service providers have issued warrant canaries, including:

  • Apple (“Apple has never received an order under Section 215 of the USA Patriot Act.”)
  • Espionageapp.com (“We have not placed any backdoors into our software and have not received any requests for doing so. Pay close attention to any modifications to the previous sentence, and verify the signature of this "watch zone" by viewing the page source. Our public GPG key can be found using this ID: A884B988”)
  • Lookout (“Furthermore, as of the date of this report, Lookout has not received a national security order and we have not been required by a FISA court to keep any secrets that are not in this transparency report.”)
  • MagusNet (picture of a warrant canary with the statement, “No Warrants. No Searches, No Seizures [sic] at Magus Net, LLC.”)
  • Pinterest. (“National security: 0”)
  • Rise Up (“We would like to clearly state that Riseup has never given any user information to any third party.”)
  • Rsync.net (“No warrants have ever been served to rsync.net, or rsync.net principals or employees. No searches or seizures of any kind have ever been performed on rsync.net assets . . . .”)
  • Tumblr (“As of the date of publication of this report, we have never received a National Security Letter, FISA order, or any other classified request for user information.”)
  • Vilain (“THE FBI HAS NOT BEEN HERE (watch very closely for the removal of this sign).”)
  • Wickr (“As of the date of this report, Wickr has not been required by a FISA request to keep any secrets that are not in this transparency report as part of a national security order.”)
Related Issues: Privacy; National Security Letters; NSA Spying; PATRIOT Act

EFF Asks Court To Allow Human Rights Case Against Cisco to Proceed

Thu, Apr 10 2014 13:54 -0400
Case Argues Cisco Built Surveillance Tools Specifically Designed to Help Chinese Authorities Target Falun Gong

EFF filed a request to submit an amicus brief today in the Federal District Court of the Northern District of California, urging the Court to let a case entitled Doe v. Cisco Systems go forward against Cisco for its role in contributing to human rights abuses against the Falun Gong religious minority in China. China's record of human rights abuses against the Falun Gong is notorious, including detention, torture, forced conversions, and even deaths. These violations have been well-documented by the U.N., the U.S. State Department, and many others around the world, including documentation of China's use of sophisticated surveillance technologies to facilitate this repression.

The central claim in the case is that Cisco purposefully customized its general purpose router technology to allow the Chinese government to identify, track, and detain Falun Gong members. Specifically, the case alleges that Cisco customized technology for anti-Falun Gong purposes including:

  • A library of carefully analyzed patterns of Falun Gong Internet activity (or “signatures”) that enable the Chinese government to uniquely identify Falun Gong Internet users;
  • Several log/alert systems that provide the Chinese government with real time monitoring and notification based on Falun Gong Internet traffic patterns;
  • Applications for storing data profiles on individual Falun Gong practitioners for use during interrogation and “forced conversion” (i.e., torture);
  • Applications for storing and sharing videos of “efficient forced conversions” for purposes of training security officers on successful methods;
  • Applications for categorizing individual Falun Gong practitioners by their likely susceptibility to different methods of “forced conversion”;
  • Highly advanced video and image analyzers that Cisco marketed as the “only product capable of recognizing over 90% of Falun Gong pictorial information;” and
  • A nationwide video surveillance system which enabled the Chinese government to identify and detain Falun Gong practitioners.

The suit also alleges that Cisco not only knew that its customizations would be used to repress the Falun Gong, but actively marketed, sold, and supported the technologies toward that purpose. In fact, the case arises in part from the publication several years ago of a presentation in which Cisco confirms that the Golden Shield is helpful to the Chinese government to “Combat Falun Gong Evil Religion and Other Hostilities.” It also alleges that these customizations were actually used to identify and detain the plaintiffs.

People around the world are increasingly concerned about the sale by Western companies of surveillance and other technologies used for repression. Over the past few years, EFF has tracked a pattern around the world (here, here and here) and has suggested "Know Your Customer" standards for technology companies who are selling technologies that can be used in human rights abuses to potentially repressive governments. Many have suggested increased export controls to combat the problem, but the Doe v. Cisco and EFF's Kidane v. Ethiopia cases show that there are other ways to address the very real problem of companies selling the tools of repression as well as the repression that results.

In its brief, EFF suggests a careful liability analysis, expressly noting in this case, and in another case against Cisco from last year, Du Daobin v. Cisco,1 that a tech company could not (and should not) be held accountable when governments misuse general-purpose products for nefarious purposes. Yet the allegations here are that Cisco has done far more than sell standard router technology and services to the Chinese authorities; they are that Cisco has specifically and intentionally customized its technologies and services in order to facilitate well-documented human rights violations against a religious minority. That should be sufficient to allow the case to proceed.

EFF legal intern Hilary Richardson greatly assisted in the writing of EFF's amicus brief. Thanks Hilary!

  • 1. The Du Daobin case was dismissed earlier this year and EFF noted the problems with that decision and urged the California court not to follow suit.
Files: cisco_amicus_brief.pdf | Related Issues: Export Controls; State Surveillance & Human Rights | Related Cases: Kidane v. Ethiopia

Wild at Heart: Were Intelligence Agencies Using Heartbleed in November 2013?

Thu, Apr 10 2014 08:00 -0400

Yesterday afternoon, Ars Technica published a story reporting two possible logs of Heartbleed attacks occurring in the wild, months before Monday's public disclosure of the vulnerability. If these reports are true, it would be very bad news, indicating that blackhats and/or intelligence agencies may have had a long window in which they knew about the vulnerability and could use it at their leisure.

In response to the story, EFF called for further evidence of Heartbleed attacks in the wild prior to Monday. The first thing we learned was that the SeaCat report was a possible false positive; the pattern in their logs looks like it could be caused by ErrataSec's masscan software, and indeed one of the source IPs was ErrataSec.

The second log seems much more troubling. We have spoken to Ars Technica's second source, Terrence Koeman, who reports finding, in ingress packet logs from November 2013, inbound packets that immediately follow the setup and termination of a normal handshake and contain another Client Hello message followed by the TCP payload bytes 18 03 02 00 03 01 40 00. These bytes are a TLS Heartbeat with contradictory length fields, and are the same as those in the widely circulated proof-of-concept exploit.
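For readers who want to see why those eight bytes are self-contradictory, here is a short Python sketch that unpacks them according to the standard TLS record and Heartbeat message layouts; it is a reading aid, not a detection tool:

import struct

# The eight payload bytes reported in the November 2013 logs.
suspect = b"\x18\x03\x02\x00\x03\x01\x40\x00"

# TLS record header: content type (1 byte), version (2 bytes), record length (2 bytes).
content_type, ver_major, ver_minor, record_len = struct.unpack("!BBBH", suspect[:5])
# Heartbeat message inside the record: type (1 byte), claimed payload length (2 bytes).
hb_type, claimed_len = struct.unpack("!BH", suspect[5:8])

print("content type: %d (24 = heartbeat)" % content_type)
print("TLS version: %d.%d (3.2 = TLS 1.1)" % (ver_major, ver_minor))
print("record length: %d bytes actually carried" % record_len)
print("heartbeat type: %d (1 = request)" % hb_type)
print("claimed payload length: %d bytes" % claimed_len)

# The contradiction that marks a Heartbleed probe: the record carries only
# 3 bytes (no real payload at all), yet it asks the peer to echo back 16,384 bytes.
assert record_len == 3 and claimed_len == 0x4000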

Koeman's logs had been stored on magnetic tape in a vault. The source IP addresses for the attack were 193.104.110.12 and 193.104.110.20. Interestingly, those two IP addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.

To reach a firmer conclusion about Heartbleed's history, it would be best for the networking community to try to replicate Koeman's findings. Any network operators who have extensive packet logs can check for malicious heartbeats, which most commonly have a TCP payload of 18 03 02 00 03 01 or 18 03 01 00 03 01 (or perhaps even 18 03 03 00 03 01). We urge any network operators who find this pattern to contact us.
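As one possible starting point, here is a rough Python sketch that scans raw TCP payload data for those byte sequences; the input file name is a placeholder, and dedicated capture-analysis tools will do a better and faster job on real packet logs:

SUSPECT_PREFIXES = [
    b"\x18\x03\x01\x00\x03\x01",  # TLS 1.0
    b"\x18\x03\x02\x00\x03\x01",  # TLS 1.1
    b"\x18\x03\x03\x00\x03\x01",  # TLS 1.2
]

def find_suspect_heartbeats(payload):
    """Return (offset, prefix) pairs wherever a suspect heartbeat prefix appears."""
    hits = []
    for prefix in SUSPECT_PREFIXES:
        offset = payload.find(prefix)
        while offset != -1:
            hits.append((offset, prefix))
            offset = payload.find(prefix, offset + 1)
    return sorted(hits)

# Example usage; "payloads.bin" is a placeholder for raw TCP payload bytes
# extracted from your own packet logs. Matches are only candidates for review.
with open("payloads.bin", "rb") as f:
    for offset, prefix in find_suspect_heartbeats(f.read()):
        print("possible malicious heartbeat at offset %d: %r" % (offset, prefix))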

Network operators might also keep an eye out for other interesting log entries from 193.104.110.* and the other IPs in the related botnet. Who knows what they might find?

A lot of the narratives around Heartbleed have viewed this bug through a worst-case lens, supposing that it might have been used for some time, and that there might be tricks to obtain private keys somewhat reliably with it. At least the first half of that scenario is starting to look likely.

Related Issues: Encrypting the Web; Security

The Bleeding Hearts Club: Heartbleed Recovery for System Administrators

Thu, Apr 10 2014 03:00 -0400

The Heartbleed SSL vulnerability presents significant concerns for users and major challenges for site operators. This article presents a series of steps server and site owners should carry out as soon as possible to help protect the public. We acknowledge that some steps might not be feasible, important, or even relevant for every site, so the steps are listed in order of both their importance and the sequence in which they should be carried out.

1. Update Your Servers

If you haven't yet, update any and all of your systems that use OpenSSL for TLS encrypted communications. This includes most web servers, load balancers, cache servers, mail servers, messaging and chat servers, VPN servers, and file servers, especially those running on Linux, Unix, BSD, Mac OS X, or Cygwin.

The vulnerable OpenSSL version numbers are 1.0.1 through 1.0.1f and 1.0.2-beta1. The flaw is fixed in OpenSSL 1.0.1g. However, some operating systems have introduced the fix to earlier branches of OpenSSL, and may instruct you to install packages with minimum versions such as 1.0.1e-2+deb7u5 (in the case of Debian GNU/Linux).
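If you are unsure what a machine is running, something like the following Python sketch can serve as a first pass; note that it only reads the upstream version string reported by the openssl binary, so it will wrongly flag distribution packages (such as Debian's patched 1.0.1e builds) that have already backported the fix:

import re
import subprocess

# Ask the local openssl binary for its version string,
# e.g. "OpenSSL 1.0.1e 11 Feb 2013".
banner = subprocess.check_output(["openssl", "version"]).decode("ascii", "replace")
parts = banner.split()
version = parts[1] if len(parts) > 1 else banner.strip()

# Upstream releases affected by Heartbleed: 1.0.1 through 1.0.1f, plus 1.0.2-beta1.
affected = bool(re.match(r"1\.0\.1([a-f]?)$", version)) or version == "1.0.2-beta1"

print("openssl reports version %s" % version)
if affected:
    print("This upstream version is in the affected range; check whether your "
          "distribution has already backported the fix before assuming the worst.")
else:
    print("Version string is outside the affected range; still verify your packages.")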

If your operating system has not yet released an updated package, download openssl-1.0.1g.tar.gz directly from https://www.openssl.org/source/ and follow the instructions in the INSTALL text file to compile the new version locally.

After installing a fixed version of OpenSSL, be sure to restart all services that depend on it. On your system this might include web and proxy servers such as apache, nginx, pound, and squid, caches such as memcached and redis, databases like mysql and postgres, and mail services like postfix, exim, and dovecot. When in doubt, reboot the entire server if possible.
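One way to double-check on Linux systems (assuming /proc is available, and typically run as root) is to look for processes that still have a deleted copy of libssl or libcrypto mapped in memory, which usually means they have not been restarted since the library was replaced. A rough Python sketch of that check, roughly equivalent to "lsof | grep DEL | grep ssl":

import glob

# Processes that still map a deleted libssl/libcrypto are still running the
# old, vulnerable library in memory and need to be restarted (or the host rebooted).
for maps_path in glob.glob("/proc/[0-9]*/maps"):
    pid = maps_path.split("/")[2]
    try:
        with open(maps_path) as f:
            lines = f.read().splitlines()
    except (IOError, OSError):  # process exited, or insufficient permissions
        continue
    for line in lines:
        if ("libssl" in line or "libcrypto" in line) and "(deleted)" in line:
            try:
                name = open("/proc/%s/comm" % pid).read().strip()
            except (IOError, OSError):
                name = "?"
            print("PID %s (%s) still maps old OpenSSL: %s" % (pid, name, line.split()[-2]))
            break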

If you manage systems with custom operating systems like switches and routers, you may need to ask your vendor for a patch directly.

If you haven't updated your systems yet, stop reading and do it now. If this is the only step you can carry out in your environment, you will still have done the most important thing by far.

2. Test Your Servers

It's important to verify that the hole has been closed, especially if you have multiple servers and services to stitch up. The bad news is that this vulnerability is relatively easy to exploit. The good news is that this means there are a few tools available to see if you're safe.

The SSL Server Test from Qualys SSL Labs will let you know if your web server remains vulnerable. If you have servers running on other ports to test, or STARTTLS mail servers, you can try the hb-test.py script. The hbcheck script can help you test an internal network using nmap. Finally, if you have a large number of hostnames to test, my hb-batch.py script might be helpful.

Please note these tests might not be completely reliable, and running them against servers you do not own might not always be considered polite.

3. Be Safer Next Time

This is the worst and biggest security flaw we've seen recently, but it won't be the last. Putting good practices into play for Heartbleed can help you prepare for anything else that might come down the pike next.

One of the strongest protections you can have against TLS vulnerabilities is Perfect Forward Secrecy. This is not simple to configure, and does not yet have global browser support. However, it is the encryption technology that provides the best defense against attacks with the potential to steal your private key and use it to decrypt your traffic.

You should also make sure you're practicing good password discipline. Use a password vault, use strong passwords, change them regularly, and don't reuse them.

Practice least authority for certificates, too. Just as you wouldn't give everyone root access to every server, you shouldn't give every server a certificate for *.example.com.

Finally, make sure you have a reliable (if not automated) process for providing all of your servers with security updates quickly. After all, the only thing worse than getting pwned by a zero-day vulnerability is getting pwned by a one-day.

4. Consider Rekeying Your Servers

One of the worst things about the Heartbleed vulnerability is that it makes it theoretically possible for an attacker to recover your server's private key. Fortunately, the probability of this being possible on a given server appears quite low. Unfortunately, we can't yet be completely sure if that's true.

Key theft is a terrible attack because it tends to be undetectable by you, the server operator. Worse still is the harsh truth that, unless all your connections are served with Perfect Forward Secrecy, this would allow such an attacker not only to decrypt any newly intercepted traffic but to decode records of past traffic. If you run a server that intelligence agencies are likely to attack, this is a serious problem.

That means you may wish to consider revoking and regenerating your existing SSL certificates using new keys. Doing so will protect against the possibility of passive traffic decryption (if you don't use PFS) and man-in-the-middle attacks with a stolen key.

Because private key compromise via Heartbleed currently appears to be quite rare, this may not need to be a priority except for high-value services (large or sensitive email and messaging systems, software distribution points, banks). Other services may not need to panic and rush to rekey quite so urgently. For most threat scenarios, adopting PFS provides greater overall protection than rekeying, so we will remind you to make PFS a priority.

The details of the rekeying process will vary depending on the Certificate Authority you use to generate certificates and/or manage domain names. Some will allow you to regenerate in one step. Some will require you to revoke the old certificate before requesting a new one.1 Most will have a prominent link in their control panels, and many will waive their normal fees right now.

If you are given the option during the certificate regeneration process, it's a good idea to create a .csr file (Certificate Signing Request) and private key locally on your server using the openssl command. It might seem strange to prefer trusting OpenSSL at the moment, but it's still a safer bet than trusting a third party with your private key right off the bat.
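If you go that route, one common recipe is the standard openssl req command; the sketch below simply wraps it from Python, and the domain and file names are placeholders to replace with your own:

import subprocess

# Placeholders: substitute your own domain and output file names.
domain = "www.example.com"
key_file = domain + ".key"
csr_file = domain + ".csr"

# Generate a new 2048-bit RSA private key and a matching CSR in one step.
# "-nodes" leaves the key unencrypted so services can restart unattended;
# lock down the file permissions accordingly.
subprocess.check_call([
    "openssl", "req", "-new", "-newkey", "rsa:2048", "-nodes",
    "-keyout", key_file, "-out", csr_file,
    "-subj", "/CN=" + domain,
])
print("Wrote %s and %s; submit the CSR to your CA and keep the key private."
      % (key_file, csr_file))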

5. Consider Changing Passwords

Unlike private key compromises, Heartbleed leakage of recently-used passwords from server processes linked to OpenSSL appears to have been quite common. Unfortunately, this could affect not just your operators and staff but your users.

This means you should perform a risk assessment and determine which categories of passwords on your servers and services may need immediate resets, user-reset-on-next-login, or advisories suggesting resets. Variables in the risk assessment include how quickly you were able to patch your servers after the vulnerability was publicly disclosed at around 17:30 UTC on 2014-04-07, the sensitivity and value of potentially accessible accounts, whether accounts had been used recently (meaning their passwords were in RAM), and the probability that random or specific people on the Internet might have found your servers to be interesting targets.

You should determine which passwords are of sufficient value to deserve precautionary resets, and perform these after the steps above, in order to offer the new passwords proper protection. (If you've decided to rekey because of a concern about private key exposure, that is another reason to change users' passwords.)

You should also consider changing CSRF and OAUTH authentication tokens, invalidating session cookies, and rotating authentication cookies. These steps can be performed independently of password changes and may be far less disruptive.
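The details are framework-specific, but the common pattern is to generate a fresh random secret for whatever signs your session and CSRF tokens, which invalidates everything signed with the old one. A generic sketch, with the file path as a placeholder:

import binascii
import os

# Generate a fresh 256-bit secret for whatever signs your session and CSRF
# tokens. Deploying it invalidates every token signed with the old secret,
# forcing clients to establish new sessions.
new_secret = binascii.hexlify(os.urandom(32)).decode("ascii")

# Placeholder path: wherever your application reads its signing secret from.
secret_path = "/etc/myapp/session_secret"
with open(secret_path, "w") as f:
    f.write(new_secret + "\n")
os.chmod(secret_path, 0o600)
print("Wrote a new signing secret to %s; restart the application to pick it up." % secret_path)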

6. Update Your Users

Your users have already heard of this scary Internet password thing, and chances are they're concerned about how it affects your site. Let them know what you've done, what you will do, and what the remaining risks are. Don't try to give them a false sense of security. Knowing that you're working on it and reaching out to them at all will work wonders.

7. Turn on Perfect Forward Secrecy

Because you skipped it in step three, didn't you? That's okay. There's still time. We'll wait.

Related Issues: Encrypting the Web; Security

EFF is Expanding into Student and Community Organizing, and We Need Your Help

Wed, Apr 9 2014 23:30 -0400

Recent events have shown us more than ever that the technologies we use and create every day have astonishing implications for our basic, most cherished rights. Tens of thousands more people have joined us in the past year alone—together, we're building a movement. But we need your help.

Today, we at EFF are unveiling new tools for student and community activists to engage in campaigns to defend our digital rights.

We want you to bring the fight to protect online civil liberties to cities, towns, and campuses across the country. We invite you—whether you're a newly minted activist or an experienced community organizer—to join our growing team of driven individuals and organizations actively working to make sure that our rights are not left behind as we develop and adopt new technologies.

Interested? Join our mailing list for organizers today and check out our helpful resources.

I'm in. How can I help?

There are plenty of ways to take part, no matter how much organizing experience you have.

  • Start a group: Talk to friends and community members to gauge who else in your network is interested in digital freedom. Form a group that can discuss the issues and plan ways of advocating for your rights. For some tips on getting started, check out our guide on how to build a coalition on campus and in your community.
  • Bring digital rights to an existing group: These issues are everybody's issues, no matter where on the political spectrum you lie. You can work with existing political, civil liberties, activist, and computer-related groups and urge members to take on a digital rights campaign.
  • Organize an event: We have plenty of suggestions for events you can throw, from film screenings to rallies, parties to speaker series.
  • Let your voice be heard: We are all part of the digital rights movement together, and your voice is as important as ours. Learn how to coordinate with local and national campaigns, and amplify your message by reading our tips on engaging with the press.

While many student groups and local community organizations are working on surveillance reform in light of the recent disclosures about massive government spying, it’s not only the NSA that we’re fighting: we’re demanding open access to publicly funded research; we’re fighting to protect the future of innovation from patent trolls; we’re urging companies and institutions to deploy encryption; we're defending the rights of coders and protecting the free speech rights of bloggers worldwide—the list goes on.

We can’t do this by ourselves. That’s why we’re building a trusted team of activists and organizers across the country to spread the word and build momentum for political reform and technical tools to protect our rights.

Road trip!

EFF is also hitting the road. We're traveling to cities and towns across the country to speak to student groups, meet with community organizers, and host local events to share and broaden our vision of an Internet grounded in creativity, community, and civil rights. In March and April, we’re visiting Boston, Cambridge, New York City, Ames, Des Moines, Washington, D.C., New Haven, and Middletown.

If you’re interested in having someone from EFF come to your event, class, or campus or community group to speak and help you all organize, send an email to april@eff.org and join our community organizers mailing list. Let us know what you’re up to, and we’ll let you know when we’re in your area.

Campus activism: All the cool people are doing it

Many activists, lawyers, and technologists will tell you that they got their start as students. That's why we're especially excited to work with students and professors.

You don’t have to be a lawyer or have a college degree to be a strong voice. There’s no prerequisite for setting up a meeting with your elected official, writing an op-ed, or growing a campus organization. All it takes is a vision for change. We’ve seen student activists and innovators drive reform by challenging poorly written policies and developing new technologies that bring us closer to our vision of a networked world that respects our rights and fosters creativity.

Not a student? No worries! If you're a member of a community that wants to engage more deeply in EFF's work, you can still join our organizers mailing list. There's so much to do at the community level, too. If you're concerned about local law enforcement surveillance hubs, the use of license plate readers, or domestic drones, or if you're part of a community of artists stifled by oppressive copyright policies, now is the time to raise awareness, build a coalition, and organize to defend our digital rights.

This is only the beginning. When we finally see meaningful reform of our broken intellectual property system and new bills passed that bring our national security programs back within the bounds of the Constitution—and we will—it won't be due to the effort of a few policy wonks and privacy enthusiasts or a handful of lawyers in Washington, D.C. It will be because millions of people across the world fought for change, demanded meaningful reform, started using privacy enhancing technology, and held their elected officials accountable. Together, we're going to make history.

We hope to see you out there, digital rights activists. Stay tuned. This is going to be huge.
