The bipartisan Innovation Act is the best bill yet when it comes to fighting patent trolls. This post is the third of a series explaining the bill's various provisions. While the Innovation Act won't fix every problem with the patent system, it includes a powerful set of proposed reforms that—taken together—will significantly reduce the threat of abusive patent trolls.
Join us in supporting the Innovation Act. Take action and contact your member of Congress now.
Ending Discovery Abuse
Patent trolls use the expense of litigation to pressure defendants to settle, even when the underlying claims are weak. One of the major pressure points is the extraordinary cost of discovery (especially the cost of locating, reviewing, and producing electronic documents like email messages). Patent trolls, who are often shell companies with few employees and documents, face a much lower discovery burden. The trolls know this. Some will even openly threaten to make litigation as expensive as possible in order to extort a payment.
In recent testimony to Congress (PDF), the General Counsel of SAS explained that in just one patent case his company was required to produce over 10,000,000 documents at a cost of over $1,500,000. Of these millions of documents, only 0.000183% appeared on the plaintiff's evidence list for use at trial. SAS ultimately won that case before trial on summary judgment. Yet it still had to bear the extraordinary discovery cost.
The Innovation Act deals with this problem in two ways. First, it delays most discovery until after the relevant patent claims have been interpreted by the court. (This is known as claim construction.) In many cases, claim construction quickly disposes of a case by establishing that the defendant does or does not infringe. Delaying most discovery until after this point will save many innocent defendants from huge and unnecessary expense.
Second, the Innovation Act limits discovery to "core documents." These are defined as those documents most likely to be relevant to the litigation (such as documents about how the accused products actually work). Plaintiffs that want additional discovery will have to pay for it themselves. This should stop patent trolls from using asymmetric discovery burdens as a litigation weapon. Taken together with the other reforms in the Innovation Act, this will make the patent troll business model much less attractive.
Ending discovery abuse is just one of the ways the Innovation Act stops patent trolls. Tell your members of Congress to support this much-needed reform.
After years of litigation, and a complete defeat in a New York district court, Viacom’s lawsuit against YouTube is back before the Second Circuit Court of Appeals. You’d think by now that Viacom, having lost battle after battle in this war against a version of YouTube that even Viacom admits hasn’t existed since 2008 (when YouTube launched its Content I.D. filtering program), would wise up and walk away. You’d be wrong.
Admittedly, there’s a fair amount of money at stake. Thanks to copyright’s irrational statutory damages provisions, even a partial win can mean a windfall for Viacom. But that’s not what this case is about. Instead, it's an effort by Viacom and its friends at the MPAA and the RIAA to get the courts to undermine the safe harbors of the Digital Millennium Copyright Act (DMCA).
A particularly dangerous piece of Viacom’s latest argument is its suggestion that YouTube "induced" infringement and, therefore, effectively loses the safe harbors. As we explain in an amicus brief filed today, Viacom gets it wrong in at least two ways.
First, “inducement” is just a particular species of secondary copyright infringement, and the DMCA safe harbor expressly provides protection for all types of secondary liability. Therefore, if a service provider has otherwise followed the DMCA rules (taking down material when it gets a proper DMCA notice, etc.), a content owner can’t use inducement to effectively strip the service provider of DMCA protections.
Second, Viacom gets the standard for inducement liability wrong, setting the bar much too low. According to Viacom, a bunch of, ahem, ill-advised internal emails, knowledge that the service could be used to infringe, and the business choice not to let some content owners use its filtering tools, taken together, amount to inducement.
Fortunately for the millions of users who rely on new and innovative services like YouTube, that is not enough. That’s because copyright inducement is not a ‘thought tort’ and because the bar against a finding of inducement is set particularly high where the service has substantial non-infringing uses. As we explain in our brief, Viacom has to show affirmative and public acts encouraging infringement, and link those acts to actual infringing activity.
The legal point may be a bit obscure for non-lawyers, but the stakes shouldn’t be. It seems likely that YouTube will win this battle. The company has done a good job of showing that it offers a basic and valuable service that is used for any number of lawful purposes, and has since the beginning. It’s also clear that YouTube has gone well beyond its DMCA obligations in policing infringement. We expect the Second Circuit, like the district court, will send Viacom packing. But we are worried about the many new platforms and services being developed right now by innovators who don't happen to have a lawyer monitoring their every communication and who are more interested in growing their business than making sure their services can’t be used to infringe copyright. Those innovators need to know that a few embarrassing emails won’t be enough to destroy their business.
That’s why we are urging the court not to endorse Viacom’s interpretation of inducement and the DMCA. Here’s hoping the court gets it right.

Files: viacom-youtube-second-appeal-amicus_brief_eff-pk-final-as-filed.pdf
Related Cases: Viacom v. YouTube
The bipartisan Innovation Act is the best bill yet when it comes to fighting patent trolls. This post is the second of a series explaining the bill's various provisions. While the Innovation Act won't fix every problem with the patent system, it includes a powerful set of proposed reforms that—taken together—will significantly reduce the threat of abusive patent trolls.
Join us in supporting the Innovation Act. Take action and contact your member of Congress now.

Fee Shifting
Imagine you're a startup facing a patent troll lawsuit. The odds are in your favor: the patent it is asserting is of poor quality, and its claims of infringement are spurious at best. Yet the projected legal fees of the lawsuit run into the millions of dollars. Resigned, you decide to settle.
Patent trolls leverage this dynamic to shake down businesses and individuals. The Innovation Act, however, features a fix: fee shifting. The bill allows courts to shift fees to winning parties, giving those facing suits added incentive to fight back. In other words, if a troll loses a lawsuit, it could be liable for covering the winning party's costs.
Since patent trolls are often shell companies with no assets other than the patents they assert, the bill includes language allowing the defendant to bring a "real party in interest" into the litigation—parties that financially benefit from the litigation. For example, This American Life focused on a shell company known as Oasis Research, which asserted patents sold to it by the larger troll Intellectual Ventures. Intellectual Ventures received 90% of Oasis' net profits, yet because it's a separate company, it was absolved of all of Oasis' sins. Not anymore: with this bill, Intellectual Ventures, as the troll's major financial beneficiary, could be joined to the lawsuit.
Fee shifting is a legislative solution that we have long supported as a fix for the explosion of patent troll litigation. We were pleased earlier this year to see bills like Reps. Peter DeFazio's and Jason Chaffetz' SHIELD Act, which offered a similar fee-shifting proposal that added a bond requirement. (Both DeFazio and Chaffetz have sponsored the Innovation Act.) More recently, Sen. Orrin Hatch (R-UT) introduced similar legislation in the Senate, the Patent Litigation Integrity Act. We strongly support that legislation.
Ultimately, fee shifting will give small defendants a chance to fight back against weak troll cases. At the same time, it will do no harm to a party who has a legitimate claim of infringement on a good quality patent. In other words, the only ones on the losing end of this fee-shifting provision are the trolls who rely on the outrageous expense of patent litigation to extort quick settlements.
Fee shifting is one of the strong solutions the Innovation Act has for the patent troll problem. Tell your members of Congress to support this much-needed reform.
Sen. Dianne Feinstein, the chairman of the Senate Intelligence Committee and one of the NSA’s biggest defenders, released what she calls an NSA “reform” bill today.
Don’t be fooled: the bill codifies some of the NSA’s worst practices, would be a huge setback for everyone’s privacy, and it would permanently entrench the NSA’s collection of every phone record held by U.S. telecoms. We urge members of Congress to oppose it.
We learned for the first time in June that the NSA secretly twisted and re-interpreted Section 215 of the Patriot Act six years ago to allow the agency to vacuum up every phone record in America—continuing an unconstitutional program that began in 2001. The new leaks about this mass surveillance program four months ago have led to a sea change in how Americans view privacy, and poll after poll has shown the public wants it to stop.
But instead of listening to her constituents, Sen. Feinstein put forth a bill designed to allow the NSA to monitor their calls. Sen. Feinstein wants the NSA to continue to collect the metadata of every phone call in the United States—that’s who you call, who calls you, the time and length of the conversation, and under the government’s interpretation, potentially your location—and store it for five years. This is not an NSA reform bill, it’s an NSA entrenchment bill.
Other parts of the bill claim to bring a modicum of transparency to small parts of the NSA by imposing some modest reporting requirements, such as disclosing how many times the NSA searches this database and keeping audit trails of who does the searching.
But its real goal seems to be to just paint a veneer of transparency over still deeply secret programs. It does nothing to stop NSA from weakening entire encryption systems, it does nothing to stop them from hacking into the communications links of Google and Yahoo’s data centers, and it does nothing to reform the PRISM Internet surveillance program.
Ironically, a bill that claims to bring transparency to the NSA was debated, discussed and modified by the Intelligence Committee today in secret.
The bill does make minor improvements in other areas, by explicitly allowing the FISA court to accept amicus briefs in certain circumstances (though it has already done so under existing authority), and by authorizing a report to Congress that will summarize significant FISA court opinions. Summarize, but not release.
Make no mistake: this is not an NSA reform bill at all. Instead, it codifies one of the NSA’s most controversial surveillance programs. We urge you to call your senators and ask them to oppose Sen. Feinstein’s disingenuous bill.
The bipartisan Innovation Act is the best bill yet when it comes to fighting patent trolls. This post is the first of a series explaining the bill's various provisions. While the Innovation Act won't fix every problem with the patent system, it includes a powerful set of proposed reforms that—taken together—will significantly reduce the threat of abusive patent trolls.
Join us in supporting the Innovation Act. Take action and contact your member of Congress now.
Under current law, patent owners can file bare-bones complaints. This means that trolls can—and often do—file suits without specifying what products they think infringe their patents or even which patent claims they are asserting. This leaves defendants guessing what the case is actually about. The basic details of the plaintiff's infringement allegations—assuming it can even articulate some—won't emerge until after expensive discovery. Rather than pay a fortune in legal fees to reach that stage, many defendants simply settle.
The Innovation Act fixes this problem by requiring plaintiffs, at the outset, to provide the basic details of their case. It requires that pleadings include: "each patent," "each claim" and, for each such claim, the products or services of the defendant that allegedly infringe. The patent owner must also allege "with detailed specificity" how the accused products allegedly infringe. (If any of this information is not accessible, the plaintiff can instead explain its efforts to uncover the information.)
Although many have been calling this a "heightened pleading" requirement, it is really no more than a sensible pleading requirement: what patent, what claims, what products, and how infringed. This should not be a burden for anyone with a legitimate case. Patent owners who can't provide these basic and simple details should not be dragging defendants into federal court in the first place.
For an illustration of the importance of heightened pleading, consider the case of Fark. As Drew Curtis explains in this TED talk, Fark Inc., along with many other Internet news companies, was sued by a patent troll in 2011. The troll owned a silly patent on emailing news releases (i.e. press releases)—something none of the defendants actually did. Despite the absurdity of the case, most of the defendants settled early to avoid expensive litigation. But Fark refused to pay. Finally, after months of litigation, the troll was required to provide an explanation (in the form of screen shots) showing how Fark.com supposedly infringed the patent. Since this was impossible, it promptly conceded defeat and withdrew its case. With heightened pleading, this case would likely never have been filed.
The Innovation Act's heightened pleading requirement will make it far more difficult for bottom-feeder patent trolls to launch nuisance suits. Together with the Innovation Act's other reforms, it will make life safer for small innovators and creators.
US lawmakers may soon introduce legislation to give the Trans-Pacific Partnership (TPP) a “fast track” through Congress. Senate Finance Committee leaders Sen. Max Baucus and Sen. Orrin Hatch have renewed their call to pass such fast-track legislation and hand over Congress' constitutional power to set the terms of US trade policy. Under fast track (also known as Trade Promotion Authority), lawmakers would be limited to an up-or-down vote, shirking their responsibility to hold proper hearings on the agreement's provisions.
President Obama's trade negotiator, then, gets more leeway to push for unfair provisions that could not withstand public scrutiny. In other words: free rein to finalize agreements like TPP and the upcoming EU-US trade agreement. It is not surprising that Sen. Baucus and Sen. Hatch are the ones leading the charge, given they've been supporters of TPP from the beginning and have loudly touted the importance of fast-track to get this undemocratic agreement passed.
That's dangerous, because the TPP has been negotiated in secret and includes a wide range of provisions that would negatively impact the Internet and our digital rights. Corporate advisors have had easy access to see and comment on draft text, while Congress and the public have had little to no ability to influence its provisions.
US Trade Representative Michael Froman admitted this week that he's recently been spending most of his time lobbying both Democratic and Republican Congress members to introduce and support a bill to authorize fast-track. Froman says that this authority would be “a manifestation of the partnership between Congress and the executive on trade policy and really gives Congress a very important and meaningful role in both the substance and the process of trade negotiations.” It's a strange thing to say when the opposite is true: it would actually give the Obama administration almost full control over trade deals.
Last week EFF and 13 other organizations—including Amnesty International, Demand Progress, Free Software Foundation, and Knowledge Ecology International—sent a letter to the heads of US Congressional committees explaining that it is vitally important that democratically elected representatives are given the opportunity to review and amend international agreements to ensure that users' rights are included. They should not ignore the public interest by rubber-stamping agreements negotiated in near total secrecy.
Fast track is the final step of locking the public out of trade negotiations. Congress, civil society, and the public at large should be consulted from the beginning over agreements like TPP. Compounding this problem, according to the Congressional Research Service, the US Trade Rep is negotiating TPP as if fast track authority is in place, acting as if it has the unilateral authority to further a one-sided agenda.
We need to demand that lawmakers oppose fast track, that they call for hearings, and that they exercise their authority to oversee the US trade office’s secret copyright agenda.
Just in time for Halloween, the Washington Post has brought us a horror story about U.S. and U.K. intelligence agencies reading massive amounts of private data directly off of the internal communications infrastructure of U.S. Internet giants Google and Yahoo.
The Post's report reveals that the spy agencies tapped into the internal, private fiber-optic links between the companies' data centers. This gave the spooks a view into corporate and customer data moving between data centers—data that the companies likely didn't encrypt because they viewed these dedicated private links as secure. That means that the private communications of millions of ordinary users, both foreign and domestic, were exposed to surveillance by the intelligence agencies.
A chilling back-of-the-napkin sketch obtained by the Post depicts user data protected by SSL/TLS encryption as it traveled over the public Internet—but unprotected and exposed to spying within the companies' internal infrastructure.

What does this mean for ordinary users?
The story suggests that the user data (including the text of chats and e-mails, as well as metadata about users' relationships and whereabouts) probably was intercepted as it flowed over the private links. There's no way to know exactly whose data was intercepted, but potentially all users of these services—or other services that may have been attacked in the same way—could have had their data monitored.
Google, at least, said last month that it was deploying more encryption internally to protect against this monitoring in the future. In a statement issued today, Google's chief legal officer, David Drummond, said, “We have long been concerned about the possibility of this kind of snooping, which is why we continue to extend encryption across more and more Google services and links.” A Yahoo spokesperson said, “We have strict controls in place to protect the security of our data centers, and we have not given access to our data centers to the NSA or to any other government agency.” Yahoo has not yet clarified whether it will take new technical measures to protect against this spying.
Users who used third-party encryption software like OTR to encrypt their messages may have been partially protected because their intercepted communications would still have been encrypted as they transited the companies' internal networks.
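Why end-to-end encryption helps here can be sketched with a toy example. The snippet below uses a raw XOR one-time pad purely for illustration (real tools like OTR use authenticated key exchange and standard ciphers, not this): an eavesdropper tapping the providers' internal links sees only ciphertext, while the two endpoints, who share the key, can still recover the message.

```python
import secrets

def otp_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR each byte of data with the key.
    XOR is its own inverse, so the same function also decrypts."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
# The key is shared only by the two communicating endpoints.
key = secrets.token_bytes(len(message))

# This is all that travels across the providers' internal links:
ciphertext = otp_encrypt(message, key)

# The recipient, holding the key, recovers the plaintext;
# a tap on the link yields only the opaque ciphertext.
assert otp_encrypt(ciphertext, key) == message
print("link tap reveals:", ciphertext.hex())
```

The point of the sketch is only that confidentiality can be layered: even when the carrier-level link is unencrypted, a payload encrypted end to end stays opaque to anyone in the middle.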
The materials published by the Post suggest that the SSL/TLS encryption used to protect users' data on sites that use HTTPS provides privacy and security benefits: the author of the spooky napkin sketch implicitly regards it as non-trivial for the NSA to remove this encryption. That's why the NSA would rather go around it and try to access our communications after they've already been decrypted.

What should technology companies do about this kind of monitoring?
This reporting goes to show that the intelligence agencies are sophisticated attackers that are prepared to find and take advantage of the weakest link in a chain of protections. So companies need to examine the entire set of protections that apply across their infrastructure, to identify and address the weak links. One specific precaution that would be valuable for companies with their own distributed data centers is to ensure that they're using state-of-the-art VPNs or link encryption between their facilities. Indeed, encryption even between the devices within a data center may be an important precaution if the routers and switches connecting those devices can be targeted with malware—though such attacks don't appear to be part of the monitoring revealed by the Post.
These attacks remind us that there are many ways that a network can be untrustworthy, and that encryption is the main technology we have for keeping data safe on untrustworthy networks. Now is a great time to think about how we can use more encryption and make sure that we're using it correctly.
Yesterday, the Inter-American Commission on Human Rights (IACHR) held its first public hearing directly examining the mass surveillance programs conducted by the United States' National Security Agency (NSA) through the lens of inter-American human rights standards.
Given the public evidence of intrusion not only into the communications of citizens around the world, but also those of leaders and government officials of countries as diverse as Germany, Mexico, and Brazil, it is clear that the intent goes beyond the mere protection of national security.
For that hearing, EFF, the Asociación por los Derechos Civiles, Article 19, Access, Privacy International, and twenty-five other organizations submitted a brief to the Inter-American Commission on Human Rights supporting the position taken by the American Civil Liberties Union (ACLU) that US mass surveillance programs violate the rights of citizens of the United States and the world.
In our brief, we explain to the IACHR how elements of the United States' foreign intelligence surveillance program grant non-US persons only limited protection under the Fourth Amendment of the US Constitution. Moreover, this premise has no basis whatsoever in international law, which guarantees the privacy rights of all individuals everywhere in the world, regardless of nationality.
The United States' national security agencies, and the ways in which they operate, have stood outside any democratic or legal framework for far too long and must now be brought under the rule of law. Moreover, these practices are overly broad, lack oversight, and interfere with the right to privacy beyond what is necessary to serve the state's legitimate national security interests. These programs are also indiscriminate in scope, especially as applied to non-US persons. It is difficult to see how such an extensive and sweeping program could be considered "proportionate" in the context of human rights protection under international law.
US intelligence law—FISA—offers Internet users around the world (whose personal information is stored on US servers or whose data travels across US networks) no legal protection whatsoever. The definition of "foreign intelligence," which establishes the NSA's mandate, is excessively broad, apparently covering any information that could give the United States not only a political but also an economic advantage.
Frank La Rue, a UN Special Rapporteur, warns on this point:
"Vague and unspecified notions of 'national security' have become an acceptable justification for the interception of and access to communications in many countries ... The use of an amorphous conception of national security to justify invasive limitations on the enjoyment of human rights is of serious concern. The concept is too loosely defined and is therefore vulnerable to manipulation by the State to justify actions targeting vulnerable groups such as human rights defenders, journalists, or activists. It also often serves to justify unnecessary secrecy around investigations or law enforcement activities, undermining the principles of transparency and accountability."
In short, the forms of legal protection in the US are few, and worse still, the scope of FISA is interpreted in secret and generally insulated from any challenge. This makes it unlikely that Internet users outside the United States enjoy any protection within the United States against NSA spying.
In our brief, we also describe some of the ongoing international efforts to articulate how human rights law, and particularly the right to privacy, should be interpreted—especially now, when mass surveillance capabilities, including the technical capacity to analyze communications and metadata (data revealing who communicates with whom and from where), are growing exponentially, with a profound impact on individuals' rights. We hope this analysis will help the Commission examine these issues at this hearing and define the parameters of a future in-depth investigation into US mass surveillance programs and their impact on human rights in the Americas.
EFF will continue working to stop US mass surveillance activities and to see them condemned in the strongest possible terms, reaffirming the international human rights principles established by international law.
Encryption is one of the most important ways to safeguard data from prying eyes. But what happens when those prying belong to the government? Can they force you to break your own encryption and provide them with the information they want?
In a new amicus brief, we explain that the Fifth Amendment privilege against self-incrimination prohibits the government from forcing someone to decrypt their computer when they're suspected of a crime.
Leon Gelfgatt was charged with forgery and the government, with a search warrant, seized a number of his electronic devices. Law enforcement couldn't break the encryption that protected the devices, so it went to court, asking a judge to order Gelfgatt to decrypt the devices for them. The Fifth Amendment protects a person from being forced to testify against themselves and so the government promised not to look at the encryption key—the "testimony" in their eyes—but nonetheless wanted the ability to use the unencrypted data against Gelfgatt. The judge denied the government's request, ruling that forcing Gelfgatt to decrypt the devices would violate the Fifth Amendment.
The government appealed that decision and the case is now before the Massachusetts Supreme Judicial Court, where we filed an amicus brief with the ACLU and the ACLU of Massachusetts.
Our brief argues that the lower court got it right. The Fifth Amendment protects a person from being forced to reveal the "contents of his mind" to the government, allowing law enforcement to learn facts it didn't already know. When it comes to compelled decryption, the Fifth Amendment clearly applies because the government would be learning new facts beyond simply the encryption key. By forcing Gelfgatt to translate the encrypted data it cannot read into a readable format, it would be learning what the unencrypted data was (and whether any data existed). Plus, the government would learn perhaps the most crucial of facts: that Gelfgatt had access to and dominion and control of files on the devices.
It's not the first time we've made this argument in court; we've filed amicus briefs in other cases involving forced decryption, and won big last year in the Eleventh Circuit Court of Appeals, which agreed with us that the act of decrypting a computer is protected by the Fifth Amendment.
At a time when recent public disclosures suggest the government has been undermining cryptography, we hope the court understands the importance of strong technological safeguards for our privacy and finds that our constitutional protections prohibit what the government is trying to do here.
Oral argument in the case is set for Nov. 5, 2013, in Boston.

Related Cases: US v. Fricosu; U.S. v. Doe (In re: Grand Jury Subpoena Duces Tecum Dated March 25, 2011)
Patent trolls are facing another legislative threat from Sen. Orrin Hatch (R-UT), who today introduced the Patent Litigation Integrity Act (S. 1612). This bill is a fairly simple—but very important—one that would curb patent trolls' dangerous litigation practices. We strongly support the bill and sent Sen. Hatch a letter today saying as much.
The Litigation Integrity Act is a fee-shifting bill. Fee shifting, often called "loser pays," is not a new idea. It's long existed in copyright law, for instance, allowing a court to award a winning party costs and fees in certain cases. In patent litigation, this type of provision would help tilt the playing field slightly more in favor of the good guys. To understand, think about the patent troll business model: making broad claims of infringement based on patents of questionable validity is the troll's favorite move. It's no wonder that many defendants choose to pay up rather than take the time, energy, and especially the money to fight in court. Fee shifting would empower innovators to fight back, while discouraging trolls from threatening lawsuits in the first place.
Even more, this bill would explicitly give courts the tools to require that the troll put up a bond at the outset of litigation. In other words, if the court thinks the party bringing a suit is a troll or otherwise has not brought a good claim, it can require that party to put aside the money it would need to cover the defendant's legal fees and costs at the end. Because trolls use shell companies with very few assets to sue, the bond requirement is an important one that would require patent trolls to put their money where their mouth is.
The Innovation Act, currently pending in the House, also has a fee-shifting provision, but it lacks this important bonding provision.
This is what we told Sen. Hatch today:
The troll business is one of litigation and licensing, not creating and providing products and services. Thus, the high costs of litigation are baked into their very business model. This is not true of the productive businesses targeted by trolls, particularly smaller ones. And, in fact, it is these smaller businesses that bear the brunt of troll litigation. More than half of the defendants in troll suits make under $10 million annually.
This problem will not be fixed until those facing threats from trolls can fight back. Currently, the costs associated with taking up that fight in federal court are staggering. If taken to verdict, defending a lawsuit can easily cost nearly $3 million, according to findings by the American Intellectual Property Law Association. And when these cases make it to judgment, the troll only wins a shockingly low 9.2 percent of the time.
The Patent Litigation Integrity Act would remedy a core component of the patent troll problem. It would give those facing troll threats the tools necessary to fight back while also giving trolls a disincentive to bring harassment suits. Importantly, the bill would not affect any party bringing a meritorious suit. Currently, the system is skewed heavily in favor of the trolls; the Patent Litigation Integrity Act would rectify this problem.
Thank you, Sen. Hatch, for introducing common-sense legislation that would protect inventors, startups, consumers, and the rest of the innovation economy from the scourge of patent trolls.
When it comes to searching the most sensitive part of our bodies—our DNA—the Fourth Amendment's prohibition against unreasonable searches and seizures should be a strong bulwark, keeping the government out of our most personal and private biological information. But in the last few years, those protections have been eroded as courts throughout the country, including the US Supreme Court, have approved of the warrantless DNA collection of people arrested for crimes—individuals who are presumed to be innocent in the eyes of the law. A new amicus brief we filed on Monday argues that these decisions don't mean the complete death of Fourth Amendment protection from DNA collection.
This summer, the Supreme Court issued its disappointing decision in Maryland v. King, approving Maryland's warrantless DNA collection scheme from pretrial arrestees. The court reasoned that the purpose of collecting DNA is "identification," or to make sure the police had arrested the right person, noting that collecting DNA was similar to the routine police practice of collecting a fingerprint.
Following the Supreme Court ruling, the Ninth Circuit Court of Appeals asked for amicus briefs to address what impact King would have on the court's review of Haskell v. Harris, a case in which the ACLU of Northern California is challenging Proposition 69, a warrantless DNA collection program approved by California voters in 2004. We filed a new amicus brief, explaining that even after King, Prop 69 is unconstitutional. It's the second amicus brief we've filed in this important privacy case.
Our new brief explains that although King approved of Maryland's DNA collection scheme, it didn't approve all warrantless DNA collection schemes per se. Instead, the Supreme Court has consistently said that what is "reasonable" under the Fourth Amendment depends on the "context" in which a search takes place and so the Ninth Circuit would have to look at California's law anew.
While King focused on "identification" and equated DNA with a fingerprint, we explain how DNA reveals far more information than a mere fingerprint, since DNA contains our entire genetic makeup, revealing where we came from, who we're related to, and whether we're likely to get certain diseases. Nor does the government need DNA to "identify" the person they've arrested, since fingerprints have proven to be an effective way to ensure the police have the right person without implicating the same privacy concerns as DNA collection. As the government's ability to collect DNA rapidly expands, the court must impose some limits to prevent the real harms that occur with excessive DNA collection, including false identification of innocent people.
DNA collection is just another example of the government's use of technology to shrink privacy and push the boundaries of what it can collect outside the confines of the Fourth Amendment. We hope the Ninth Circuit will appreciate that Maryland's collection scheme is different from California's and find Prop 69 unconstitutional.
The Ninth Circuit will hear oral argument in Haskell sometime in December in San Francisco.
David Plotz: People have a misguided belief in it, but, in general, the fact that anonymity is increasingly hard to get—Facebook doesn't permit it, most commenting on a lot of sites doesn't permit it—there's a loss when you don't have anonymity.
Emily Bazelon: Oh god, I am so not with you on this one. There is a loss if you're, like, a political dissident in Syria. If you are in this country, almost all of the time, there is a net gain for not having anonymous comments. We so err on the side of 'Oh, free speech, everywhere, everywhere, let people defame each other and not have any accountability for it.' And I think in free societies, that is generally a big mistake. And yes, you can make small exceptions for people who truly feel at risk, like victims of domestic violence are an example, but most of the time it is much healthier discourse when people have to own up to what they are saying.
- Slate's Political Gabfest, Oct. 25, 2013
During last week's episode of Slate's Political Gabfest, a weekly podcast I normally adore, senior editor Emily Bazelon mocked the concept of online anonymity. Our society would be better off if everyone was forced to put their name to their words, she said, generalizing that online anonymous users are poisoning civil discourse with their largely vile and defamatory comments. She deemed only one class of user legitimately deserving of anonymity: "people who directly fear violence."
In this view of the Internet, everyone else's anonymity is worth sacrificing to silence the trolls.
It's easy to understand why some in the press have this perspective. If you work in online media, the bulk of your interactions involve news stories, which seem to draw the ugliest forms of discourse. If you're a public figure, you're faced with haters on Twitter who are obsessed with enumerating all the ways you suck. They're even worse in the comments on YouTube. A website, such as Slate, certainly has the right to determine the culture of its online community, and I don't have a position on whether such sites, across the spectrum, should or should not allow anonymous comments, or even allow comments at all. I do, however, dispute this narrow vision of the Internet.
So, I spent the weekend brainstorming and jotting down all the kinds of people who would lose out if anonymity no longer existed in any form on the Internet.
Anonymity is important to anyone who doesn't want every facet of their online life tied to a Google search of their name. It is important to anyone who is repulsed by the idea of an unrelenting data broker logging everything she has ever said, or shown interest in, in a permanent marketing profile. And more.
Bazelon describes anonymous comments as "generally a big mistake" for free societies. I disagree and point to Common Sense by Thomas Paine, originally published under the anonymous byline, “an Englishman.” (Perhaps that could be Gabfest's next Audible recommendation.)
To suggest anonymity should be forbidden because of troll-noise is just as bad as suggesting a ban on protesting because the only demonstrators you have ever encountered are from the Westboro Baptist Church—the trolls of the picket world. People who say otherwise need to widen their experience and understanding of the online world. The online spaces we know and love would be doomed without anonymity, even if the security of that anonymity is far from absolute or impenetrable. The ability to explore other identities, to communicate incognito, to seek out communities and advice without revealing your identity is not only a net positive, but crucial to preserving a free and open Internet.
The government released a second batch of documents yesterday in response to EFF's ongoing FOIA lawsuit for information concerning Section 215 of the Patriot Act—the provision of law the government relies on to compel the disclosure of records of millions of Americans' calls.
One document, in particular, confirms what in recent months has become abundantly clear: the NSA is unwilling to submit to meaningful and effective oversight and seems unwilling to recognize the extraordinarily sensitive nature of the information it collects.
The document, which appears to be a written response to an Intelligence Committee staffer's question, describes the NSA's acquisition and testing of Americans' cell site location data. The document shows that the NSA didn't even bother to inform the Foreign Intelligence Surveillance Court (FISC) or the relevant Congressional oversight bodies before obtaining and testing samples of location information taken from Americans' cell phone calls. In fact, neither NSA nor the National Security Division of the Department of Justice thought the collection of Americans' location information sufficiently novel or important to even justify an individualized legal analysis. In the view of DOJ, the location information of thousands (or millions) of Americans could just be lumped in with the information the FISC had already approved for collection.
Keep this in mind, too: approximately a year prior, the FISC nearly shut down the call record program after the agency repeatedly misled the court about how and under what circumstances it was accessing Americans' call records. To then obtain extraordinarily sensitive information about the movements of Americans—without first informing either the FISC or any of NSA's Congressional oversight bodies—smacks of a fundamental disregard for the NSA's oversight system and the coordinate branches of government.
It's time to put an end to the agency's "collect first, seek authorization later" mentality. The NSA needs to recognize, once and for all, that it is not above the law. When an agency acts without oversight or the authorization of Congress, the judiciary, or even the President, it's clear that the agency has gone off the rails. We need a full and public investigation of the NSA's spying activities, and members of the intelligence community should be held accountable.
EFF will keep fighting until the NSA's bulk collection of sensitive communications data is finally reined in.
In 2010, Auernheimer's co-defendant Daniel Spitler discovered that AT&T configured its website to automatically publish an iPad user's e-mail address when the server was queried with a URL containing the number that matched an iPad's SIM card ID. Spitler collected approximately 114,000 email addresses, and Auernheimer talked about the discovery to several news outlets and Gawker published a story about it. Auernheimer was convicted of violating the CFAA and identity theft and sentenced to 41 months in prison.
We filed our appeal on July 1, raising a number of arguments as to why the conviction and sentence were improper, but most critically, we argued that Weev didn't violate the CFAA because AT&T deliberately chose to have their users' email addresses published on the web. The government responded with a 133-page brief in September and on Friday, we responded to the government's argument with a reply brief of our own, refuting all of the government's arguments point by point.
Contrary to the government's assertions, AT&T did not employ any technical measure to restrict access to the emails and thus Spitler and any other user was "authorized" to view the email addresses, even if AT&T didn't want them to. The government had argued that the serial numbers were passwords, that Spitler had "lied" to AT&T's servers by changing his computer's user agent to impersonate an iPad and that the "expertise" needed to do these things meant Spitler's actions were criminal. Most puzzling, the government argued that Spitler's actions violated “norms of behavior that are generally recognized by society” and apparent to a “reasonable person,” and as a result he wasn't "authorized" to obtain the email addresses.
Our brief explains to the Court why this is not so.
First, the serial numbers aren't "passwords" because most AT&T customers wouldn't know or memorize them, nor would they be required to enter a serial number at the login prompt to AT&T's website to access their customer account information. Second, we once again explain that there is nothing deceptive or criminal about changing a user agent and that common web browsers do this all the time. Finally, we explain that criminal liability cannot hinge on a particular user's "expertise" nor on the government's proposed "norms of behavior" standard. Courts have long cautioned that criminal liability cannot be based on vague or ambiguous standards, and hinging CFAA liability on "norms of behavior" leaves most computer users with uncertainty about what they can and cannot do.
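The brief's point that a user-agent string is freely chosen by the client is easy to demonstrate. Here is a minimal sketch using Python's standard library; the URL and the iPad-style user-agent string are placeholders for illustration, not the actual endpoint or string at issue in the case:

```python
import urllib.request

# The User-Agent header is just an ordinary request header: any HTTP
# client lets the caller set it to whatever string it likes, and common
# browsers change theirs routinely (e.g. a "request desktop site" mode).
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (iPad; CPU OS 3_2 like Mac OS X)"},
)

# urllib normalizes header names, so the stored key is "User-agent".
print(req.get_header("User-agent"))
```

Nothing about setting that header is deceptive in any technical sense; the server simply receives whatever string the client sends.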
Now with the briefing complete, the next step will be an oral argument before a three-judge panel of the court sometime in the next few months.
The field of "mobile location analytics"—where tracking companies work with brick-and-mortar retail stores to collect insights about customer behavior based on fine-grained location information harvested from mobile phones—has taken a small step towards self-regulation with a new code of conduct published this week. The code was announced by the Future of Privacy Forum and Senator Charles Schumer, who two years ago intervened to convince a mobile location tracking company not to test its system in two American malls during the holiday shopping season.
The industry is likely hoping to calm privacy concerns that have generated public outcry and attracted the attention of legislators. Earlier this year, Senator Al Franken said that mobile location tracking companies are violating people's "fundamental right to privacy." And in 2011, Senator Schumer himself acknowledged those problems and urged the industry to develop an opt-in mechanism to get explicit consent.

The Code of Conduct
Unfortunately, the published code falls well short of that proposed standard. Instead, it establishes an opt-out system, where users must enter the unique 12-digit MAC addresses of each of their mobile device's Bluetooth and Wi-Fi chips into a database that tracking companies commit to honoring.
Besides the irony of asking the most privacy-conscious consumers to hand over their MAC addresses to tracking companies, the scheme seems unlikely to see much pickup. For one thing, many users may not be aware of this kind of tracking in the first place, much less whether any particular retailer is tracking them. The tracking is invasive, but surreptitious.
The code attempts to address that lack of information by establishing notice rules as its first principle, but its notice proposals are weak as well. For example, it depends on the retailers, which are not party to this agreement, to implement in-store signage providing notice of the tracking. Retailers, though, have seen customers get upset about the tracking after seeing those signs, so there's an incentive to make it less noticeable.
Further, the code proposes creating a widely adopted symbol to indicate that mobile location tracking is taking place, rather than plain language like "If you're carrying a mobile device, this establishment may be tracking your movement and location." The most direct parallel to that symbol might be the "AdChoices" icon, which allows people to configure whether they are shown targeted online ads. That icon has been widely adopted by advertisers, but is virtually unknown among users.

How Identifiable Is a MAC, Anyway?
The code instructs that tracking companies should use hashing to "de-personalize" MAC addresses. That approach, though, has significant limitations.
For one thing, MAC addresses are, by design, fixed permanently to a single device. In practice, they can sometimes be changed in software, or "spoofed," but that software is not available for every platform and may require technical expertise to use. The privacy concern here is like that presented by biometrics: once a MAC address is correlated with an identity, it can be difficult or impossible to shake that connection.
That quality makes a MAC address attractive for tracking repeat customers, because it's unlikely to change between visits, but also rings alarm bells for privacy. Hashing the MAC address doesn't address those concerns: by definition, hashing the same value always produces the same result. In other words, hashing creates a pseudonym for the MAC address, but it is still persistent.
MAC addresses are also broadcast frequently, and it's easy to imagine advertisers or others could work to correlate them with personally identifiable information. Companies that operate paid WiFi networks, and thus collect both device networking information and account credentials, may already have that kind of database.
That's a problem because hashing MAC addresses doesn't really de-personalize them. Hashing generally makes it virtually impossible to go from a hashed value to the original, but hashed MAC addresses could actually be reversed through brute computing force. That's because there are only 2^48 possible MAC addresses, and in practice many fewer than that, due to fixed bits and standard vendor prefixes.
Conversely, with a list of unknown hashed MAC addresses and a list of identified unhashed MAC addresses, it is simple to hash the second list and look for matches.
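That matching attack can be made concrete in a few lines. In this Python sketch, a tracker stores only the SHA-256 hash of each device's MAC address, yet anyone holding a list of already-identified addresses can re-link the "anonymous" record with a single dictionary lookup; the MAC addresses shown are made up for illustration:

```python
import hashlib

def hash_mac(mac: str) -> str:
    # Hash a MAC address the way a tracker might "de-personalize" it.
    return hashlib.sha256(mac.encode()).hexdigest()

# Identified addresses (e.g. collected by a paid Wi-Fi operator
# alongside account credentials). These values are invented.
known_macs = ["00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f", "a4:5e:60:11:22:33"]

# The tracker's "anonymous" record is just the hash of one such device.
unknown_hash = hash_mac("a4:5e:60:11:22:33")

# Re-identification is a dictionary lookup, not an expensive search:
# hash the identified list once, then match.
lookup = {hash_mac(m): m for m in known_macs}
print(lookup.get(unknown_hash))  # recovers the original MAC address
```

Because hashing is deterministic, the hash acts as a persistent pseudonym: the lookup succeeds whenever the target address appears anywhere in an identified list.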
Finally, the code requires companies to commit not to re-personalize the data or to allow downstream clients or contractors to use it to identify particular individuals. Importantly, though, that is a policy limitation—not a technical one.

Pen Register/Trap-and-Trace Device Concerns
It’s generally illegal to record or decode "dialing, routing, addressing, or signaling information transmitted by an instrument or facility from which a wire or electronic communication is transmitted" without a court order unless you're a provider of communications service and can fit into one of the statutory exceptions.
We’re unaware of any relevant case law, but capturing MAC addresses from smartphones might run into this statute. Note, however, that this law doesn’t have a private right of action, i.e., it doesn’t say that an ordinary person can sue someone under it.

What Tracking Companies Shouldn't Do
Mobile tracking companies have compared their services to online analytics options, but there are important reasons not to accept that argument at face value.
First, it creates a privacy ratchet: treating online tracking practices as uncontroversial and bringing offline tracking to the same level could undermine important steps the public has taken to unwind certain invasive online methods.
Second, offline analytics techniques for now leave less of a trail than their online counterparts. Users can monitor or block connections to and cookies from online tracking networks, but currently have no way of knowing whether an offline store was using location tech, and from which vendor. That missing information means users can't truly be making informed consent decisions.
It's encouraging to see companies in this field acknowledging the concerns and adopting regulations. But until that approach provides meaningful benefits for the users, it is not much comfort for privacy-conscious consumers.
One of the trends we've seen is how, as word of the NSA's spying has spread, more and more ordinary people want to know how (or if) they can defend themselves from surveillance online. But where to start?
The bad news is: if you're being personally targeted by a powerful intelligence agency like the NSA, it's very, very difficult to defend yourself. The good news, if you can call it that, is that much of what the NSA is doing is mass surveillance on everybody. With a few small steps, you can make that kind of surveillance a lot more difficult and expensive, both against you individually, and more generally against everyone.
Here are ten steps you can take to make your own devices secure. This isn't a complete list, and it won't make you completely safe from spying. But every step you take will make you a little bit safer than average. And it will make your attackers, whether they're the NSA or a local criminal, have to work that much harder.
We have praised the Innovation Act of 2013, introduced this week by House Judiciary Committee Chairman Rep. Bob Goodlatte (R-VA) and co-sponsored by a bipartisan coalition, as the best patent troll-killing bill yet. We support the bill because it offers a host of fixes to the growing patent troll problem. Taken together, these reforms will help stop the abusive patent litigation that has targeted everyone from grocery stores to podcasters.
Now that the bill has been released, patent trolls are scrambling for a way to save their destructive business model. The latest attempt is through spreading a story that the Innovation Act would hurt smaller companies. Of course, opponents of the Innovation Act are not small companies but multi-billion dollar enterprises like patent troll Intellectual Ventures. And their self-serving claim is simply false. By cutting down on troll litigation the Innovation Act would deliver massive benefits to startups and small businesses.
Patent trolls do not just go after the big guys. In fact, more than 50 percent of patent troll lawsuits are against defendants with less than $10 million in annual revenue. Trolls have hit cafes, podcasters, application developers, and small businesses using standard office equipment. And trolls have made a specialty of targeting startups—sapping them of time and money when they need it most. Since patent cases cost well over $1 million to defend, smaller companies targeted by trolls generally have no choice but to pay up, even when the underlying suit is weak.
The Innovation Act helps fix this imbalance. For example, it will reduce the number of meritless troll suits by requiring a patent holder to provide certain obvious and essential details (such as which patents and claims are at issue, as well as exactly what products allegedly infringe and how) when it files a lawsuit. Any patent holder, big or small, filing a legitimate case will easily be able to provide these basic details. But this requirement will make it harder for patent trolls to launch massive litigation campaigns against defendants who don’t actually infringe their patents.
Similarly, the Innovation Act’s fee-shifting provision will make it easier for small companies to fight back against meritless troll suits. This is important. The most aggressive patent trolls tend to bring the worst cases (one study found that, if forced to litigate to judgment, these trolls win only 9.2% of their cases). Instead of being forced to pay a settlement to avoid ruinous litigation expenses, small defendants can fight back against weak troll cases. The Innovation Act’s fee-shifting provision helps small company patentees with meritorious cases by allowing them to recover fees when they prevail.
The Innovation Act also has transparency provisions (requiring patent trolls to reveal the parties that would actually benefit from the litigation). These provisions won’t have much impact on legitimate small companies. But they will have a huge impact on giants like Intellectual Ventures which notoriously hides behind more than 2,000 shell companies. Opposition to these provisions is not about protecting small companies. It’s about keeping damaging facts secret from the public.
Patent troll lawsuits are devastating to startups and small business. The Innovation Act will deter these abusive suits and give defendants tools to fight back. At the same time, it does not impose any burdens on smaller companies bringing legitimate cases. The bill is good for everyone but patent trolls.
Join us in supporting the Innovation Act. Take action and contact your member of Congress now.
One of the core messages of Open Access Week is that the inability to readily access the important research we help fund is an issue that affects us all—and is one with outrageous practical consequences. Limits on researchers' ability to read and share their works slow scientific progress and innovation. Escalating subscription prices for journals that publish cutting-edge research cripple university budgets, harming students, educators, and those of us who support and rely on their work.
But the problems don't stop there. In the digital age, it is absurd that ordinary members of the public, such as healthcare professionals and their patients, cannot access and compare the latest research quickly and cheaply in order to take better care of themselves and others.
Take the case of Cortney Grove, a speech-language pathologist based in Chicago, who posted this on Facebook:
In my field we are charged with using scientific evidence to make clinical decisions. Unfortunately, the most pertinent evidence is locked up in the world of academic publishing and I cannot access it without paying upwards of $40 an article. My current research project is not centered around one article, but rather a body of work on a given topic. Accessing all the articles I would like to read will cost me nearly a thousand dollars. So, the sad state of affairs is that I may have to wait 7-10 years for someone to read the information, integrate it with their clinical opinions (biases, agendas, and financial motivations) and publish it in a format I can buy on Amazon. By then, how will my clinical knowledge and skills have changed? How will my clients be served in the meantime? What would I do with the first-hand information that I will not be able to do with the processed, commercialized product that emerges from it in a decade?
Cortney's frustration is not uncommon. Much of the research that guides health-related progress is funded by taxpayer dollars through government grants, and yet those who need this information most—practitioners and their patients—cannot afford to access it. We asked Cortney to share her story in more detail.
What do you do for a living?
I'm a speech-language pathologist, and I specialize in autism, social cognition, and language-based learning disorders. Because of that, I tend to look at research from across a lot of disciplines. I need to know what cognitive scientists are finding, to learn about motor development, to get social linguists' perspectives.
On a daily basis, I provide therapy for kids with special needs. What I do in my spare time and continuing education is to try and figure out better, more efficient ways to help these kids.
Can you describe the issue you recently ran into?
We do continuing education in order to keep our licensure, so I recently attended an online conference. Frequently what happens is that I'll hear about a bit of research in a lecture that I'll find interesting from another perspective, so I'll write it down to look for it later.
I went online to find the referenced articles when I started to realize I couldn't access any of the articles on my list for free. All of them are behind a gate and cost somewhere between $40 and $100 an article.
I got frustrated. I spent maybe three-and-a-half hours looking at subscriptions to these companies to see if that was a viable option, but they were too expensive. I then started going to the websites of individual researchers. Unfortunately, only one of the 17 or 18 papers I was looking for was available.
This is when I started to get really frustrated. It became clear to me that what was going to happen was what I heard during a number of lectures: "Don't worry, I'm publishing a book about all of this if you want to know more."
Why do I need to wait five, seven, ten years from when a research article comes out for someone to package and process it in order for me to consume this information? In that five to ten years, my clients are going to continue developing without the benefit of the latest research.
What sort of articles were you looking for?
When we're in school, they teach us about evidence-based practice, and a big part of that is based on using the latest research. The other part is using your own clinical expertise to determine whether the evidence available to you is good for your client. For example, something that may be a great method for me to employ would be terrible for another speech-language pathologist to employ if she doesn't have the same experience.
Now, I can access the publications of [the American Speech-Language-Hearing Association's] core journals. Unfortunately, the articles I need are rarely in those. Topics in Language Disorders, for example, has a $122 subscription for four issues. But there's no guarantee that the articles I'll get in the four issues next year will be useful for me—and that's just one journal!
I really need to collect information from different fields so I can do proper evidence-based practice, but the work I want is from a hundred different journals, not just three or four.
How much did you end up spending?
Nothing yet. I ended up emailing a professor of mine from school, and I'm waiting to hear back from her, while at the same time asking her, "Is there a more reasonable way for me to do this?"
Some people told me to go to the local medical school library and download the articles from there. I don't know if it's feasible for me to go to a library of a school I don't go to! And at the moment, I don't really know any students who I could ask.
When there's a PDF available somewhere in the world, it's really a shame to have to either pay or jump through so many hoops to get it.
When it comes to research articles, what do you think the future should look like?
Here's my concern: how many times the information gets paid for is really frustrating. A lot of these research experiments are conducted thanks to government grants. If it is already government funded, taxpayers are already paying for it. Institutions then pay for subscriptions, then authors turn it into a book and sell it to me. It's like the information has been paid for multiple times before it comes to me—and it comes to me five-to-ten years later in some sort of packaged form.
I think that ideally, if you're going to be in a healthcare profession—or really any profession—that research should be easily available. Even if I had to pay an acceptable yearly fee—if for $300 a year I could access everything—that would be better than how it is today.
I'm a speech-language pathologist in private practice. I know that if I was affiliated with a university, then through that I could have access to the information I need. And that highlights a bigger issue: there's always a gap between the research world and the clinical world. There's a gate that holds the normal profession out of the research process—or even from simply being able to consume the information. By the time it comes to most of us, it's prepackaged and late.
Even with my continuing professional education, there's a barrier. Researchers will let you read their research—once you pay $700 to come to their course. Either way, the regular working professional is being denied this information, or the information is very expensive. Had I wanted to read the articles I needed, I would have had to pay thousands and thousands of dollars. And that doesn't seem right, especially when I'm just working to help kids with special needs.
Cortney's experience highlights the misaligned incentives of large swathes of the scholarly publishing system. Thanks to an obstinate and powerful group of legacy publishers, research is slow to get to those who benefit most from it: practitioners, researchers, patients, students. It is becoming more and more obvious that the only players benefitting from the status quo are the middlemen, the publishers themselves. Open access to scholarly works bypasses an unnecessary point of friction, empowering healthcare professionals like Cortney to take advantage of the most innovative practices and provide the aid their patients and clients truly need.
The good news is that a fix is within our grasp. Please support access to taxpayer-funded research.
Do you have a closed or open access story? Let us know by emailing email@example.com.
Ever since Google issued its first transparency report in early 2010, EFF has called on other companies to follow suit and disclose statistics about the number of government requests for user data, whether the requests they receive are official demands (such as warrants) or unofficial requests. After all, users make decisions every day about which companies they trust with their data, so companies owe it to their customers to be transparent about when they hand data over to governments and law enforcement.
Since 2010, other companies have risen to the challenge, including Microsoft, Internet service provider Sonic.Net, cloud storage providers SpiderOak and DropBox, as well as social media companies such as LinkedIn and Twitter.
While we wish they had not taken this long, Facebook and Yahoo deserve kudos for taking this important step. Companies are under no legal obligation to disclose aggregate data about government requests to their customers; this is a voluntary step. Both companies are members of the Global Network Initiative, however, which counts transparency among its core principles.
But in light of this summer’s revelations about the NSA’s PRISM—the program under which the NSA gains the ability to access the private communications of users of many of the most popular Internet services, including those owned by Google, Microsoft, Facebook, and Yahoo—Internet giants are rushing to do what they can to restore user trust.
In September, Google, Facebook, and Yahoo all filed requests with the U.S. Foreign Intelligence Surveillance Court (FISC), asking for permission to publish the specific number of National Security Letters (NSLs) that the companies received in the past year as well as the total number of user accounts affected by those requests. Of all the dangerous government surveillance powers that were expanded by the USA PATRIOT Act, the NSL power, provided by five statutory provisions, is one of the most frightening and invasive. These letters—served on communications service providers such as phone companies and ISPs and authorized by 18 U.S.C. 2709—allow the FBI to secretly demand data about ordinary American citizens' private communications and Internet activity without any prior judicial review. To make matters worse, recipients of NSLs are subject to gag orders that forbid them from ever revealing the letters' existence to anyone. A federal judge found NSLs unconstitutional in March, but the order is on hold pending the government's appeal.
Some companies have published aggregate numbers in broad ranges, such as 0-999 or 1000-1999, that give us only a blurry view of just how widespread the use of NSLs has been. More detailed numbers would be much more helpful to public understanding of the surveillance, without compromising security.
So now that Facebook and Yahoo have issued transparency reports, what do they tell us?
Facebook’s Global Government Requests Report covers January-June 2013 and reveals that 71 countries requested data on a total of 37,954 to 38,954 users. Unsurprisingly, the US demanded the largest amount of user data, making between 11,000 and 12,000 requests covering 20,000 to 21,000 users.
India came in a close second, with 3,245 requests for 4,144 accounts, and the United Kingdom ranked third with 1,975 requests for 2,337 users. Facebook also revealed the number of times the requests produced "some data." Facebook handed over data to the U.S. 79% of the time, but only 50% and 68% of the time for India and the United Kingdom, respectively.
The vast majority of requests made to Facebook by less democratic countries (including Cote d’Ivoire, Nepal, and Qatar) were refused. Two nations stood out in the report, however: Pakistan and Turkey. In the case of Pakistan, 35 requests were made for 47 users, 77% of which Facebook complied with. In the case of Turkey, 96 requests for 170 users were made, which Facebook complied with 47% of the time.
What makes this unique is that no other major company has reported complying with requests from Pakistan. The South Asian country is nominally a democracy, but it censors the Internet heavily and has made a fairly open effort to enlist Western companies in enabling greater censorship and surveillance, a role that the Canadian company Netsweeper has been all too eager to fill. It is notable that Facebook has no offices in Pakistan (an in-country office could allow Pakistan to seek information directly from a local employee), nor has Pakistan signed a mutual legal assistance treaty (MLAT) with the US, putting Facebook under no legal obligation to comply with requests from the government.
With no offices in Turkey, either, it’s surprising to see such a high rate of compliance. Complaints of Facebook censoring certain content in Turkey abound, and as a recent blog post by a Kurdish activist demonstrates, some of that censorship seems quite arbitrary.
At the same time, if Facebook doesn’t comply, it undoubtedly risks being blocked in these countries, just as YouTube was for several years, and a tool used by opposition figures and activists might become unavailable. On balance, we think most countries would rightly hesitate to block a popular Internet tool, since doing so may create more unrest than the information they sought to quash.
While Facebook has been transparent about its law enforcement guidelines, information about its processes for international requests is vague: the data use policy allows disclosure when "consistent with internationally recognized standards," which are not defined. Facebook could enhance its transparency by clarifying its standards for complying with requests; even if its standards are perfect in every way, users are legitimately concerned when they do not know what standards might apply.
Like Facebook, Yahoo reported that the United States led in the number of requests, with 12,444 data requests covering 40,322 Yahoo accounts. Yahoo handed content-related data (communications in Yahoo Mail or Messenger, photos on Flickr, or Yahoo Address Book entries) over to American agencies in 4,604 cases. The company gave the government non-content information, which includes a person’s name, location, or Internet Protocol address, in 6,798 cases.
Yahoo received fewer requests from the United Kingdom (1,709) and India (1,490) than Facebook did, with similar compliance rates. One nice feature of Yahoo’s report is that it breaks down the type of data disclosed (non-content vs. content) in a pie chart for each country. In the UK, for example, 44% of requests were answered with disclosures of non-content data, while in 20% of cases content was disclosed to law enforcement.
Surprisingly, Yahoo received far more requests from Hong Kong than any other company, and complied with 100% of them (content was only disclosed in 1% of those cases). The South China Morning Post quoted lawmaker Charles Mok as saying that the number was high, and called on Yahoo to disclose which government agencies requested the data.
Federal law enforcement officers compromised the backbone of the Internet and violated the Fourth Amendment when they demanded private encryption keys from the email provider Lavabit, the Electronic Frontier Foundation (EFF) argues in a brief submitted Thursday afternoon to the US Court of Appeals for the Fourth Circuit. In the amicus brief, EFF asks the panel to overturn a contempt-of-court finding against Lavabit and its owner Ladar Levison for resisting a government subpoena and search warrant that would have put the private communications and data of Lavabit's 400,000 customers at risk of exposure to the government.
For nearly two decades, secure Internet communication has relied on HTTPS, an encryption system built on two keys: a public key that anyone can use to encrypt communications to a service provider, and a private key that only the service provider can use to decrypt those messages.
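That asymmetry can be sketched with a toy RSA round trip in Python. The numbers below are tiny textbook values chosen only for illustration; real HTTPS keys are thousands of bits long and generated by vetted cryptographic libraries, never hand-rolled arithmetic like this.

```python
# Toy RSA demonstration of the public/private key asymmetry.
# Textbook parameters (p=61, q=53); for illustration only.

p, q = 61, 53
n = p * q                  # modulus, shared by both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: e*d == 1 (mod phi), Python 3.8+

def encrypt(m, public=(e, n)):
    """Anyone holding the public key can encrypt a message."""
    exp, mod = public
    return pow(m, exp, mod)

def decrypt(c, private=(d, n)):
    """Only the holder of the private key can decrypt."""
    exp, mod = private
    return pow(c, exp, mod)

message = 65
ciphertext = encrypt(message)
assert decrypt(ciphertext) == message  # round-trips with the private key
```

The point the case turns on falls out of the math: whoever holds `d` can decrypt every ciphertext ever produced under that public key, which is why handing over a service's single private key exposes all of its users at once.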
In July, the Department of Justice demanded Lavabit's private key—first with a subpoena, then with a search warrant. Although the government was investigating a single user, having access to the private key means the government would have the power to read all of Lavabit's customers' communications. The target of the investigation has not been named, but journalists have noted that the requests came shortly after reports that NSA whistleblower Edward Snowden used a Lavabit email account to communicate.
"Obtaining a warrant for a service's private key is no different than obtaining a warrant to search all the houses in a city to find the papers of one suspect," EFF Senior Staff Attorney Jennifer Lynch said. "This case represents an unprecedented use of subpoena power, with the government claiming it can compel a disclosure that would, in one fell swoop, expose the communications of every single one of Lavabit's users to government scrutiny."
EFF's concerns reach beyond this individual case, since HTTPS is relied upon almost universally across the Internet, including for commercial, medical and financial transactions.
"When a private key has been discovered or disclosed to another party, all users' past and future communications are compromised," EFF Staff Technologist Dan Auerbach said. "If this was Facebook's private key, having it would mean unfettered access to the personal information of 20 percent of the earth's population. A private key not only protects communications on a given service; it also protects passwords, credit card information and a user's search engine query terms."
Initially, Levison resisted the government's request. In response, a district court found Lavabit in contempt of court and levied a $5,000-per-day fine until the company complied. After Levison was forced to turn over Lavabit's key, the certificate authority GoDaddy revoked the site's certificate per standard protocol, rendering the secure site effectively unavailable to users.
Since Lavabit's business model is founded in protecting privacy, Levison shut down the service when it no longer could guarantee security to its customers.
"The government's request to Lavabit not only disrupts the security model on which the Internet depends, but also violates our Constitutional protections against unreasonable searches and seizures," EFF Staff Attorney Hanni Fakhoury said. "By effectively destroying Lavabit's legitimate business model when it complied with the subpoena, the action was unreasonably burdensome and violated the Fourth Amendment."
The deadline for the government's response brief is Nov. 12, 2013.
For EFF's full amicus brief: