Feed aggregator

Dating Apps Need to Learn How Consent Works

EFF: Updates - Mon, 07/21/2025 - 12:29pm

Staying safe while dating online should not be the responsibility of users—dating apps should be prioritizing our privacy by default, and laws should require companies to prioritize user privacy over their profit. But dating apps are taking shortcuts in safeguarding the privacy and security of users in favor of developing and deploying AI tools on their platforms, sometimes by using your most personal information to train their AI tools.

Grindr has big plans for its gay wingman bot, Bumble launched AI Icebreakers, Tinder introduced AI tools to choose profile pictures for users, OKCupid teamed up with AI photo editing platform Photoroom to erase your ex from profile photos, and Hinge recently launched an AI tool to help users write prompts.

The list goes on, and the privacy harms are significant. Dating apps have built platforms that encourage people to be exceptionally open with sensitive and potentially dangerous personal information. At the same time, the companies behind the platforms collect vast amounts of intimate details about their customers—everything from sexual preferences to precise location—customers who are often just searching for compatibility and connection. This data falling into the wrong hands can come—and has come—with unacceptable consequences, especially for members of the LGBTQ+ community.

This is why corporations should obtain opt-in consent for AI training data gathered through channels like private messages, and employ data minimization practices for all other data. Dating app users deserve the right to privacy, and should have a reasonable expectation that the contents of their conversations—from text messages to private pictures—will not be shared or used for any purpose to which they have not given opt-in consent. This includes the use of personal data to build AI tools, such as chatbots and picture selection tools.

AI Icebreakers

Back in December 2023, Bumble introduced AI Icebreakers to the ‘Bumble for Friends’ section of the app to help users start conversations by providing them with AI-generated messages. Powered by OpenAI’s ChatGPT, the feature was deployed without ever asking users for their consent. Instead, the company presented users with a pop-up upon entering the app, which nudged them to click ‘Okay’ or face the same pop-up every time the app was reopened, until they finally relented and tapped ‘Okay.’

Obtaining user data without explicit opt-in consent is bad enough. But Bumble has taken this even further by sharing personal user data from its platform with OpenAI to feed into the company’s AI systems. By doing this, Bumble has forced its AI feature on millions of users in Europe—without their consent but with their personal data.

In response, the European nonprofit noyb recently filed a complaint with the Austrian data protection authority over Bumble’s violation of its transparency obligations under Article 5(1)(a) GDPR. The complaint flags concerns around Bumble’s data sharing with OpenAI, which allowed the company to generate opening messages based on information users shared on the app.

In its complaint, noyb specifically alleges that Bumble: 

  • Failed to provide information about the processing of personal data for its AI Icebreaker feature 
  • Confused users with a “fake” consent banner
  • Lacks a legal basis under Article 6(1) GDPR as it never sought user consent and cannot legally claim to base its processing on legitimate interest 
  • Can only process sensitive data—such as data involving sexual orientation—with explicit consent per Article 9 GDPR
  • Failed to adequately respond to the complainant’s access request under Article 15 GDPR.

AI Chatbots for Dating

Grindr recently launched its AI wingman. The feature operates like a chatbot and currently keeps track of favorite matches and suggests date locations. In the coming years, Grindr plans for the chatbot to send messages to other AI agents on behalf of users, and make restaurant reservations—all without human intervention. This might sound great: online dating without the time investment? A win for some! But privacy concerns remain. 

The chatbot is being built in collaboration with a third-party company called Ex-Human, which raises concerns about data sharing. Grindr has said that its users’ personal data will remain on its own infrastructure, which Ex-Human does not have access to, and that users will be “notified” when AI tools are available on the app. The company also said that it will ask users for permission to use their chat history for AI training. But AI data poses privacy risks that do not seem fully accounted for, particularly in places where it’s not safe to be outwardly gay.

Grindr’s CEO said that one of the biggest challenges in building this ‘gay chatbot’ was preserving user privacy. It’s good that the company is cognizant of these harms, particularly because it has a terrible track record of protecting user privacy and was recently sued for allegedly revealing the HIV status of users. Further, direct messages on Grindr are stored on the company’s servers, where you have to trust they will be secured, respected, and not used to train AI models without your consent. Given Grindr’s poor record of respecting user consent and autonomy on the platform, users need more protections and guardrails for their personal data and privacy than are currently being provided—especially for AI tools built by third parties.

AI Picture Selection  

In the past year, Tinder and Bumble have both introduced AI tools to help users choose better pictures for their profiles. Tinder’s AI-powered feature, Photo Selector, requires users to upload a selfie, after which its facial recognition technology can identify the person in their camera roll images. Photo Selector then chooses a “curated selection of photos” directly from users’ devices, based on Tinder’s “learnings” about good profile images. Users are not told what parameters drive the photo choices, nor is there a separate privacy policy covering the potential collection of biometric data or the collection, storage, and sale of camera roll images.

The Way Forward: Opt-In Consent for AI Tools and Consumer Privacy Legislation 

Putting users in control of their own data is fundamental to protecting individual and collective privacy. We all deserve the right to control how our data is used and by whom. And when it comes to data like profile photos and private messages, all companies should require opt-in consent before processing that data for AI. Finding love should not involve such a privacy-impinging tradeoff.

At EFF, we’ve also long advocated for comprehensive consumer privacy legislation to limit the collection of our personal data at its source and to prevent retained data from being sold or given away, breached by hackers, disclosed to law enforcement, or used to manipulate a user’s choices through online behavioral advertising. This would help protect users on dating apps: reducing the amount of data collected in the first place prevents it from later being used in ways like building AI tools and training AI models.

The privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities, but these steps are essential to protecting users on dating apps. We urge companies to put people over profit and protect privacy on their platforms.

When Your Power Meter Becomes a Tool of Mass Surveillance

EFF: Updates - Mon, 07/21/2025 - 11:57am

Simply using extra electricity to power some Christmas lights or a big fish tank shouldn’t bring the police to your door. In fact, in California, the law explicitly protects the privacy of power customers, prohibiting public utilities from disclosing precise “smart” meter data in most cases. 

Despite this, Sacramento’s power company and law enforcement agencies have been running an illegal mass surveillance scheme for years, using our power meters as home-mounted spies. The Electronic Frontier Foundation (EFF) is seeking to end Sacramento’s dragnet surveillance of energy customers and has asked for a court order to stop the practice for good.

For a decade, the Sacramento Municipal Utility District (SMUD) has been searching through all of its customers’ energy data and passing on more than 33,000 tips about supposedly “high” usage households to police. Ostensibly looking for homes growing illegal amounts of cannabis, SMUD analysts have admitted that such “high” power usage could come from houses running air conditioning or heat pumps, or simply from being large. And the threshold of so-called “suspicion” has steadily dropped, from 7,000 kWh per month in 2014 to just 2,800 kWh a month in 2023. One SMUD analyst admitted that they themselves “used 3500 [kWh] last month.”

This scheme has targeted Asian customers. SMUD analysts deemed one home suspicious because it was “4k [kWh], Asian,” and another suspicious because “multiple Asians have reported there.” Sacramento police sent accusatory letters in English and Chinese, but no other language, to residents who used above-average amounts of electricity.

In 2022, EFF and the law firm Vallejo, Antolin, Agarwal, Kanter LLP sued SMUD and the City of Sacramento, representing the Asian American Liberation Network and two Sacramento County residents. One of them is an immigrant from Vietnam. Sheriff’s deputies showed up unannounced at his home, falsely accused him of growing cannabis based on an erroneous SMUD tip, demanded entry for a search, and threatened him with arrest when he refused. He has never grown cannabis; he simply uses more electricity than average because of a spinal injury.

Last week, we filed our main brief explaining how this surveillance program violates the law and why it must be stopped. California’s state constitution bars unreasonable searches, and this type of dragnet surveillance—suspicionless searches of entire zip codes’ worth of customer energy data—is inherently unreasonable. Additionally, a state statute generally prohibits public utilities from sharing such data. As we write in our brief, Sacramento’s mass surveillance scheme does not qualify for any of the narrow exceptions to this rule.

Mass surveillance violates the privacy of many individuals, as police without individualized suspicion seek (possibly non-existent) evidence of some kind of offense by some unknown person. As we’ve seen time and time again, innocent people inevitably get caught in the dragnet. For decades, EFF has been exposing and fighting these kinds of dangerous schemes. We remain committed to protecting digital privacy, whether it’s being threatened by national governments – or your local power company.

Related Cases: Asian American Liberation Network v. SMUD, et al.

The unique, mathematical shortcuts language models use to predict dynamic scenarios

MIT Latest News - Mon, 07/21/2025 - 8:00am

Let’s say you’re reading a story, or playing a game of chess. You may not have noticed, but each step of the way, your mind kept track of how the situation (or “state of the world”) was changing. You can imagine this as a sort of sequence of events list, which we use to update our prediction of what will happen next.

Language models like ChatGPT also track changes inside their own “mind” when finishing off a block of code or anticipating what you’ll write next. They typically make educated guesses using transformers — internal architectures that help the models understand sequential data — but the systems are sometimes incorrect because of flawed thinking patterns. Identifying and tweaking these underlying mechanisms helps language models become more reliable prognosticators, especially with more dynamic tasks like forecasting weather and financial markets.

But do these AI systems process developing situations like we do? A new paper from researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science shows that the models instead use clever mathematical shortcuts between each progressive step in a sequence, eventually making reasonable predictions. The team made this observation by going under the hood of language models, evaluating how closely they could keep track of objects that change position rapidly. Their findings show that engineers can control when language models use particular workarounds as a way to improve the systems’ predictive capabilities.

Shell games

The researchers analyzed the inner workings of these models using a clever experiment reminiscent of a classic shell game. Ever had to guess the final location of an object after it’s placed under a cup and shuffled among identical containers? The team used a similar test, in which the model guessed the final arrangement of particular digits (also called a permutation). The models were given a starting sequence, such as “42135,” and instructions about when and where to move each digit, such as moving the “4” to the third position, and so on, without ever seeing the final result.

In these experiments, transformer-based models gradually learned to predict the correct final arrangements. Instead of shuffling the digits based on the instructions they were given, though, the systems aggregated information between successive states (or individual steps within the sequence) and calculated the final permutation.
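To make the setup concrete, here is a minimal sketch of the math underlying the task (the paper’s exact prompt format may differ): each instruction can be modeled as a permutation of positions, and applying the instructions one at a time gives the same answer as composing them first and applying the result once.

    # Hypothetical sketch of the digit-tracking task; the paper's exact
    # prompt format may differ. Each instruction is a permutation of
    # positions, and composing instructions first gives the same result
    # as simulating them one step at a time.

    def apply_perm(state, perm):
        # Position i of the result takes the digit at state[perm[i]].
        return "".join(state[j] for j in perm)

    def compose(p, q):
        # "p then q" as a single permutation: applying the result equals
        # applying p first, then q.
        return tuple(p[j] for j in q)

    state = "42135"
    steps = [(2, 0, 1, 3, 4), (0, 3, 2, 1, 4), (4, 1, 2, 3, 0)]  # invented moves

    # Step-by-step simulation, the way a human shell-game player tracks state.
    s = state
    for p in steps:
        s = apply_perm(s, p)

    # Composing all the moves first, then applying once, lands in the same place.
    total = steps[0]
    for p in steps[1:]:
        total = compose(total, p)
    assert apply_perm(state, total) == s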

One go-to pattern the team observed, called the “Associative Algorithm,” essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the “root.” As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together.
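Read this way, the tree is a balanced reduction over the instruction sequence. Below is a minimal sketch under the same representation, reusing compose() from the sketch above: since composition is associative, adjacent steps can be merged pairwise, level by level, until one permutation remains.

    # Sketch of the tree-shaped grouping suggested by the "Associative
    # Algorithm" description: merge adjacent steps pairwise, level by
    # level, until one permutation remains. compose() is the helper from
    # the previous sketch; associativity makes any grouping valid.

    def tree_reduce(perms):
        level = list(perms)
        while len(level) > 1:
            merged = [compose(level[i], level[i + 1])
                      for i in range(0, len(level) - 1, 2)]
            if len(level) % 2:          # an odd step carries up to the next level
                merged.append(level[-1])
            level = merged
        return level[0]

A fixed number of merge levels covers sequences whose length grows exponentially with depth, which dovetails with the depth-over-tokens point Li makes below.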

The other way language models guessed the final permutation was through a crafty mechanism called the “Parity-Associative Algorithm,” which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. Then, the mechanism groups adjacent sequences from different steps before multiplying them, just like the Associative Algorithm.
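The parity idea also has a compact formulation. As a sketch under the same representation: a permutation’s parity (whether it amounts to an even or odd number of swaps) falls out of its cycle structure, and the parity of a whole instruction sequence is just the XOR of the per-step parities.

    # Sketch of the parity feature the "Parity-Associative Algorithm" is
    # described as computing first. A cycle of length k takes k-1 swaps,
    # so a permutation's parity falls out of its cycle structure, and
    # parities of successive steps combine by XOR.

    def parity(perm):
        seen, odd = set(), 0
        for start in range(len(perm)):
            j, length = start, 0
            while j not in seen:        # walk one cycle of the permutation
                seen.add(j)
                j = perm[j]
                length += 1
            if length:
                odd ^= (length - 1) & 1
        return odd                      # 0 = even permutation, 1 = odd

    steps = [(2, 0, 1, 3, 4), (0, 3, 2, 1, 4), (4, 1, 2, 3, 0)]
    sequence_parity = 0
    for p in steps:
        sequence_parity ^= parity(p)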

“These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies,” says MIT PhD student and CSAIL affiliate Belinda Li SM ’23, a lead author on the paper. “How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes.”

“One avenue of research has been to expand test-time computing along the depth dimension, rather than the token dimension — by increasing the number of transformer layers rather than the number of chain-of-thought tokens during test-time reasoning,” adds Li. “Our work suggests that this approach would allow transformers to build deeper reasoning trees.”

Through the looking glass

Li and her co-authors observed how the Associative and Parity-Associative algorithms worked using tools that allowed them to peer inside the “mind” of language models. 

They first used a method called “probing,” which shows what information flows through an AI system. Imagine you could look into a model’s brain to see its thoughts at a specific moment — in a similar way, the technique maps out the system’s mid-experiment predictions about the final arrangement of digits.
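A linear probe of the kind usually meant here can be sketched in a few lines; the activations and labels below are random stand-ins rather than the paper’s data, so this only illustrates the recipe.

    # Minimal probing sketch with stand-in data: train a linear classifier
    # to read a tracked digit's current position out of a hidden-layer
    # activation. With real mid-sequence activations, high held-out
    # accuracy would mean the model's intermediate prediction is linearly
    # decodable at that layer; random stand-ins will only score at chance.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    acts = rng.normal(size=(1000, 512))       # (examples, hidden_dim), stand-in
    labels = rng.integers(0, 5, size=1000)    # position (0-4) of a tracked digit

    probe = LogisticRegression(max_iter=1000).fit(acts[:800], labels[:800])
    print("held-out accuracy:", probe.score(acts[800:], labels[800:]))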

A tool called “activation patching” was then used to show where the language model processes changes to a situation. It involves meddling with some of the system’s “ideas,” injecting incorrect information into certain parts of the network while keeping other parts constant, and seeing how the system will adjust its predictions.
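In practice, activation patching is often implemented with forward hooks. A minimal sketch, assuming a PyTorch model with a HuggingFace-style layer list (the layer index and module path are illustrative, not the paper’s setup):

    # Sketch of activation patching via a PyTorch forward hook: cache one
    # layer's output from a "corrupted" run, splice it into a clean run at
    # the same layer, and compare predictions. Model layout and layer
    # index below are illustrative assumptions.
    import torch

    def patch_layer(layer_module, cached_output):
        def hook(module, inputs, output):
            # Returning a value from a forward hook replaces the module's
            # output; everything else in the network runs untouched.
            if isinstance(output, tuple):
                return (cached_output,) + output[1:]
            return cached_output
        return layer_module.register_forward_hook(hook)

    # Illustrative usage, assuming a HuggingFace-style `model`:
    # handle = patch_layer(model.transformer.h[6], corrupted_act)
    # patched_logits = model(clean_input_ids).logits  # prediction with spliced "idea"
    # handle.remove()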

These tools revealed when the algorithms would make errors and when the systems “figured out” how to correctly guess the final permutations. They observed that the Associative Algorithm learned faster than the Parity-Associative Algorithm, while also performing better on longer sequences. Li attributes the latter’s difficulties with more elaborate instructions to an over-reliance on heuristics (or rules that allow us to compute a reasonable solution fast) to predict permutations.

“We’ve found that when language models use a heuristic early on in training, they’ll start to build these tricks into their mechanisms,” says Li. “However, those models tend to generalize worse than ones that don’t rely on heuristics. We found that certain pre-training objectives can deter or encourage these patterns, so in the future, we may look to design techniques that discourage models from picking up bad habits.”

The researchers note that their experiments were done on small-scale language models fine-tuned on synthetic data, but they found that model size had little effect on the results. This suggests that fine-tuning larger language models, like GPT-4.1, would likely yield similar results. The team plans to examine these hypotheses more closely by testing language models of different sizes that haven’t been fine-tuned, evaluating their performance on dynamic real-world tasks such as tracking code and following how stories evolve.

Harvard University postdoc Keyon Vafa, who was not involved in the paper, says that the researchers’ findings could create opportunities to advance language models. “Many uses of large language models rely on tracking state: anything from providing recipes to writing code to keeping track of details in a conversation,” he says. “This paper makes significant progress in understanding how language models perform these tasks. This progress provides us with interesting insights into what language models are doing and offers promising new strategies for improving them.”

Li wrote the paper with MIT undergraduate student Zifan “Carl” Guo and senior author Jacob Andreas, who is an MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. Their research was supported, in part, by Open Philanthropy, the MIT Quest for Intelligence, the National Science Foundation, the Clare Boothe Luce Program for Women in STEM, and a Sloan Research Fellowship.

The researchers presented their research at the International Conference on Machine Learning (ICML) this week.

Another Supply Chain Vulnerability

Schneier on Security - Mon, 07/21/2025 - 7:04am

ProPublica is reporting:

Microsoft is using engineers in China to help maintain the Defense Department’s computer systems—with minimal supervision by U.S. personnel—leaving some of the nation’s most sensitive data vulnerable to hacking from its leading cyber adversary, a ProPublica investigation has found.

The arrangement, which was critical to Microsoft winning the federal government’s cloud computing business a decade ago, relies on U.S. citizens with security clearances to oversee the work and serve as a barrier against espionage and sabotage...

For sale or lease: NASA satellites, slightly used

ClimateWire News - Mon, 07/21/2025 - 6:18am
Government spacecraft would be available for purchase under a plan being discussed by the Trump administration.

EPA shuffles major climate program office

ClimateWire News - Mon, 07/21/2025 - 6:17am
The office created under President Joe Biden to run the $27 billion Greenhouse Gas Reduction Fund has been turned into an oversight arm.

Texas GOP vows ‘serious’ flood response in special session

ClimateWire News - Mon, 07/21/2025 - 6:16am
Top Republicans say they're focused on the nuts and bolts of disaster policy in the aftermath of the deadly flash floods.

UN court to rule on countries’ duty to curb climate change

ClimateWire News - Mon, 07/21/2025 - 6:15am
The International Court of Justice heard testimony from more than 100 nations and organizations in the lead-up to its decision.

Counties urge Congress to reject legal immunity for fossil fuel industry

ClimateWire News - Mon, 07/21/2025 - 6:14am
The National Association of Counties passed a resolution opposing efforts to limit climate lawsuits against the oil and gas industry.

From green icon to housing villain: The fall of California’s landmark environmental law

ClimateWire News - Mon, 07/21/2025 - 6:09am
Democrats pared back one of the state's preeminent policies to restore trust with voters frustrated by the high cost of living.

Far-right lawmaker to lead talks on EU climate goal he called ‘utter madness’

ClimateWire News - Mon, 07/21/2025 - 6:09am
Parliament’s centrist and left-wing forces have vowed to try to stop Ondřej Knotek from stalling work on the 2040 target.

Trump’s tariffs push Asia toward undermining climate goals

ClimateWire News - Mon, 07/21/2025 - 6:08am
Asian countries are offering to buy more U.S. liquefied natural gas as a way to alleviate tensions over American trade deficits and forestall higher tariffs.

Analysts see ESG bond issuance dropping ‘considerably’ in 2025

ClimateWire News - Mon, 07/21/2025 - 6:07am
Interest in financial products claiming to target environmental, social and governance goals is flagging amid politically motivated attacks.

What Americans actually think about taxes

MIT Latest News - Mon, 07/21/2025 - 12:00am

Doing your taxes can feel like a very complicated task. Even so, it might be less intricate than trying to make sense of what people think about taxes.

Several years ago, MIT political scientist Andrea Campbell undertook an expansive research project to understand public opinion about taxation. Her efforts have now come to fruition in a new book uncovering many complexities in attitudes toward taxes. Those complexities include a central tension: In the U.S., most people say they support the principle of progressive taxation — in which higher earners pay higher shares of their income. Yet people also say they prefer specific forms of taxes that are regressive, hitting lower- and middle-income earners relatively harder.

For instance, state sales taxes are considered regressive: because people who make less money spend (rather than save) a larger percentage of their incomes, sales taxes eat up a larger proportion of their earnings. But a substantial portion of the public still finds them fair, partly because the wealthy cannot wriggle out of them.
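A quick hypothetical shows the arithmetic (the incomes, spending shares, and tax rate below are invented for illustration):

    # Invented numbers illustrating why a flat sales tax is regressive:
    # the household that spends most of its income pays a higher share of
    # that income in tax, even though both face the same 5% rate.
    SALES_TAX = 0.05

    for income, share_spent in [(30_000, 0.90), (300_000, 0.40)]:
        tax_paid = income * share_spent * SALES_TAX
        print(f"income ${income:,}: effective rate {tax_paid / income:.1%}")
    # -> roughly 4.5% of income for the lower earner, 2.0% for the higher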

“At an abstract or conceptual level, people say they like progressive tax systems more than flat or regressive tax systems,” Campbell says. “But when you look at public attitudes toward specific taxes, people’s views flip upside down. People say federal and state income taxes are unfair, but they say sales taxes, which are very regressive, are fair. Their attitudes on individual taxes are the opposite of what their overall commitments are.”

Now Campbell analyzes these issues in detail in her book, “Taxation and Resentment,” just published by Princeton University Press. Campbell is the Arthur and Ruth Sloan Professor of Political Science at MIT and a former head of MIT’s Department of Political Science.

Filling out the record

Campbell originally planned “Taxation and Resentment” as a strictly historically oriented look at the subject. But the absence of any single book compiling public-opinion data in this area was striking. So she assembled data going back to the end of World War II, and even designed and ran a couple of her own public opinion surveys, which help undergird the book’s numbers.

“Political scientists write a lot about public attitudes toward spending in the United States, but not so much about attitudes toward taxes,” Campbell says. “The public-opinion record is very thin.”

The complexities of U.S. public opinion on taxes are plainly linked to the presence of numerous forms of taxes, including federal and state income taxes, sales taxes, payroll taxes, estate taxes, and capital gains taxes. The best-known, of course, is the federal income tax, whose quirks and loopholes seem to irk citizens.

“That really seizes people’s imaginations,” Campbell says. “Keeping the focus on federal income tax has been a clever strategy among those who want to cut it. People think it’s unfair because they look at all the tax breaks the rich get and think, ‘I don’t have access to those.’ Those breaks increase complexity, undermine people’s knowledge, heighten their anger, and of course are in there because they help rich people pay less. So, there ends up being a cycle.”

That same sense of unfairness does not translate to all other forms of taxation, however. Large majorities of people have supported lowering the estate tax, for example, even though the federal estate tax kicks in only above a threshold — $13.5 million — that very few families ever reach.

Then too, the public seems to perceive sales taxes as being fair because of the simplicity and lack of loopholes — an understandable view, but one that ignores the way that state sales taxes, as opposed to state income taxes, place a bigger burden on middle-class and lower-income workers.

“A regressive tax like a sales tax is more difficult to comprehend,” Campbell says. “We all pay the same rate, so it seems like a flat tax, but as your income goes up, the bite of that tax goes down. And that’s just very difficult for people to understand.”

Overall, as Campbell details, income levels do not have huge predictive value when it comes to tax attitudes. Party affiliation also has less impact than many people might suspect — Democrats and Republicans differ on taxes, though not as much, in some ways, as political independents, who often have the most anti-tax views of all.

Meanwhile, Campbell finds, white Americans with heightened concerns about redistribution of public goods among varying demographic groups are more opposed to taxes than those who do not share those redistribution concerns. And Black and Hispanic Americans, who may wind up on the short end of regressive policies, also express significantly anti-tax perspectives, albeit while expressing more support for the state functions funded by taxation.

“There are so many factors and components of public opinion around taxes,” Campbell says. “Many political and demographic groups have their own reasons for disliking the status quo.”

How much does public opinion matter?

The research in “Taxation and Resentment” will be of high value to many kinds of scholars. However, as Campbell notes, political scientists do not have consensus about how much public opinion influences policy. Some experts contend that donors and lobbyists essentially determine policy while the larger public is ignored. But Campbell does not agree that public sentiment amounts to nothing. Consider, she says, the vigorous and successful public campaign to lower the estate tax in the first decade of the 2000s.

“If public opinion doesn’t matter, then why were there these PR campaigns to try to convince people the estate tax was bad for small businesses, farmers, and other groups?” Campbell asks. “Clearly it’s because public opinion does matter. It’s far easier to get these policies implemented if the public is on your side than if the public is in opposition. Public opinion is not the only factor in policymaking, but it’s a contributing factor.”

To be sure, even in the formation of public opinion, there are complexities and nuance, as Campbell notes in the book. A system of progressive taxation means the people taxed at the highest rate are the most motivated to oppose the system — and may heavily influence public opinion, in a top-down manner.

Scholars in the field have praised “Taxation and Resentment.” Martin Gilens, chair of the Department of Public Policy at the University of California at Los Angeles, has called it an “important and very welcome addition to the literature on public attitudes about public policies … with rich and often unexpected findings.” Vanessa Williamson, a senior fellow at the Brookings Institution, has said the book is “essential reading for anyone who wants to understand what Americans actually think about taxes. The scope of the data Campbell brings to bear on this question is unparalleled, and the depth of her analysis of public opinion across time and demography is a monumental achievement.”

For her part, Campbell says she hopes people in a variety of groups will read the book — including policymakers, scholars in multiple fields, and students. Having studied the issue, she certainly thinks more people could stand to know more about taxes.

“The tax system is complex,” Campbell says, “and people don’t always understand their own stakes. There is often a fog surrounding taxes.”

Friday Squid Blogging: The Giant Squid Nebula

Schneier on Security - Fri, 07/18/2025 - 5:06pm

Beautiful photo.

Difficult to capture, this mysterious, squid-shaped interstellar cloud spans nearly three full moons in planet Earth’s sky. Discovered in 2011 by French astro-imager Nicolas Outters, the Squid Nebula’s bipolar shape is distinguished here by the telltale blue emission from doubly ionized oxygen atoms. Though apparently surrounded by the reddish hydrogen emission region Sh2-129, the true distance and nature of the Squid Nebula have been difficult to determine. Still, one investigation suggests Ou4 really does lie within Sh2-129 some 2,300 light-years away. Consistent with that scenario, the cosmic squid would represent a spectacular outflow of material driven by a ...

EFF to Court: The DMCA Didn't Create a New Right of Attribution, You Shouldn't Either

EFF: Updates - Fri, 07/18/2025 - 4:37pm

Amid a wave of lawsuits targeting how AI companies use copyrighted works to train large language models that generate new works, a peculiar provision of copyright law is suddenly in the spotlight: Section 1202 of the Digital Millennium Copyright Act (DMCA). Section 1202 restricts intentionally removing or changing copyright management information (CMI), such as a signature on a painting or attached to a photograph. Passed in 1998, the rule was supposed to help rightsholders identify potentially infringing uses of their works and encourage licensing.

OpenAI and Microsoft used code from GitHub as part of the training data for their LLMs, along with billions of other works. A group of anonymous GitHub contributors sued, arguing that those LLMs generated new snippets of code that were substantially similar to theirs—but with the CMI stripped. Notably, they did not claim that the new code was copyright infringement—they are relying solely on Section 1202 of the DMCA. Their problem? The generated code is different from their original work, and courts across the U.S. have adopted an “identicality rule,” on the theory that Section 1202 applies only when CMI is removed from an existing work, not when it’s simply missing from a new one.

It may sound like an obscure legal question, but the outcome of this battle—currently before the Ninth Circuit Court of Appeals—could have far-reaching implications beyond generative AI technologies. If the rightsholders were correct, Section 1202 would effectively create a freestanding right of attribution, with potential liability even for non-infringing uses, such as fair use, if those new uses simply omit the CMI. While many fair users might ultimately escape liability under other limitations built into Section 1202, the looming threat of litigation, backed by the risk of high and unpredictable statutory penalties, would be enough to pressure many defendants to settle. Indeed, an entire legal industry of “copyright trolls” has emerged to exploit this dynamic, with no corresponding benefit to creativity or innovation.

Fortunately, as we explain in a brief filed today, the text of Section 1202 doesn’t support such an expansive interpretation. The provision repeatedly refers to “works” and “copies of works”—not “substantially similar” excerpts or new adaptations—and its focus on “removal or alteration” clearly contemplates actions taken with respect to existing works, not new ones. Congress could have chosen otherwise and written the law differently. Wisely it did not, thereby ensuring that rightsholders couldn’t leverage the omission of CMI to punish or unfairly threaten otherwise lawful re-uses of a work.

Given the proliferation of copyrighted works in virtually every facet of daily life, the last thing any court should do is give rightsholders a new, freestanding weapon against fair uses. As the Supreme Court once observed, copyright is a “tax on readers for the purpose of giving a bounty to writers.” That tax—including the expense of litigation—can be an important way to encourage new creativity, but it should not be levied unless the Copyright Act clearly requires it.

California A.B. 412 Stalls Out—A Win for Innovation and Fair Use

EFF: Updates - Fri, 07/18/2025 - 2:49pm

A.B. 412, the flawed California bill that threatened small developers in the name of AI “transparency,” has been delayed and turned into a two-year bill. That means it won’t move forward in 2025—a significant victory for innovation, freedom to code, and the open web.

EFF opposed this bill from the start. A.B. 412 tried to regulate generative AI, not by looking at the public interest, but by mandating training data “reading lists” designed to pave the way for new copyright lawsuits, many of which are filed by large content companies. 

Transparency in AI development is a laudable goal. But A.B. 412 failed to offer a fair or effective path to get there. Instead, it handed companies large and small the nearly impossible task of determining which training content was copyrighted and which wasn’t—with severe penalties for anyone who fell short. That would have entrenched the largest AI companies while freezing out smaller and noncommercial developers who might want to tweak or fine-tune AI systems for the public good.

The most interesting work in AI won’t necessarily come from the biggest companies. It will come from small teams: fine-tuning models for accessibility and privacy, and building tools that identify AI harms. And some of the most valuable work will be done using source code under permissive licenses.

A.B. 412 ignored those facts, and would have punished some of the most worthwhile projects. 

The Bill Blew Off Fair Use Rights

The question of whether—and how much—AI training qualifies as fair use is being actively litigated right now in federal courts. And so far, courts have found much of this work to be fair use. In a recent landmark AI case, Bartz v. Anthropic, for example, a federal judge found that AI training work is “transformative—spectacularly so.” He compared it to how search engines copy images and text in order to provide useful search results to users.

Copyright is federally governed. When states try to rewrite the rules, they create confusion—and more litigation that doesn’t help anyone.

If lawmakers want to revisit AI transparency, they need to do so without giving rights-holders a tool to weaponize copyright claims. That means rejecting A.B. 412’s approach—and crafting laws that protect speech, competition, and the public’s interest in a robust, open, and fair AI ecosystem. 

Amazon Ring Cashes in on Techno-Authoritarianism and Mass Surveillance

EFF: Updates - Fri, 07/18/2025 - 10:37am

Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him is the surveillance-first-privacy-last approach that made Ring one of the most maligned tech devices. Not only is the company reintroducing new versions of old features which would allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices. 

This is a bad, bad step for Ring and the broader public. 

Ring is rolling back many of the reforms it’s made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties. After all, police have used Ring footage to spy on protestors and have obtained footage without a warrant or the consent of the user. It is easy to imagine law enforcement officials using this renewed access to Ring footage to find people who have had abortions or to track people down for immigration enforcement.

Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device. 

It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted. 

Alongside these new features, Ring is also rolling back some of the necessary reforms it had made: it is partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and allowing users to consent to letting police livestream directly from their devices.

After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes: it introduced end-to-end encryption, ended its formal police partnerships, which were an ethical minefield, and retired the tool that funneled police requests for footage directly to customers. Now it is pivoting back to being a tool of mass surveillance.

Why now? The company’s “safety” justification is hard to believe when violent crime in the United States is near historic lows. It’s probably not about customers at all—the FTC had to compel Ring to take its users’ privacy seriously.

No, this is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties. Google likewise recently ended an old ethical commitment that prohibited it from profiting off of surveillance and warfare. Companies are locking down billion-dollar contracts by selling their products to the defense sector or police.

Shame on Ring.

MIT launches a “moonshot for menstruation science”

MIT Latest News - Fri, 07/18/2025 - 9:50am

The MIT Health and Life Sciences Collaborative (MIT HEALS) has announced the establishment of the Fairbairn Menstruation Science Fund, supporting a bold, high-impact initiative designed to revolutionize women’s health research.

Established through a gift from Emily and Malcolm Fairbairn, the fund will advance groundbreaking research on the function of the human uterus and its impact on sex-based differences in human immunology that contribute to gynecological disorders such as endometriosis, as well as other chronic systemic inflammatory diseases that disproportionately affect women, such as Lyme disease and lupus. The Fairbairns, based in the San Francisco Bay Area, have committed $10 million, with a call to action for an additional $10 million in matching funds.

“I’m deeply grateful to Emily and Malcolm Fairbairn for their visionary support of menstruation science at MIT. For too long, this area of research has lacked broad scientific investment and visibility, despite its profound impact on the health and lives of over half the population,” says Anantha P. Chandrakasan, MIT provost who was chief innovation and strategy officer and dean of engineering at the time of the gift, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

Chandrakasan adds: “Thanks to groundbreaking work from researchers like Professor Linda Griffith and her team at the MIT Center for Gynepathology Research (CGR), we have an opportunity to advance our understanding and address critical challenges in menstruation science.”

Griffith, professor of biological and mechanical engineering and director of CGR, says the Fairbairn Fund will permit the illumination of “the enormous sex-based differences in human immunity” and advance next-generation drug-discovery technologies.

One main thrust of the new initiative will further the development of “organs on chips,” living models of patients. Using living cells or tissues, such devices allow researchers to replicate and experiment with interactions that can occur in the body. Griffith and an interdisciplinary team of researchers have engineered a powerful microfluidic platform that supports chips that foster growth of tissues complete with blood vessels and circulating immune cells. The technology was developed for building endometriosis lesions from individual patients with known clinical characteristics. The chip allows the researchers to do preclinical testing of drugs on the human patient-derived endometriosis model rather than on laboratory animals, which often do not menstruate naturally and whose immune systems function differently than those of humans.

The Fairbairn Fund will build the infrastructure for a “living patient avatar” facility to develop such physiomimetic models for all kinds of health conditions.

“We acknowledge that there are some big-picture phenomenological questions that one can study in animals, but human immunology is so very different,” Griffith says. “Pharma and biotech realize that we need living models of patients and the computational models of carefully curated patient data if we are to move into greater success in clinical trials.”

The computational models of patient data that Griffith refers to are a key element in choosing how to design the patient avatars and determine which therapeutics to test on them. For instance, by using systems biology analysis of inflammation in patient abdominal fluid, Griffith and her collaborators identified an intracellular enzyme called jun kinase (JNK). They are now working with a biotech company to test specific inhibitors of JNK in their model. Griffith has also collaborated with Michal “Mikki” Tal, a principal scientist in MIT’s Department of Biological Engineering, on investigating a possible link between prior infection, such as by the Lyme-causing bacterium Borrelia, and a number of chronic inflammatory diseases in women. Automating assays of patient samples for higher throughput could systematically speed the generation of hypotheses guiding the development of patient model experimentation.

“This fund is catalytic,” Griffith says. “Industry and government, along with other foundations, will invest if the foundational infrastructure exists. They want to employ the technologies, but it is hard to get them developed to the point they are proven to be useful. This gets us through that difficult part of the journey.”

The fund will also support public engagement efforts to reduce stigma around menstruation and neglect of such conditions as abnormal uterine bleeding and debilitating anemia, endometriosis, and polycystic ovary syndrome — and in general bring greater attention to women’s health research. Endometriosis, for instance, in which tissue that resembles the uterine lining starts growing outside the uterus and causes painful inflammation, affects one in 10 women. It often goes undiagnosed for years, and can require repeated surgeries to remove its lesions. Meanwhile, little is known about what causes it, how to prevent it, or what could effectively stop it.

Women’s health research could further advance in many areas of medicine beyond conditions that disproportionately affect females. Griffith points out that the uterus, which sheds and regenerates its lining every month, demonstrates “scarless healing” that could warrant investigation. Also, deepened study of the uterus could shed light on immune tolerance for transplants, given that in a successful pregnancy an implanted fetus is not rejected, despite containing foreign material from the biological father.

For Emily Fairbairn, the fund is a critical step toward major advances in an often-overlooked area of medicine.

“My mission is to support intellectually honest, open-minded scientists who embrace risk, treat failure as feedback, and remain committed to discovery over dogma. This fund is a direct extension of that philosophy. It’s designed to fuel research into the biological realities of diseases that remain poorly understood, frequently dismissed, or disproportionately misdiagnosed in women,” Fairbairn says. “I’ve chosen to make this gift to MIT because Linda Griffith exemplifies the rare combination of scientific integrity and bold innovation — qualities essential for tackling the most neglected challenges in medicine.”

Fairbairn also refers to Griffith collaborator Michal Tal as being “deeply inspiring.”

“Her work embodies what’s possible when scientific excellence meets institutional courage. It is this spirit — bold, rigorous, and fearless — that inspired this gift and fuels our hope for the future of women’s health,” she says.

Fairbairn, who has suffered from both Lyme disease and endometriosis that required multiple surgeries, originally directed her philanthropy, including previous gifts to MIT, toward the study of Lyme disease and associated infections.

“My own experience with both Lyme and endometriosis deepened my conviction that science must better account for how female physiology, genetics, and psychology differ from men’s,” she says. “MIT stands out for treating women’s health not as a niche, but as a frontier. The Institute’s willingness to bridge immunology, neurobiology, bioengineering, and data science — alongside its development of cutting-edge platforms like human chips — offers a rare and necessary seriousness of purpose.”

For her part, Griffith refers to Fairbairn as “a citizen scientist who inspires us daily.”

“Her tireless advocacy for patients, especially women, who are dismissed and gas-lit, is priceless,” Griffith adds. “Emily has made me a better scientist, in service of humanity.”

New Mobile Phone Forensics Tool

Schneier on Security - Fri, 07/18/2025 - 7:07am

The Chinese have a new tool called Massistant.

  • Massistant is the presumed successor to Chinese forensics tool, “MFSocket”, reported in 2019 and attributed to publicly traded cybersecurity company, Meiya Pico.
  • The forensics tool works in tandem with a corresponding desktop software.
  • Massistant gains access to device GPS location data, SMS messages, images, audio, contacts and phone services.
  • Meiya Pico maintains partnerships with domestic and international law enforcement partners, both as a surveillance hardware and software provider, as well as through training programs for law enforcement personnel...
