Feed aggregator
States shrug at Trump’s order targeting their climate laws
Clean energy transition will persist under Trump, analyses say
Judge orders Trump admin to take ‘immediate steps’ to resume climate funding
Trump’s fossil fuel fundamentalism is heading for the UK
Australia emitters rely on credits to meet new climate goals
Bank climate group says strategy pivot wins ‘overwhelming support’
Amendment to Peruvian law raises fears of rainforest destruction
Spring drought threatens Europe’s farms and rivers
Beneath the biotech boom
It’s considered a scientific landmark: A 1975 meeting at the Asilomar Conference Center in Pacific Grove, California, shaped a new safety regime for recombinant DNA, ensuring that researchers would apply caution to gene splicing. Those ideas have been so useful that in the decades since, when new topics in scientific safety arise, there are still calls for Asilomar-type conferences to craft good ground rules.
There’s something missing from this narrative, though: It took more than the Asilomar conference to set today’s standards. The Asilomar concepts were created with academic research in mind — but the biotechnology industry also makes products, and standards for that were formulated after Asilomar.
“The Asilomar meeting and Asilomar principles did not settle the question of the safety of genetic engineering,” says MIT scholar Robin Scheffler, author of a newly published research paper on the subject.
Instead, as Scheffler documents in the paper, Asilomar helped generate further debate, but those industry principles were set down later in the 1970s — first in Cambridge, Massachusetts, where politicians and concerned citizens wanted local biotech firms to be good neighbors. In response, the city passed safety laws for the emerging industry. And rather than heading off to places with zero regulations, local firms — including a fledgling Biogen — stayed put. Over the decades, the Boston area became the world leader in biotech.
Why stay? In essence, regulations gave biotech firms the certainty they needed to grow — and build. Lenders and real-estate developers needed signals that long-term investment in labs and facilities made sense. Generally, as Scheffler notes, even though “the idea that regulations can be anchoring for business does not have a lot of pull” in economic theory, in this case, regulations did matter.
“The trajectory of the industry in Cambridge, including biotechnology companies deciding to accommodate regulation, is remarkable,” says Scheffler. “It’s hard to imagine the American biotechnology industry without this dense cluster in Boston and Cambridge. These things that happened on a very local scale had huge echoes.”
Scheffler’s article, “Asilomar Goes Underground: The Long Legacy of Recombinant DNA Hazard Debates for the Greater Boston Area Biotechnology Industry,” appears in the latest issue of the Journal of the History of Biology. Scheffler is an associate professor in MIT’s Program in Science, Technology, and Society.
Business: Banking on certainty
To be clear, the Asilomar conference of 1975 did produce real results. Asilomar led to a system that helped evaluate projects’ potential risk and determine appropriate safety measures. The U.S. federal government subsequently adopted Asilomar-like principles for research it funded.
But in 1976, debate over the subject arose again in Cambridge, especially following a cover story in a local newspaper, the Boston Phoenix. Residents became concerned that recombinant DNA projects would lead to, hypothetically, new microorganisms that could damage public health.
“Scientists had not considered urban public health,” Scheffler says. “The Cambridge recombinant DNA debate in the 1970s made it a matter of what your neighbors think.”
After several months of hearings, research, and public debate (sometimes involving MIT faculty) stretching into early 1977, Cambridge adopted a somewhat stricter framework than the federal government had proposed for the handling of materials used in recombinant DNA work.
“Asilomar took on a new life in local regulations,” says Scheffler, whose research included government archives, news accounts, industry records, and more.
But a funny thing happened after Cambridge passed its recombinant DNA rules: The nascent biotech industry took root, and other area towns passed their own versions of the Cambridge rules.
“Not only did cities create more safety regulations,” Scheffler observes, “but the people asking for them switched from being left-wing activists or populist mayors to the Massachusetts Biotechnology Council and real estate development concerns.”
Indeed, he adds, “What’s interesting is how quickly safety concerns about recombinant DNA evaporated. Many people against recombinant DNA came to change their thinking.” And while some local residents continued to express concerns about the environmental impact of labs, “those are questions people ask when they no longer worry about the safety of the core work itself.”
Unlike federal regulations, these local laws applied to not only lab research but also products, and as such they let firms know they could work in a stable business environment with regulatory certainty. That mattered financially, and in a specific way: It helped companies build the buildings they needed to produce the products they had invented.
“The venture capital cycle for biotechnology companies was very focused on the research and exciting intellectual ideas, but then you have the bricks and mortar,” Scheffler says, referring to biotech production facilities. “The bricks and mortar is actually the harder problem for a lot of startup biotechnology companies.”
After all, he notes, “Venture capital will throw money after big discoveries, but a banker issuing a construction loan has very different priorities and is much more sensitive to things like factory permits and access to sewers 10 years from now. That’s why all these towns around Massachusetts passed regulations, as a way of assuring that.”
To grow globally, act locally
Of course, one additional reason biotech firms decided to land in the Boston area was the intellectual capital: With so many local universities, there was a lot of industry talent in the region. Local faculty co-founded some of the high-flying firms.
“The defining trait of the Cambridge-Boston biotechnology cluster is its density, right around the universities,” Scheffler says. “That’s a unique feature local regulations encouraged.”
It’s also the case, Scheffler notes, that some biotech firms did engage in venue-shopping to avoid regulations at first, although that was more the case in California, another state where the industry emerged. Still, the Boston-area regulations seemed to assuage both industry and public worries about the subject.
The foundations of biotechnology regulation in Massachusetts contain some additional historical quirks, including the time in the late 1970s when the city of Cambridge mistakenly omitted the recombinant DNA safety rules from its annually published bylaws, meaning the regulations were inactive. Officials at Biogen sent them a reminder to restore the laws to the books.
Half a century on from Asilomar, its broad downstream effects are not just a set of research principles but also, refracted through the Cambridge episode, key ideas about public discussion and input; reducing uncertainty for business; the particular financing needs of industries; the impact of local and regional regulation; and the openness of startups to recognizing what might help them thrive.
“It’s a different way to think about the legacy of Asilomar,” Scheffler says. “And it’s a real contrast with what some people might expect from following scientists alone.”
A faster way to solve complex planning problems
When some commuter trains arrive at the end of the line, they must travel to a switching platform to be turned around so they can depart the station later, often from a different platform than the one at which they arrived.
Engineers use software programs called algorithmic solvers to plan these movements, but at a station with thousands of weekly arrivals and departures, the problem becomes too complex for a traditional solver to unravel all at once.
Using machine learning, MIT researchers have developed an improved planning system that reduces the solve time by up to 50 percent and produces a solution that better meets a user’s objective, such as on-time train departures. The new method could also be used for efficiently solving other complex logistical problems, such as scheduling hospital staff, assigning airline crews, or allotting tasks to factory machines.
Engineers often break these kinds of problems down into a sequence of overlapping subproblems that can each be solved in a feasible amount of time. But the overlaps cause many decisions to be needlessly recomputed, so it takes the solver much longer to reach an optimal solution.
The new, artificial intelligence-enhanced approach learns which parts of each subproblem should remain unchanged, freezing those variables to avoid redundant computations. Then a traditional algorithmic solver tackles the remaining variables.
“Often, a dedicated team could spend months or even years designing an algorithm to solve just one of these combinatorial problems. Modern deep learning gives us an opportunity to use new advances to help streamline the design of these algorithms. We can take what we know works well, and use AI to accelerate it,” says Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).
She is joined on the paper by lead author Sirui Li, an IDSS graduate student; Wenbin Ouyang, a CEE graduate student; and Yining Ma, a LIDS postdoc. The research will be presented at the International Conference on Learning Representations.
Eliminating redundancy
One motivation for this research is a practical problem identified by a master's student, Devin Camille Wilkins, in Wu's entry-level transportation course. Wilkins wanted to apply reinforcement learning to a real train-dispatch problem at Boston's North Station. The transit organization needs to assign many trains to a limited number of platforms where they can be turned around well in advance of their arrival at the station.
This turns out to be a very complex combinatorial scheduling problem — the exact type of problem Wu’s lab has spent the past few years working on.
When faced with a long-term problem that involves assigning a limited set of resources, like factory tasks, to a group of machines, planners often frame the problem as Flexible Job Shop Scheduling.
In Flexible Job Shop Scheduling, each task needs a different amount of time to complete, but tasks can be assigned to any machine. At the same time, each task is composed of operations that must be performed in the correct order.
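To make the setup concrete, here is a toy Flexible Job Shop instance in Python. The data and the greedy dispatch rule are illustrative assumptions for this sketch, not the solvers or instances from the paper: each job is an ordered list of operations, and each operation carries machine-dependent durations.

```python
# Toy Flexible Job Shop instance: jobs[j] is an ordered list of operations;
# each operation maps eligible machine -> duration on that machine.
jobs = [
    [{0: 3, 1: 2}, {0: 2, 1: 4}],   # job 0: two operations in precedence order
    [{0: 2, 1: 3}],                 # job 1: one operation
]

def greedy_schedule(jobs, n_machines):
    """Assign each operation, in job order, to the machine that lets it
    finish earliest. Returns (plan, makespan), where plan holds tuples
    (job, op, machine, start, end)."""
    machine_free = [0] * n_machines      # time each machine becomes free
    job_ready = [0] * len(jobs)          # earliest start of each job's next op
    plan = []
    for j, ops in enumerate(jobs):
        for o, choices in enumerate(ops):
            # pick the eligible machine with the earliest finish time
            best = min(
                choices,
                key=lambda m: max(machine_free[m], job_ready[j]) + choices[m],
            )
            start = max(machine_free[best], job_ready[j])
            end = start + choices[best]
            machine_free[best] = end
            job_ready[j] = end
            plan.append((j, o, best, start, end))
    return plan, max(machine_free)
```

A real solver searches over interleavings rather than dispatching greedily, which is exactly why these problems blow up in size.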
Such problems quickly become too large and unwieldy for traditional solvers, so users can employ rolling horizon optimization (RHO) to break the problem into manageable chunks that can be solved faster.
With RHO, a user assigns an initial few tasks to machines in a fixed planning horizon, perhaps a four-hour time window. Then, they execute the first task in that sequence and shift the four-hour planning horizon forward to add the next task, repeating the process until the entire problem is solved and the final schedule of task-machine assignments is created.
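In miniature, that loop looks like the following sketch. The placeholder `solve_window` (sorting by duration) stands in for the real algorithmic solver, and the window sizes are arbitrary assumptions:

```python
def solve_window(tasks):
    # Placeholder solver: "schedules" a window by sorting on duration.
    return sorted(tasks, key=lambda t: t["duration"])

def rolling_horizon(tasks, horizon=4, commit=2):
    """Rolling horizon optimization in outline: solve a lookahead window,
    commit (execute) only the earliest decisions, then slide forward."""
    pending = list(tasks)
    schedule = []
    while pending:
        window = pending[:horizon]            # lookahead window
        solved = solve_window(window)
        done = solved[: min(commit, len(solved))]
        schedule.extend(done)                 # execute only the first few
        for t in done:
            pending.remove(t)                 # the rest re-enter the next window
    return schedule
```

Note that tasks near the end of each window get re-solved in the next one; that overlap is the redundancy the MIT method targets.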
A planning horizon should be longer than any one task’s duration, since the solution will be better if the algorithm also considers tasks that will be coming up.
But when the planning horizon advances, this creates some overlap with operations in the previous planning horizon. The algorithm already came up with preliminary solutions to these overlapping operations.
“Maybe these preliminary solutions are good and don’t need to be computed again, but maybe they aren’t good. This is where machine learning comes in,” Wu explains.
For their technique, which they call learning-guided rolling horizon optimization (L-RHO), the researchers teach a machine-learning model to predict which operations, or variables, should be recomputed when the planning horizon rolls forward.
L-RHO requires data to train the model, so the researchers solved a set of subproblems using a classical algorithmic solver. They took the best solutions — the ones with the most operations that don't need to be recomputed — and used these as training data.
Once trained, the machine-learning model receives a new subproblem it hasn’t seen before and predicts which operations should not be recomputed. The remaining operations are fed back into the algorithmic solver, which executes the task, recomputes these operations, and moves the planning horizon forward. Then the loop starts all over again.
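Schematically, one roll of that loop might look like the sketch below. The "predictor" here is a stand-in heuristic and the solver a trivial placeholder, both my own assumptions; the real system uses a trained machine-learning model and a classical solver:

```python
def predict_freeze(prev_assignment, overlap_ops):
    """Stand-in for the learned model: 'freeze' any overlapping operation
    whose previous machine assignment has an even index (arbitrary rule)."""
    return {op for op in overlap_ops if prev_assignment[op] % 2 == 0}

def solve_subproblem(ops, fixed):
    """Placeholder for the classical solver: frozen operations keep their
    previous machine; free operations are (arbitrarily) put on machine 0."""
    return {op: fixed.get(op, 0) for op in ops}

def lrho_step(prev_assignment, overlap_ops, new_ops):
    """One roll of the horizon: freeze predicted-stable overlap variables,
    then re-solve only the remaining overlap plus the new operations."""
    frozen = predict_freeze(prev_assignment, overlap_ops)
    fixed = {op: prev_assignment[op] for op in frozen}
    return solve_subproblem(overlap_ops + new_ops, fixed)
```

The payoff is that the solver's search space shrinks by every variable the predictor correctly freezes.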
“If, in hindsight, we didn’t need to reoptimize them, then we can remove those variables from the problem. Because these problems grow exponentially in size, it can be quite advantageous if we can drop some of those variables,” she adds.
An adaptable, scalable approach
To test their approach, the researchers compared L-RHO to several baseline algorithmic solvers, specialized solvers, and approaches that only use machine learning. It outperformed them all, reducing solve time by 54 percent and improving solution quality by up to 21 percent.
In addition, their method continued to outperform all baselines when they tested it on more complex variants of the problem, such as when factory machines break down or when there is extra train congestion. It even outperformed additional baselines the researchers created to challenge their solver.
“Our approach can be applied without modification to all these different variants, which is really what we set out to do with this line of research,” she says.
L-RHO can also adapt if the objectives change, automatically generating a new algorithm to solve the problem — all it needs is a new training dataset.
In the future, the researchers want to better understand the logic behind their model’s decision to freeze some variables, but not others. They also want to integrate their approach into other types of complex optimization problems like inventory management or vehicle routing.
This work was supported, in part, by the National Science Foundation, MIT’s Research Support Committee, an Amazon Robotics PhD Fellowship, and MathWorks.
Carbon uptake rate dominates changes in vegetation productivity over time
Nature Climate Change, Published online: 16 April 2025; doi:10.1038/s41558-025-02316-y
In the past decades, the duration and rate of carbon uptake have increased, enhancing ecosystem productivity. The uptake rate has a larger effect than the duration on the temporal changes in productivity. Changes in productivity during the early and the late growing seasons are asymmetric, owing to inconsistent changes in the duration of carbon uptake over time.
The impact of Antarctic ice-shelf cavities on Earth system dynamics
Nature Climate Change, Published online: 16 April 2025; doi:10.1038/s41558-025-02307-z
An Earth system model including Antarctic ice-shelf cavities is used to explore the response and feedback of Antarctic basal melt in various climate scenarios. The inclusion of ice-shelf cavities provides more comprehensive insight into Southern Ocean dynamics and could improve future climate models.
EFF Urges Court to Avoid Fair Use Shortcuts in Kadrey v. Meta Platforms
EFF has filed an amicus brief in Kadrey v. Meta, one of the many ongoing copyright lawsuits against AI developers. Most of the AI copyright cases raise an important new issue: whether the copying necessary to train a generative AI model is a non-infringing fair use.
Kadrey, however, attempts to side-step fair use. The plaintiffs—including Sarah Silverman and other authors—sued Meta for allegedly using BitTorrent to download “pirated” copies of their books to train Llama, a large language model. In other words, their legal claims challenge how Meta obtained the training materials, not what it did with them.
But some of the plaintiffs’ arguments, if successful, could harm AI developers’ defenses in other cases, where fair use is directly at issue.
How courts decide this issue will profoundly shape the future of this transformative technology, including its capabilities, its costs, and whether its evolution will be shaped by the democratizing forces of the open market or the whims of an oligopoly.
A question this important deserves careful consideration on a full record—not the hyperbolic cries of “piracy” and the legal shortcuts that the plaintiffs in this case are seeking. As EFF explained to the court, the question of whether fair use applies to training generative AI is far too important to decide based on Kadrey’s back-door challenge.
And, as EFF explained, whether a developer can legally train an AI on a wide variety of creative works shouldn’t turn on which technology they used to obtain those materials. As we wrote in our brief, the “Court should not allow the tail of Meta’s alleged BitTorrent use to wag the dog of the important legal questions this case presents. Nor should it accept Plaintiffs’ invitation to let hyperbole about BitTorrent and 'unmitigated piracy' derail the thoughtful and fact-specific fair use analysis the law requires.”
We also urged the court to reject the plaintiffs’ attempt to create a carve out in copyright law for copies obtained using “BitTorrent.”
This dangerous argument seeks to categorically foreclose the possibility that even the most transformative, socially beneficial uses—such as AI training—could be fair use.
As EFF explained in its brief, adopting an exemption from the flexible, fact-specific fair use analysis for “BitTorrent,” “internet piracy,” “P2P downloading,” or something else, would defeat the purpose of the fair use doctrine as a safeguard for the application of copyright law to new technologies.
Bridging Earth and space, and art and science, with global voices
On board Intuitive Machines’ Athena spacecraft, which made a moon landing on March 6, were cutting-edge MIT payloads: a depth-mapping camera and a mini-rover called “AstroAnt.” Also on that craft were the words and voices of people from around the world speaking in dozens of languages. These were etched on a 2-inch silicon wafer computationally designed by Professor Craig Carter of the MIT Department of Materials Science and Engineering and mounted on the mission’s Lunar Outpost MAPP Rover.
Dubbed the Humanity United with MIT Art and Nanotechnology in Space (HUMANS), the project is a collaboration of art and science, bringing together experts from across MIT — with technical expertise from the departments of Aeronautics and Astronautics, Mechanical Engineering, and Electrical Engineering and Computer Science; nano-etching and testing from MIT.nano; audio processing from the MIT Media Lab’s Opera of the Future and the Music and Theater Arts Section; and lunar mission support from the Media Lab’s Space Exploration Initiative.
While a 6-inch HUMANS wafer flew on the Axiom-2 mission to the International Space Station in 2023, the 2-inch wafer was a part of the IM-2 mission to the lunar south polar region, linked to the MIT Media Lab’s To the Moon to Stay program, which reimagines humankind’s return to the moon. IM-2 ended prematurely after the Athena spacecraft tipped onto its side shortly after landing in March, but the HUMANS wafer fulfilled its mission by successfully reaching the lunar surface.
“If you ask a person on the street: ‘What does MIT do?’ Well, that person might say they’re a bunch of STEM nerds who make devices and create apps. But that’s not the entire MIT. It’s more multifaceted than that,” Carter says. “This project embodies that. It says, ‘We’re not just one-trick ponies.’”
A message etched in silicon
The HUMANS project, initially conceived of by MIT students, was inspired by the Golden Record, a pair of gold-plated phonograph records launched in 1977 aboard the Voyager 1 and 2 spacecraft, with human voices, music, and images. Designed to explore the outer solar system, the Voyagers have since traveled into interstellar space, beyond the sun’s heliosphere. But while the earlier project was intended to introduce humanity to an extraterrestrial audience, the HUMANS message is directed at fellow human beings — reminding us that space belongs to all.
Maya Nasr PhD ’23, now a researcher at Harvard University, has led the project since 2020, when she was a graduate student in the MIT Department of Aeronautics and Astronautics. She co-founded it with Lihui Zhang SM ’21, from the MIT Technology and Policy Program. The team invited people to share what space means to them, in writing or audio, to create a “symbol of unity that promotes global representation in space.”
When Nasr and Zhang sought an expert to translate their vision into a physical artifact, they turned to Carter, who had previously created the designs and algorithms for many art projects and, most recently, for One.MIT, a series of mosaics composed of the names of MIT faculty, students, and staff. Carter quickly agreed.
“I love figuring out how to turn equations into code, into artifacts,” Carter says. “Whether they’re art or not is a difficult question. They’re definitely artful. They’re definitely artisanal.”
Carter played a pivotal role in the computational design and fabrication of the silicon wafer now on the surface of the moon. He first translated the submitted phrases, in 64 languages, into numerical representations that could be turned into fonts. He also reverse-engineered a typesetting language to “kern” the text — adjusting the spacing between letters for visual clarity.
“Kerning is important for the aesthetics of written text. You’d want a Y to be not-too-close to a neighboring T, but farther from a W,” Carter said. “All of the phrases were sequences of words like D-O-G, and it’s not as simple as, put a D, put an O, put a G. It’s put a D, figure out where the O should be, put the O, figure out where the G should be, put the G.”
After refining the text placement, Carter designed an algorithm that geometrically transformed both the text and the audio messages’ digital waveforms — graphical representations of sound — into spirals on the wafer. The design pays homage to the Voyagers’ Golden Records, which featured spiral grooves, much like a vinyl record.
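As a rough illustration of the general idea — a toy construction of my own, not Carter's actual algorithm — a one-dimensional waveform can be wrapped onto an Archimedean spiral by advancing each sample around the curve and offsetting its radius by the sample's amplitude:

```python
import math

def spiral_points(waveform, turns=3, base_radius=1.0, amplitude=0.1):
    """Map a 1-D waveform onto an Archimedean spiral: sample i advances in
    angle, the radius grows with each turn, and the sample value perturbs
    the radius so the signal is visible as wiggle in the spiral groove."""
    pts = []
    n = len(waveform)
    for i, s in enumerate(waveform):
        theta = 2 * math.pi * turns * i / n
        r = base_radius + theta / (2 * math.pi) + amplitude * s
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

The actual wafer design additionally had to express everything as geometry suitable for nano-etching rather than as pixels.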
In the center of the disc is an image of a globe, or map projection — Carter found publicly available geospatial coordinates and mapped them into the design.
“I took those coordinates and then created something like an image from the coordinates. It had to be geometry, not pixels,” he says.
Once the spirals and globe imagery were in place, Carter handed the data for the design to MIT.nano, which has specialized instruments for high-precision etching and fabrication.
Human voices, lunar surface
“I hope people on Earth feel a deep sense of connection and belonging — that their voices, stories, and dreams are now part of this new chapter in lunar exploration,” Nasr says. “When we look at the moon, we can feel an even deeper connection, knowing that our words — in all their diversity — are now part of its surface, carrying the spirit of humanity forward.”
For Carter, the project conveys the human capacity for wonder and a shared sense of what’s possible. “In many cases, looking outward forces you to look inward at the same time to put the wonder in some kind of personal context,” Carter says. “So if this project somehow conveys that we are all wondering about this marvelous universe together in all of our languages, I would consider that a victory.”
The project’s link to the Golden Record — an artifact launched nearly 50 years ago and now traveling beyond the solar system — strikes another chord with Carter.
“It’s unimaginably far away, and so the notion that we can connect to something in time and space, to something that’s out there, I think it is just a wonderful connection.”
Slopsquatting
As AI coding assistants invent nonexistent software libraries to download and use, enterprising attackers create and upload libraries with those names—laced with malware, of course.
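One simple defense is to gate installs on an explicit allowlist, so a hallucinated or typosquatted name fails loudly instead of being fetched from a public registry. A minimal sketch — the allowlist contents here are assumptions for illustration, not a recommended set:

```python
# Packages this project has actually vetted (illustrative entries only).
TRUSTED_PACKAGES = {"requests", "numpy", "flask"}

def vet_dependency(name: str) -> bool:
    """Accept only exact, case-insensitive matches against the allowlist,
    rejecting hallucinated or typosquatted package names."""
    return name.lower() in TRUSTED_PACKAGES
```

In practice teams often enforce this kind of policy with lockfiles or a private package mirror rather than ad hoc checks.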
Privacy on the Map: How States Are Fighting Location Surveillance
Your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. It can expose where you work, where you pray, who you spend time with, and, sometimes dangerously, where you seek healthcare. In today's world, your most private movements are harvested, aggregated, and sold to anyone with a credit card. For those seeking reproductive or gender-affirming care, or visiting a protest or an immigration law clinic, this data is a ticking time bomb.
Last year, we sounded the alarm, urging lawmakers to protect individuals from the growing threats of location tracking tools—tools that are increasingly being used to target and criminalize people seeking essential reproductive healthcare.
The good news? Lawmakers in California, Massachusetts, Illinois, and elsewhere are stepping up, leading the way to protect privacy and ensure that healthcare access and other exercises of our rights remain safe from invasive surveillance.
The Dangers of Location Data
Imagine this: you leave your home in Alabama, drop your kids off at daycare, and then drive across state lines to visit an abortion clinic in Florida. You spend two hours there before driving back home. Along the way, you used your phone’s GPS app to navigate or a free radio app to listen to the news. Unbeknownst to you, this “free” app tracked your entire route and sold it to a data broker. That broker then mapped your journey and made it available to anyone who would pay for it. This is exactly what happened when privacy advocates used a tool called Locate X, developed by Babel Street, to track a person’s device as they traveled from Alabama—where abortion is completely banned—to Florida, where abortion access is severely restricted but still available.
Despite this tool being marketed as solely for law enforcement use, private investigators were able to access it by falsely claiming they would work with law enforcement, revealing a major flaw in our data privacy system. In a time when government surveillance of private personal decisions is on the rise, the fact that law enforcement (and adversaries pretending to be law enforcement) can access these tools puts our personal privacy in serious danger.
The unregulated market for location data enables anyone, from law enforcement to anti-abortion groups, to access and misuse this sensitive information. For example, a data broker called Near Intelligence sold location data of people visiting Planned Parenthood clinics to an anti-abortion group. Likewise, law enforcement in Idaho used cell phone location data to charge a mother and her son with “aiding and abetting” abortion, a clear example of how this information can be weaponized to enforce abortion restrictions for patients and anyone else in their orbit.
States Taking Action
As we’ve seen time and time again, the collection and sale of location data can be weaponized to target many vulnerable groups—immigrants, the LGBTQ+ community, and anyone seeking reproductive healthcare. In response to these growing threats, states like California, Massachusetts, and Illinois are leading the charge by introducing bills aimed at regulating the collection and use of location data.
These bills are a powerful response to the growing threat. The bills are grounded in well-established principles of privacy law, including informed consent and data minimization, and they ensure that only essential data is collected, and that it’s kept secure. Importantly, they give residents—whether they reside in the state or are traveling from other states—the confidence to exercise their rights (such as seeking health care) without fear of surveillance or retaliation.
This post outlines some of the key features of these location data privacy laws, to show authors and advocates of legislative proposals how best to protect their communities. Specifically, we recommend:
- Strong definitions,
- Clear rules,
- Affirmation that all location data is sensitive,
- Empowerment of consumers through a strong private right of action,
- Prohibition of “pay-for-privacy” schemes, and
- Transparency through clear privacy policies.
Effective location privacy legislation starts with clear definitions. Without them, courts may interpret key terms too narrowly—weakening the law's intent. And in the absence of clear judicial guidance, regulated entities may exploit ambiguity to sidestep compliance altogether.
The following are some good definitions from the recent bills:
- In the Massachusetts bill, "consent" must be “freely given, specific, informed, unambiguous, [and] opt-in.” Further, it must be free from dark patterns—ensuring people truly understand what they’re agreeing to.
- In the Illinois bill, a “covered entity” includes all manner of private actors, including individuals, corporations, and associations, exempting only individuals acting in noncommercial contexts.
- "Location information" must clearly refer to data derived from a device that reveals the past or present location of a person or device. The Massachusetts bill sets a common radius in defining protected location data: 1,850 feet (about one-third of a mile). The California bill goes much bigger: five miles. EFF has supported both radii.
- A “permissible purpose” (which is key to the minimization rule) should be narrowly defined to include only: (1) delivering a product or service that the data subject asked for, (2) fulfilling an order, (3) complying with federal or state law, or (4) responding to an imminent threat to life.
“Data minimization” is the privacy principle that corporations and other private actors must not process a person’s data except as necessary to give them what they asked for, with narrow exceptions. A virtue of this rule is that a person does not need to do anything in order to enjoy their statutory privacy rights; the burden is on the data processor to process less data. Together, these definitions and rules create a framework that ensures privacy is the default, not the exception.
One key data minimization rule, as in the Massachusetts bill, is: “It shall be unlawful for a covered entity to collect or process an individual’s location data except for a permissible purpose.” Read along with the definition above, this across-the-board rule means a covered entity can only collect or process someone’s location data to fulfill their request (with exceptions for emergencies and compliance with federal and state law).
Additional data minimization rules, as in the Illinois bill, back this up by restraining particular data practices:
- Covered entities cannot collect more precise data than strictly necessary, or use location data to make inferences beyond what is needed to provide the service.
- Data must be deleted once it’s no longer necessary for the permissible purpose.
- No selling, renting, trading, or leasing location data – full stop.
- No disclosure of location data to government, except with a warrant, as required by state or federal law, on request of the data subject, or an emergency threat of serious bodily injury or death (defined to not include abortion).
- No other disclosure of location data, except as required for a permissible purpose or when requested by the individual.
The California bill rests largely on data minimization rules like these. The Illinois and Massachusetts bills place an additional limit: no collection or processing of location data absent opt-in consent from the data subject. Critically, consent in these two bills is not an exception to the minimization rule, but rather an added requirement. EFF has supported both models of data privacy legislation: just a minimization requirement; and paired minimization and consent requirements.
All Location Data is Sensitive

To best safeguard against invasive location tracking, it’s essential to place legal restrictions on the collection and use of all location data—not just data associated with sensitive places like reproductive health clinics. Narrow protections may offer partial help, but they fall short of full privacy.
Consider the example at the beginning of the blog: if someone travels from Alabama to Florida for abortion care, and the law only shields data at sensitive sites, law enforcement in Alabama could still trace their route from home up to near the clinic. Once the person enters a protected “healthcare” zone, their device would vanish from view temporarily, only to reappear shortly after they leave. This gap in the tracking data could make it relatively easy to deduce where they were during that time, essentially revealing their clinic visit.
To avoid this kind of loophole, the most effective approach is to limit the collection and retention of all location data—no exceptions. This is the approach in all three of the bills highlighted in this post: California, Illinois, and Massachusetts.
Empowering Consumers Through a Strong PRA

To truly protect people’s location privacy, legislation must include a strong private right of action (PRA)—giving individuals the power to sue companies that violate their rights. A private right of action ensures companies can’t ignore the law and empowers people to seek justice directly when their sensitive data is misused. This is a top priority for EFF in any data privacy legislation.
The bills in Illinois and Massachusetts offer strong models. They make clear that any violation of the law is an injury and allow individuals to bring civil suits: “A violation of this [law] … regarding an individual’s location information constitutes an injury to that individual. … Any individual alleging a violation of this [law] … may bring a civil action …” Further, these bills provide a baseline amount of damages (sometimes called “liquidated” or “statutory” damages), because an invasion of statutory privacy rights is a real injury, even if it is hard for the injured party to prove out-of-pocket expenses from theft, bodily harm, or the like. Absent this kind of statutory language, some victims of privacy violations will lose their day in court.
These bills also override mandatory arbitration clauses that limit access to court. Corporations should not be able to avoid being sued by forcing their customers to sign lengthy contracts that nobody reads.
Other remedies include actual damages, punitive damages, injunctive relief, and attorney’s fees. These provisions give the law real teeth and ensure accountability can’t be signed away in fine print.
No Pay-for-Privacy Schemes

Strong location data privacy laws must protect everyone equally—and that means rejecting “pay-for-privacy” schemes that allow companies to charge users for basic privacy protections. Privacy is a fundamental right, not a luxury add-on or subscription perk. Allowing companies to offer privacy only to those who can afford to pay creates a two-tiered system where low-income individuals are forced to trade away their sensitive location data in exchange for access to essential services. These schemes also pressure everyone to give up their privacy.
Legislation should make clear that companies cannot condition privacy protections on payment, loyalty programs, or any other exchange of value. This ensures that everyone—regardless of income—has equal protection from surveillance and data exploitation. Privacy rights shouldn’t come with a price tag.
We commend this language from the Illinois and Massachusetts bills:
A covered entity may not take adverse action against an individual because the individual exercised or refused to waive any of such individual’s rights under [this law], unless location data is essential to the provision of the good, service, or service feature that the individual requests, and then only to the extent that this data is essential. This prohibition includes, but is not limited to: (1) refusing to provide a good or service to the individual; (2) charging different prices or rates for goods or services, including through the use of discounts or other benefits or imposing penalties; or (3) providing a different level of quality of goods or services to the individual.
Transparency Through Clear Privacy Policies

It is helpful for data privacy laws to require covered entities to be transparent about their data practices. All three bills discussed in this post require covered entities to make available a privacy policy to the data subject—a solid baseline. This ensures that people aren’t left in the dark about how their location data is being collected, used, or shared. Clear, accessible policies are a foundational element of informed consent and give individuals the information they need to protect themselves and assert their rights.
It is also helpful for privacy laws like these to require covered entities to prominently publish their privacy policies on their websites. This allows all members of the public – as well as privacy advocates and government enforcement agencies – to track whether data processors are living up to their promises.
Next Steps: More States Must Join

The bottom line is clear: location data is highly sensitive, and without proper protections, it can be used to harm those who are already vulnerable. The digital trail we leave behind can reveal far more than we think, and without laws in place to protect us, we are all at risk.
While some states are making progress, much more needs to be done. More states need to follow suit by introducing and passing legislation that protects location data privacy. We cannot allow location tracking to be used as a tool for harassment, surveillance, or criminalization.
To help protect your digital privacy while we wait for stronger privacy protection laws, we’ve published a guide specifically for how to minimize intrusion from Locate X, and have additional tips on EFF’s Surveillance Self-Defense site. Many general privacy practices also offer strong protection against location tracking.
If you live in California, Illinois, Massachusetts – or any state that has yet to address location data privacy – now is the time to act. Contact your lawmakers and urge them to introduce or support bills that protect our sensitive data from exploitation. Demand stronger privacy protections for all, and call for more transparency and accountability from companies that collect and sell location data. Together, we can create a future where individuals are free to travel without the threat of surveillance and retaliation.
MIT Lincoln Laboratory is a workhorse for national security
In 1949, the U.S. Air Force called upon MIT with an urgent need. Soviet aircraft carrying atomic bombs were capable of reaching the U.S. homeland, and the nation was defenseless. A dedicated center — MIT Lincoln Laboratory — was established. The brightest minds from MIT came together in service to the nation, making scientific and engineering leaps to prototype the first real-time air defense system. The commercial sector and the U.S. Department of Defense (DoD) then produced and deployed the system, called SAGE, continent-wide.
The SAGE story still describes MIT Lincoln Laboratory’s approach to national security innovation today. The laboratory works with DoD agencies to identify challenging national security gaps, determines if technology can contribute to a solution, and then executes an R&D program to advance critical technologies. The principal products of these programs are advanced technology prototypes, which are often rapidly fabricated and demonstrated through test and evaluation.
Throughout this process, the laboratory closely coordinates with the DoD and other federal agency sponsors, and then transfers the technology in many forms to industry for manufacturing at scale to meet national needs. For nearly 75 years, these technologies have saved lives, responded to emergencies, fueled the nation’s economy, and impacted the daily life of Americans and our allies.
"Lincoln Laboratory accelerates the pace of national security technology development, in partnership with the government, private industry, and the broader national security ecosystem," says Melissa Choi, director of MIT Lincoln Laboratory. "We integrate high-performance teams with advanced facilities and the best technology available to bring novel prototypes to life, providing lasting benefits to the United States."
The Air Force and MIT recently renewed their contract for the continued operation of Lincoln Laboratory. The contract was awarded by the Air Force Lifecycle Management Center Strategic Services Division on Hanscom Air Force Base for a term of five years, with an option for an additional five years. Since Lincoln Laboratory’s founding, MIT has operated the laboratory in the national interest for no fee and strictly on a cost-reimbursement basis. The contract award is indicative of the DoD’s continuing recognition of the long-term value of, and necessity for, cutting-edge R&D in service of national security.
Critical contributions to national security
MIT Lincoln Laboratory is the DoD’s largest federally funded research and development center (FFRDC). Sponsored by the under secretary of defense for research and engineering, it contributes to a broad range of national security missions and domains.
Among the most critical domains are air and missile defense. Laboratory researchers pioneer advanced radar systems and algorithms crucial for detecting, tracking, and targeting ballistic missiles and aircraft, and serve as scientific advisors to the Reagan Test Site. They also conduct comprehensive studies on missile defense needs, such as the recent National Defense Authorization Act–directed study on the defense of Guam, and provide actionable insights to Congress.
MIT Lincoln Laboratory is also at the forefront of space systems and technologies, enabling the military to monitor space activities and communicate at very high bandwidths. Laboratory engineers developed the innovatively curved detector within the Space Surveillance Telescope that allows the U.S. Space Force to track tiny space objects. The laboratory also operates the world's highest-resolution long-range radar for imaging satellites. Recently, the laboratory worked closely with NASA to demonstrate laser communications systems in space, setting a record for the fastest satellite downlink and farthest lasercom link ever achieved. These breakthroughs are heralding a new era in satellite communications for defense and civil missions.
Perhaps most importantly, MIT Lincoln Laboratory is asked to rapidly prototype solutions to urgent and emerging threats. These solutions are both transferred to industry for production and fielded directly to war-fighters, saving lives. To combat improvised explosive devices in Iraq and Afghanistan, the laboratory quickly and iteratively developed several novel systems to detect and defeat explosive devices and insurgent networks. When insurgents were attacking forward-operating bases at night, the laboratory developed an advanced infrared camera system to prevent the attacks. Like other multi-use technologies developed at the laboratory, that system led to a successful commercial startup, which was recently acquired by Anduril.
Responding to domestic crises is also a key part of the laboratory’s mission. After the attacks of Sept. 11, 2001, the laboratory quickly integrated a system to defend the airspace around critical locations in the capital region. More recently, the laboratory’s application of AI to video forensics and physical screening has resulted in commercialized systems deployed in airports and mass transit settings. Over the last decade, the laboratory has adapted its technology for many other homeland security needs, including responses to natural disasters. As one example, researchers repurposed a world-class lidar system first used by the military for terrain mapping to quickly quantify damage after hurricanes.
For all of these efforts, the laboratory exercises responsible stewardship of taxpayer funds, identifying multiple uses for the technologies it develops and introducing disruptive approaches to reduce costs for the government. Sometimes, the system architecture or design results in cost savings, as is the case with the U.S. Air Force's SensorSat; the laboratory’s unique sensor design enabled a satellite 10 times smaller and cheaper than those typically used for space surveillance. Another approach is by creating novel systems from low-cost components. For instance, laboratory researchers discovered a way to make phased-array radars using cell phone electronics instead of traditional expensive components, greatly reducing the cost of deploying the radars for weather and aircraft surveillance.
The laboratory also pursues emerging technology to bring about transformative solutions. In the 1960s, such vision brought semiconductor lasers into the world, and in the 1990s it shrank transistors more than industry imagined possible. Today, laboratory staff are pursuing other new realms: making imagers reconfigurable at the pixel level, designing quantum sensors to transform navigation technology, and developing superconducting electronics to improve computing efficiency.
A long, beneficial relationship between MIT and the DoD
"Lincoln Laboratory has created a deep understanding and knowledge base in core national security missions and associated technologies. We look forward to continuing to work closely with government sponsors, industry, and academia through our trusted, collaborative relationships to address current and future national security challenges and ensure technological superiority," says Scott Anderson, assistant director for operations at MIT Lincoln Laboratory.
"MIT has always been proud to support the nation through its operation of Lincoln Laboratory. The long-standing relationship between MIT and the Department of Defense through this storied laboratory has been a difference-maker for the safety, economy, and industrial power of the United States, and we look forward to seeing the innovations ahead of us," notes Ian Waitz, MIT vice president for research.
Under the terms of the renewed contract, MIT will ensure that Lincoln Laboratory remains ready to meet R&D challenges that are critical to national security.