Feed aggregator
Florida cities and counties sue over sweeping hurricane emergency law
Nations rethink plans for Brazil climate summit as costs soar
Suriname pledges to shield 90% of its forests
Deadly tropical storm from former typhoon rips through Vietnam
MIT joins in constructing the Giant Magellan Telescope
The following article is adapted from a joint press release issued today by MIT and the Giant Magellan Telescope.
MIT is lending its support to the Giant Magellan Telescope, joining the international consortium to advance the $2.6 billion observatory in Chile. The Institute’s participation, enabled by a transformational gift from philanthropists Phillip (Terry) Ragon ’72 and Susan Ragon, adds to the momentum to construct the Giant Magellan Telescope, whose 25.4-meter aperture will have five times the light-collecting area and up to 200 times the power of existing observatories.
“As philanthropists, Terry and Susan have an unerring instinct for finding the big levers: those interventions that truly transform the scientific landscape,” says MIT President Sally Kornbluth. “We saw this with their founding of the Ragon Institute, which pursues daring approaches to harnessing the immune system to prevent and cure human diseases. With today’s landmark gift, the Ragons enable an equally lofty mission to better understand the universe — and we could not be more grateful for their visionary support."
MIT will be the 16th member of the international consortium advancing the Giant Magellan Telescope and the 10th participant based in the United States. Together, the consortium has invested $1 billion in the observatory — the largest-ever private investment in ground-based astronomy. The Giant Magellan Telescope is already 40 percent under construction, with major components being designed and manufactured across 36 U.S. states.
“MIT is honored to join the consortium and participate in this exceptional scientific endeavor,” says Ian A. Waitz, MIT’s vice president for research. “The Giant Magellan Telescope will bring tremendous new capabilities to MIT astronomy and to U.S. leadership in fundamental science. The construction of this uniquely powerful telescope represents a vital private and public investment in scientific excellence for decades to come.”
MIT brings to the consortium powerful scientific capabilities and a legacy of astronomical excellence. MIT’s departments of Physics and of Earth, Atmospheric and Planetary Sciences, and the MIT Kavli Institute for Astrophysics and Space Research, are internationally recognized for research in exoplanets, cosmology, and environments of extreme gravity, such as black holes and compact binary stars. MIT’s involvement will strengthen the Giant Magellan Telescope’s unique capabilities in high-resolution spectroscopy, adaptive optics, and the search for life beyond Earth. It also deepens a long-standing scientific relationship: MIT is already a partner in the existing twin Magellan Telescopes at Las Campanas Observatory in Chile — one of the most scientifically valuable observing sites on Earth, and the same site where the Giant Magellan Telescope is now under construction.
“Since Galileo’s first spyglass, the world’s largest telescope has doubled in aperture every 40 to 50 years,” says Robert A. Simcoe, director of the MIT Kavli Institute and the Francis L. Friedman Professor of Physics. “Each generation’s leading instruments have resolved important scientific questions of the day and then surprised their builders with new discoveries not yet even imagined, helping humans understand our place in the universe. Together with the Giant Magellan Telescope, MIT is helping to realize our generation’s contribution to this lineage, consistent with our mission to advance the frontier of fundamental science by undertaking the most audacious and advanced engineering challenges.”
Contributing to the national strategy
MIT’s support comes at a pivotal time for the observatory. In June 2025, the National Science Foundation (NSF) advanced the Giant Magellan Telescope into its Final Design Phase, one of the final steps before it becomes eligible for federal construction funding. To demonstrate readiness and a strong commitment to U.S. leadership, the consortium offered to privately fund this phase, which is traditionally supported by the NSF.
MIT’s investment is an integral part of the national strategy to secure U.S. access to the next generation of research facilities known as “extremely large telescopes.” The Giant Magellan Telescope is a core partner in the U.S. Extremely Large Telescope Program, the nation’s top priority in astronomy. The National Academies’ Astro2020 Decadal Survey called the program “absolutely essential if the United States is to maintain a position as a leader in ground-based astronomy.” This long-term strategy also includes the recently commissioned Vera C. Rubin Observatory in Chile. Rubin is scanning the sky to detect rare, fast-changing cosmic events, while the Giant Magellan Telescope will provide the sensitivity, resolution, and spectroscopic instruments needed to study them in detail. Together, these Southern Hemisphere observatories will give U.S. scientists the tools they need to lead 21st-century astrophysics.
“Without direct access to the Giant Magellan Telescope, the U.S. risks falling behind in fundamental astronomy, as Rubin’s most transformational discoveries will be utilized by other nations with access to their own ‘extremely large telescopes’ under development,” says Walter Massey, board chair of the Giant Magellan Telescope.
MIT’s participation brings the United States a step closer to completing the promise of this powerful new observatory on a globally competitive timeline. With federal construction funding, it is expected that the observatory could reach 90 percent completion in less than two years and become operational by the 2030s.
“MIT brings critical expertise and momentum at a time when global leadership in astronomy hangs in the balance,” says Robert Shelton, president of the Giant Magellan Telescope. “With MIT, we are not just adding a partner; we are accelerating a shared vision for the future and reinforcing the United States’ position at the forefront of science.”
Other members of the Giant Magellan Telescope consortium include the University of Arizona, Carnegie Institution for Science, The University of Texas at Austin, Korea Astronomy and Space Science Institute, University of Chicago, São Paulo Research Foundation (FAPESP), Texas A&M University, Northwestern University, Harvard University, Astronomy Australia Ltd., Australian National University, Smithsonian Institution, Weizmann Institute of Science, Academia Sinica Institute of Astronomy and Astrophysics, and Arizona State University.
A boon for astrophysics research and education
Access to the world’s best optical telescopes is a critical resource for MIT researchers. More than 150 individual science programs at MIT have relied on major astronomical observatories in the past three years, engaging faculty, researchers, and students in investigations into the marvels of the universe. Recent research projects have included chemical studies of the universe’s oldest stars, led by Professor Anna Frebel; spectroscopy of stars shredded by dormant black holes, led by Professor Erin Kara; and measurements of a white dwarf teetering on the precipice of a black hole, led by Professor Kevin Burdge.
“Over many decades, researchers at the MIT Kavli Institute have used unparalleled instruments to discover previously undetected cosmic phenomena from both ground-based observations and spaceflight missions,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “I have no doubt our brilliant colleagues will carry on that tradition with the Giant Magellan Telescope, and I can’t wait to see what they will discover next.”
The Giant Magellan Telescope will also provide a platform for advanced R&D in remote sensing, creating opportunities to build custom infrared and optical spectrometers and high-speed imagers to further study our universe.
“One cannot have a leading physics program without a leading astrophysics program. Access to time on the Giant Magellan Telescope will ensure that future generations of MIT researchers will continue to work at the forefront of astrophysical discovery for decades to come,” says Deepto Chakrabarty, head of the MIT Department of Physics, the William A. M. Burden Professor in Astrophysics, and principal investigator at the MIT Kavli Institute. “Our institutional access will help attract and retain top researchers in astrophysics, planetary science, and advanced optics, and will give our PhD students and postdocs unrivaled educational opportunities.”
Protecting Access to the Law—and Beneficial Uses of AI
As the first copyright cases concerning AI reach appeals courts, EFF wants to protect important, beneficial uses of this technology—including AI for legal research. That’s why we weighed in on the long-running case of Thomson Reuters v. ROSS Intelligence. This case raises at least two important issues: the use of (possibly) copyrighted material to train a machine learning AI system, and public access to legal texts.
ROSS Intelligence was a legal research startup that built an AI-based tool for locating judges’ written opinions based on natural language queries—a competitor to ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. To build its tool, ROSS hired another firm to read through thousands of the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. ROSS used those paraphrases to train its tool. Importantly, the ROSS tool didn’t output any West headnotes, or even the paraphrases of those headnotes—it simply directed the user to the original judges’ decisions. Still, Thomson sued ROSS for copyright infringement, arguing that using the headnotes without permission was illegal.
Early decisions in the suit were encouraging. EFF wrote about how the court allowed ROSS to bring an antitrust counterclaim against Thomson Reuters, letting them try to prove that Thomson was abusing monopoly power. And the trial judge initially ruled that ROSS’s use of the West headnotes was fair use under copyright law.
The case then took turns for the worse. ROSS was unable to prove its antitrust claim. The trial judge issued a new opinion reversing his earlier decision and finding that ROSS’s use was not fair but rather infringed Thomson’s copyrights. And in the meantime, ROSS had gone out of business (though it continues to defend itself in court).
The court’s new decision on copyright was particularly worrisome. It ruled that West headnotes—a few lines of text copying or summarizing a single legal conclusion from a judge’s written opinion—could be copyrighted, and that using them to train the ROSS tool was not fair use, in part because ROSS was a competitor to Thomson Reuters. And the court rejected ROSS’s attempt to avoid any illegal copying by using a “clean room” procedure often used in software development. The decision also threatens to limit the public’s access to legal texts.
EFF weighed in with an amicus brief joined by the American Library Association, the Association of Research Libraries, the Internet Archive, Public Knowledge, and Public.Resource.Org. We argued that West headnotes are not copyrightable in the first place, since they simply restate individual points from judges’ opinions with no meaningful creative contributions. And even if copyright does attach to the headnotes, we argued, the source material is entirely factual statements about what the law is, and West’s contribution was minimal, so fair use should have tipped in ROSS’s favor. The trial judge had found that the factual nature of the headnotes favored ROSS, but dismissed this factor as unimportant, effectively writing it out of the law.
This case is one of the first to touch on copyright and AI, and is likely to influence many of the other cases that are already pending (with more being filed all the time). That’s why we’re trying to help the appeals court get this one right. The law should encourage the creation of AI tools to digest and identify facts for use by researchers, including facts about the law.
Synchronization of global peak river discharge since the 1980s
Nature Climate Change, Published online: 30 September 2025; doi:10.1038/s41558-025-02427-6
River floods that occur simultaneously in multiple locations can lead to higher damages than individual events. Here, the authors show that the likelihood of concurrent high river discharge has increased over the last decades.Responding to the climate impact of generative AI
In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.
The energy demands of generative AI are expected to continue increasing dramatically over the next decade.
For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.
Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.
These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.
Considering carbon emissions
Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions used by the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” which are emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.
Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)
Plus, data centers are enormous buildings — the world’s largest, the China Telecomm-Inner Mongolia Information Park, engulfs roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.
“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.
Reducing operational carbon emissions
When it comes to reducing operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.
“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.
In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.
Another strategy is to use less energy-intensive computing hardware.
Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.
But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.
There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.
Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.
“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.
Researchers can also take advantage of efficiency-boosting measures.
For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project.
By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.
Leveraging efficiency improvements
Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.
Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.
“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thomspon.
Even more significant, his group’s research indicates that efficiency gains from new model architectures that can solve complex problems faster, consuming less energy to achieve the same or better results, is doubling every eight or nine months.
Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements.
These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.
Maximizing energy savings
While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.
“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.
Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.
Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist in the MIT Energy Initiative.
Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.
“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.
He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.
The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.
With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.
“Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.
In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.
Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Lulea, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.
Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon where they could potentially be operated with nearly all renewable energy.
AI-based solutions
Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.
The local, state, and federal review processes required for a new renewable energy projects can take years.
Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.
For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.
And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.
“Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.
For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.
It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.
By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.
To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.
The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.
At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.
“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.
A beacon of light
Placing a lit candle in a window to welcome friends and strangers is an old Irish tradition that took on greater significance when Mary Robinson was elected president of Ireland in 1990. At the time, Robinson placed a lamp in Áras an Uachtaráin — the official residence of Ireland’s presidents — noting that the Irish diaspora and all others are always welcome in Ireland. Decades later, a lit lamp remains in a window in Áras an Uachtaráin.
The symbolism of Robinson’s lamp was shared by Hashim Sarkis, dean of the MIT School of Architecture and Planning (SA+P), at the school’s graduation ceremony in May, where Robinson addressed the class of 2025. To replicate the generous intentions of Robinson’s lamp and commemorate her visit to MIT, Sarkis commissioned a unique lantern as a gift for Robinson. He commissioned an identical one for his office, which is in the front portico of MIT at 77 Massachusetts Ave.
“The lamp will welcome all citizens of the world to MIT,” says Sarkis.
No ordinary lantern
The bespoke lantern was created by Marcelo Coelho SM ’08, PhD ’12, director of the Design Intelligence Lab and associate professor of the practice in the Department of Architecture.
One of several projects in the Geoletric research at the Design Intelligence Lab, the lantern showcases the use of geopolymers as a sustainable material alternative for embedded computers and consumer electronics.
“The materials that we use to make computers have a negative impact on climate, so we’re rethinking how we make products with embedded electronics — such as a lamp or lantern — from a climate perspective,” says Coelho.
Consumer electronics rely on materials that are high in carbon emissions and difficult to recycle. As the demand for embedded computing increases, so too does the need for alternative materials that have a reduced environmental impact while supporting electronic functionality.
The Geolectric lantern advances the formulation and application of geopolymers — a class of inorganic materials that form covalently bonded, non-crystalline networks. Unlike traditional ceramics, geopolymers do not require high-temperature firing, allowing electronic components to be embedded seamlessly during production.
Geopolymers are similar to ceramics, but have a lower carbon footprint and present a sustainable alternative for consumer electronics, product design, and architecture. The minerals Coelho uses to make the geopolymers — aluminum silicate and sodium silicate — are those regularly used to make ceramics.
“Geopolymers aren’t particularly new, but are becoming more popular,” says Coelho. “They have high strength in both tension and compression, superior durability, fire resistance, and thermal insulation. Compared to concrete, geopolymers don’t release carbon dioxide. Compared to ceramics, you don’t have to worry about firing them. What’s even more interesting is that they can be made from industrial byproducts and waste materials, contributing to a circular economy and reducing waste.”
The lantern is embedded with custom electronics that serve as a proximity and touch sensor. When a hand is placed over the top, light shines down the glass tubes.
The timeless design of the Geoelectric lantern — minimalist, composed of natural materials — belies its future-forward function. Coelho’s academic background is in fine arts and computer science. Much of his work, he says, “bridges these two worlds.”
Working at the Design Intelligence Lab with Coelho on the lanterns are Jacob Payne, a graduate architecture student, and Jean-Baptiste Labrune, a research affiliate.
A light for MIT
A few weeks before commencement, Sarkis saw the Geoelectric lantern in Palazzo Diedo Berggruen Arts and Culture in Venice, Italy. The exhibition, a collateral event of the Venice Biennale’s 19th International Architecture Exhibition, featured the work of 40 MIT architecture faculty.
The sustainability feature of Geolectric is the key reason Sarkis regarded the lantern as the perfect gift for Robinson. After her career in politics, Robinson founded the Mary Robinson Foundation — Climate Justice, an international center addressing the impacts of climate change on marginalized communities.
The third iteration of Geolectric for Sarkis’ office is currently underway. While the lantern was a technical prototype and an opportunity to showcase his lab’s research, Coelho — an immigrant from Brazil — was profoundly touched by how Sarkis created the perfect symbolism to both embody the welcoming spirit of the school and honor President Robinson.
“When the world feels most fragile, we need to urgently find sustainable and resilient solutions for our built environment. It’s in the darkest times when we need light the most,” says Coelho.
Towards the 10th Summit of the Americas: Concerns and Recommendations from Civil Society
This post is an adapted version of the article originally published at Silla Vacía
Heads of state and governments of the Americas will gather this December at the Tenth Summit of the Americas in the Dominican Republic to discuss challenges and opportunities facing the region’s nations. As part of the Summit of the Americas’ Process, which had its first meeting in 1994, the theme of this year’s summit is "Building a Secure and Sustainable Hemisphere with Shared Prosperity.”
More than twenty civil society organizations, including EFF, released a joint contribution ahead of the summit addressing the intersection between technology and human rights. Although the meeting's concept paper is silent about the role of digital technologies in the scope of this year's summit, the joint contribution stresses that the development and use of technologies is a cross-cutting issue and will likely be integrated into policies and actions agreed upon at the meeting.
Human Security, Its Core Dimensions, and Digital Technologies
The concept paper indicates that people in the Americas, like the rest of the world, are living in times of uncertainty and geopolitical, socioeconomic, and environmental challenges that require urgent actions to ensure human security in multiple dimensions. It identifies four key areas: citizen security, food security, energy security, and water security.
The potential of digital technologies cuts across these areas of concern and will very likely be considered in the measures, plans, and policies that states take up in the context of the summit, both at the national level and through regional cooperation. Yet, when harnessing the potential of emerging technologies, their challenges also surface. For example, AI algorithms can help predict demand peaks and manage energy flows in real time on power grids, but the infrastructure required for the growing and massive operation of AI systems itself poses challenges to energy security.
In Latin America, the imperative of safeguarding rights in the face of already documented risks and harmful impacts stands out particularly in citizen security. The abuse of surveillance powers, enhanced by digital technologies, is a recurring and widespread problem in the region.
It is intertwined with deep historical roots of a culture of secrecy and permissiveness that obstructs implementing robust privacy safeguards, effective independent oversight, and adequate remedies for violations. The proposal in the concept paper for creating a Hemispheric Platform of Action for Citizen and Community Security cannot ignore—and above all, must not reinforce—these problems.
It is crucial that the notion of security embedded in the Tenth Summit's focus on human security be based on human development, the protection of rights, and the promotion of social well-being, especially for historically discriminated against groups. It is also essential that it moves away from securitization and militarization, which have been used for social control, silencing dissent, harassing human rights defenders and community leaders, and restricting the rights and guarantees of migrants and people in situations of mobility.
Toward Regional Commitments Anchored in Human Rights
In light of these concerns, the joint contribution signed by EFF, Derechos Digitales, Wikimedia Foundation, CELE, ARTICLE 19 – Office for Mexico and Central America, among other civil society organizations, addresses the following:
-- The importance of strengthening the digital civic space, which requires robust digital infrastructure and policies for connectivity and digital inclusion, as well as civic participation and transparency in the formulation of public policies.
-- Challenges posed by the growing surveillance capabilities of states in the region through the increasing adoption of ever more intrusive technologies and practices without necessary safeguards.
-- State obligations established under the Inter-American Human Rights System and key standards affirmed by the Inter-American Court in the case of Members of the Jose Alvear Restrepo Lawyers Collective (CAJAR) v. Colombia.
-- A perspective on state digitalization and innovation centered on human rights, based on thorough analysis of current problems and gaps and their detrimental impacts on people. The insufficiency or absence of meaningful mechanisms for public participation, transparency, and evaluation are striking features of various experiences across countries in the Americas.
Finally, the contribution makes recommendations for regional cooperation, promoting shared solutions and joint efforts at the regional level anchored in human rights, justice, and inclusion.
We hope the joint contribution reinforces a human rights-based perspective across the debates and agreements at the summit. When security-related abuses abound facilitated by digital technologies, regional cooperation towards shared prosperity must take into account these risks and put justice and people's well-being at the center of any unfolding initiatives.
The first animals on Earth may have been sea sponges, study suggests
A team of MIT geochemists has unearthed new evidence in very old rocks suggesting that some of the first animals on Earth were likely ancestors of the modern sea sponge.
In a study appearing today in the Proceedings of the National Academy of Sciences, the researchers report that they have identified “chemical fossils” that may have been left by ancient sponges in rocks that are more than 541 million years old. A chemical fossil is a remnant of a biomolecule that originated from a living organism that has since been buried, transformed, and preserved in sediment, sometimes for hundreds of millions of years.
The newly identified chemical fossils are special types of steranes, which are the geologically stable form of sterols, such as cholesterol, that are found in the cell membranes of complex organisms. The researchers traced these special steranes to a class of sea sponges known as demosponges. Today, demosponges come in a huge variety of sizes and colors, and live throughout the oceans as soft and squishy filter feeders. Their ancient counterparts may have shared similar characteristics.
“We don’t know exactly what these organisms would have looked like back then, but they absolutely would have lived in the ocean, they would have been soft-bodied, and we presume they didn’t have a silica skeleton,” says Roger Summons, the Schlumberger Professor of Geobiology Emeritus in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
The group’s discovery of sponge-specific chemical fossils offers strong evidence that the ancestors of demosponges were among the first animals to evolve, and that they likely did so much earlier than the rest of Earth’s major animal groups.
The study’s authors, including Summons, are lead author and former MIT EAPS Crosby Postdoctoral Fellow Lubna Shawar, who is now a research scientist at Caltech, along with Gordon Love from the University of California at Riverside, Benjamin Uveges of Cornell University, Alex Zumberge of GeoMark Research in Houston, Paco Cárdenas of Uppsala University in Sweden, and José-Luis Giner of the State University of New York College of Environmental Science and Forestry.
Sponges on steroids
The new study builds on findings that the group first reported in 2009. In that study, the team identified the first chemical fossils that appeared to derive from ancient sponges. They analyzed rock samples from an outcrop in Oman and found a surprising abundance of steranes that they determined were the preserved remnants of 30-carbon (C30) sterols — a rare form of steroid that they showed was likely derived from ancient sea sponges.
The steranes were found in rocks that were very old and formed during the Ediacaran Period — which spans from roughly 541 million to about 635 million years ago. This period took place just before the Cambrian, when the Earth experienced a sudden and global explosion of complex multicellular life. The team’s discovery suggested that ancient sponges appeared much earlier than most multicellular life, and were possibly one of Earth’s first animals.
However, soon after these findings were released, alternative hypotheses swirled to explain the C30 steranes’ origins, including that the chemicals could have been generated by other groups of organisms or by nonliving geological processes.
The team says the new study reinforces their earlier hypothesis that ancient sponges left behind this special chemical record, as they have identified a new chemical fossil in the same Precambrian rocks that is almost certainly biological in origin.
Building evidence
Just as in their previous work, the researchers looked for chemical fossils in rocks that date back to the Ediacaran Period. They acquired samples from drill cores and outcrops in Oman, western India, and Siberia, and analyzed the rocks for signatures of steranes, the geologically stable form of sterols found in all eukaryotes (plants, animals, and any organism with a nucleus and membrane-bound organelles).
“You’re not a eukaryote if you don’t have sterols or comparable membrane lipids,” Summons says.
A sterol’s core structure consists of four fused carbon rings. Additional carbon side chain and chemical add-ons can attach to and extend a sterol’s structure, depending on what an organism’s particular genes can produce. In humans, for instance, the sterol cholesterol contains 27 carbon atoms, while the sterols in plants generally have 29 carbon atoms.
“It’s very unusual to find a sterol with 30 carbons,” Shawar says.
The chemical fossil the researchers identified in 2009 was a 30-carbon sterol. What’s more, the team determined that the compound could be synthesized because of the presence of a distinctive enzyme which is encoded by a gene that is common to demosponges.
In their new study, the team focused on the chemistry of these compounds and realized the same sponge-derived gene could produce an even rarer sterol, with 31 carbon atoms (C31). When they analyzed their rock samples for C31 steranes, they found it in surprising abundance, along with the aforementioned C30 steranes.
“These special steranes were there all along,” Shawar says. “It took asking the right questions to seek them out and to really understand their meaning and from where they come.”
The researchers also obtained samples of modern-day demosponges and analyzed them for C31 sterols. They found that, indeed, the sterols — biological precursors of the C31 steranes found in rocks — are present in some species of contemporary demosponges. Going a step further, they chemically synthesized eight different C31 sterols in the lab as reference standards to verify their chemical structures. Then, they processed the molecules in ways that simulate how the sterols would change when deposited, buried, and pressurized over hundreds of millions of years. They found that the products of only two such sterols were an exact match with the form of C31 sterols that they found in ancient rock samples. The presence of two and the absence of the other six demonstrates that these compounds were not produced by a random nonbiological process.
The findings, reinforced by multiple lines of inquiry, strongly support the idea that the steranes that were found in ancient rocks were indeed produced by living organisms, rather than through geological processes. What’s more, those organisms were likely the ancestors of demosponges, which to this day have retained the ability to produce the same series of compounds.
“It’s a combination of what’s in the rock, what’s in the sponge, and what you can make in a chemistry laboratory,” Summons says. “You’ve got three supportive, mutually agreeing lines of evidence, pointing to these sponges being among the earliest animals on Earth.”
“In this study we show how to authenticate a biomarker, verifying that a signal truly comes from life rather than contamination or non-biological chemistry,” Shawar adds.
Now that the team has shown C30 and C31 sterols are reliable signals of ancient sponges, they plan to look for the chemical fossils in ancient rocks from other regions of the world. They can only tell from the rocks they’ve sampled so far that the sediments, and the sponges, formed some time during the Ediacaran Period. With more samples, they will have a chance to narrow in on when some of the first animals took form.
This research was supported, in part, by the MIT Crosby Fund, the Distinguished Postdoctoral Fellowship program, the Simons Foundation Collaboration on the Origins of Life, and the NASA Exobiology Program.
EFF Urges Virgina Court of Appeals to Require Search Warrants to Access ALPR Databases
This post was co-authored by EFF legal intern Olivia Miller.
For most Americans—driving is a part of everyday life. Practically speaking, many of us drive to work, school, play, and anywhere in between. Not only do we visit places that give insights into our personal lives, but we sometimes use vehicles as a mode of displaying information about our political beliefs, socioeconomic status, and other intimate details.
All of this personal activity can be tracked and identified through Automatic License Plate Reader (ALPR) data—a popular surveillance tool used by law enforcement agencies across the country. That’s why, in an amicus brief filed with the Virginia Court of Appeals, EFF, the ACLU of Virginia, and NACDL urged the court to require police to seek a warrant before searching ALPR data.
In Commonwealth v. Church, a police officer in Norfolk, Virginia searched license plate data without a warrant—not to prove that defendant Ronnie Church was at the scene of the crime, but merely to try to show he had a “guilty mind.” The lower court, in a one-page ruling relying on Commonwealth v. Bell, held this warrantless search violated the Fourth Amendment and suppressed the ALPR evidence. We argued the appellate court should uphold this decision.
Like the cellphone location data the Supreme Court protected in Carpenter v. United States, ALPR data threatens peoples’ privacy because it is collected indiscriminately over time and can provide police with a detailed picture of a person’s movements. ALPR data includes photos of license plates, vehicle make and model, any distinctive features of the vehicle, and precise time and location information. Once an ALPR logs a car’s data, the information is uploaded to the cloud and made accessible to law enforcement agencies at the local, state, and federal level—creating a near real-time tracking tool that can follow individuals across vast distances.
Think police only use ALPRs to track suspected criminals? Think again. ALPRs are ubiquitous; every car traveling into the camera’s view generates a detailed dataset, regardless of any suspected criminal activity. In fact, a survey of 173 law enforcement agencies employing ALPRs nationwide revealed that 99.5% of scans belonged to people who had no association to crime.
Norfolk County, Virginia, is home to over 170 ALPR cameras operated by Flock, a surveillance company that maintains over 83,000 ALPRs nationwide. The resulting surveillance network is so large that Norfolk county’s police chief suggested “it would be difficult to drive any distance and not be recorded by one.”
Recent and near-horizon advancements in Flock’s products will continue to threaten our privacy and further the surveillance state. For example, Flock’s ALPR data has been used for immigration raids, to track individuals seeking abortion-related care, to conduct fishing expeditions, and to identify relationships between people who may be traveling together but in different cars. With the help of artificial intelligence, ALPR databases could be aggregated with other information from data breaches and data brokers, to create “people lookup tools.” Even public safety advocates and law enforcement, like the International Association of Chiefs of Police, have warned that ALPR tech creates a risk “that individuals will become more cautious in their exercise of their protected rights of expression, protest, association, political participation because they consider themselves under constant surveillance.”
This is why a warrant requirement for ALPR data is so important. As the Virginia trial court previously found in Bell, prolonged tracking of public movements with surveillance invades peoples’ reasonable expectation of privacy in the entirety of their movements. Recent Fourth Amendment jurisprudence, including Carpenter and Leaders of a Beautiful Struggle from the federal Fourth Circuit Court of Appeals favors a warrant requirement as well. Like the technologies at issue in those cases, ALPRs give police the ability to chronicle movements in a “detailed, encyclopedic” record, akin to “attaching an ankle monitor to every person in the city.”
The Virginia Court of Appeals has a chance to draw a clear line on warrantless ALPR surveillance, and to tell Norfolk PD what the Fourth Amendment already says: come back with a warrant.
Chat Control Is Back on the Menu in the EU. It Still Must Be Stopped
The European Union Council is once again debating its controversial message scanning proposal, aka “Chat Control,” that would lead to the scanning of private conversations of billions of people.
Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.
Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which scans for specific content on a device before it’s sent. In practice, Chat Control is chat surveillance and functions by having access to everything on a device with indiscriminate monitoring of everything. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.
This is absurd.
We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.
If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.
This doesn’t just affect people in the EU, it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversation of everyone in the EU. If you’re not in the EU, but you chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian and tyrannical governments around the world to follow suit with their own demands for access to encrypted communication apps.
Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”
Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.
We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.
Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.
Further reading:
After Years Behind Bars, Alaa Is Free at Last
Alaa Abd El Fattah is finally free and at home with his family. On September 22, it was announced that Egyptian President Abdel Fattah al-Sisi had issued a pardon for Alaa’s release after six years in prison. One day later, the BBC shared video of Alaa dancing with his family in their Cairo home and hugging his mother Laila and sister Sanaa, as well as other visitors.
Alaa's sister, Mona Seif, posted on X: "An exceptionally kind day. Alaa is free."
Alaa has spent most of the last decade behind bars, punished for little more than his words. In June 2014, Egypt accused him of violating its protest law and attacking a police officer. He was convicted in absentia and sentenced to fifteen years in prison, after being prohibited from entering the courthouse. Following an appeal, Alaa was granted a retrial, and sentenced in February 2015 to five years in prison. In 2019, he was finally released, first into police custody then to his family. As part of his parole, he was told he would have to spend every night of the next five years at a police station, but six months later—on September 29, 2019—Alaa was re-arrested in a massive sweep of activists and charged with spreading false news and belonging to a terrorist organisation after sharing a Facebook post about torture in Egypt.
Despite that sentence effectively ending on September 29, 2024, one year ago today, Egyptian authorities continued his detention, stating that he would be released in January 2027—violating both international legal norms and Egypt’s own domestic law. As Amnesty International reported, Alaa faced inhumane conditions during his imprisonment, “including denial of access to lawyers, consular visits, fresh air, and sunlight,” and his family repeatedly spoke of concerns about his health, particularly during periods in which he engaged in hunger strike.
When Egyptian authorities failed to release Alaa last year, his mother, Laila Soueif, launched a hunger strike. Her action stretched to an astonishing 287 days, during which she was hospitalized twice in London and nearly lost her life. She continued until July of this year, when she finally ended the strike following direct commitments from UK officials that Alaa would be freed.
Throughout this time, a broad coalition, including EFF, rallied around Alaa: international human rights organizations, senior UK parliamentarians, former British Ambassador John Casson, and fellow former political prisoner Nazanin Zaghari-Ratcliffe all lent their voices. Celebrities joined the call, while the UN Working Group on Arbitrary Detention declared his imprisonment unlawful and demanded his release. This groundswell of solidarity was decisive in securing his release.
Alaa’s release is an extraordinary relief for his family and all who have campaigned on his behalf. EFF wholeheartedly celebrates Alaa’s freedom and reunification with his family.
But we must remain vigilant. Alaa must be allowed to travel to the UK to be reunited with his son Khaled, who currently lives with his mother and attends school there. Furthermore, we continue to press for the release of those who remain imprisoned for nothing more than exercising their right to speak.
Abusing Notion’s AI Agent for Data Theft
Notion just released version 3.0, complete with AI agents. Because the system contains Simon Willson’s lethal trifecta, it’s vulnerable to data theft though prompt injection.
First, the trifecta:
The lethal trifecta of capabilities is:
- Access to your private data—one of the most common purposes of tools in the first place!
- Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
- The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)...