Feed aggregator
A greener way to 3D print stronger stuff
3D printing has come a long way since its invention in 1983 by Chuck Hull, who pioneered stereolithography, a technique that solidifies liquid resin into solid objects using ultraviolet lasers. Over the decades, 3D printers have evolved from experimental curiosities into tools capable of producing everything from custom prosthetics to complex food designs, architectural models, and even functioning human organs.
But as the technology matures, its environmental footprint has become increasingly difficult to set aside. The vast majority of consumer and industrial 3D printing still relies on petroleum-based plastic filament. And while “greener” alternatives made from biodegradable or recycled materials exist, they come with a serious trade-off: they’re often not as strong. These eco-friendly filaments tend to become brittle under stress, making them ill-suited for structural applications or load-bearing parts — exactly where strength matters most.
This trade-off between sustainability and mechanical performance prompted researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Hasso Plattner Institute to ask: Is it possible to build objects that are mostly eco-friendly, but still strong where it counts?
Their answer is SustainaPrint, a new software and hardware toolkit designed to help users strategically combine strong and weak filaments to get the best of both worlds. Instead of printing an entire object with high-performance plastic, the system analyzes a model through finite element analysis simulations, predicts where the object is most likely to experience stress, and then reinforces just those zones with stronger material. The rest of the part can be printed using greener, weaker filament, reducing plastic use while preserving structural integrity.
“Our hope is that SustainaPrint can be used in industrial and distributed manufacturing settings one day, where local material stocks may vary in quality and composition,” says MIT PhD student and CSAIL researcher Maxine Perroni-Scharf, who is a lead author on a paper presenting the project. “In these contexts, the testing toolkit could help ensure the reliability of available filaments, while the software’s reinforcement strategy could reduce overall material consumption without sacrificing function.”
For their experiments, the team used Polymaker’s PolyTerra PLA as the eco-friendly filament, and standard or Tough PLA from Ultimaker for reinforcement. They used a 20 percent reinforcement threshold to show that even a small amount of strong plastic goes a long way. Using this ratio, SustainaPrint was able to recover up to 70 percent of the strength of an object printed entirely with high-performance plastic.
They printed dozens of objects, from simple mechanical shapes like rings and beams to more functional household items such as headphone stands, wall hooks, and plant pots. Each object was printed three ways: once using only eco-friendly filament, once using only strong PLA, and once with the hybrid SustainaPrint configuration. The printed parts were then mechanically tested by pulling, bending, or otherwise breaking them to measure how much force each configuration could withstand.
In many cases, the hybrid prints held up nearly as well as the full-strength versions. For example, in one test involving a dome-like shape, the hybrid version outperformed the version printed entirely in Tough PLA. The team believes this may be due to the reinforced version’s ability to distribute stress more evenly, avoiding the brittle failure sometimes caused by excessive stiffness.
“This indicates that in certain geometries and loading conditions, mixing materials strategically may actually outperform a single homogenous material,” says Perroni-Scharf. “It’s a reminder that real-world mechanical behavior is full of complexity, especially in 3D printing, where interlayer adhesion and tool path decisions can affect performance in unexpected ways.”
A lean, green, eco-friendly printing machine
SustainaPrint starts off by letting a user upload their 3D model into a custom interface and select fixed regions and areas where forces will be applied. The software then uses an approach called finite element analysis (FEA) to simulate how the object will deform under stress. It creates a map of the stress distribution inside the structure, highlighting areas under compression or tension, and applies heuristics to segment the object into two categories: regions that need reinforcement, and regions that don’t.
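To illustrate the segmentation step, here is a minimal, hypothetical sketch of how a per-element stress field from an FEA solve could be split into reinforcement and eco-friendly zones with a percentile cutoff. The function name, the use of von Mises stress, and the 20 percent default are illustrative assumptions, not SustainaPrint's actual implementation.

```python
import numpy as np

def segment_by_stress(von_mises_stress, reinforce_fraction=0.20):
    """Split mesh elements into reinforcement vs. eco-friendly zones.

    von_mises_stress: 1-D array of per-element stress values from an FEA
    solve (hypothetical input; the real pipeline may use a different metric).
    reinforce_fraction: fraction of elements to print in strong filament.
    """
    # Elements above the (1 - fraction) quantile are printed in strong PLA.
    cutoff = np.quantile(von_mises_stress, 1.0 - reinforce_fraction)
    return von_mises_stress >= cutoff  # True = strong PLA, False = eco PLA

# Toy usage: 10,000 mesh elements with made-up stress values.
stress = np.random.lognormal(mean=0.0, sigma=1.0, size=10_000)
mask = segment_by_stress(stress)
print(f"{mask.mean():.0%} of elements flagged for reinforcement")
```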
Recognizing the need for accessible and low-cost testing, the team also developed a DIY testing toolkit to help users assess strength before printing. The kit has a 3D-printable device with modules for measuring both tensile and flexural strength. Users can pair the device with common items like pull-up bars or digital scales to get rough, but reliable performance metrics. The team benchmarked their results against manufacturer data and found that their measurements consistently fell within one standard deviation, even for filaments that had undergone multiple recycling cycles.
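For a sense of the underlying arithmetic, the standard textbook formulas for tensile and three-point-bend (flexural) strength can be computed from a force reading such as one taken off a digital scale. This is a generic sketch under assumed sample dimensions, not the team's published test procedure.

```python
def tensile_strength(max_force_n, width_mm, thickness_mm):
    """Tensile strength in MPa: peak force divided by cross-sectional area."""
    return max_force_n / (width_mm * thickness_mm)  # N/mm^2 == MPa

def flexural_strength(max_force_n, span_mm, width_mm, thickness_mm):
    """Three-point-bend flexural strength in MPa: sigma = 3FL / (2bd^2)."""
    return (3 * max_force_n * span_mm) / (2 * width_mm * thickness_mm ** 2)

# Example: a 10 kg reading on a digital scale is roughly 98.1 N of force.
force_n = 10 * 9.81
print(f"Tensile:  {tensile_strength(force_n, 10, 4):.1f} MPa")
print(f"Flexural: {flexural_strength(force_n, 64, 10, 4):.1f} MPa")
```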
Although the current system is designed for dual-extrusion printers, the researchers believe that with some manual filament swapping and calibration, it could be adapted for single-extruder setups, too. In its current form, the system simplifies the modeling process by allowing just one force and one fixed boundary per simulation. While this covers a wide range of common use cases, the team sees future work expanding the software to support more complex and dynamic loading conditions. The team also sees potential in using AI to infer an object’s intended use from its geometry, which could allow fully automated stress modeling without manual input of forces or boundaries.
3D for free
The researchers plan to release SustainaPrint open-source, making both the software and testing toolkit available for public use and modification. Another initiative they aspire to bring to life in the future: education. “In a classroom, SustainaPrint isn’t just a tool, it’s a way to teach students about material science, structural engineering, and sustainable design, all in one project,” says Perroni-Scharf. “It turns these abstract concepts into something tangible.”
As 3D printing becomes more embedded in how we manufacture and prototype everything from consumer goods to emergency equipment, sustainability concerns will only grow. With tools like SustainaPrint, those concerns no longer need to come at the expense of performance. Instead, they can become part of the design process: built into the very geometry of the things we make.
Co-author Patrick Baudisch, who is a professor at the Hasso Plattner Institute, adds that “the project addresses a key question: What is the point of collecting material for the purpose of recycling, when there is no plan to actually ever use that material? Maxine presents the missing link between the theoretical/abstract idea of 3D printing material recycling and what it actually takes to make this idea relevant.”
Perroni-Scharf and Baudisch wrote the paper with CSAIL research assistant Jennifer Xiao; MIT Department of Electrical Engineering and Computer Science master’s student Cole Paulin ’24; master’s student Ray Wang SM ’25 and PhD student Ticha Sethapakdi SM ’19 (both CSAIL members); Hasso Plattner Institute PhD student Muhammad Abdullah; and Associate Professor Stefanie Mueller, lead of the Human-Computer Interaction Engineering Group at CSAIL.
The researchers’ work was supported by a Designing for Sustainability Grant from the Designing for Sustainability MIT-HPI Research Program. Their work will be presented at the ACM Symposium on User Interface Software and Technology in September.
California Lawmakers: Support S.B. 524 to Rein in AI Written Police Reports
EFF urges California state lawmakers to pass S.B. 524, authored by Sen. Jesse Arreguín. This bill is an important first step in regaining control over police using generative AI to write their narrative police reports.
This bill does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.
These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, a popular AI police report writing tool, Axon’s Draft One, would be out of compliance with this bill, which would require Axon to redesign the tool to make it more transparent.
Draft One takes audio from an officer’s body-worn camera and uses AI to turn that dialogue into a narrative police report. Because independent researchers have been unable to test it, there are important questions about how the system handles things like sarcasm, out-of-context comments, or interactions with members of the public who speak languages other than English. Another major concern is Draft One’s inability to keep track of which parts of a report were written by people and which were written by AI. By design, the product does not retain different iterations of the draft—making it easy for an officer to say, “I didn’t lie in my police report, the AI wrote that part.”
Lawmakers everywhere should pass regulations on AI-written police reports. This technology could be nearly everywhere, and soon. Axon is a top supplier of body-worn cameras in the United States, which gives it a massive ready-made customer base. Through product bundling, AI-written police reports could soon reach a vast share of police departments.
AI-written police reports are unproven in terms of their accuracy and their overall effects on the criminal justice system. Vendors still have a long way to go to prove this technology can be transparent and auditable. While it would not solve all of the many problems of AI encroaching on the criminal justice system, S.B. 524 is a good first step toward reining in an unaccountable piece of technology.
We urge California lawmakers to pass S.B. 524.
EFF Awards Spotlight ✨ Erie Meyer
In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!
All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!
GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35
If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.
We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. This time—Erie Meyer, winner of the EFF Award for Protecting Americans' Data:
Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.”
We're excited to celebrate Erie Meyer and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.
Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.
Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.
EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.
Questions? Email us at events@eff.org.
From Libraries to Schools: Why Organizations Should Install Privacy Badger
In an era of pervasive online surveillance, organizations have an important role to play in protecting their communities’ privacy. Millions of people browse the web on computers provided by their schools, libraries, and employers. By default, popular browsers on these computers leave people exposed to hidden trackers.
Organizations can enhance privacy and security on their devices by installing Privacy Badger, EFF’s free, open source browser extension that automatically blocks trackers. Privacy Badger is already used by millions to fight online surveillance and take back control of their data.
Why Should Organizations Install Privacy Badger on Managed Devices?
Protect People from Online Surveillance
Most websites contain hidden trackers that let advertisers, data brokers, and Big Tech companies monitor people’s browsing activity. This surveillance has serious consequences: it fuels scams, government spying, predatory advertising, and surveillance pricing.
By installing Privacy Badger on managed devices, organizations can protect entire communities from these harms. Most people don’t realize the risks of browsing the web unprotected. Organizations can step in to make online privacy available to everyone, not just the people who know they need it.
Ad Blocking is a Cybersecurity Best Practice
Privacy Badger helps reduce cybersecurity threats by blocking ads that track you (unfortunately, that’s most ads these days). Targeted ads aren’t just a privacy nightmare. They can also be a vehicle for malware and phishing attacks. Cybercriminals have tricked legitimate ad networks into distributing malware, a tactic known as malvertising.
The risks are serious enough that the U.S. Cybersecurity and Infrastructure Security Agency (CISA) recommends federal agencies deploy ad-blocking software. The NSA, CIA, and other intelligence agencies already follow this guidance. These agencies use advertising systems to surveil others, yet block ads for their own employees.
All organizations, not just spy agencies, should make ad blocking part of their security strategy.
A Tracker Blocker You Can Trust
Four million users already trust Privacy Badger, which has been recommended by The New York Times' Wirecutter, Consumer Reports, and The Washington Post.
Trust is crucial when choosing an ad-blocking or tracker-blocking extension because they require high levels of browser permissions. Unfortunately, not all extensions deserve that trust. Avast’s “privacy” extension was caught collecting and selling users’ browsing data to third parties—the very practice it claimed to prevent.
Privacy Badger is different. EFF released it over a decade ago, and the extension has been open source—meaning other developers and researchers can inspect its code—that entire time. Because it is built by a nonprofit with a 35-year history of fighting for user rights, organizations can trust that Privacy Badger works for its users, not for profit.
Which organizations should deploy Privacy Badger?
All of them! Installing Privacy Badger on managed devices improves privacy and security across an organization. That said, Privacy Badger is most beneficial for two types of organizations: libraries and schools. Both can better serve their communities by safeguarding the computers they provide.
Libraries
The American Library Association (ALA) already recommends installing Privacy Badger on public computers to block third-party tracking. Librarians have a long history of defending privacy. The ALA’s guidance is a natural extension of that legacy for the digital age. While librarians protect the privacy of books people check out, Privacy Badger protects the privacy of websites they visit on library computers.
Millions of Americans depend on libraries for internet access. That makes libraries uniquely positioned to promote equitable access to private browsing. With Privacy Badger, libraries can ensure that safe and private browsing is the default for anyone using their computers.
Libraries also play a key role in promoting safe internet use through their digital literacy trainings. By including Privacy Badger in these trainings, librarians can teach patrons about a simple, free tool that protects their privacy and security online.
Schools
Schools should protect their students from online surveillance by installing Privacy Badger on the computers they provide. Parents are rightfully worried about their children’s privacy online, with a Pew survey showing that 85% worry about advertisers using data about what kids do online to target ads. Deploying Privacy Badger is a concrete step schools can take to address these concerns.
By blocking online trackers, schools can protect students from manipulative ads and limit the personal data fueling social media algorithms. Privacy Badger can even block tracking in Ed Tech products that schools require students to use. Alarmingly, a Human Rights Watch analysis of Ed Tech products found that 89% shared children’s personal data with advertisers or other companies.
Instead of deploying invasive student monitoring tools, schools should keep students safe by keeping their data safe. Students deserve to learn without being tracked, profiled, and targeted online. Privacy Badger can help make that happen.
How can organizations deploy Privacy Badger on managed devices?
System administrators can deploy and configure Privacy Badger on managed devices by setting up an enterprise policy. Chrome, Firefox, and Edge provide instructions for automatically installing extensions organization-wide. You’ll be able to configure certain Privacy Badger settings for all devices. For example, you can specify websites where Privacy Badger is disabled or prevent Privacy Badger’s welcome page from popping up on computers that get reset after every session.
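As one concrete illustration, a system administrator could force-install Privacy Badger on managed Chrome installs on Linux by writing a policy file into Chrome's managed-policy directory, as in the hedged Python sketch below. The policy directory and the ExtensionInstallForcelist format follow Google's enterprise documentation; the extension ID should be verified against the Chrome Web Store listing, and the managed-storage keys shown (disabledSites, showIntroPage) are assumptions to be checked against EFF's own deployment instructions.

```python
import json
import pathlib

# Chrome on Linux reads admin policies from this directory; Windows and
# macOS use the registry or configuration profiles instead.
POLICY_DIR = pathlib.Path("/etc/opt/chrome/policies/managed")

# Privacy Badger's Chrome Web Store ID -- verify before deploying.
PB_ID = "pkehgijcmpdhfbdbbnkijodmdjhbjlgp"

policy = {
    # Force-install Privacy Badger for every managed profile.
    "ExtensionInstallForcelist": [
        f"{PB_ID};https://clients2.google.com/service/update2/crx"
    ],
    # Hypothetical managed-storage settings: key names are assumptions and
    # should be checked against Privacy Badger's documentation.
    "3rdparty": {
        "extensions": {
            PB_ID: {
                "disabledSites": ["intranet.example.edu"],
                "showIntroPage": False,
            }
        }
    },
}

POLICY_DIR.mkdir(parents=True, exist_ok=True)
(POLICY_DIR / "privacy_badger.json").write_text(json.dumps(policy, indent=2))
print("Policy written; reload policies from chrome://policy or restart Chrome.")
```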
We recommend educating users about the addition of Privacy Badger and what it does. Since some websites deeply embed tracking, privacy protections can occasionally break website functionality. For example, a video might not play or a comments section might not appear. If this happens, users should know that they can easily turn off Privacy Badger on any website. Just open the Privacy Badger popup and click “Disable for this site.”
Don't hesitate to reach out if you're interested in deploying Privacy Badger at scale. Our team is here to help you protect your community's privacy. And if you're already deploying Privacy Badger across your organization, we'd love to hear how it’s going!
Make Private Browsing the Default at Your Organization
Schools, libraries, and other organizations can make private browsing the norm by deploying Privacy Badger on devices they manage. If you work at an organization with managed devices, talk to your IT team about Privacy Badger. You can help strengthen the security and privacy of your entire organization while joining the fight against online surveillance.
Generative AI as a Cybercrime Assistant
Anthropic reports on a Claude user:
We recently disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data. The actor targeted at least 17 distinct organizations, including in healthcare, the emergency services, and government and religious institutions. Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded $500,000.
The actor used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines...
Don Jr. and Eric Trump are investors in a crypto company that calls climate change a threat
Republicans probe National Academies’ ‘partisan’ climate review
European pension fund fires BlackRock over climate investments
Texas law targeting climate guidance blocked for now
Private sector unlikely to play major role in climate adaptation
California releases draft corporate climate risk disclosure guidelines
Poland argues for more imported carbon credits
UN chief praises Papua New Guinea’s ‘bold climate action’
Vatican to open farm center inspired by Pope Francis
Verifying Trust in Digital ID Is Still Incomplete
In the past few years, governments across the world have rolled out different digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the second in a short series that explains digital ID and the pending use case of age verification. Upcoming posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.
Digital identity encompasses various aspects of an individual’s identity that are presented and verified either over the internet or in person. This could mean a digital credential issued by a certification body or a mobile driver’s license provisioned to someone’s mobile wallet. These credentials can be presented in plain text on a device, as a scannable QR code, or by tapping your device to a Near Field Communication (NFC) reader. There are other, somewhat more privacy-preserving ways to present credential information, but in practice those three methods are how digital ID is being used today.
Advocates of digital ID often use a framework they call the "Triangle of Trust." This is usually presented as a triangle of exchange between the holder of an ID—those who use a phone or wallet application to access a service; the issuer of an ID—normally a government entity, like a state Department of Motor Vehicles in the U.S., or a banking system; and the verifier of an ID—the entity that wants to confirm your identity, such as law enforcement, a university, a government benefits office, a porn site, or an online retailer.
This triangle implies that the issuer and verifier—for example, the government that provides the ID and the website checking your age—never need to talk to one another. In theory, this design prevents your ID from phoning home to the issuer every time you verify it with another party, avoiding the tracking and surveillance threats that would otherwise arise.
But it also makes a lot of questionable assumptions, such as:
1) the verifier will only ever ask for a limited amount of information.
2) the verifier won’t store information it collects.
3) the verifier is always trustworthy.
The third assumption is especially problematic. How do you trust that the verifier will protect your most personal information and not use, store, or sell it beyond what you have consented to? Any of the following could be verifiers:
- Law enforcement when doing a traffic stop and verifying your ID as valid.
- A government benefits office that requires ID verification to sign up for social security benefits.
- A porn site in a state or country which requires age verification or identity verification before allowing access.
- An online retailer selling products like alcohol or tobacco.
Looking at the triangle again, this isn’t quite an equal exchange. Your personal ID like a driver’s license or government ID is both one of the most centralized and sensitive documents you have—you can’t control how it is issued or create your own, having to go through your government to obtain one. This relationship will always be imbalanced. But we have to make sure digital ID does not exacerbate these imbalances.
Efforts to answer the question of how to prevent verifier abuse are ongoing. But instead of addressing the harms these systems cause, governments around the world are fast-tracking the technology, scrambling to solve what they see as a crisis of online harms by mandating age verification. And current implementations of the Triangle of Trust have already proven disastrous.
One key example of implementation speed outpacing proper protections is the Digital Credential API. Initially launched by Google and now supported by Apple, the API lets apps and websites request information from your digital ID at scale and without vetting. The rollout of this technology to people’s devices came with no limits or checks on what information verifiers can seek—incentivizing verifiers to over-ask for ID information beyond the question of whether a holder is over a certain age, simply because they can.
The Digital Credential API also incentivizes a variety of websites to ask for ID information that isn’t required and that they did not commonly collect before. For example, food delivery services, medical services, gaming sites, and literally anyone else interested in being a verifier may become one tomorrow with digital ID and the Digital Credential API. This is both an erosion of personal privacy and a pathway to further surveillance. There must be established limitations and scope, including:
- verifiers establishing who they are and what they plan to ask from holders. There should also be an established plan for transparency on verifiers and their data retention policies.
- ways to identify and report abusive verifiers, as well as real consequences, like revoking or blocking a verifier from requesting IDs in the future.
- unlinkable presentations that do not allow for verifier and issuer collusion, and that share no data between the verifiers you attest to, so your movements cannot be tracked in person or online every time you attest your age.
A further point of concern arises in cases of abuse or deception. A malicious verifier can send a request with no limiting mechanisms or checks, and a user who rejects the request could be fully blocked from the website or application. There must be provisions that ensure people retain access to vital services that require age verification from visitors.
Governments’ efforts to tackle verifiers who abuse digital ID requests haven’t come to fruition yet. For example, the EU Commission recently launched its age verification “mini app” ahead of the EU ID wallet planned for 2026. The mini app will not have a registry of verifiers, something EU regulators had promised and then withdrew. Without verifier accountability, the wallet cannot tell whether a request is legitimate. As a result, verifiers and issuers will demand verification from the people who want to use online services, but those same people are unable to insist on verification and accountability from the other sides of the triangle.
While digital ID is pushed as the solution to the problem of uploading IDs to every site a user visits, the security and privacy it offers vary based on implementation. But when privacy is involved, regulators must make room for negotiation: there should be more thoughtful and protective measures for holders as they interact with more and more potential verifiers over time. Otherwise, digital ID solutions will simply exacerbate existing harms and inequalities rather than improving internet accessibility and information access for all.
A new generative AI approach to predicting chemical reactions
Many attempts have been made to harness the power of new artificial intelligence and large language models (LLMs) to try to predict the outcomes of new chemical reactions. These have had limited success, in part because until now they have not been grounded in an understanding of fundamental physical principles, such as the laws of conservation of mass. Now, a team of researchers at MIT has come up with a way of incorporating these physical constraints on a reaction prediction model, and thus greatly improving the accuracy and reliability of its outputs.
The new work was reported Aug. 20 in the journal Nature, in a paper by recent postdoc Joonyoung Joung (now an assistant professor at Kookmin University, South Korea); former software engineer Mun Hong Fong (now at Duke University); chemical engineering graduate student Nicholas Casetti; postdoc Jordan Liles; physics undergraduate student Ne Dassanayake; and senior author Connor Coley, who is the Class of 1957 Career Development Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science.
“The prediction of reaction outcomes is a very important task,” Joung explains. For example, if you want to make a new drug, “you need to know how to make it. So, this requires us to know what product is likely” to result from a given set of chemical inputs to a reaction. But most previous efforts to make such predictions look only at a set of inputs and a set of outputs, without examining the intermediate steps or enforcing the constraint that no mass can be gained or lost along the way, as it cannot be in an actual reaction.
Joung points out that while large language models such as ChatGPT have been very successful in many areas of research, these models do not provide a way to limit their outputs to physically realistic possibilities, such as by requiring them to adhere to conservation of mass. These models use computational “tokens,” which in this case represent individual atoms, but “if you don’t conserve the tokens, the LLM model starts to make new atoms, or deletes atoms in the reaction.” Instead of being grounded in real scientific understanding, “this is kind of like alchemy,” he says. While many attempts at reaction prediction only look at the final products, “we want to track all the chemicals, and how the chemicals are transformed” throughout the reaction process from start to end, he says.
In order to address the problem, the team made use of a method developed back in the 1970s by chemist Ivar Ugi, which uses a bond-electron matrix to represent the electrons in a reaction. They used this system as the basis for their new program, called FlowER (Flow matching for Electron Redistribution), which allows them to explicitly keep track of all the electrons in the reaction to ensure that none are spuriously added or deleted in the process.
The system uses a matrix to represent the electrons in a reaction, and uses nonzero values to represent bonds or lone electron pairs and zeros to represent a lack thereof. “That helps us to conserve both atoms and electrons at the same time,” says Fong. This representation, he says, was one of the key elements to including mass conservation in their prediction system.
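To make the representation concrete, here is a small, hypothetical sketch of Ugi-style bond-electron matrices for a simple proton transfer, NH3 + HCl -> NH4+ + Cl-. Off-diagonal entries are bond orders, diagonal entries are free (lone-pair) electrons, and the matrix sums must match before and after the reaction; this only illustrates the bookkeeping FlowER builds on, not the team's code.

```python
import numpy as np

# Atom order: [N, H1, H2, H3, H4, Cl]. Entry [i][j] (i != j) is the bond
# order between atoms i and j; entry [i][i] is the atom's free (lone-pair)
# electron count, following Ugi's bond-electron matrix convention.
reactants = np.array([
    [2, 1, 1, 1, 0, 0],   # N: one lone pair, bonded to H1-H3 (NH3)
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1],   # H4: bonded to Cl (HCl)
    [0, 0, 0, 0, 1, 6],   # Cl: three lone pairs
])
products = np.array([
    [0, 1, 1, 1, 1, 0],   # N: no lone pair, bonded to H1-H4 (NH4+)
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 8],   # Cl-: four lone pairs
])

# The "reaction matrix" records exactly which electrons moved.
reaction = products - reactants

# Summing a symmetric bond-electron matrix yields the total valence
# electron count (each bonding pair appears twice, each lone electron once),
# so equal sums mean no electrons were created or destroyed.
assert reactants.sum() == products.sum() == 16
assert reaction.sum() == 0
print("Valence electrons conserved:", int(reactants.sum()))
```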
The system they developed is still at an early stage, Coley says. “The system as it stands is a demonstration — a proof of concept that this generative approach of flow matching is very well suited to the task of chemical reaction prediction.” While the team is excited about this promising approach, he says, “we’re aware that it does have specific limitations as far as the breadth of different chemistries that it’s seen.” Although the model was trained using data on more than a million chemical reactions, obtained from a U.S. Patent Office database, those data do not include certain metals and some kinds of catalytic reactions, he says.
“We’re incredibly excited about the fact that we can get such reliable predictions of chemical mechanisms” from the existing system, he says. “It conserves mass, it conserves electrons, but we certainly acknowledge that there’s a lot more expansion and robustness to work on in the coming years as well.”
But even in its present form, which is being made freely available through the online platform GitHub, “we think it will make accurate predictions and be helpful as a tool for assessing reactivity and mapping out reaction pathways,” Coley says. “If we’re looking toward the future of really advancing the state of the art of mechanistic understanding and helping to invent new reactions, we’re not quite there. But we hope this will be a steppingstone toward that.”
“It’s all open source,” says Fong. “The models, the data, all of them are up there,” including a previous dataset developed by Joung that exhaustively lists the mechanistic steps of known reactions. “I think we are one of the pioneering groups making this dataset, and making it available open-source, and making this usable for everyone,” he says.
The FlowER model matches or outperforms existing approaches in finding standard mechanistic pathways, the team says, and makes it possible to generalize to previously unseen reaction types. They say the model could potentially be relevant for predicting reactions for medicinal chemistry, materials discovery, combustion, atmospheric chemistry, and electrochemical systems.
In their comparisons with existing reaction prediction systems, Coley says, “using the architecture choices that we’ve made, we get this massive increase in validity and conservation, and we get a matching or a little bit better accuracy in terms of performance.”
He adds that “what’s unique about our approach is that while we are using these textbook understandings of mechanisms to generate this dataset, we’re anchoring the reactants and products of the overall reaction in experimentally validated data from the patent literature.” They are inferring the underlying mechanisms, he says, rather than just making them up. “We’re imputing them from experimental data, and that’s not something that has been done and shared at this kind of scale before.”
The next step, he says, is “we are quite interested in expanding the model’s understanding of metals and catalytic cycles. We’ve just scratched the surface in this first paper,” and most of the reactions included so far don’t include metals or catalysts, “so that’s a direction we’re quite interested in.”
In the long term, he says, “a lot of the excitement is in using this kind of system to help discover new complex reactions and help elucidate new mechanisms. I think that the long-term potential impact is big, but this is of course just a first step.”
The work was supported by the Machine Learning for Pharmaceutical Discovery and Synthesis consortium and the National Science Foundation.
EFF Statement on ICE Use of Paragon Solutions Malware
This statement can be attributed to EFF Senior Staff Technologist Cooper Quintin
Jack Poulson recently reported on Substack that ICE has reactivated its $2 million contract with Paragon Solutions, a cyber-mercenary and spyware manufacturer.
The reactivation of the contract between the Department of Homeland Security and Paragon Solutions, a known spyware vendor, is extremely troubling.
Paragon’s “Graphite” malware has been implicated in widespread misuse by the Italian government. Researchers at the Citizen Lab at the University of Toronto’s Munk School of Global Affairs, working with Meta, found that it has been used in Italy to spy on journalists and civil society actors, including humanitarian workers. Without strong legal guardrails, there is a risk that the malware will be misused in a similar manner by the U.S. government.
These reports undermine Paragon Solutions’ public marketing of itself as a more ethical provider of surveillance malware.
Reportedly, the contract is being reactivated because the US arm of Paragon Solutions was acquired by a Miami-based private equity firm, AE Industrial Partners, and then merged into a Virginia-based cybersecurity company, REDLattice. This allows ICE to circumvent Executive Order 14093, which bans the acquisition of spyware controlled by a foreign government or person. Even though this order was always insufficient to prevent the acquisition of dangerous spyware, it was the best protection we had. This end run around the executive order ignores the spirit of the rule and does nothing to prevent the misuse of Paragon’s malware for human rights abuses. Nor will it prevent insiders at Paragon from using the malware to spy on US government officials, or US government officials from misusing it to spy on their personal enemies, rivals, or spouses.
The contract between Paragon and ICE means that US users should adjust their threat models and take extra precautions. Paragon’s Graphite isn’t magical; it’s still just malware. It still needs a zero-day exploit to compromise a phone running the latest security updates, and those exploits are expensive. The best things you can do to protect yourself against Graphite are to keep your phone up to date and to enable Lockdown Mode if you are using an iPhone, or Advanced Protection Mode on Android. Turning on disappearing messages also helps: if someone in your network is compromised, your entire message history isn’t revealed along with theirs. For more tips on protecting yourself from malware, check out our Surveillance Self-Defense guides.
EFF Awards Spotlight ✨ Just Futures Law
In 1992 EFF presented our very first awards recognizing key leaders and organizations advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!
All are invited to attend the EFF Awards on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!
GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35
If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.
We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. First up—Just Futures Law, winner of the EFF Award for Leading Immigration and Surveillance Litigation:
Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to the extraordinary crises in Haiti. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates in grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.
We're excited to celebrate Just Futures Law and the other EFF Award winners in person in San Francisco on September 10! We hope that you'll join us there.
Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.
Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.
EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.
Questions? Email us at events@eff.org.
🤐 This Censorship Law Turns Parents Into Content Cops | EFFector 37.11
School is back in session! Perfect timing to hit the books and catch up on the latest digital rights news. We've got you covered with bite-sized updates in this issue of our EFFector newsletter.
This time, we're breaking down why Wyoming’s new age verification law is a free speech disaster. You’ll also read about a big win for transparency around police surveillance, how the Trump administration’s war on “woke AI” threatens civil liberties, and a welcome decision in a landmark human rights case.
Prefer to listen? Be sure to check out the audio companion to EFFector! We're interviewing EFF staff about some of the important issues they are working on. This time, EFF Legislative Activist Rindala Alajaji discusses the real harms of age verification laws like the one passed in Wyoming. Tune in on YouTube or the Internet Archive.
EFFECTOR 37.11 - This Censorship Law Turns Parents Into Content Cops
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Indirect Prompt Injection Attacks Against LLM Assistants
Really good research on practical attacks against LLM agents.
Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware—maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of these applications. While prior research warned about a potential shift in the threat landscape for LLM-powered applications, the risk posed by Promptware is frequently perceived as low. In this paper, we investigate the risk Promptware poses to users of Gemini-powered assistants (web application, mobile application, and Google Assistant). We propose a novel Threat Analysis and Risk Assessment (TARA) framework to assess Promptware risks for end users. Our analysis focuses on a new variant of Promptware called Targeted Promptware Attacks, which leverage indirect prompt injection via common user interactions such as emails, calendar invitations, and shared documents. We demonstrate 14 attack scenarios applied against Gemini-powered assistants across five identified threat classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation. These attacks highlight both digital and physical consequences, including spamming, phishing, disinformation campaigns, data exfiltration, unapproved user video streaming, and control of home automation devices. We reveal Promptware’s potential for on-device lateral movement, escaping the boundaries of the LLM-powered application, to trigger malicious actions using a device’s applications. Our TARA reveals that 73% of the analyzed threats pose High-Critical risk to end users. We discuss mitigations and reassess the risk (in response to deployed mitigations) and show that the risk could be reduced significantly to Very Low-Medium. We disclosed our findings to Google, which deployed dedicated mitigations...