Feed aggregator
European Parliament rejects EU anti-deforestation black list
Climate change makes South Asia’s monsoons more erratic and intense
US faces more extreme weather, but attitudes and actions aren’t keeping up
BYD, other EV battery makers face more pressure to cut emissions
Consequential differences in satellite-era sea surface temperature trends across datasets
Nature Climate Change, Published online: 11 July 2025; doi:10.1038/s41558-025-02362-6
Global datasets of surface temperature and sea surface temperature (SST) are routinely used in climate change studies. Here the authors show that while surface temperature datasets closely agree, four main SST datasets show substantial variation, with implications for their application.
Gift from Dick Larson establishes Distinguished Professorship in Data, Systems, and Society
The MIT Institute for Data, Systems, and Society (IDSS) announced the creation of a new endowed chair made possible by the generosity of IDSS professor post-tenure and “MIT lifer” Richard “Dick” Larson. Effective July 1, the fund provides a full professorship for senior IDSS faculty: the Distinguished Professorship in Data, Systems, and Society.
“As a faculty member, MIT has not only accepted but embraced my several mid-career changes of direction,” says Larson. “I have called five different academic departments my home, starting with Electrical Engineering (that is what it was called in the 1960s) and now finalized with the interdepartmental, interdisciplinary IDSS — Institute for Data, Systems and Society. Those beautiful three words — data, systems, society — they represent my energy and commitment over the second half of my career. My gifted chair is an effort to keep alive those three words, with others following me doing research, teaching and mentoring centered around data, systems, society.”
Larson’s career has focused his operations research and systems expertise on a wide variety of problems, in both public and private sectors. His contributions span the fields of urban service systems (especially emergency response systems), disaster planning, pandemics, queueing, logistics, technology-enabled education, smart-energy houses, and workforce planning. His latest book, “Model Thinking for Everyday Life,” draws on decades of experience as a champion of STEM education at MIT and beyond, such as his leadership of MIT BLOSSOMS.
“Dick Larson has been making an impact at MIT for over half a century,” says IDSS Director Fotini Christia, the Ford International Professor in Political Science. “This gift extends his already considerable legacy and ensures his impact will continue to be felt for many years to come.”
Christia is pleased that IDSS and brain and cognitive science professor Alexander “Sasha” Rakhlin is the inaugural holder of the new professorship. The selection recognizes Rakhlin’s distinguished scholarly record, dedicated service to IDSS, excellence in teaching, and contributions to research in statistics and computation.
“Sasha’s analysis of neural network complexity, and his work developing tools for online prediction, are perfect examples of research which builds bridges across disciplines, and also connects different departments and units at MIT,” says Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Neuroscience, and head of the Department of Brain and Cognitive Sciences. “It’s wonderful to see Sasha’s contributions recognized in this way, and I’m grateful to Dick Larson for supporting this vision.”
Rakhlin’s research is in machine learning, with an emphasis on statistics and computation. He is interested in formalizing the process of learning, in analyzing learning models, and in deriving and implementing emerging learning methods. A significant thrust of his research is in developing theoretical and algorithmic tools for online prediction, a learning framework where data arrives in a sequential fashion.
“I am honored to be the inaugural holder of the Distinguished Professorship in Data, Systems, and Society,” says Rakhlin. “Professor Larson’s commitment to education and service to MIT both serve as models to follow.”
Walk-through screening system enhances security at airports nationwide
A new security screener that people can simply walk past may soon be coming to an airport near you. Last year, U.S. airports nationwide began adopting HEXWAVE — a commercialized walk-through security screening system based on microwave imaging technology developed at MIT Lincoln Laboratory — to satisfy a new Transportation Security Administration (TSA) mandate for enhanced employee screening to detect metallic and nonmetallic threats. The TSA is now in the process of evaluating HEXWAVE as a potential replacement for metal detectors to screen PreCheck passengers.
Typically, when you arrive at an airport security checkpoint line, you place your carry-on items on the conveyor belt, remove your shoes and any metallic items, and enter a body scanner. As you hold still for a few seconds with your feet spread apart and your arms extended over your head, the scanner creates a generic, featureless 3D body outline revealing any metallic or nonmetallic concealed weapons or other prohibited items.
Requiring individuals to stop, remove clothing and belongings, and pose for scans impedes traffic flow in airports and other highly populated venues, such as stadiums, shopping malls, mass transit stations, and schools. To enable more efficient screening of unstructured crowds and ensure public safety, the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) sponsored Lincoln Laboratory to prototype a high-resolution imaging system capable of scanning people and their belongings as they walk by. This R&D effort was conducted as part of S&T's Surface Transportation Explosive Threat Detection Program, which aims to provide the surface-transportation end-user community (e.g., mass transit) with a layered and integrated capability to detect threat items at the speed of the traveling public.
The laboratory's prototype microwave imager, which consists of a set of antennas installed on flat panels, operates under the same fundamental principle as existing body scanners: low-energy radio waves (less powerful than those transmitted by a cellphone) are transmitted from antennas toward a person's body and reflect off skin and any hidden objects; the reflected waves return to the antennas and are processed by a computer to create an image, which security personnel then review to identify any potential concealed threats.
The novelty of the laboratory's invention lies in its ability to discreetly handle a constant stream of subjects in motion, measuring each subject very quickly (within tens of milliseconds) and reconstructing 3D microwave images of each subject at a video rate. To meet these challenging requirements, the laboratory team developed a cost-effective antenna array and efficient image-reconstruction algorithms. Compared to existing systems, the laboratory's 3D microwave imager runs 100 times faster using the same computing hardware. In 2017, the team demonstrated the prototype's ability to detect various simulated threat items at varying distances on a rail platform at the Massachusetts Bay Transportation Authority (MBTA) Emergency Training Center in Boston.
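The reconstruction principle described above — phase-aligning the echoes received at many antennas so that energy focuses at reflector locations — can be illustrated with a toy delay-and-sum (backprojection) sketch. This is a generic textbook method, not Lincoln Laboratory's proprietary algorithm, and every number below (frequency, array geometry, target position) is invented for illustration.

```python
import numpy as np

# Toy 2D delay-and-sum reconstruction for a monostatic microwave array.
C = 3e8        # speed of light, m/s
FREQ = 3e9     # illustrative frequency, Hz (wavelength 0.1 m)
k = 2 * np.pi * FREQ / C

# A hypothetical linear panel of 21 antennas along x at y = 0.
antennas = np.array([[x, 0.0] for x in np.linspace(-0.25, 0.25, 21)])

# One simulated point reflector hidden in the scene.
target = np.array([0.1, 1.0])

# Each antenna records a complex echo with round-trip phase delay.
dists = np.linalg.norm(antennas - target, axis=1)
echoes = np.exp(-1j * 2 * k * dists)

# Backproject: for each candidate pixel, undo the expected round-trip
# phase and sum; the sums add coherently only at the true reflector.
xs = np.linspace(-0.5, 0.5, 41)
ys = np.linspace(0.5, 1.5, 41)
image = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(antennas - np.array([x, y]), axis=1)
        image[iy, ix] = np.abs(np.sum(echoes * np.exp(1j * 2 * k * d)))

# The brightest pixel should coincide with the simulated target.
iy, ix = np.unravel_index(np.argmax(image), image.shape)
print(round(xs[ix], 2), round(ys[iy], 2))
```

The laboratory's contribution, per the article, was making this class of computation roughly 100 times faster and cheap enough to run at video rate; the brute-force pixel loop above is exactly what a practical system must avoid.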
"The goal of our work is to provide security staff with more effective tools to protect public spaces. To that end, microwave imaging technology can quickly and unobtrusively provide visibility of items carried into a venue," says William Moulder, who led the technology's development at Lincoln Laboratory.
In 2018, the security company Liberty Defense licensed the imaging technology and entered into a cooperative research and development agreement (CRADA) with Lincoln Laboratory. Transitioning technology to industry for commercialization is part of the laboratory's role as a federally funded research and development center, and CRADAs provide a mechanism for such transition to happen. Through the CRADA, Liberty Defense maintained Lincoln Laboratory's core image-reconstruction intellectual property and made the technology enhancements required for commercialization, including an entirely new hardware architecture, radio frequency (RF) antenna modules, and a transceiver system that meets Federal Communications Commission waveform and RF performance requirements for indoor and outdoor operation. The co-organizational team facilitating the transition of the technology was recognized by the Federal Laboratory Consortium for Technology Transfer with a 2019 Excellence in Technology Transfer Award for the Northeast region.
By 2021, Liberty Defense had prototyped a walk-through security screening system, HEXWAVE. That same year, through the TSA's On-Person Screening Capability Program, Liberty Defense received a contract award to demonstrate HEXWAVE's enhanced threat-detection and high-throughput capabilities for screening aviation workers. Following successful testing of HEXWAVE at sports complexes, entertainment arenas, and shopping centers, both nationally and internationally, Liberty Defense began offering the product for sale.
"HEXWAVE is a great example of how federally funded R&D can be successfully transitioned to industry to meet real-world security needs," says Asha Rajagopal, the laboratory's chief technology transfer officer. "By working with Liberty Defense, we helped accelerate the delivery of a critical capability into the hands of those protecting public spaces."
In 2023, TSA began testing HEXWAVE as a potential replacement of metal detectors used to screen passengers in TSA PreCheck lanes. Airports across the United States started deploying HEXWAVE in 2024 to meet the TSA's employee screening mandate by the April 2026 deadline. Liberty Defense notes various other markets for HEXWAVE; the first units for commercial applications were delivered to Los Alamos National Laboratory in 2023, and the technology has since been deployed at other national labs, correctional facilities, government buildings, and courthouses.
"Liberty was extremely fortunate to license the technology from MIT Lincoln Laboratory," says Bill Frain, CEO of Liberty Defense. "From the outset, they've been a true partner — bringing not only deep innovation and technical expertise, but also a clear vision for commercial deployment. Together, we've successfully brought next-generation technology to market to help protect people in public spaces."
EFF Tells Virginia Court That Constitutional Privacy Protections Forbid Cops from Finding out Everyone Who Searched for a Keyword
This post was co-authored by EFF legal intern Noam Shemtov.
We are in a constant dialogue with Internet search engines, ranging from the mundane to the confessional. We ask search engines everything: What movies are playing (and which are worth seeing)? Where’s the nearest clinic (and how do I get there)? Who’s running in the sheriff’s race (and what are their views)? These online queries can give insight into our private details and innermost thoughts, but police increasingly access them without adhering to longstanding limits on government investigative power.
A Virginia appeals court is poised to review such a request in a case called Commonwealth v. Clements. In Clements, police sought evidence under a “reverse-keyword warrant,” a novel court order that compels search engines like Google to hand over information about every person who has looked up a word or phrase online. While the trial judge correctly recognized the privacy interest in our Internet queries, he overlooked the other wide-ranging harms that keyword warrants enable and upheld the search.
But as EFF and the ACLU explained in our amicus brief on appeal, reverse keyword warrants simply cannot be conducted in a lawful way. They invert privacy protections, threaten free speech and inquiry, and fundamentally conflict with the principles underlying the Fourth Amendment and its analog in the Virginia Constitution. The court of appeals now has a chance to say so and protect the rights of Internet users well beyond state lines.
To comply with a keyword warrant, a search engine has to trawl through its entire database of user queries to pinpoint the accounts or devices that made a responsive search. For a dominant service like Google, that means billions of records. Such a wide dragnet will predictably pull in people with no plausible connection to a crime under investigation if their searches happened to include keywords police are interested in.
Critically, investigators seldom have a suspect in mind when they seek a reverse-keyword warrant. That isn’t surprising. True to their name, these searches “work in reverse” from the traditional investigative process. What makes them so useful is precisely their ability to identify Internet users on the sole basis of what they searched online. But what makes a search technique convenient to the government does not always make it constitutional. Quite the opposite: inherently suspicionless dragnets are anathema to the constitution.
The Fourth Amendment forbids “exploratory rummaging”—in fact, it was drafted in direct response to British colonial soldiers’ practice of indiscriminately searching people’s homes and papers for evidence of their opposition to the Crown. To secure a lawful warrant, police must have a specific basis to believe evidence will be found in a given location. They must also describe that location in some detail and say what evidence they expect to find there. It’s hard to think of a less specific description than “all the Internet searches in the world” or a weaker hunch than “whoever committed the crime probably looked up search term x.” Because those airy assertions are all law enforcement can marshal in support of keyword warrants, they are “tantamount to high-tech versions of the reviled ‘general warrants’ that first gave rise to the . . . Fourth Amendment” and Virginia’s even stronger search-and-seizure provision.
What’s more, since keyword warrants compel search engine companies to hand over records about anyone anywhere who looked up a particular search term within a given timeframe, they effectively make a suspect out of every person whose online activity falls within the warrant’s sweep. As one court has said about related geofences, this approach “invert[s] probable cause” and “cannot stand.”
Keyword warrants’ fatal flaws are even more drastic considering that privacy rights apply with special force to searches of items—like diaries, booklists, and Internet search queries—that reflect a person’s free thought and expression. As both law and lived experience affirm, the Internet is “the most important place[] . . . for the exchange of views.” Using it—and using keyword searches to navigate the practical infinity of its contents—is “indispensable to participation in modern society.” We shouldn’t have to engage in that core endeavor with the fear that our searches will incriminate us, subject to police officers’ discretion about what keywords are worthy of suspicion. That outcome would predictably chill people from accessing information about sensitive and important topics like reproductive health, public safety, or events in the news that could be relevant to a criminal investigation.
The Virginia Court of Appeals now has the opportunity in Clements to protect privacy and speech rights by affirming that keyword warrants can’t be reconciled with constitutional protections guaranteed at the federal or state level. We hope it does so.
Designing across cultural and geographic divides
In addition to the typical rigors of MIT classes, Terrascope Subject 2.00C/1.016/EC.746 (Design for Complex Environmental Issues) poses some unusual hurdles for students to navigate: collaborating across time zones, bridging different cultural and institutional experiences, and trying to do hands-on work over Zoom. That’s because the class includes students from not only MIT, but also Diné College in Tsaile, Arizona, within the Navajo Nation, and the University of Puerto Rico-Ponce (UPRP).
Despite being thousands of miles apart, students work in teams to tackle a real-world problem for a client, based on the Terrascope theme for the year. “Understanding how to collaborate over long distances with people who are not like themselves will be an important item in many of these students’ toolbelts going forward, in some cases just as much as — or more than — any particular design technique,” says Ari Epstein, Terrascope associate director and senior lecturer. Over the past several years, Epstein has taught the class along with Joel Grimm of MIT Beaver Works and Libby Hsu of MIT D-Lab, as well as instructors from the two collaborating institutions. Undergraduate teaching fellows from all three schools are also key members of the instructional staff.
Since the partnership began three years ago (initially with Diné College, with the addition of UPRP two years ago), the class themes have included food security and sustainable agriculture in Navajo Nation; access to reliable electrical power in Puerto Rico; and this year, increasing museum visitors’ engagement with artworks depicting mining and landscape alteration in Nevada.
Each team — which includes students from all three colleges — meets with clients online early in the term to understand their needs; then, through an iterative process, teams work on designing prototypes. During MIT’s spring break, teams travel to meet with the clients onsite to get feedback and continue to refine their prototypes. At the end of the term, students present their final products to the clients, an expert panel, and their communities at a hybrid showcase event held simultaneously on all three campuses.
Free-range design engineering
“I really loved the class,” says Graciela Leon, a second-year mechanical engineering major who took the subject in 2024. “It was not at all what I was expecting,” she adds. While the learning objectives on the syllabus are fairly traditional — using an iterative engineering design process, developing teamwork skills, and deepening communication skills, to name a few — the approach is not. “Terrascope is just kind of like throwing you into a real-world problem … it feels a lot more like you are being trusted with this actual challenge,” Leon says.
The 2024 challenge was to find a way to help the clients, Puerto Rican senior citizens, turn on gasoline-powered generators when the electrical power grid fails; some of them struggle with the pull cords necessary to start the generators. The students were tasked with designing solutions to make starting the generators easier.
Terrascope instructors teach fundamental skills such as iterative design spirals and scrum workflow frameworks, but they also give students ample freedom to follow their ideas. Leon admits she was a bit frustrated at first, because she wasn’t sure what she was supposed to be doing. “I wanted to be building things and thought, ‘Wow, I have to do all these other things, I have to write some kind of client profile and understand my client’s needs.’ I was just like, ‘Hand me a drill! I want to design something!’”
When he took the class last year, Uziel Rodriguez-Andujar was also thrown off initially by the independence teams had. Now a second-year UPRP student in mechanical engineering, he’s accustomed to lecture-based classes. “What I found so interesting is the way [they] teach the class, which is, ‘You make your own project, and we need you to find a solution to this. How it will look, and when you have it — that’s up to you,’” he says.
Clearing hurdles
Teaching the course on three different campuses introduces a number of challenges for students and instructors to overcome — among them, operating in three different time zones, bridging language barriers, navigating different cultural and institutional norms, communicating effectively, and designing and building prototypes over Zoom.
“The culture span is huge,” explains Epstein. “There are different ways of speaking, different ways of listening, and each organization has different resources.”
First-year MIT student EJ Rodriguez found that one of the biggest obstacles was trying to convey ideas to teammates clearly. He took the class this year, when the theme revolved around the environmental impacts of lithium mining. The client, the Nevada Museum of Art, wanted to find ways to engage visitors with its collection of artworks depicting mining-related landscape changes.
Rodriguez and his team designed a pendulum with a light affixed to it that illuminates a painting by a Native American artist. When the pendulum swings, it changes how the visitor experiences the artwork. The team built parts for the pendulum on different campuses, and they reached a point where they realized their pieces were incompatible. “We had different visions of what we wanted for the project, and different vocabulary we were using to describe our ideas. Sometimes there would be a misunderstanding … It required a lot of honesty from each campus to be like, ‘OK, I thought we were doing exactly this,’ and obviously in a really respectful way.”
It’s not uncommon for students at Diné College and UPRP to experience an initial hurdle that their MIT peers do not. Epstein notes, “There’s a tendency for some folks outside MIT to see MIT students as these brilliant people that they don’t belong in the same room with.” But the other students soon realize not only that they can hold their own intellectually, but also that their backgrounds and experiences are incredibly valuable. “Their life experiences actually put them way ahead of many MIT students in some ways, when you think about design and fabrication, like repairing farm equipment or rebuilding transmissions,” he adds.
That’s how Cauy Bia felt when he took the class in 2024. Currently a first-year graduate student in biology at Diné College, Bia questioned whether he’d be on par with the MIT students. “I’ve grown up on a farm, and we do a lot of building, a lot of calculations, a lot of hands-on stuff. But going into this, I was sweating it so hard [wondering], ‘Am I smart enough to work with these students?’ And then, at the end of the day, that was never an issue,” he says.
The value of reflection
Every two weeks, Terrascope students write personal reflections about their experiences in the class, which helps them appreciate their academic and personal development. “I really felt that I had undergone a process that made me grow as an engineer,” says Leon. “I understood the importance of people and engineering more, including teamwork, working with clients, and de-centering the project away from what I wanted to build and design.”
When Bia began the semester, he says, he was more of a “make-or-break-type person” and tended to see things in black and white. “But working with all three campuses, it kind of opened up my thought process so I can assess more ideas, more voices and opinions. And I can get broader perspectives and get bigger ideas from that point,” he says. It was also a powerful experience culturally for him, particularly “drawing parallels between Navajo history, Navajo culture, and seeing the similarities between that and Puerto Rican culture, seeing how close we are as two nations.”
Rodriguez-Andujar gained an appreciation for the “constant struggle between simplicity and complexity” in engineering. “You have all these engineers trying to over-engineer everything,” he says. “And after you get your client feedback [halfway through the semester], it turns out, ‘Oh, that doesn’t work for me. I’m sorry — you have to scale it down like a hundred times and make it a lot simpler.’”
For instructors, the students’ reflections are invaluable as they strive to make improvements every year. In many ways, you might say the class is an iterative design spiral, too. “The past three years have themselves been prototypes,” Epstein says, “and all of the instructional staff are looking forward to continuing these exciting partnerships.”
No Face, No Case: California’s S.B. 627 Demands Cops Show Their Faces
Across the country, people are collecting and sharing footage of masked law enforcement officers from both federal and local agencies deputized to do so-called immigration enforcement: arresting civilians, in some cases violently and/or warrantlessly. That footage is part of a long tradition of recording law enforcement during their operations to ensure some level of accountability if people observe misconduct and/or unconstitutional practices. However, as essential as recording police can be in proving allegations of misconduct, the footage is rendered far less useful when officers conceal their badges and/or faces. Further, lawyers, journalists, and activists cannot then identify officers in public records requests for body-worn camera footage to view the interaction from the officers’ point of view.
In response to these growing concerns, California has introduced S.B. 627 to prohibit law enforcement from covering their faces during these kinds of public encounters. This builds on legislation (in California and some other states and municipalities) that requires police, for example, “to wear a badge, nameplate, or other device which bears clearly on its face the identification number or name of the officer.” Similarly, police reform legislation passed in 2018 requires greater transparency by opening individual personnel files of law enforcement to public scrutiny when there are use of force cases or allegations of violent misconduct.
But in the case of ICE detentions in 2025, federal and federally deputized officers are not only covering up their badges—they're covering their faces as well. This bill would offer an important tool to prevent this practice, and to ensure that civilians who record the police can actually determine the identity of the officers they’re recording, in case further investigation is warranted. The legislation explicitly includes “any officer or anyone acting on behalf of a local, state, or federal law enforcement agency.”
This is a necessary move. The right to record police, and to hold government actors accountable for their actions, requires that we know who the government actors are in the first place. The new legislation seeks to cover federal officers in addition to state and local officials, protecting Californians from otherwise unaccountable law enforcement activity.
As EFF has stood up for the right to record police, we also stand up for the right to be able to identify officers in those recordings. We have submitted a letter to the state legislature to that effect. California should pass S.B. 627, and more states should follow suit to ensure that the right to record remains intact.
A bionic knee integrated into tissue can restore natural movement
MIT researchers have developed a new bionic knee that can help people with above-the-knee amputations walk faster, climb stairs, and avoid obstacles more easily than they could with a traditional prosthesis.
Unlike prostheses in which the residual limb sits within a socket, the new system is directly integrated with the user’s muscle and bone tissue. This enables greater stability and gives the user much more control over the movement of the prosthesis.
Participants in a small clinical study also reported that the limb felt more like a part of their own body than did participants with more traditional above-the-knee amputations.
“A prosthesis that's tissue-integrated — anchored to the bone and directly controlled by the nervous system — is not merely a lifeless, separate device, but rather a system that is carefully integrated into human physiology, offering a greater level of prosthetic embodiment. It’s not simply a tool that the human employs, but rather an integral part of self,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
Tony Shu PhD ’24 is the lead author of the paper, which appears today in Science.
Better control
Over the past several years, Herr’s lab has been working on new prostheses that can extract neural information from muscles left behind after an amputation and use that information to help guide a prosthetic limb.
During a traditional amputation, pairs of muscles that take turns stretching and contracting are usually severed, disrupting the normal agonist-antagonist relationship of the muscles. This disruption makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting.
Using the new surgical approach developed by Herr and his colleagues, known as agonist-antagonist myoneuronal interface (AMI), muscle pairs are reconnected during surgery so that they still dynamically communicate with each other within the residual limb. This sensory feedback helps the wearer of the prosthesis to decide how to move the limb, and also generates electrical signals that can be used to control the prosthetic limb.
In a 2024 study, the researchers showed that people with amputations below the knee who received the AMI surgery were able to walk faster and navigate around obstacles much more naturally than people with traditional below-the-knee amputations.
In the new study, the researchers extended the approach to better serve people with amputations above the knee. They wanted to create a system that could not only read out signals from the muscles using AMI but also be integrated into the bone, offering more stability and better sensory feedback.
To achieve that, the researchers developed a procedure to insert a titanium rod into the residual femur bone at the amputation site. This implant allows for better mechanical control and load bearing than a traditional prosthesis. Additionally, the implant contains 16 wires that collect information from electrodes located on the AMI muscles inside the body, which enables more accurate transduction of the signals coming from the muscles.
This bone-integrated system, known as e-OPRA, transmits AMI signals to a new robotic controller developed specifically for this study. The controller uses this information to calculate the torque necessary to move the prosthesis the way that the user wants it to move.
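To make the controller's role concrete, here is a minimal impedance-control sketch of how antagonistic muscle signals could be mapped to a knee torque. This is a generic spring-damper illustration under invented assumptions — the gains, ranges, and signal names are all hypothetical, and the study's actual control law is not described in this summary.

```python
def knee_torque(flexor_act: float, extensor_act: float,
                knee_angle: float, knee_velocity: float,
                k_gain: float = 40.0, b_gain: float = 2.0,
                angle_range: float = 1.2) -> float:
    """Map normalized (0..1) agonist/antagonist activations to a torque.

    The activation difference sets a desired equilibrium angle; a
    spring-damper law then drives the joint toward it. Returns N*m.
    All constants are illustrative, not from the study.
    """
    drive = max(-1.0, min(1.0, flexor_act - extensor_act))
    desired_angle = drive * angle_range            # radians of flexion
    spring = k_gain * (desired_angle - knee_angle)  # pull toward target
    damper = -b_gain * knee_velocity                # resist fast motion
    return spring + damper

# Strong flexor activation with the knee extended yields flexion torque.
print(knee_torque(0.8, 0.1, knee_angle=0.0, knee_velocity=0.0))
```

The appeal of an impedance-style law is that the wearer's own muscle signals set the target, while the spring and damper terms keep the joint compliant rather than rigidly position-controlled.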
“All parts work together to better get information into and out of the body and better interface mechanically with the device,” Shu says. “We’re directly loading the skeleton, which is the part of the body that’s supposed to be loaded, as opposed to using sockets, which is uncomfortable and can lead to frequent skin infections.”
In this study, two subjects received the combined AMI and e-OPRA system, known as an osseointegrated mechanoneural prosthesis (OMP). These users were compared with eight who had the AMI surgery but not the e-OPRA implant, and seven users who had neither AMI nor e-OPRA. All subjects took a turn at using an experimental powered knee prosthesis developed by the lab.
The researchers measured the participants’ ability to perform several types of tasks, including bending the knee to a specified angle, climbing stairs, and stepping over obstacles. In most of these tasks, users with the OMP system performed better than the subjects who had the AMI surgery but not the e-OPRA implant, and much better than users of traditional prostheses.
“This paper represents the fulfillment of a vision that the scientific community has had for a long time — the implementation and demonstration of a fully physiologically integrated, volitionally controlled robotic leg,” says Michael Goldfarb, a professor of mechanical engineering and director of the Center for Intelligent Mechatronics at Vanderbilt University, who was not involved in the research. “This is really difficult work, and the authors deserve tremendous credit for their efforts in realizing such a challenging goal.”
A sense of embodiment
In addition to testing gait and other movements, the researchers also asked questions designed to evaluate participants’ sense of embodiment — that is, to what extent their prosthetic limb felt like a part of their own body.
Questions included whether the patients felt as if they had two legs, whether the prosthesis felt like part of their body, and whether they felt in control of it. Each question was designed to evaluate the participants’ feelings of agency, ownership of the device, and body representation.
The researchers found that as the study went on, the two participants with the OMP showed much greater increases in their feelings of agency and ownership than the other subjects.
“Another reason this paper is significant is that it looks into these embodiment questions and it shows large improvements in that sensation of embodiment,” Herr says. “No matter how sophisticated you make the AI systems of a robotic prosthesis, it’s still going to feel like a tool to the user, like an external device. But with this tissue-integrated approach, when you ask the human user what is their body, the more it’s integrated, the more they’re going to say the prosthesis is actually part of self.”
The AMI procedure is now done routinely on patients with below-the-knee amputations at Brigham and Women’s Hospital, and Herr expects it will soon become the standard for above-the-knee amputations as well. The combined OMP system will need larger clinical trials to receive FDA approval for commercial use, which Herr expects may take about five years.
The research was funded by the Yang Tan Collective and DARPA.
Axon’s Draft One is Designed to Defy Transparency
Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.
Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.
You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.
Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.
For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system. Now we've concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you're a police chief or an independent researcher, because Axon designed it that way.
Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information, or which can be quickly deleted. Officers are supposed to edit Draft One's report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they're done, the officer is prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed it and made the edits necessary to ensure it is consistent with their recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.
Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI.
"We love having new toys until the public gets wind of them."
One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.
But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used.
So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won't indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk.
The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon's first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports.
"We love having new toys until the public gets wind of them," the administrator wrote.
No Record of Who Wrote What

The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like:
- Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible?
- How often are officers finding and correcting errors made by the AI, and are there patterns to these errors?
- If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer?
- Is the AI overstepping in its interpretation of the audio? If a report says, "the subject made a threatening gesture," was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says "yeah" throughout a conversation as a verbal acknowledgement that they're listening to what the officer says, is that interpreted as an agreement or a confession?
"So we don’t store the original draft and that’s by design..."
Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer's own recollection. If an officer generates a Draft One report multiple times, there's no way to tell whether the AI interpreted the audio differently each time.
Axon is open about not maintaining these records, at least when it markets directly to law enforcement.
In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”
To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because "the last thing" they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).
Following up on the same question, Axon's Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn't be required to save every draft of a police report as they're re-writing it. This is, of course, misdirection and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One involves two processes from two parties: Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.
Word processors are a mature technology whose consequences for police report-writing are well understood, but Draft One is still unproven. After all, every AI evangelist, including Axon, claims this technology is a game-changer. So why wouldn't an agency want to maintain a record that can establish the technology's accuracy?
It also appears that Draft One isn't simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department's Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It's more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon's engineers had yet to finalize the feature at the time it was rolled out.
One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Axon engineers have already discovered a bug that, on at least three occasions, allowed officers to circumvent the "guardrails" that supposedly deter them from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.
To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it's used. But Axon has intentionally made this difficult.
What the Audit Trail Actually Looks Like

You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what that exactly means.
The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we'll get to that in a minute).
This is disappointing because, without this information, it's nearly impossible to do even the most basic statistical analysis: determining how many officers are using the technology and how often.
Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:
- A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
- A log of an individual officer/user's basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings.
This means that, to do a comprehensive review, an evaluator may need to go through the record management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs.
An example of Draft One usage in an audit log.
An auditor could also go report-by-report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.
But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as "I acknowledge this report was generated from a digital recording using Draft One by Axon." If so, then an administrator can use "Draft One" as a keyword search to find relevant reports.
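That keyword search can also be approximated outside Axon's system. The minimal Python sketch below scans a folder of exported report narratives for the disclosure phrase; the plain-text file layout and the exact keyword are assumptions for illustration, not Axon's actual export format.

```python
# Hypothetical sketch: given a folder of exported report narratives (assumed to
# be plain-text files), flag the ones containing the Draft One disclosure
# keyword, mirroring the keyword search an agency administrator might run.
from pathlib import Path

DISCLOSURE_KEYWORD = "draft one"  # assumed disclosure phrasing, lowercased


def find_ai_assisted_reports(report_dir):
    """Return filenames of reports whose text mentions the disclosure keyword."""
    matches = []
    for path in sorted(Path(report_dir).glob("*.txt")):
        if DISCLOSURE_KEYWORD in path.read_text(errors="ignore").lower():
            matches.append(path.name)
    return matches
```

Note that this only works if the agency's policy puts the disclosure in the narrative text itself; absent that, as the agencies quoted below confirm, there is nothing to search for.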
Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon's most promoted clients, the Lafayette Police Department in Indiana, told us:
"Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed."
Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff's Office, which does require a disclosure at the bottom of each report that it had been written by AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.
They told us: "We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe."
We have requested further clarification from Axon, but they have yet to respond.
However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn't available to the police department itself.
In response to a request from Politico's Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports.
An Axon representative responded: "Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy."
But then, Axon followed up: "We track which reports use Draft One internally so I exported the data." Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future.
What is Being Done About Draft One
The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill; should it become law, any law enforcement use of Draft One would be unlawful.
Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, that they spend too much time doing paperwork. The current research on whether Draft One remedies this issue is mixed: some agencies claim it yields no real time savings, while others extol its virtues (although their data also shows that results vary even within a department).
In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It's like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards.
Given how untested this technology is and how eager the company is to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police report generated by AI, thus sidestepping one of the major current transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.
In King County, Washington, which includes Seattle, the district attorney’s office has been clear in its instructions: police should not use AI to write police reports. Its memo says:
We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.
We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product.
Conclusion

Police should not be using AI to write police reports. There are just too many unanswered questions about how AI will translate the audio of a situation and whether police will actually edit those drafts, and meanwhile there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems, or create new ones, in an already unfair and opaque criminal justice system.
EFF will continue to research and advocate against the use of this technology but for now, the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.
EFF's Guide to Getting Records About Axon's Draft One AI-Generated Police Reports
The moment Axon Enterprise announced a new product, Draft One, that would allow law enforcement officers to use artificial intelligence to automatically generate incident report narratives based on body-worn camera audio, everyone in the police accountability community immediately started asking the same questions.
What do AI-generated police reports look like? What kind of paper trail does this system leave? How do we get a hold of documentation using public records laws?
Unfortunately, obtaining these records isn't easy. In many cases, it's straight-up impossible.
Read our full report on how Axon's Draft One defies transparency expectations by design here.
In some jurisdictions, the documents are walled off behind government-created barriers. For example, California fully exempts police narrative reports from public disclosure, while other states charge fees to access individual reports that become astronomical if you want to analyze the output in bulk. Then there are technical barriers: Axon's product itself does not allow agencies to isolate reports that contain an AI-generated narrative, although an agency can voluntarily institute measures to make them searchable by a keyword.
This spring, EFF tested different public records request templates and sent them to dozens of law enforcement agencies we believed were using Draft One.
We asked each agency for the Draft One-generated police reports themselves, knowing that in most cases this would be a long shot. We also dug into Axon's user manuals to figure out what kind of logs are generated and how to carefully phrase our public records request to get them. We asked for the current system settings for Draft One, since there are a lot of levers police administrators can pull that drastically change how and when officers can use the software. We also requested the standard records that we usually ask for when researching new technologies: procurement documents, agreements, training manuals, policies, and emails with vendors.
Like all mass public records campaigns, the results were… mixed. Some agencies were refreshingly open with their records. Others assessed records fees well outside the usual range for a nonprofit organization.
What we learned about the process is worth sharing. Axon has thousands of clients nationwide that use its Tasers, body-worn cameras and bundles of surveillance equipment, and the company is using those existing relationships to heavily promote Draft One. We expect many more cities to deploy the technology over the next few years. Watchdogging police use of AI will require a nationwide effort by journalists, advocacy organizations and community volunteers.
Below we’re sharing some sample language you can use in your own public records requests about Draft One — but be warned. It’s likely that the more you include, the longer it might take and the higher the fees will get. The template language and our suggestions for filing public records requests are not legal advice. If you have specific questions about a public records request you filed, consult a lawyer.
1. Police Reports

Language to try in your public records request:
- All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received. If your agency requires a Draft One disclosure in the text of the message, you can use "Draft One" as a keyword search term.
Or
- The [NUMBER] most recent police report narratives that were generated using Axon Draft One between [DATE IN THE LAST FEW WEEKS] and the date this request is received.
If you are curious about a particular officer's Draft One usage, you can also ask for their reports specifically. However, it may be helpful to obtain their usage log first (see section 2).
- All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated by [OFFICER NAME] using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received.
We suggest using weeks, not months, because the sheer number of reports can get costly very quickly.
As an add-on to Axon's evidence and records management platforms, Draft One uses ChatGPT to convert audio taken from Axon body-worn cameras into the so-called first draft of the narrative portion of a police report.
When Politico surveyed seven agencies in September 2024, reporter Alfred Ng found that police administrators did not have the technical ability to identify which reports contained AI-generated language. “There is no way for us to search for these on our end,” a Lafayette, Indiana police captain told Ng. Six months later, EFF received the same no-can-do response from the Lafayette Police Department.
Although Lafayette Police could not create a list on their own, it turns out that Axon's engineers can generate these reports for police if asked. When the Frederick Police Department in Colorado received a similar request from Ng, the agency contacted Axon for help. The company does internally track reports written with Draft One and was able to provide a spreadsheet of Draft One reports (.csv) and even provided Frederick Police with computer code to allow the agency to create similar lists in the future. Axon told them they would look at making this a feature in the future, but that appears not to have happened yet.
But we also struck gold with two agencies: the Palm Beach County Sheriff's Office (PBCSO) in Florida and the Lake Havasu City Police Department in Arizona. In both cases, the agencies require officers to include a disclosure that they used Draft One at the end of the police narrative. Here's a slide from the Palm Beach County Sheriff's Draft One training:
And here's the boilerplate disclosure:
I acknowledge this report was generated from a digital recording using Draft One by Axon. I further acknowledge that I have reviewed the report, made any necessary edits, and believe it to be an accurate representation of my recollection of the reported events. I am willing to testify to the accuracy of this report.
As small a gesture as it may seem, that disclosure makes all the difference when it comes to responding to a public records request. Lafayette Police could not isolate the reports because its policy does not require the disclosure. A Frederick Police Department sergeant noted in an email to Axon that they could isolate reports when the auto-disclosure was turned on, but not after they decided to turn it off. This year, Utah legislators introduced a bill to require this kind of disclosure on AI-generated reports.
As the PBCSO records manager told us: "We are able to do a keyword and a timeframe search. I used the words ‘Draft One’ and the system generated all the Draft One reports for that timeframe." In fact, in Palm Beach County and Lake Havasu, records administrators dug up huge numbers of records. But, once we saw the estimated price tag, we ultimately narrowed our request to just 10 reports.
Here is an example of a report from PBCSO, which only allows Draft One to be used in incidents that don't involve a criminal charge. As a result, many of the reports were related to mental health or domestic dispute responses.
A machine readable text version of this report is available here. Full version here.
And here is an example from the Lake Havasu City Police Department, whose clerk was kind enough to provide us with a diverse sample of requests.
A machine readable text version of this report is available here. Full version here.
EFF redacted some of these records to protect the identity of members of the public who were captured on body-worn cameras. Black-bar redactions were made by the agencies, while bars with X's were made by us. You can view all the examples we received below:
- 10 Axon Draft One-assisted reports from the Palm Beach County Sheriff's Office
- 10 Axon Draft One-assisted reports from the Lake Havasu Police Department
We also received police reports (perhaps unintentionally) from two other agencies that were contained as email attachments in response to another part of our request (see section 7).
2. Audit Logs

Language to try in your public records request:
Note: You can save time by determining in advance whether the agency uses Axon Evidence or Axon Records and Standards, then choose the applicable option below. If you don't know, you can always request both.
Audit logs from Axon Evidence
- Audit logs for the period December 1, 2024 through the date this request is received, for the 10 most recently active users.
According to Axon's online user manual, through Axon Evidence agencies are able to view audit logs of individual officers to ascertain whether they have requested the use of Draft One, signed a Draft One liability disclosure or changed Draft One settings (https://my.axon.com/s/article/View-the-audit-trail-in-Axon-Evidence-Draft-One?language=en_US). In order to obtain these audit logs, you may follow the instructions on this Axon page: https://my.axon.com/s/article/Viewing-a-user-audit-trail?language=en_US.
In order to produce a list of the 10 most recently active users, you may click the arrow next to "Last Active" and then select the 10 most recent. The [...] menu item allows you to export the audit log. We would prefer these audits as .csv files if possible.
Alternatively, if you know the names of specific officers, you can name them rather than selecting the most recent.
Or
Audit logs from Axon Records and Axon Standards
- According to Axon's online user manual, through Axon Records and Standards, agencies are able to view audit logs of individual officers to ascertain whether they have requested a Draft One draft or signed a Draft One liability disclosure. https://my.axon.com/s/article/View-the-audit-log-in-Axon-Records-and-Standards-Draft-One?language=en_US
To obtain these logs using the Axon Records Audit Tool, follow these instructions: https://my.axon.com/s/article/Audit-Log-Tool-Axon-Records?language=en_US
a. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "M" into the audit tool. If no user comes up with M, please try "Mi."
b. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "J" into the audit tool. If no user comes up with J, please try "Jo."
c. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "S" into the audit tool. If no user comes up with S, please try "Sa."
You could also tell the agency you are only interested in Draft One related items, which may save the agency time in reviewing and redacting the documents.
Generally, many of the basic actions a police officer takes using Axon technology — whether it's signing in, changing a password, accessing evidence or uploading BWC footage — are logged in the system.
This also includes some actions an officer takes with Draft One. However, the system only logs three types of activity: requesting that Draft One generate a report, signing a Draft One liability disclosure, or changing Draft One's settings. And these logs are one of the only ways to identify which reports were written with AI and how widely the technology is used.
Unfortunately, Axon appears to have designed its system so that administrators cannot create a list of all Draft One activities taken by the entire police force. Instead, all they can do is view an individual officer's audit log to see when they used Draft One or look at the log for a particular piece of evidence to see if Draft One was used. These can be exported as a spreadsheet or a PDF. (When the Frederick Police Department asked Axon how to create a list of Draft One reports, the Axon rep told them that feature wasn't available and they would have to follow the above method. "To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy," Axon wrote in August 2024, then suggested it might come up with a long-term solution. We emailed Axon back in March to see if this was still the case, but they did not provide a response.)
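For researchers who do obtain those spreadsheet exports, tallying Draft One activity per officer is simple. The sketch below is a hypothetical Python example; the CSV column names ("User", "Action") are assumptions for illustration and will likely differ from a real Axon export.

```python
# Hypothetical sketch: tally Draft One-related actions per user from an
# exported audit-log CSV. Column names "User" and "Action" are assumptions;
# a real Axon export may label its fields differently.
import csv
from collections import Counter


def count_draft_one_actions(csv_path):
    """Count audit-log rows per user whose action text mentions Draft One."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if "draft one" in row.get("Action", "").lower():
                counts[row.get("User", "unknown")] += 1
    return dict(counts)
```

Even with a script like this, the underlying limitation stands: the logs must first be exported one user (or one piece of evidence) at a time.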
Here's an excerpt from a PDF version from the Bishop Police Department in California:
Here are some additional audit log examples:
- Campbell Police Department, California (XLSX)
- Lafayette Police Department, Indiana (XLSX)
- Bishop Police Department, California (PDF)
- Pasco Police Department, Washington (CSV)
If you know the name of an individual officer, you can try to request their audit logs to see if they used Draft One. Since we didn't have a particular officer in mind, we had to get creative.
An agency may manage their documents with one of a few different Axon offerings: Axon Evidence, Axon Records, or Axon Standards. The process for requesting records is slightly different depending on which one is used. We dug through the user manuals and came up with a few ways to export a random(ish) example. We also linked the manuals and gave clear instructions for the records officers.
With Axon Evidence, an administrator can simply sort the system to show the 10 most recent users and then export their usage logs. With Axon Records/Standards, the administrator has to start typing a name, and the system auto-populates suggestions. So we asked agencies to export the audit logs for the first few users who come up when they type the letters M, J, and S into the search (since those letters are common at the beginning of names).
Unfortunately, this method is a little bit of a gamble. Many officers still aren't using Draft One, so you may end up with hundreds of pages of logs that don't mention Draft One at all (as was the case with the records we received from Monroe County, NY).
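If an agency hands you a large spreadsheet or CSV export, a short script can save you from paging through irrelevant entries by hand. Here's a minimal sketch that keeps only rows mentioning a keyword; the function name is ours, and real exports will differ in column names and formats by agency and Axon product, so treat this as a starting point rather than a tool built for Axon's exact layout:

```python
import csv

def filter_audit_log(in_path, out_path, keyword="draft one"):
    """Copy only audit-log rows that mention the keyword in any field.

    Assumes a simple CSV export with a header row; the keyword match
    is case-insensitive and checks every column.
    """
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        writer.writerow(next(reader))  # keep the header row
        for row in reader:
            if any(keyword in field.lower() for field in row):
                writer.writerow(row)
```

Running this over a CSV like the one Pasco provided would leave you with just the Draft One-related rows to review.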
3. Settings

Language to try in your public records request:
- A copy of all settings and configurations made by this agency in its use of the Axon Draft One platform, including all opt-in features that the department has elected to use and the incident types for which the software can be used. A screen capture of these settings will suffice.
We knew the Draft One system offers department managers the option to customize how it can be used, including the categories of crime for which reports can be generated and whether or not there is a disclaimer automatically added to the bottom of the report disclosing the use of AI in its generation. So we asked for a copy of these settings and configurations. In some cases, agencies claimed this was exempted from their public records laws, while other agencies did provide the information. Here is an example from the Campbell Police Department in California:
(It's worth noting that while Campbell does require each police report to contain a disclosure that Draft One was used, the California Public Records Act exempts police reports from being released.)
Examples of settings:
- Bishop Police Department, California
- Campbell Police Department, California
- Pasco Police Department, Washington
4. Contracts

Language to try in your public records request:
- All contracts, memorandums of understanding, and any other written agreements between this agency and Axon related to the use of Draft One, Narrative Assistant, or any other AI-assisted report generation tool provided by Axon. Responsive records include all associated amendments, exhibits, and supplemental and supporting documentation, as well as all relevant terms of use, licensing agreements, and any other guiding materials. If access to Draft One or similar tools is being provided via an existing contract or through an informal agreement, please provide the relevant contract or the relevant communication or agreement that facilitated the access. This includes all agreements, both formal and informal, including all trial access, even if that access does not or did not involve financial obligations.
It can be helpful to know how much Draft One costs, how many user licenses the agency paid for, and what the terms of the agreement are. That information is often contained in records related to the contracting process. Agencies will often provide these records with minimal pushback or redactions. Many of these records may already be online, so a requester can save time and effort by looking around first. These are often found in city council agenda packets. Also, law enforcement agencies often will bump these requests to the city or county clerk instead.
Here's an excerpt from the Monroe County Sheriff's Office in New York:
These kinds of procurement records describe the nature and cost of the relationship between the police department and the company. They can be very helpful for understanding how much a continuing service subscription will cost and what else was bundled in as part of the purchase. Draft One, so far, is often accessed as an additional feature along with other Axon products.
We received too many documents to list them all, but here is a representative example of some of the other documents you might receive, courtesy of the Dacono Police Department in Colorado.
5. Training, Manuals and Policies

All training materials relevant to Draft One or Axon Narrative Assistant generated by this agency, including but not limited to:
- All training material provided by Axon to this agency regarding its use of Draft One;
- All internal training materials regarding the use of Draft One;
- All user manuals, other guidance materials, help documents, or related materials;
- Guides, safety tests, and other supplementary material that mention Draft One, provided by Axon between January 1, 2024 and the date this request is received;
- Any and all policies and general orders related to the use of Draft One, the Narrative Assistant, or any other AI-assisted report generation offerings provided by Axon (An example of one such policy can be found here: https://cdn.muckrock.com/foia_files/2024/11/26/608_Computer_Software_and_Transcription-Assisted_Report_Generation.pdf).
In addition to seeing when Draft One was used and how it was acquired, it can be helpful to know what rules officers must follow, what directions they're given for using it, and what features are available to users. That's where manuals, policies and training materials come in handy.
User manuals are typically going to come from Axon itself. In general, if you can get your hands on one, this will help you to better understand the mechanisms of the system, and it will help you align the way you craft your request with the way the system actually works. Luckily, Axon has published many of the materials online and we've already obtained the user manual from multiple agencies. However, Axon does update the manual from time to time, so it can be helpful to know which version the agency is working from.
Here's one from December 2024:
Policies are internal police department guidance for using Draft One. Not all agencies have developed a policy, but the ones they do have may reveal useful information, such as other records you might be able to request. Here are some examples:
- Palm Beach County Sheriff's Office General Order 563 - Axon Draft One
- Colorado Springs Police Department General Order 1904 - Use of Specialized Axon System
- Lake Havasu Police Department Policy 342 - Report Preparation
- Campbell Police Department Policy 344 - Report Preparation
- Lafayette Police Department Policy 608 - Computer Software and Transcription-Assisted Report Generation
Training and user manuals also might reveal crucial information about how the technology is used. In some cases these documents are provided by Axon to the customer. These records may illuminate what departments emphasize when instructing officers on how to use the product.
Here are a few examples of training presentations:
- Colorado Springs Police Department 2025-Q1-Draft-One-Training
- Palm Beach County Sheriff's Office - Axon Draft One Training Material
- Pasco Police Department - Axon Draft One Presentation
6. Evaluations

Language to try in your public records request:
- All final reports, evaluations, or other documentation concluding or summarizing a trial period, evaluation period, or pilot project
Many departments are getting access to Draft One as part of a trial or pilot program. The outcome of those experiments with the product can be eye-opening or eyebrow-raising. There might also be additional data or a formal report that reviews what the department was hoping to get from the experience, how they structured any evaluation of its time-saving value for the department, and other details about how officers did or did not use Draft One.
Here are some examples we received:
- The Effect of Artificial Intelligence has on Time Spent Writing Reports: An analysis of data from the Lake Havasu City Police Department
- Colorado Springs Police Department: Spreadsheets measuring amount of time officers spent writing reports versus using Draft One (zip)
7. Communications

Language to try in your public records request:
• All communications sent or received by any representative of this agency with individuals representing Axon referencing the following terms, including emails and attachments:
- Draft One
- Narrative Assistant
- AI-generated report
• All communications sent to or received by any representative of this agency with each of the following email addresses, including attachments:
- [INSERT EMAIL ADDRESSES]
Note: We are not including the specific email addresses here that we used, since they are subject to change when employees are hired, promoted, or find new gigs. However, you can find the emails we used in our requests on MuckRock.
The communications we wanted were primarily the emails between Axon and the law enforcement agency. As you can imagine, these emails could reveal the back-and-forth between the company and its potential customers, and these conversations could include the marketing pitch made to the department, the questions and problems police may have had with it, and more.
In some cases, these emails reveal cozy relationships between salespeople and law enforcement officials. Take, for example, this email exchange between the Dickinson Police Department and an Axon rep:
Or this email between a Frederick Police Department sergeant and an Axon representative, in which a sergeant describes himself as "doing sales" for Axon by providing demos to other agencies.
A machine-readable text version of this email is available here.
Emails like this also show which other agencies are considering using Draft One. For example, an email we received from the Campbell Police Department shows that the San Francisco Police Department was testing Draft One as early as October 2024 (the usage was confirmed in June 2025 by the San Francisco Standard).
A machine-readable text version of this email is available here.
Your mileage will certainly vary with these email requests, in part because agencies' ability to search their communications varies. Some agencies can search by a keyword like "Draft One" or "Axon," while other agencies can only search by a specific email address.
Communications can be one of the more expensive parts of the request. We've found that adding a date range and key terms or email addresses has helped limit these costs and made our requests a bit clearer for the agency. Axon sends a lot of automated emails to its subscribers, so the agency may quote a large fee for hundreds or thousands of emails that aren't particularly interesting. Many agencies respond positively if a requester reaches out to say they're open to narrowing or focusing their request.
Asking for Body-Worn Camera Footage

One of the big questions is how the Draft One-generated reports compare to the BWC audio the narratives are based on. Are the reports accurate? Are they twisting people's words? Does Draft One hallucinate?
Finding these answers requires both obtaining the police report and the footage of the incident that was fed into the system. The laws and process for obtaining BWC footage vary dramatically state to state, and even department to department. Depending on where you live, it can also get expensive very quickly, since some states allow agencies to charge you not only for the footage but the time it takes to redact the footage. So before requesting footage, read up on your state’s public access laws or consult a lawyer.
However, once you have a copy of a Draft One report, you should have enough information to file a follow-up request for the BWC footage.
So far, EFF has not requested BWC footage. In addition to the aforementioned financial and legal hurdles, the footage can implicate both individual privacy and transparency regarding police activity. As an organization that advocates for both, we want to make sure we get this balance right. After all, BWCs are a surveillance technology that collects intelligence on suspects, victims, witnesses, and random passersby. When the Palm Beach County Sheriff's Office gave us an AI-generated account of a teenager being hospitalized for suicidal ideations, we of course felt that the minor's privacy outweighed our interest in evaluating the AI. But do we feel the same way about a Draft One-generated narrative about a spring break brawl in Lake Havasu?
Ultimately, we may try to obtain a limited amount of BWC footage, but we also recognize that we shouldn't make the public wait while we work it out for ourselves. Accountability requires different methods, different expertise, and different interests, and with this guide we hope not only to shine light on Draft One, but to provide the schematics for others, including academics, journalists, and local advocates, to build their own spotlights to expose police use of this problematic technology.
Where to Find More Docs

Despite the variation in how agencies responded, we did have some requests that proved fruitful. You can find these requests and the documents we got via the linked police department names below.
Please note that we filed two different types of requests, so not all the elements above may be represented in each link.
Via DocumentCloud (PDFs)
- Dacono Police Department, Colorado
- Mount Vernon Police Department, Illinois
- Monroe County Sheriff's Office, New York
- Joliet Police Department, Illinois
- Elgin Police Department, Illinois
- Bishop Police Department, California
- Palm Beach County Sheriff's Office
- Lake Havasu City Police Department, Arizona
- Dickinson Police Department, North Dakota
- Firestone Police Department, Colorado
- Frederick Police Department (DocumentCloud and Google Drive. Frederick provided us a large number of emails in a difficult-to-manage PST format. We unpacked that PST into individual EML files. Because the agency did a keyword search, you may find that some of the emails are not relevant to the issue but do include the term "draft one." To reduce the noise, we removed emails that were generated prior to the existence of Draft One. We also removed emails that contained police reports with PII; we redacted those reports and uploaded them independently. While DocumentCloud allowed us to convert EML files to PDF files, it did not allow us to preserve the relationship between the emails and their attachments. You can find those records, with the relationships somewhat maintained, in Google Drive.)
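If you end up with a pile of individual EML files like we did, Python's standard library `email` module can triage them for you. This is a sketch of the kind of keyword pass described above, assuming a folder of `.eml` files; the function name is ours, and HTML-only or unusually encoded messages may need extra handling:

```python
import email
from email import policy
from pathlib import Path

def find_relevant_emails(folder, keyword="draft one"):
    """Return the names of EML files whose subject line or plain-text
    body mentions the keyword (case-insensitive)."""
    hits = []
    for path in sorted(Path(folder).glob("*.eml")):
        msg = email.message_from_bytes(path.read_bytes(),
                                       policy=policy.default)
        subject = msg.get("Subject", "") or ""
        body = msg.get_body(preferencelist=("plain",))
        text = body.get_content() if body else ""
        if keyword in (subject + " " + text).lower():
            hits.append(path.name)
    return hits
```

A pass like this separates the automated Axon notifications from the correspondence that's actually worth reading.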
Via MuckRock (Assorted filetypes)
- Pasco Police Department, Washington (Part 1, Part 2)
- Colorado Springs Police Department, Colorado
- Fort Collins Police Department, Colorado
- Campbell Police Department, California (Part 1, Part 2)
- Lafayette Police Department, Indiana
- East Palo Alto Police Department, California
Special credit goes to EFF Research Assistant Jesse Cabrera for public records request coordination.
Using Signal Groups for Activism
Good tutorial by Micah Lee. It includes some nonobvious use cases.