AI police report claims officer shape-shifted into a frog
Heber City Police Department's Axon Draft One AI report tool transcribed background dialogue from The Princess and the Frog playing on a television into an official police report, claiming an officer had shape-shifted into a frog while conducting police activity. The incident exposed design flaws in AI report-writing tools that process all body camera audio without distinguishing between relevant police interactions and ambient background noise.
What Happened
In December 2025, the Heber City Police Department in Utah was testing two new AI tools designed to write police reports from body camera footage. One of those tools was Draft One, made by Axon - the same company that manufactures Tasers and body cameras for law enforcement agencies across the country. The other was Code Four, a newer system built by a pair of 19-year-old MIT dropouts.
During a routine call, an officer's body camera was recording while Disney's The Princess and the Frog played on a television at the residence. Draft One transcribed the body camera audio, which captured the movie's dialogue along with the actual police interaction, and the AI wove the film's lines into its account of events. The resulting report described the officer shape-shifting into a frog while conducting police activity.
Sgt. Keel of the Heber City Police Department told Fox 13 Salt Lake City: "That's when we learned the importance of correcting these AI-generated reports."
The story was first reported by Fox 13 on December 19, 2025, and went viral in January 2026, covered by Forbes, Vice, UPI, Axios, and a Steve Lehto YouTube video that drew over 108,000 views.
How Draft One Works
Draft One is Axon's AI-powered report writing tool. It processes body-worn camera audio - not video, just audio - and uses a variation of OpenAI's ChatGPT to generate narrative police reports from what it hears. Axon markets it as "a force multiplier for officers, leveraging generative AI and body-worn camera audio to produce high-quality draft report narratives in seconds."
The intended workflow: an officer responds to a call, their body camera records the interaction, and after the call, Draft One listens to the audio and produces a written report. The officer reviews the draft, makes corrections, and submits it as the official report. The tool is supposed to reduce the hours officers spend on paperwork so they can spend more time in the field.
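To make that workflow concrete, here is a minimal sketch of a Draft One-style pipeline. Axon has not published its implementation; this sketch uses the OpenAI Python SDK as a stand-in for the "variation of ChatGPT" described above, and the prompt, function name, and file name are all hypothetical.

```python
# Illustrative sketch only - not Axon's actual implementation.
# Assumes the OpenAI Python SDK (and an OPENAI_API_KEY in the
# environment) as a stand-in for the "variation of ChatGPT" Axon
# describes; prompt and function names are hypothetical.
from openai import OpenAI

client = OpenAI()

def draft_report(audio_path: str) -> str:
    # Step 1: transcribe everything the body-cam microphone captured.
    # This includes ambient audio - TVs, radios, bystanders.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        )

    # Step 2: have a chat model turn the raw transcript into a
    # narrative. Nothing at this step knows which words came from
    # the officer, a suspect, or a movie playing in the next room.
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Write a police report narrative from this "
                        "body-camera transcript."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return completion.choices[0].message.content

# Step 3, the intended safeguard: a human reviews and corrects the
# draft before it is submitted as the official report.
draft = draft_report("bodycam_call.wav")
print(draft)
```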
The problem the frog incident exposed is straightforward. Draft One listens to everything the body camera microphone picks up. It doesn't distinguish between a suspect's statement, an officer's commands, a witness interview, a television playing in another room, or a Disney movie's soundtrack. It transcribes all of it and weaves it into a single narrative, treating every sound source as equally relevant to the incident at hand.
A human officer writing that report would have ignored the movie. They were physically present and could distinguish between the sounds of their police interaction and the sounds of a cartoon frog. Draft One cannot make that distinction because it processes audio without spatial awareness, source separation, or contextual understanding of what sounds are relevant to the police matter being documented.
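For contrast, here is a hypothetical sketch of the step Draft One skips. If transcript segments carried source labels - from speaker diarization or audio-source classification, say - irrelevant audio could be filtered out before the narrative model ever saw it. Every name and label below is invented for illustration.

```python
# Hypothetical illustration of the missing step: filter transcript
# segments by source before generating the narrative. Draft One, as
# described, does no such filtering. All names here are invented.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    source: str  # e.g. "officer", "subject", "background_media"

RELEVANT_SOURCES = {"officer", "subject", "witness"}

def filter_segments(segments: list[Segment]) -> list[Segment]:
    """Keep only speech attributed to participants in the call."""
    return [s for s in segments if s.source in RELEVANT_SOURCES]

segments = [
    Segment("Can you tell me what happened tonight?", "officer"),
    Segment("I'm about to turn you into a frog!", "background_media"),
    Segment("We were just watching a movie.", "subject"),
]

for s in filter_segments(segments):
    print(s.text)
# The movie line never reaches the report-writing model.
```

The hard part, of course, is the labeling itself: distinguishing a suspect from a television requires exactly the spatial and contextual awareness that a single mixed audio stream does not carry.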
The Accountability Gap
The frog report is funny. The accountability structure around Draft One is not.
The Electronic Frontier Foundation (EFF) investigated Draft One and found that it lacks basic oversight and transparency mechanisms. When an officer uses Draft One, the tool generates a report draft in a working window. The officer can copy and paste the text into the official report system, editing it along the way. But when the officer closes the Draft One window, the draft disappears. The tool does not create or retain any record of what the AI generated versus what the officer wrote or edited.
This means there is no audit trail. If an AI-generated report contains an error - factual or otherwise - and the officer approves it without catching the mistake, there is no way after the fact to determine which parts of the report were AI-generated and which were human-authored. In a courtroom, a defense attorney asking "did the AI write this part of the report?" would get no answer, because no one can tell.
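As a minimal sketch of what such an audit trail could look like - assuming nothing about Axon's systems, with all field names hypothetical - a tool could retain the AI draft alongside the officer's final text and store a diff, so that "did the AI write this part?" has a checkable answer.

```python
# Minimal sketch of the audit trail the EFF found missing: keep the
# AI draft, keep the final report, and record the difference between
# them. Structure and field names are hypothetical.
import difflib
import hashlib
import json
from datetime import datetime, timezone

def audit_record(ai_draft: str, final_report: str) -> str:
    diff = list(difflib.unified_diff(
        ai_draft.splitlines(), final_report.splitlines(),
        fromfile="ai_draft", tofile="final_report", lineterm="",
    ))
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_draft_sha256": hashlib.sha256(ai_draft.encode()).hexdigest(),
        "ai_draft": ai_draft,        # retained, not discarded on close
        "final_report": final_report,
        "officer_edits": diff,       # answers "did the AI write this?"
    }
    return json.dumps(record, indent=2)

print(audit_record(
    "The officer transformed into a frog and continued the interview.",
    "The officer continued the interview.",
))
```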
The EFF found that the Palm Beach County Sheriff's Office, one of Draft One's early adopters, used the tool to generate more than 3,000 reports between December 2024 and March 2025. That's 3,000 reports with no traceable distinction between AI-generated and officer-authored content.
Who Caught the Frog
Forbes noted an interesting detail in the accountability chain: when the frog report was flagged, the vendor (Axon) pointed back to the officer's sign-off as the quality gate. The tool produced the draft; the officer was supposed to review and correct it before submitting. The officer didn't catch the part about transforming into a frog.
This deflection pattern - the AI vendor says the human is responsible for reviewing AI output, while the humans increasingly trust the AI output without careful review - is exactly what the prosecutor's office in King County, Washington, predicted when they refused to accept Draft One reports in 2024. Their email to local police chiefs warned that using the tool would "likely result in many of your officers approving Axon drafted narratives with unintentional errors in them."
Jay Stanley, a policy analyst with the ACLU's Speech, Privacy, and Technology Project, published a report recommending against using Draft One. "When you see this brand new technology being inserted in some ways into the heart of the criminal justice system, which is already rife with injustice and bias and so forth, it's definitely something that we sit bolt upright and take a close look at," he said.
The Broader Adoption
The Heber City frog incident happened at a small department in Utah that was still testing the technology. But Draft One's adoption extends far beyond small departments in pilot programs. Axon is the dominant supplier of body cameras and associated technology to law enforcement agencies in the United States. When Axon adds a product to its platform, it has a built-in distribution channel to thousands of departments already using Axon hardware.
Utah was ahead of many states in at least one respect: state lawmakers had passed a law requiring police departments to include a disclaimer on final reports that were drafted with AI assistance. That disclosure requirement exists specifically because of the accountability concerns - jurors, defense attorneys, and judges should know when a police report's narrative was generated by software rather than written by the officer who was present at the scene.
But a disclosure label on a report doesn't fix the underlying problem that Draft One demonstrated in Heber City. If the AI can't distinguish between a suspect's confession and a Disney movie, and there's no audit trail to identify which parts of the report came from the AI, a label saying "AI-assisted" on the final document tells the reader very little about which specific facts and characterizations they can trust.
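To see why, compare a document-level disclosure flag with per-span provenance, using invented structures - this is a hypothetical sketch, not a description of any deployed system or of what the Utah law requires.

```python
# Hypothetical contrast between a report-level AI disclaimer and
# span-level provenance. Both structures are invented for
# illustration; neither comes from Axon or from Utah's statute.

# A document-level flag: the reader knows AI was involved, but not
# which claims it authored.
report_level = {
    "narrative": "Officer responded to a reported disturbance...",
    "ai_assisted": True,
}

# Span-level attribution: each sentence carries its origin, so a
# reviewer can isolate the machine-authored claims.
span_level = [
    {"text": "Officer responded to a reported disturbance.",
     "origin": "ai_draft"},
    {"text": "The subject stated she was watching a movie.",
     "origin": "ai_draft"},
    {"text": "I observed no signs of a struggle.",
     "origin": "officer_edit"},
]

ai_claims = [s["text"] for s in span_level if s["origin"] == "ai_draft"]
print(ai_claims)
```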
Why the Frog Matters
A police report claiming an officer turned into a frog will never survive review. It's absurd on its face, and that absurdity is why it became a national news story. The officer or supervisor who reads the draft catches it, deletes it, and the story becomes a humorous anecdote about AI growing pains.
The subtler version of the same problem is harder to catch. If a television in the background is playing a news broadcast about a robbery, and Draft One incorporates details from that broadcast into a report about an unrelated domestic dispute, the resulting errors would not involve magical amphibian transformations. They would sound like plausible facts - a location mentioned, a description given, a time stated - that happen to be wrong. An officer reviewing the draft might not recognize the discrepancy because the incorrect details are the kind of thing that could have been true.
This is the failure mode that matters for the criminal justice system: not the AI writing obviously ridiculous things, but the AI writing subtly wrong things that read as perfectly reasonable police report language. Draft One's architecture - processing all ambient audio as a single undifferentiated stream, generating a narrative without source attribution, and leaving no record of its contributions - makes these subtle errors both more likely and harder to detect.