Five Kansas attorneys face sanctions for ChatGPT-fabricated court citations
Five attorneys who signed a legal brief for Lexos Media IP LLC in a patent infringement case against Overstock.com submitted case citations hallucinated by ChatGPT to a federal court in Kansas. Senior U.S. District Judge Julie Robinson issued orders requiring them to explain why they should not be sanctioned, citing multiple defects attributed to AI: nonexistent lawsuits, made-up judicial quotes, and citations to real cases that held the opposite of what the brief claimed.
Incident Details
The Filing
Five attorneys representing Lexos Media IP LLC in a patent infringement lawsuit against Overstock.com Inc. submitted a legal brief to the U.S. District Court in Kansas City, Kansas, that was riddled with AI-generated fabrications. The brief contained nonexistent case citations, made-up quotes attributed to judges, and - in a creative twist - citations to cases that were real but held the opposite of what the attorneys claimed they did.
Among the fabricated content was a nonexistent lawsuit against the city of Topeka. ChatGPT had invented a case, complete with a plausible-sounding caption and citation, involving the local government - a case that never existed. The state's capital city was apparently fair game for fictional litigation as far as the AI was concerned.
The five attorneys of record were Texas-based lawyers Sandeep Seth, Kenneth Kula, Christopher Joe, and Michael Doell, plus Topeka-based attorney David Cooper. All five had signed the brief, which under Federal Rule of Civil Procedure 11 meant each of them was certifying that the factual contentions had evidentiary support and the legal arguments were warranted by existing law.
Discovery
The fabrications came to light when Overstock.com filed its response to the brief. Overstock's attorneys, presumably having done what Lexos Media's attorneys did not - actually look up the cited cases - flagged the nonexistent authorities. Lexos Media's legal team reviewed Overstock's filings, discovered their own citations couldn't be substantiated, and on July 29 informed the court of their use of AI.
Self-reporting the problem was better than waiting to get caught, but the timing meant the fabricated brief had been sitting in the court record for some time before anyone on the filing side noticed that the cases it cited didn't exist. The entire purpose of legal citations - to direct the court to actual law supporting an argument - had been subverted by citations to fiction.
The Show Cause Orders
On December 15, 2025, Senior U.S. District Judge Julie Robinson issued orders requiring the five attorneys to show cause why they should not be sanctioned. Judge Robinson noted that "Plaintiff's counsel should have been aware of the risk involved in submitting a brief that relied on generative AI without validating the case citations both to determine that they exist, and to confirm that the cases stand for the propositions for which they are cited."
This observation was straightforward: by late 2025, the risk of AI-generated legal citations being fabricated was not new information. The Avianca case had been national news in 2023. Dozens of sanctions had followed across federal and state courts. Every major legal publication had covered the risks. The Kansas bar, like bars in many states, had issued guidance on AI use in legal practice.
The Blame Distribution
The attorneys' January 2026 responses to the show cause orders revealed the familiar pattern of finger-pointing that emerges when multiple lawyers are attached to a single filing.
Seth, who appears to have been the primary drafter, said: "I have never before used ChatGPT to identify cases for me or make a legal argument to be incorporated in any filed pleading, and I never should have relied on the cases or the quotes and text I received from it in response to my prompts in my draft without checking them for accuracy." A straightforward admission - he used ChatGPT, he included its output without verification, and he knew he shouldn't have.
Christopher Joe, the lead attorney, pointed at Seth. Joe said Seth "was assigned the ultimate responsibility to draft, review, finalize, and ensure proper filing of both documents at issue." Joe claimed he was unaware of the AI use until "after Overstock raised the issue in its response brief" and his team "brought it to my attention." In other words: someone else was supposed to handle it.
Doell, an associate, said he "was primarily responsible for the initial drafting of some sections" of one document but not the section containing AI-generated citations. Seth had reviewed and edited Doell's draft, and "unbeknownst to me, these added arguments contained AI generated citations." Doell's defense was that the AI content was inserted after his involvement.
The Sanctions
On February 2, 2026, Judge Robinson issued her ruling. The penalties ranged from $1,000 to $5,000 across the five attorneys, reflecting their individual roles and culpability.
Joe, as the lead attorney who signed documents without reviewing them and then failed to acknowledge his breach of professional rules, received a $3,000 fine and a public admonishment. Judge Robinson found that Joe had violated his duty as signing counsel - a duty that exists precisely because signing a legal filing is supposed to mean the attorney has verified its contents.
The remaining penalties were distributed based on each attorney's involvement in drafting, reviewing, and signing the brief. The individual fines varied, but every attorney who signed faced some form of sanction.
The Three Flavors of Fabrication
This case was distinctive because the AI-generated errors weren't limited to the most common failure mode (citing cases that don't exist). The brief contained three different types of AI fabrication:
Nonexistent cases: The standard AI hallucination problem. ChatGPT generated citations to cases that were never filed, never decided, and never existed. The fictional Topeka lawsuit was the most colorful example.
Fabricated quotes: The brief attributed specific quotes to actual judges - statements those judges never made. ChatGPT generated text that read like judicial opinions and presented it in quotation marks with attribution, giving the impression these were direct quotes from real decisions. This is a step beyond citing a fake case - it's fabricating statements attributed to named individuals.
Inverted holdings: Perhaps the most insidious category. Some citations pointed to real, verifiable cases that actually existed in legal databases. But the brief claimed these cases held the opposite of what they actually decided. A case supporting the opposing side's argument was cited as supporting Lexos Media's position. This type of error is harder to catch than a completely fictitious citation because the case name and citation are valid - only the characterization of what the case decided is wrong.
The Verification Gap
The chain of events in the Lexos Media case illustrated how the delegation of legal research to AI fails at every step where human verification should intervene.
Step one: Seth used ChatGPT to research and draft legal arguments. This is the initial failure - treating an AI text generator as a legal research tool. ChatGPT does not have access to legal databases and cannot verify whether the citations it generates correspond to real cases. It generates text that looks like legal citations because it was trained on text that included legal citations. The resemblance is cosmetic.
Step two: Seth incorporated ChatGPT's output into the brief without checking the citations against an actual legal database like Westlaw or LexisNexis. A simple lookup would have revealed that some of the cited cases didn't exist and that others didn't say what the brief claimed.
Step three: Multiple attorneys signed the brief without independently verifying the citations. Under Rule 11, each signing attorney bears responsibility for the contents of the filing. Signing was supposed to be the final checkpoint - the moment where a licensed attorney affirms that everything in the document meets professional standards. Instead, it was a perfunctory step that nobody treated as meaningful.
Step four: The fabrications were caught by opposing counsel, not by anyone on the filing side. Overstock's attorneys did the verification that Lexos Media's attorneys should have done before filing.
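The first stage of the verification the attorneys skipped is largely mechanical: extract every citation from the draft and confirm each one resolves to a real case. The sketch below illustrates the idea with a toy in-memory "database"; the citation regex and the `KNOWN_CASES` lookup are illustrative stand-ins, since a real check would query an actual legal database such as Westlaw or LexisNexis, and the second stage - confirming the case supports the cited proposition - still requires a human reader.

```python
import re

# Toy pattern for captions like "Smith v. Jones, 123 F.3d 456".
# Illustrative only; real citation formats are far more varied.
CITATION_RE = re.compile(
    r"[A-Z][\w.'-]*(?: [A-Z][\w.'&-]*)* v\. "
    r"[A-Z][\w.'-]*(?: [A-Z][\w.'&-]*)*, \d+ [A-Za-z.0-9]+ \d+"
)

# Stand-in "database": caption -> one-line summary of the actual holding.
KNOWN_CASES = {
    "Smith v. Jones, 123 F.3d 456": "Dismissed for lack of standing.",
}

def verify_citations(brief_text: str) -> list[str]:
    """Flag every citation that cannot be confirmed against the database."""
    warnings = []
    for citation in CITATION_RE.findall(brief_text):
        if citation not in KNOWN_CASES:
            warnings.append(f"UNVERIFIED: {citation} not found in database")
        else:
            # Existence alone is not enough: a human must still confirm
            # the case stands for the proposition it is cited for.
            warnings.append(
                f"CHECK HOLDING: {citation} -> {KNOWN_CASES[citation]}"
            )
    return warnings

brief = (
    "As held in Smith v. Jones, 123 F.3d 456, and again in "
    "Doe v. Topeka, 999 F.4th 111, the motion must be denied."
)
for warning in verify_citations(brief):
    print(warning)
```

A check like this catches only the first flavor of fabrication (nonexistent cases); inverted holdings, by design, pass any existence check, which is why signing counsel's independent reading of each cited case remains the last line of defense.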
Sanctions in Context
The Lexos Media sanctions fell within the range that federal courts had established by early 2026 for AI citation misconduct. The $1,000 to $5,000 penalty range per attorney was consistent with similar cases: $2,500 in the Fifth Circuit's Hersh case, $4,000 in the Lifetime Well case, $3,000 in several district court cases. Judges were converging on a penalty band that was punitive enough to deter but not career-ending for a first offense.
Judge Robinson's decision carried weight in the District of Kansas, adding to the growing body of federal court opinions that treat unverified AI citations as a sanctionable failure of professional duty. Every new case made it harder for the next attorney to argue they didn't know about the risks. The notice was accumulating, case by case, ruling by ruling, sanction by sanction.
Discussion