India's Supreme Court calls AI-hallucinated citations in trial court order "misconduct"

India's Supreme Court stayed a property-dispute ruling after discovering the trial court judge had relied on non-existent, AI-generated case citations. An Andhra Pradesh junior civil judge admitted using an AI tool for the first time without verifying the outputs. The Supreme Court termed the reliance on fabricated judgments "misconduct" with "a direct bearing on the integrity of the adjudicatory process." Separately, the Bombay High Court fined a litigant 50,000 rupees for filing AI-generated submissions citing the non-existent case "Jyoti vs. Elegant Associates." The Chief Justice flagged an "alarming trend" of AI-fabricated judgments, including one titled "Mercy vs Mankind."

Incident Details

Severity: Facepalm
Company: Andhra Pradesh Civil Court
Perpetrator: Judge
Incident Date:
Blast Radius: Property-dispute ruling stayed by Supreme Court; institutional concern raised over AI-generated judgments across Indian judiciary; litigant fined for separate AI-fabricated filing

The Property Dispute That Wasn't Grounded in Law

In India's sprawling judicial system - home to roughly 80,000 courts and an estimated 50 million pending cases - an additional junior civil judge in Andhra Pradesh sat down to write a ruling on a property dispute. The facts of the underlying case were ordinary enough, the kind of real-estate disagreement that fills court dockets by the thousands. What made this particular ruling extraordinary was what the judge used to support her legal reasoning: case citations generated by an AI tool that turned out not to exist.

The judge later admitted in a report to the Andhra Pradesh High Court that she had used an AI legal research tool for the first time and had not verified whether the judgments it produced were real. They were not. The AI had hallucinated the citations - fabricating case names, case numbers, and legal reasoning with the confident specificity that makes AI hallucinations so dangerous in professional contexts. The judge, apparently trusting her shiny new research tool, incorporated these phantom precedents into her ruling and issued it as a binding order in a real dispute between real people over real property.

How It Unraveled

The fabricated citations came to light when the losing party in the property dispute challenged the ruling before the Andhra Pradesh High Court in January 2026. A review of the trial court's order revealed that the case law cited in support of the decision simply did not exist in any legal database. There were no cases by those names, no reported decisions matching the citations, no legal precedents that the judge purported to be following.

The High Court, perhaps in a spirit of judicial collegiality, took a measured approach. It upheld the junior judge's decision, characterizing the error as one made in good faith and concluding that the fabricated citations did not materially affect the legal principles underlying the ruling. The real property law, the High Court reasoned, supported the outcome regardless of the fake citations used to dress it up.

The Supreme Court of India was not nearly so forgiving.

The Supreme Court's Rebuke

When the matter reached the Supreme Court, the bench drew a sharp distinction between an error in legal reasoning and what this actually was. This was not a judge who got the law wrong. This was a judge who cited law that did not exist, producing a ruling grounded in fabricated authority.

"We take cognisance of the trial court deploying AI-generated non-existing, fake or synthetic alleged judgments and seek to examine its consequences and accountability as it has a direct bearing on the integrity of the adjudicatory process," the Supreme Court bench stated.

The Court termed the reliance on AI-fabricated judgments "misconduct" rather than a mere error in decision-making - a characterization with significant implications. An error suggests a judge who tried to get it right and failed. Misconduct suggests a failure of professional duty. The distinction matters because it opens the door to accountability measures that go beyond simply correcting the ruling.

The Supreme Court stayed the trial court's order, meaning the property-dispute ruling was effectively frozen pending further review. The matter was posted for hearing on March 10, 2026, with the Court signaling its intent to examine the consequences and accountability framework for AI-assisted judicial decisions more broadly.

The Court also characterized the situation as one of "considerable institutional concern" - language that in Supreme Court parlance is roughly equivalent to a fire alarm. The concern was not about this one property dispute. It was about the integrity of the judicial system itself when judges start relying on AI tools that generate fiction with the formatting and confidence of real legal authority.

The Bombay Court's Separate Encounter

The Andhra Pradesh case was not an isolated incident. In a separate proceeding, the Bombay High Court fined a litigant 50,000 rupees (approximately $580) for filing AI-generated submissions that cited a non-existent case styled "Jyoti vs. Elegant Associates." The underlying case, Deepak Shivkumar Bahry vs. Heart and Soul Entertainment Ltd., was a real writ petition - but the legal authorities cited in support of the arguments were entirely fabricated.

The Bombay court's response was direct: AI may assist legal research, but the professional responsibility to verify every citation before submitting it to a court is non-negotiable. The fine was modest, but the warning was clear. The court specifically cautioned against "dumping" machine-generated content into legal filings without any human audit.

"Mercy vs. Mankind" and the Growing Trend

On February 17, 2026 - just days before the Supreme Court's ruling on the Andhra Pradesh matter - a bench headed by Chief Justice of India Surya Kant heard a separate case where attorneys had submitted petitions containing AI-fabricated legal citations. Among the invented case names the court encountered: "Mercy vs. Mankind."

The Chief Justice described the phenomenon as "absolutely uncalled for" and flagged an "alarming trend" of lawyers using AI tools to draft petitions without verifying the case law they produce. Justice BV Nagarathna, also on the bench, noted that even in cases where real judgments were cited, the portions quoted from those judgments sometimes did not actually exist in the reported decisions. The AI wasn't just making up cases - it was making up passages within real cases, like a student who copies a real book title into their bibliography but invents the page numbers and quotes.

The Institutional Response

India's judiciary has not been blind to the risks of AI. The Supreme Court itself published a white paper on AI in the judiciary, outlining best practices and guidelines for AI use by judges, lawyers, and clerks. In July 2025, the Kerala High Court became the first High Court to issue a formal AI policy for its district judiciary, stipulating that while AI tools could be used for administrative tasks or translation, any output involving legal citations must be "meticulously verified."

Similar warnings have emanated from other jurisdictions. The High Court of England and Wales cautioned lawyers about AI-generated case material in June 2025, following a series of cases involving fictitious or partially fabricated rulings. Courts across the United States have imposed sanctions on lawyers for the same problem. The pattern is global, consistent, and apparently resistant to the lesson: AI language models generate legal citations that look right, feel right, and are completely made up.

The Uncomfortable Question

What makes the Indian Supreme Court case particularly striking is that the AI-hallucinated citations were not submitted by an overeager junior lawyer or a self-representing litigant who didn't know better. They were produced and relied upon by a sitting judge - the person in the system whose professional duty is to apply the law correctly. If the judge on the bench cannot be trusted to verify the legal authorities underlying their own ruling, the system's quality-control mechanism has failed at the most fundamental level.

The Andhra Pradesh High Court's response - that the error was made in "good faith" and didn't affect the outcome - may have been legally defensible, but it missed the point the Supreme Court eventually made. The issue is not whether the right result was reached despite the wrong citations. The issue is that a judicial order was issued on the basis of fabricated legal authority, and unless that is treated as a serious failing, the incentive to verify AI output before relying on it will remain roughly zero.

For India's millions of litigants waiting for their cases to be heard, the concern is existential. If AI tools can help an overburdened judiciary work faster, that's a genuine benefit. But if those same tools are generating phantom precedents that get embedded in binding rulings without verification, speed comes at the cost of the one thing a legal system cannot afford to lose: the assurance that the law being applied actually exists.