Is it real or is it AI? Judges punishing lawyers who rely on tech for legal filings
With tight deadlines and time-consuming research demands, attorneys increasingly lean on artificial intelligence (AI) to help draft legal documents. At the same time, courts are discovering filings riddled with errors, fake case citations and so-called “hallucinations.”
The 6th U.S. Circuit Court of Appeals, for example, recently fined two attorneys $15,000 each for including more than two dozen fake citations and factual misrepresentations in documents related to a lawsuit over a fireworks show in Athens, Tennessee. Lawyers Van Irion and Russ Egli, the court said, “have sullied the reputation of our bar, which must now litigate under the cloud of their conduct.”
The message from this and other courts is clear: Using AI is acceptable. Failing to fact-check it is not.
“Citing even a single fake case can be sanctionable,” the appellate judges wrote, “because ‘no brief, pleading, motion, or any other paper filed in any court should contain any citations — whether provided by generative AI or any other source — that a lawyer has not personally read and verified.’”
Using — and misusing — AI in legal documents
One of the earliest high-profile examples of AI misuse in legal filings came to light in 2023, when a federal judge fined two New York attorneys $5,000 for submitting documents that contained fictitious case law and legal arguments. Since then, similar cases have piled up.
That same year on the West Coast, the State Bar of California, which oversees the licensing and discipline of California attorneys, became the first in the country to provide formal AI guidance.
“In California, regardless of whether attorneys use technology in their work, including generative artificial intelligence, they remain responsible for their work product,” George Cardona, chief trial counsel for the State Bar of California, told Straight Arrow News.
“Attorneys in all practice areas must ensure competent use of generative AI,” Cardona said. “This includes understanding, to a reasonable degree, how the technology functions, its limitations and the terms of use and policies governing client data.”
In 2024, the American Bar Association issued its first formal ethics opinion on lawyers’ use of AI. Similar to the California rule, the ABA opinion said AI use requires human oversight and fact-checking.
Still, legal documents continued to contain egregious AI-generated errors. By mid-2025, The National Law Review had documented 156 instances in which lawyers cited fake cases.
“Apparently, it happens more frequently than one would think,” wrote attorney Linn F. Freedman of the law firm Robinson & Cole in Providence, Rhode Island. “Many lawyers have already been sanctioned by courts to send the message that citing fake cases generated by AI is a waste of the court’s time, as well as a waste of the time and resources of opposing counsel and parties.”
This year, the sanctions continue. In New Orleans, a federal appellate court fined a lawyer $2,500 for failing to verify her AI-generated brief. In Kansas, a federal judge fined four lawyers a total of $12,000 for submitting AI-assisted briefs riddled with errors. One of the four, Sandeep Seth, used ChatGPT to help write briefs, while the other three signed off on AI-generated documents without checking them.
“This has been an embarrassing lesson,” Seth told Reuters. “Firms should not use AI as a tool in any capacity without strict policies in place in order to avoid errors.”
Judges have not been immune to AI’s promises of making work easier.
Last year, in two unrelated cases on the same day, two federal judges — Henry Wingate of Mississippi and Julien Neals of New Jersey — issued error-riddled rulings after failing to verify AI-generated output.
A profession adopting AI
As AI tools have become widely accessible in just a few years, many in the legal field have embraced the technology. A 2025 study by Thomson Reuters found that 72% of legal professionals “view AI as a force for good in their profession.”
Lawyers are using AI tools, such as ChatGPT and Claude, to draft contracts, summarize cases and conduct research — tasks that typically require hours of billable work. AI does it effortlessly and quickly — sometimes accurately, and sometimes not.
“AI is everywhere and it is inevitable,” Carey Wood, senior partner and litigator at the California-based firm The Lemon Law Experts, told SAN. “[A]s everyone keeps saying, if you are not using it, you are behind because the other side is.”
However, Wood uses AI with caution.
“I have found that I can use AI, but only because I already possess legal knowledge and can use it as a tool to find things more quickly,” Wood said. “What is much more dangerous is asking AI to do legal analysis.”
While AI can summarize cases or laws, she said, “it is not capable of analyzing and applying laws in the context of case law or applying law to facts.”
Meanwhile, tools such as RealityCheck are being developed to help spot AI errors and hallucinations — before judges or opposing counsel do.
But regulation has struggled to keep pace with the technology.
Lawyers misusing AI have been sanctioned under a federal procedural rule — originally adopted in 1937 and last updated in 1993 — that requires attorneys to ensure documents filed with courts are factual and legally sound. Other judges cite a rule, adopted by most states in the 1980s, that generally requires attorneys to be honest with courts.
As the use of AI becomes more pervasive in legal practice, some states, such as California, and individual courts are issuing their own rules or warnings, resulting in a patchwork of guidance rather than binding law.
Many questions remain unanswered: How much can lawyers lean on AI to create court filings? Should the failure to edit AI-generated content be considered negligence or misconduct? Should disclosing the use of AI be mandatory?
The only clear direction for lawyers using AI seems to be this: If they don’t fact-check the work, they can be held accountable for its errors.
“Attorneys remain responsible for all work submitted on behalf of a client and must independently review and verify any AI-generated output, including analysis and citations, for accuracy before relying on it in practice or submitting it to a court,” Cardona told SAN.
