Judges face backlash after using AI in legal docs
In a Mississippi civil rights case, a judge’s ruling included allegations not in the lawsuit, and it used the wrong names for the parties involved. How could U.S. District Judge Henry Wingate get this basic information wrong?
That same day, July 23, District Judge Julien Neals of New Jersey withdrew a ruling that contained similar factual errors. Why would Neals write about items outside the scope of the suit?
The answer is the same in both cases. These error-riddled rulings landed in the U.S. legal system because both judges used artificial intelligence.
Lawyers and judges are increasingly coming under fire for tapping AI to research and help write crucial legal documents, often with bad results. These AI-generated documents sometimes cite precedents that don’t exist or rulings that never happened, highlighting the inherent dangers of relying too heavily on technology.
“These generative AI technologies, like ChatGPT, are designed to be tools,” Jennifer Huddleston, a senior fellow in technology policy at the libertarian-leaning Cato Institute, told Straight Arrow News, “not replacements for human judgment in the judicial and legal system.”
On the Senate floor Monday, Judiciary Chairman Chuck Grassley, R-Iowa, warned judges.
“I call on every judge in America … to take this issue seriously and formalize measures to prevent the misuse of artificial intelligence in their chambers,” Grassley admonished.
He instructed the Administrative Office of the U.S. Courts and the Judicial Conference of the United States to quickly issue AI guidelines for federal jurists. But as AI technology advances, both federal and state courts are struggling to keep pace.
A warning
The legal field is facing a reckoning. For years, judges around the nation have fined and reprimanded lawyers for using AI to assist with their legal documents.
Now judges, too, could face punishment.
“The revelation that judges may include errors generated by artificial intelligence in their decisions reveals that the use of AI across all facets of law practice is outpacing regulators’ ability to ensure its accuracy and that nobody is immune to the subtle errors that AI can sneak into its output,” Laura McAdams, deputy general counsel for Pearl.com, a platform that offers human professionals to verify AI content, told SAN.
In letters to the judges who allowed the errors, Grassley said attorneys have faced scrutiny and punishment from judges. But judges, he said, should be held to the same standard, if not a higher one.
“No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor, and factual accuracy,” Grassley wrote.
The judges in Mississippi and New Jersey said members of their staff used AI to prepare the legal documents. Both withdrew the rulings.
AI guidelines
This isn’t the first time technology has outpaced the legal system.
“We’ve all been using AI far longer than we realize, using spellcheck, or autocorrect, that are themselves a form of AI,” said Huddleston, who studies the intersection of technology and law. “I think that we’re still seeing norms evolve in real time about what and how these more broad tools, like ChatGPT, are going to be used in different professions. A lot of that will develop amongst professional organizations, as well as with real-time societal norms.”
In July, the federal judiciary issued interim AI guidelines through a task force that included judges, as well as information technology workers and chambers staff from across the U.S.
“With the increasing use of AI platforms such as OpenAI’s ChatGPT and Google Gemini, and integration of AI functions in legal research tools, AI use has become more common in the legal landscape,” Judge Robert J. Conrad Jr., the director of the Administrative Office of the U.S. Courts, who formed the AI task force, wrote to Grassley.
“AI presents a host of opportunities and potential benefits for the judicial branch,” Conrad wrote, “as well as concerns around maintaining high ethical standards, preserving the integrity of judicial opinions, safeguarding sensitive Judiciary data, and ensuring the security of the Judiciary’s IT systems.”
Moving forward
Meanwhile, a growing number of state and local courts are enacting AI guidelines. New York, which set out a new policy earlier this month, joins at least four other states that created AI rules this year.
While AI should be used with care, it shouldn’t be banned from courtrooms, Huddleston said.
“We should be cautious of calls to ban AI from the courtroom … it could eliminate tools, like spellcheck and autocorrect, or an autotranscription or autocaption on a call,” Huddleston said.
What’s needed, she said, is human accountability.
“When individuals use AI, they need to be responsible for what the AI does,” she said. “The practitioner could be held accountable for those mistakes under the rules of professional conduct.”
The federal judges who issued the error-riddled rulings said they have since adopted guidelines to improve how rulings are reviewed. Neals said his team created an AI policy “pending definitive guidance” from the U.S. Courts’ Administrative Office.
Given the patchwork of rules and guidelines emerging state by state, Huddleston believes the approach will be more organic than hierarchical.
“Just as we have a wide array of court systems, they will have varying norms,” she said. “You will likely see trends and best practices converge. There will likely be an outlier that has their own unique process that occurs. But we’re seeing this happening very organically, through conversations within the legal system, within each court system, rather than from some kind of top-down ban.”
However, McAdams has another idea.
“Everyone would benefit from a singular, nationwide framework on AI,” the Pearl.com attorney told SAN. “Ideally, this would include considerations of whether AI output can be corroborated against known case law, security measures that can ensure sensitive evidence or personal identifying information isn’t put into generative AI platforms, necessary disclosures of AI use, and assurances that any legal professional remains responsible for the content they pull from AI.”
