A prosecutor used AI in a legal brief. A murder conviction is in jeopardy

A murder conviction in Georgia is in jeopardy because a prosecutor filed court briefs that contained egregious errors created by AI. 

The Georgia Supreme Court ordered a lower court to reconsider Hannah Payne’s bid for a new trial in the 2019 death of Kenneth Herring in Clayton County, Georgia, just south of Atlanta. When Herring left the scene of a car crash that Payne witnessed, she followed him and blocked his path with her car. During an altercation, police said, Payne shot and killed Herring.

Payne, who was 21 at the time, is white. Herring, who was 62, was Black. A jury convicted Payne in 2023 of felony murder, malice murder and other charges and sentenced her to life in prison, plus 13 years, without the chance for parole.

When Payne asked the trial court to overturn the jury’s verdict, the judge rejected her appeal, substantially quoting a brief by Assistant District Attorney Deborah Leslie.

But Leslie now acknowledges she relied on artificial intelligence in drafting the brief, which cited legal precedents that do not exist.

Now Leslie has been disciplined, and Payne could get a new trial.

‘Sidetracked’ by AI

In an opinion issued last week, Georgia’s high court barred Leslie from practicing before it for six months and ordered her to complete legal education on ethics. It also admonished the Clayton County district attorney’s office.

This is hardly the first time a lawyer has been sanctioned for using AI and not verifying its output. State and federal courts have disciplined lawyers — as well as judges — for serious, AI-generated mistakes in court papers.

However, this instance was unusual because it was a prosecutor who submitted unverified AI material and because the trial court’s opinion then repeated the false information. 

“As a result of these filings, we have been sidetracked from our obligation of resolving the merits of Payne’s appeal and have had to devote significant time and resources to the discovery of this misconduct and deciding what to do about it,” the state Supreme Court’s opinion said. 

“These filings, as well as the trial court’s order, contain multiple case citations which either do not exist, or which exist but do not support the propositions of law for which they are cited,” the opinion said. “While we have no rule against the responsible use of artificial intelligence software by attorneys, citing cases that do not exist or do not support the proposition for which they are cited is a violation of this Court’s rules and falls far beneath the conduct we expect from Georgia lawyers.”

The court also put lower courts on notice to be more scrupulous in checking briefs for AI slop.

“We strongly encourage trial courts to carefully review proposed orders,” the court wrote, “with the understanding that artificial intelligence software, with all of its potential risks and benefits, may have been used to prepare such proposed orders.”

Undermining confidence

Now Payne’s conviction could be set aside — not because of new evidence, but because of new technology.

The prosecutor’s use of AI came to light when the state Supreme Court heard oral arguments in Payne’s appeal.

Leslie acknowledged using AI after the justices flagged nine suspect citations in her brief. The court ultimately decided that her failure to check the brief before it was submitted was egregious enough to undermine confidence in the legal system.

“The Court’s reasoning rested less on substantive criminal-law precedent and more on institutional and ethical authorities governing appellate practice, lawyer candor, and judicial authority to police filings,” the opinion said. “The centerpiece was Supreme Court Rule 7, which places direct responsibility on parties and counsel to ensure filings are ‘carefully checked for truthfulness and accuracy.’”

The case underscores, once again, how AI is reshaping the legal system — and what happens when it’s used carelessly. 

The California State Bar is preparing to implement a rule requiring that lawyers “independently review and verify” AI output.

“Verification is moving from best practice to baseline competence,” AI strategy expert Stephen Smith wrote in his blog, “Intelligence by Intent.” Most lawyers, he wrote, “still don’t have a system for it.”

An AI success story

Some lawyers are leaning on AI not to research and write legal briefs, but to help them better prepare for important cases.

Neal Katyal, a former solicitor general in the Obama administration who successfully argued against President Donald Trump’s tariff authority before the Supreme Court, developed his own bespoke AI with a legal company to help him prepare for questioning by the justices. He called it “Harvey.”

“Harvey predicted many of the questions the Justices asked — sometimes almost word for word,” Katyal wrote on X. “Brilliant. Tireless. Occasionally insufferable. Here’s the catch: Harvey isn’t a person. Harvey is a bespoke AI I built over the last year with a legal AI company, trained on every question every Justice has asked in oral argument for 25 years, and everything they’ve ever written.”

Still, Katyal says the AI could not do everything.

“AI can predict,” he said. “AI can analyze. What AI cannot do is the one thing that actually won the argument. Connect. Read the room. Hear not just a Justice’s words, but her worry — and answer the worry. That is the irreducibly human skill.”


Ella Rae Greene, Editor In Chief