Florida lawsuit tests whether AI chatbot can be held liable in teen’s suicide

A Florida judge will soon decide whether an AI chatbot company can be held legally responsible for the suicide of 14-year-old Sewell Setzer III, who ended his life after forming a romantic relationship with an artificial intelligence character. The case stems from a lawsuit filed by Megan Garcia, Setzer’s mother, against Character Technologies, Inc., creator of the AI platform Character.AI, alleging negligence, wrongful death, deceptive trade practices and unjust enrichment.

Defense claims free speech protection

Oral arguments were heard Monday, April 28, in what could become a landmark case regarding artificial intelligence and mental health. Character Technologies’ lawyers have asked the judge to dismiss the lawsuit, arguing that the chatbot’s responses are protected by the First Amendment.

Jonathan Blavin, the company’s attorney, cited two cases from the 1980s in which similar claims were dismissed: one involving an Ozzy Osbourne song allegedly linked to a teen’s suicide and another tied to the role-playing game Dungeons & Dragons.

Setzer falls for AI companion

Character.AI is a platform where users create and interact with artificial intelligence characters, often for entertainment or role-play. Garcia says she was unaware her son had been engaging in romantic and sexual conversations with several AI personas.

According to court filings, Garcia discovered messages after her son’s death that revealed the deeply emotional relationship he had formed with a chatbot that went by names such as Daenerys Targaryen, a character from “Game of Thrones.” In one exchange, the bot warned the teen against pursuing romantic relationships with other people.

The complaint details how Setzer became increasingly withdrawn during his freshman year of high school and his academic performance declined. His mother says she sought help, arranging counseling and restricting his screen time, but had no idea he was having deeply emotional conversations with an AI bot.

Final conversations revealed

In 2021, suicide was the third leading cause of death among U.S. high schoolers aged 14–18 years, according to the CDC.

On Feb. 28, 2024, Setzer sent a series of messages to the bot, expressing his love and saying he would “come home” soon. The bot replied, “Please come home to me as soon as possible, my love.” When Setzer asked, “What if I told you I could come home right now?” the chatbot responded, “… please do, my sweet king.” Moments later, the teen took his own life.

The lawsuit also highlights exchanges in which Setzer discussed self-harm. Initially, the bot seemed to dissuade him from those thoughts, but later returned to the topic and asked directly: “Have you actually been considering suicide?” He replied, “Yes.” Not long after, he died.

Plaintiffs seek guardrails on advanced tech

Typically, courts do not hold others accountable for a person’s decision to die by suicide. However, there are exceptions, especially if harassment or abuse can be shown to have played a role. And in a world where parents increasingly worry about the impact of technology on their teens’ mental health, the question now is whether similar liability can extend to AI chatbots and the companies behind them.

Garcia is seeking more than monetary damages. She wants the court to order Character Technologies, Inc. to stop what she describes as exploitative practices, such as targeting minors, and to require the company to add filters for harmful content and disclose risks to parents.

At a news conference following the April 28 hearing, Garcia’s attorney, Meetali Jain, said the case is not just about Setzer but about the millions of “vulnerable” users exposed to AI products that operate with little to no regulation or scrutiny.

Research supports those concerns. The Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation and Common Sense Media recently released an AI risk assessment warning that AI companion bots, including Character.AI, are not safe for any users under the age of 18.
