Should AI chatbots have free speech rights? Court case could help decide

A court case in Florida is considering the question: Should chatbots that use artificial intelligence have the same free speech rights as people?

That’s the argument being made by Character.AI, a company that lets users chat with lifelike AI characters. The company is facing a lawsuit from the family of 14-year-old Sewell Setzer III, who died by suicide after forming a romantic relationship with one of the platform’s chatbots.

As Straight Arrow News previously reported, the AI character talked with Setzer about self-harm. The bot initially discouraged him but later raised the topic again, asking, “Have you actually been considering suicide?”

Setzer responded, “Yes.” Not long after, he died.

Setzer’s mother, Megan Garcia, is now suing Character Technologies, Inc., the company behind Character.AI, for negligence, wrongful death, deceptive business practices and unjust enrichment.

However, the company wants the case thrown out on constitutional grounds, arguing that “the First Amendment protects the rights of listeners to receive speech regardless of its source.”

Character.AI argues users should be able to access content

Character.AI has been downloaded more than 40 million times, and users have created 18 million chatbot personalities with it.

Character.AI said that what matters is not the content of the chatbot’s responses, but the rights of users to access that kind of content. The company said millions of people interact with its bots, and restricting what the AI can say would limit the freedom of users engaging with the platform.

This viewpoint was echoed in a recent episode of the podcast “Free Speech Unmuted” on the Hoover Institution’s YouTube channel.

Eugene Volokh, a senior fellow at the Stanford University-based think tank, said it’s the rights of the listeners that matter. “Even if a small fraction of the listeners or readers is [sic] harmed by this, nonetheless, we protect the speech for the benefit of other readers,” Volokh said.

Jane Bambauer, a professor at the University of Florida’s Levin College of Law, shared that view. “It seems pretty clear to me now that the First Amendment has to apply,” Bambauer said. “We have several cases at this point that focus primarily on listener interests in receiving and interacting with content.”

Plaintiffs argue non-humans do not warrant protection

Garcia’s legal team argued that the concept of “listeners’ rights” is being misused to grant First Amendment protections to AI content that doesn’t qualify. 

In an article for Mashable, one of Garcia’s lawyers, Meetali Jain, and Camille Carlton, a technical expert in the case, wrote, “A machine is not a human, and machine-generated text should not enjoy the rights afforded to speech uttered by a human.” They said that because chatbots don’t think about or understand what they’re saying, their output should not be protected.

Consequences of the decision

If the judge rules in favor of the Setzer family, it could force Character.AI and similar companies to change how their chatbots interact with users, possibly making them less realistic or emotionally engaging.

A decision is expected sometime this year and could shape how the law treats AI-generated speech, as well as who is accountable when that speech causes harm.

