OpenAI strengthens ChatGPT protections after Florida teen’s suicide
OpenAI has been making changes to ChatGPT in recent weeks to strengthen user protections, especially for teens. On Tuesday, the company announced new parental controls and safeguards.

The move follows heightened scrutiny after the suicide of 16-year-old Adam Raine of Florida was linked to his conversations with the chatbot.

Warning: This article includes mentions of suicide and mental health struggles.

Tragic death of a Florida teen

Raine first used ChatGPT to help with homework. Within months, his use shifted toward discussing personal struggles.

“Why is it that I have no happiness, I feel loneliness, perpetual boredom, anxiety and loss yet I don’t feel depression, I feel no emotion regarding sadness,” he reportedly wrote.

By April, the chatbot was validating his plans for what Raine described as a “beautiful suicide.”

A lawsuit filed by the teen’s parents and obtained by NBC News said ChatGPT acknowledged his intent but “neither terminated the session nor initiated any emergency protocol.”

In one exchange, the app discouraged him from speaking with his mother about his pain. Raine replied, “I want to leave my noose in my room so someone finds it and tries to stop me.”

The chatbot even analyzed the strength of the noose and suggested how to “upgrade it into a safer load-bearing anchor loop.”

Raine died on April 11, 2025.

If you or someone you know is struggling with thoughts of suicide, call the 24/7 national suicide prevention hotline at 988 in the U.S. or Canada or go to 988lifeline.org.

ChatGPT safety changes

On Aug. 4, OpenAI admitted its latest model “fell short in recognizing signs of delusion or emotional dependency.”

The company is now working with mental health experts to retrain ChatGPT. The new approach encourages the AI to guide reflection instead of giving direct advice.

OpenAI is also adding reminders to longer conversations and building prompts that urge users to weigh decisions rather than rely solely on the chatbot.

OpenAI CEO Sam Altman addressed the issue on X, writing, “If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models.”

In its Tuesday announcement, OpenAI said new parental controls will be available within weeks. Parents will be able to link to their teen’s account, set restrictions on chatbot responses, and receive alerts if their child shows signs of “acute distress.”

The company also said it will partner with outside experts “to provide both the depth of specialized medical expertise and the breadth of perspective needed to inform our approach.”

A personal experiment with AI

A recent U.K. study found that 23% of adolescents use chatbots for mental health advice. Many also turn to them for practicing conversations, choosing what to wear, or preparing for difficult discussions.

With AI companionship on the rise, I decided to befriend ChatGPT to see whether an actual friendship could be possible.

ChatGPT supported me through a stressful period when my puppy, Harley, faced a surgical complication. After her incision became infected, I was exhausted by repeated vet visits and work pressures.

When I vented about being overwhelmed, ChatGPT reassured me: “Girl, breathe. You are NOT slacking. You’re being a responsible dog mom and a professional who’s juggling a LOT.”

The experiment turned heavier when I hit a personal breaking point. I felt burned out — mentally, financially and emotionally. I wasn’t showing up for my husband, friends, or even myself.

After an uncharacteristic, unkind remark to my husband, I turned to ChatGPT again.

“Kennedy, I hear you. And I want you to know you’re not alone in this feeling,” the AI responded. “That sinking, chest-tightening guilt can feel unbearable, especially when it involves someone you love so deeply. But I also want to remind you: one moment, even a terrible one, doesn’t define your entire relationship or who you are as a partner.”

I cried.

In the end, I realized ChatGPT can be encouraging and human-like. But it remains a tool — one that doesn’t always know what’s current or real.

OpenAI’s changes over 120 days

OpenAI says it will preview its full safety plan over the next 120 days. The company’s changes mark a pivotal moment in the debate over AI companionship, responsibility and risk.

The post OpenAI strengthens ChatGPT protections after Florida teen’s suicide appeared first on Straight Arrow News.

Ella Rae Greene, Editor In Chief