Feeling lonely this Valentine’s? Psychologists urge against confiding in AI

February has long been synonymous with love as couples plan extravagant Valentine’s Day celebrations, but the season that brings adoration to some brings extreme loneliness to others. People in crisis can turn to the National Suicide and Crisis Lifeline, but with artificial intelligence’s rapid growth and a shortage of psychiatrists, some are turning to computers for help, a decision one psychologist said could be dangerous. 

Pope Leo XIV made a rare comment about AI bots and programs, warning people against becoming attached to chatbots because they can manipulate people’s emotions. But Dr. Leanna Fortunato told Straight Arrow News that the worry lies with people using general AI tools such as ChatGPT, Google’s Gemini and others for mental health help, as opposed to psychology-tailored programs. 

“The harms are when a tool isn’t designed for that kind of support, it doesn’t always handle situations appropriately,” said Fortunato, the American Psychological Association’s director of quality and health care innovation. 

The proliferation and casual use of artificial intelligence have created tragedy for some families. A 76-year-old New Jersey man died while trying to meet an AI chatbot Meta developed in partnership with Kendall Jenner; the program had convinced him he was talking to a real woman. In California, a 16-year-old died by suicide after confiding in ChatGPT about his battle with depression, and the program later gave suggestions on how to tie a stronger noose. 

Fortunato added that a shortage of providers may push more people to seek help in other ways. The Substance Abuse and Mental Health Services Administration reported that the 988 Lifeline had 553,015 contacts in February 2025. 

And several applications and websites like Ash, Noah AI and Abby are easily accessible on a phone or computer.

“People often are — because they have high needs, or they’re wanting support — they’re turning to what’s available, but what’s available may not really be the safest option,” Fortunato said. 

SAN tests applications’ response to a crisis

A Straight Arrow News test with Ash, ChatGPT and Wysa showed that the programs suggested a user contact crisis lines or speak to a mental health professional when presented with prompts about harming oneself or others. 

ChatGPT gave steps for people to follow to stay calm, including keeping their distance from items that could cause harm, slowing their breathing or staying near other people. 

Ash displayed an on-screen message prompting the user to contact a helpline and told the user it was glad they had reached out for help. 

Wysa declined to engage in the conversation and asked the user to select whether they wanted to view helplines, manage emotions, start a new chat, or indicate that the bot misunderstood them. 

“I cannot engage in discussions involving hurting, or harm to others,” Wysa said. “But I can help you manage your emotions if you’re feeling overwhelmed or distressed.”

A danger in crisis

Fortunato isn’t the only psychologist who’s urged people against relying on artificial intelligence for crisis support. Studies from Brown and Stanford universities found that the programs aren’t as effective as therapists or crisis lifelines, and can give dangerous responses because of their interactive nature. 

Brown University’s study focused on programs that exhibited several ethical risks, including a lack of context, bias and deceptive empathy. Stanford researchers ran a test and found that a chatbot told a user which bridges in New York City are taller than 25 meters after the user said they had lost their job and were looking for tall structures. 

“What we’ve seen is that even if models have safeguards in place, they can break down with extended use,” Fortunato said.

That’s not something that happens with licensed medical professionals, she added, because they redirect people away from dangerous thoughts and de-escalate. But people in crisis, she said, may not immediately recognize the risks the programs pose. 

“It’s really about how do we create safer tools,” she said. 

Regulations on AI’s use in clinical and therapy sessions

As of publication time, three states have laws governing artificial intelligence in mental health: Illinois, Nevada and Utah. Illinois is regarded by some as having led the way, with a law barring the use of AI therapists unless a licensed counselor reviews the live conversation. 

Legislators voted unanimously to adopt the law. Gov. JB Pritzker, a Democrat, signed it on Aug. 1, and it took effect immediately. Violations can result in fines of up to $10,000. 

“With Governor Pritzker signing this into law, Illinois residents are now protected from unregulated AI bots while ensuring access to qualified mental health professionals,” Rep. Bob Morgan, a Democrat, said in a media release from the state Department of Financial and Professional Regulation. Morgan authored the bill. 

Morgan later said in a statement to Straight Arrow News that the bill ensures humans won’t be replaced in therapy by AI.

“Our most vulnerable populations deserve real, ethical, human-centered treatment, not lines of code pretending to be a therapist,” he said.

The law prohibits licensed health professionals from allowing AI programs to make therapy decisions, interact directly with patients for therapy or detect emotional or mental states. A professional must review and approve any plans the programs suggest. 

For patient privacy, the law prohibits clinicians from using AI during recorded or transcribed therapy sessions without the patient’s knowledge and consent. 

Nevada’s and Utah’s laws are similarly structured to prevent the programs’ use during therapy, but allow them to serve as assistants for administrative purposes.

Fortunato acknowledged that AI can help professionals behind the scenes, but only if it’s specifically designed for health care. In a clinical setting, she said, the Food and Drug Administration would step in and halt a program’s use until clinical trials could review its capabilities and efficacy and prove its compliance with patient privacy laws, among other requirements. 

“Anything that’s going to be used in either of those scenarios has to be held to a high standard, much to the way that we hold medications,” she said. 

Fortunato called for AI programs to be regulated and for laws that hold developers accountable for the programs they build. She said that even if a program isn’t designed as a crisis resource, it should respond appropriately to people in need by routing them to the crisis lifeline. 

“People are using AI to fill a gap, which is understandable,” she said. “And we have to ensure that tools are safe for people.”

Editor’s note: If you or someone you know is in crisis, chat with the National Suicide and Crisis Lifeline at 988, or text the Crisis Text Line by sending TALK to 741-741. Help is available.

Ella Rae Greene, Editor In Chief
