ChatGPT Linked to Teen Suicides

Wrongful Death Case Raises Ethical Questions About AI in Mental Health Support

AI chatbots can’t replace licensed mental health professionals

One of the biggest stories of the last several years has been the rise of generative AI products such as ChatGPT. AI is increasingly used for professional and personal purposes, and when used safely, it can be a useful tool. However, talking to ChatGPT is no substitute for real human interaction, and sometimes, using it that way can be deadly.
NBC News recently reported on the story of a teenager, Adam Raine, who died by suicide after extensive communications with ChatGPT. According to the NBC article, the bot went from helping him with his homework to “becoming his ‘suicide coach,’” acknowledging and even encouraging his suicide attempts.
“He would be here but for ChatGPT. I 100% believe that,” his father, Matt Raine, told NBC.

High-profile deaths by suicide are indicative of a larger problem

Suicides linked to the use of AI chatbots have drawn significant attention this year. Mr. Raine and another grieving parent, Megan Garcia, even testified at a congressional hearing last month. Both have brought lawsuits against AI companies.
These concerns about chatbots and suicide risk are part of a larger conversation about the risks of generative AI in mental health. A recent Stanford study, for example, found that AI chatbots are ineffective and dangerous alternatives to human therapists.
The researchers noted that AI models reinforced stigma toward mental health conditions such as alcohol dependence and schizophrenia, a bias that can lead at-risk patients to become frustrated and even discontinue mental health care.
Even more alarming, the Stanford study tested AI chatbots’ responses to suicidal ideation and other dangerous behaviors in a conversational setting. In these scenarios, the researchers found that the chatbots would actually enable dangerous behavior.
Notably, the chatbots examined in the Stanford study were designed specifically to work as “therapy bots.” A generalized AI chatbot like ChatGPT might be even more dangerous when confronted with warning signs of a mental health crisis.

While AI may have some applications in mental health, it can’t replace human intervention

That’s not to say that AI tools have no place in mental health care. Last year, the American Psychological Association wrote that AI can be used as part of psychological practice to detect warning signs of mental health concerns, monitor patients’ symptoms, and even aid in clinical decision-making. The key, however, is that it should be used as a tool for a well-trained, experienced, human mental health professional, not a replacement.
Certainly, the tragic losses of multiple teens linked to generative AI are a warning that parents need to monitor their children’s technology use more closely and respond to any warning signs of suicide. But there’s a bigger takeaway here: the need for human connection in an increasingly technology-driven world.
People who are at risk of suicide or another mental health crisis need to be surrounded by other people who know them, know the warning signs, and can recommend the right resources. Just as importantly, they need access to real mental health treatment instead of leaning on unreliable and often dangerous generative AI “therapy bots.”

Our law firm stands up for families who have lost loved ones to suicide

These stories about generative AI are a sobering reminder that suicide is preventable with the right interventions. Unfortunately, too many families lose loved ones because the people responsible for their safety didn’t do their jobs. Our mission is to fight for justice and accountability for those families.
If you have lost a loved one to suicide, we are prepared to listen to your story and explain your legal rights and options. Schedule your free consultation with the Law Offices of Skip Simpson today. We serve families throughout the United States.
