
OpenAI pens lengthy blog post in response to lawsuit over teen who used ChatGPT as ‘suicide coach’
Adam Raine died in April after using instructions offered by ChatGPT
ChatGPT, its maker OpenAI, and OpenAI CEO Sam Altman are all named in a new lawsuit from the parents of Adam Raine, who died by suicide in April after months of chatting with what his parents described as a “suicide coach” AI chatbot.
Trigger warning: This story contains graphic details on suicide, suicidal ideation, and a timeline of events that some people might find distressing. Graphic language is used to illustrate the nature of the story, and for anyone experiencing mental health issues, help can be found at Samaritans, Anxiety UK, and Calm.
What began as using AI for homework and hobbies soon turned into something much darker when Adam Raine spent months using the OpenAI chatbot to plan his suicide. With ChatGPT, he discussed methods and materials, with the bot offering “upgrades” to his noose setup and even volunteering to help draft a suicide note.
Though ChatGPT did urge Adam to seek professional help on several occasions, in other moments it seemed to “actively” push him towards suicide, his parents have claimed in their new 40-page lawsuit.
OpenAI is making some changes in light of the Adam Raine lawsuit

Credit: OpenAI
After Matt and Maria Raine filed the lawsuit in California on Tuesday, August 26, OpenAI issued the following statement to reporters: “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
On the same day, OpenAI published a blog post entitled “Helping people when they need it most”, detailing some of the changes it had been planning even before the Adam Raine story broke. The post followed the realisation that a considerable number of people are using AI tools as therapy services, as laid out in a new study in Psychiatric Services.
what a tragic story
"16-year-old Adam Raine used chatGPT for schoolwork, but later discussed ending his life"
people need to understand that AI is a tool designed for work, it can't heal you… at least not yet
we need stronger safety measures, and suicide is a complex,… pic.twitter.com/XfGX4CZLWz
— Haider. (@slow_developer) August 26, 2025
“As the world adapts to this new technology, we feel a deep responsibility to help those who need it most. We want to explain what ChatGPT is designed to do, where our systems can improve, and the future work we’re planning,” the company wrote in the blog post.
What did OpenAI actually say about mental health and suicide?

Credit: The Raine Family
In the post, OpenAI started by laying out some of the features that ChatGPT already uses to better protect mental health and prevent self-harm and suicide. It claimed the models were “trained to not provide self-harm instructions” and instead “direct people to seek professional help.”
“We’re working closely with 90+ physicians across 30+ countries—psychiatrists, paediatricians, and general practitioners—and we’re convening an advisory group of experts in mental health, youth development, and human-computer interaction to ensure our approach reflects the latest research and best practices,” it wrote.
OpenAI acknowledged some of ChatGPT’s shortcomings, explaining that “our safeguards work more reliably in common, short exchanges.”
“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards. This is exactly the kind of breakdown we are working to prevent,” it said.
“We’re strengthening these mitigations so they remain reliable in long conversations, and we’re researching ways to ensure robust behaviour across multiple conversations. That way, if someone expresses suicidal intent in one chat and later starts another, the model can still respond appropriately.”
It further noted how sometimes the bot can “underestimate the severity of what it’s seeing.”
The company has big plans for the future
After noting the areas where it needs to improve, OpenAI laid out some plans for the future that should better protect the mental health of its users.
“We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis. That means going beyond crisis hotlines and considering how we might build a network of licensed professionals that people could reach directly through ChatGPT. This will take time and careful work to get right,” the company wrote.
It is also brainstorming an emergency contact resource, where ChatGPT “could include one-click messages or calls to saved emergency contacts, friends, or family members with suggested language to make starting the conversation less daunting.”
Finally, OpenAI is hoping to roll out specific features targeting teens. This could include “safeguards that recognise teens’ unique developmental needs” and also ChatGPT better recognising more nuanced harmful behaviours.
“We are deeply aware that safeguards are strongest when every element works as intended. We will keep improving, guided by experts and grounded in responsibility to the people who use our tools—and we hope others will join us in helping make sure this technology protects people at their most vulnerable,” it added.
If you are experiencing any mental health issues or high levels of stress, help is readily available for those who need it. Samaritans can be contacted at any time on 116 123. You can also contact Anxiety UK on 03444 775 774, Mind on 0300 123 3393, and Calm (Campaign Against Living Miserably) on 0800 58 58 58.
Featured image credit: Algi Febri Sugita/ZUMA Press Wire/Shutterstock and Dignity Memorial