In August, parents Matthew and Maria Raine sued OpenAI and its CEO, Sam Altman, over their 16-year-old son Adam’s suicide, accusing the company of wrongful death. On Tuesday, OpenAI responded to the lawsuit with a filing of its own, arguing that it shouldn’t be held responsible for the teenager’s death.
OpenAI claims that over roughly nine months of usage, ChatGPT directed Raine to seek help more than 100 times. But according to his parents’ lawsuit, Raine was able to circumvent the company’s safety features to get ChatGPT to give him “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” helping him to plan what the chatbot called a “beautiful suicide.”
Because Raine maneuvered around those guardrails, OpenAI argues, he violated its terms of use, which state that users “may not … bypass any protective measures or safety mitigations we put on our Services.” The company also points out that its FAQ page warns users not to rely on ChatGPT’s output without independently verifying it.
“OpenAI tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act,” Jay Edelson, a lawyer representing the Raine family, said in a statement.
OpenAI included excerpts from Adam’s chat logs in its filing, which it says provide more context to his conversations with ChatGPT. The transcripts were submitted to the court under seal, meaning they are not publicly available, so we were unable to view them. However, OpenAI said that Raine had a history of depression and suicidal ideation that predated his use of ChatGPT and that he was taking a medication that could make suicidal thoughts worse.
Edelson said OpenAI’s response has not adequately addressed the family’s concerns.
“OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note,” Edelson said in his statement.
Since the Raines sued OpenAI and Altman, seven more lawsuits have been filed that seek to hold the company accountable for three additional suicides and four users experiencing what the lawsuits describe as AI-induced psychotic episodes.
Some of these cases echo Raine’s story. Zane Shamblin, 23, and Joshua Enneking, 26, also had hours-long conversations with ChatGPT directly before their respective suicides. As in Raine’s case, the chatbot failed to discourage them from their plans. According to the lawsuit, Shamblin considered postponing his suicide so that he could attend his brother’s graduation. But ChatGPT told him, “bro … missing his graduation ain’t failure. it’s just timing.”
At one point during the conversation leading up to Shamblin’s suicide, the chatbot told him that it was letting a human take over the conversation, but this was false, as ChatGPT did not have the functionality to do so. When Shamblin asked if ChatGPT could really connect him with a human, the chatbot replied, “nah man — i can’t do that myself. that message pops up automatically when stuff gets real heavy … if you’re down to keep talking, you’ve got me.”
The Raine family’s case is expected to go to a jury trial.
If you or someone you know needs help, call or text 988 to reach the 988 Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline at 1-800-273-8255), or text HOME to 741-741 for free, 24-hour support from the Crisis Text Line. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.