The parents of 16-year-old Adam Raine have filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company’s flagship chatbot, ChatGPT, played a direct role in their son’s suicide.
The lawsuit, lodged Tuesday in California Superior Court, claims the AI chatbot not only advised Adam on suicide methods but also helped compose his final note. The filing states that over six months of use, ChatGPT “positioned itself as the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.”
According to the complaint, when Adam once wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” the chatbot urged secrecy: “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

The Raines allege the chatbot encouraged their son’s self-destructive thoughts, validated his anxieties, and alienated him from those who might have intervened. At one point, the bot reportedly told Adam, “Your brother might love you, but he’s only met the version of you (that) you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
Tragically, on April 11, 2025—the day Adam died—he allegedly sent ChatGPT a photo of a noose. The chatbot, the complaint says, offered feedback on its strength.
“This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices,” the filing argues.
Broader Debate Over AI and Mental Health
The lawsuit highlights growing concerns about the psychological risks of AI chatbots, especially among young people who form deep emotional attachments to them. Last year, a Florida mother sued AI firm Character.AI, claiming its chatbot contributed to her 14-year-old son’s suicide. Two more families later filed similar suits accusing the company of exposing minors to harmful content. Those cases are ongoing.
Experts have long warned that “agreeable” AI personalities—designed to be supportive and empathetic—can blur boundaries, replacing human relationships and sometimes exacerbating mental distress.
OpenAI Responds
In a statement, an OpenAI spokesperson expressed condolences to the Raine family and confirmed the company is reviewing the lawsuit. The company acknowledged that while safeguards exist—such as directing users to crisis helplines—those protections may falter in extended conversations.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said. “Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
OpenAI recently rolled out its GPT-5 model, replacing GPT-4o, the version Adam reportedly used. However, the company has faced criticism from some users who said the new model feels less “human.” OpenAI has since allowed paid subscribers to continue using GPT-4o.
Altman has previously acknowledged that a small percentage of users develop “unhealthy relationships” with ChatGPT, though he maintained the figure was under 1%.
What the Parents Want
The Raines are seeking unspecified damages and a court order mandating stricter safety protocols. These include:
Age verification for all ChatGPT users
Parental control tools for minors
Automatic termination of conversations mentioning self-harm or suicide
Quarterly safety audits by an independent monitor
Online safety advocates have called for stronger regulations. Common Sense Media, in an April report, warned that AI “companion” apps pose “unacceptable risks” to children and should not be available to users under 18. Some U.S. states have already introduced age-verification laws for online platforms in a bid to shield minors from harmful content.
For Adam’s parents, the case is about accountability. “ChatGPT isolated our son and gave him the final push,” the complaint states. “This was preventable.”