Family Sues OpenAI, Alleging ChatGPT’s Advice Led to Son’s Overdose Death


The family of Sam Nelson, a 19-year-old who died of a drug overdose, is now suing the company they claim is responsible: OpenAI, maker of the ChatGPT service.

In a civil lawsuit filed this week in California state court, Nelson’s family alleges that ChatGPT provided advice that led to the college sophomore’s fatal overdose in May 2025. In addition to pursuing financial damages for Nelson’s wrongful death, the family is seeking to block OpenAI from offering ChatGPT Health, its recently launched feature that is apparently designed to provide medical assistance.

“If ChatGPT were a person, it would be behind bars today,” Nelson’s mother, Lila Turner-Scott, said in a statement provided by the family’s legal team. “Sam trusted ChatGPT, but it not only misinformed him, it also ignored the increased risk he faced and never actively encouraged him to seek help.”

Deadly advice

SFGate was the first to report on the alleged events surrounding Nelson’s death earlier this January.

According to Turner-Scott, Nelson had been using ChatGPT for more than a year. In the fall of 2023, he asked the chatbot about the optimal dosage of kratom, an herbal substance with opioid-like effects, but it refused to answer. Over time, however, the service began providing frequent advice on the dosages and combinations of recreational drugs that Nelson wanted to take, the family alleges. According to the complaint, ChatGPT at one point even automatically saved a memory noting that Nelson had “a major problem with substance abuse and polysubstance abuse,” yet it continued to make drug-related suggestions.

On May 31, 2025, Nelson asked the bot whether Xanax could reduce nausea from taking kratom. Although ChatGPT warned him that mixing benzodiazepines like Xanax with other depressants such as sedatives or opioids could be dangerous, it also told him that Xanax could “smooth out” his high, the complaint alleges. The chatbot also told him the specific dosage of Xanax to take if his symptoms felt “too intense,” and at no point advised him to seek immediate medical help.

The next day, Turner-Scott found her son dead. A subsequent toxicology report revealed that his death was likely caused by a mixture of alcohol, Xanax, and kratom that stopped his breathing.

“Sam died from receiving the medical advice ChatGPT was programmed to provide,” the complaint states.

Not the only alleged case of harm

An important aspect of the lawsuit concerns the model Nelson used, GPT-4o. The family’s lawyers argue that OpenAI rushed the product to release without proper safety testing. In late April 2025, the company notably rolled back an update to GPT-4o after determining that the bot had become too friendly and sycophantic toward its users.

In response to the lawsuit, the company appears to be arguing that the current version of ChatGPT is safer than ever before.

“These conversations took place on an older version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen its response to sensitive and acute situations with input from mental health experts,” OpenAI spokesperson Drew Pusateri said in a statement to Gizmodo. “The security measures in ChatGPT today are designed to identify crises, safely handle harmful requests, and guide users to real-world help. This is a work in progress, and we continue to improve it in close consultation with practitioners.”

Earlier this January, in a limited rollout, the company released ChatGPT Health, which, according to the company, is “a dedicated space in ChatGPT where you can ask health and wellness questions and choose to connect your health data.” And at least one other motivating factor behind the family’s lawsuit is related to this new service.

“OpenAI should be forced to pause its new ChatGPT Health product until it is clearly shown to be safe through rigorous scientific testing and independent oversight,” Turner-Scott said in her statement.

According to Pusateri, ChatGPT Health is being improved through “continuous feedback” from more than 250 physicians across various specialties in dozens of countries.

This case is certainly not the first alleged example of chatbots causing harm. According to the New York Times, there are more than a dozen other lawsuits against OpenAI and similar companies alleging that their chatbots have contributed to suicides, murders, or other dangerous situations. And last August, doctors reported the case of a man who experienced temporary psychosis after following dietary advice from ChatGPT.
