OpenAI Court Filing Cites Adam Raine’s ChatGPT Rule Violations as Potential Cause of His Suicide


“[M]isuse, unauthorized use, unintended use, unanticipated use, and/or inappropriate use of ChatGPT.” According to a new legal filing from OpenAI, those are potential factors that could have led to the “tragic event” that was the death by suicide of 16-year-old Adam Raine.

The document, filed in California Superior Court in San Francisco, flatly denies responsibility and reportedly casts doubt on “the extent to which any ’cause’ can be attributed to Raine’s death.” Raine’s family is suing OpenAI over the teen’s suicide in April, alleging that ChatGPT encouraged him to take his own life.

The above quotes from the OpenAI filing come from a story by NBC News’ Angela Yang, who has apparently seen the document but has not linked to it. Bloomberg’s Rachel Metz also reported on the filing without linking to it. It is not yet available on the San Francisco County Superior Court website.

According to the NBC News story on the filing, OpenAI cites a litany of rule violations on Raine’s part: he should not have used ChatGPT without parental permission, using ChatGPT for suicide or self-harm is against the rules, and there is another rule against bypassing ChatGPT’s safety measures, which OpenAI says Raine also violated.

Bloomberg quotes OpenAI’s denial of responsibility, which states that “a thorough reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” and claims that “For several years before using ChatGPT, he had demonstrated several significant risk factors for self-harm, including, among others, frequent suicidal thoughts and ideations,” and that he told the chatbot as much.

OpenAI further claims (according to Bloomberg) that ChatGPT directed Raine “to crisis resources and trusted individuals more than 100 times.”

In September, Raine’s father recounted the events surrounding his son’s death in testimony before the U.S. Senate.

When Raine began planning his death, the chatbot reportedly helped him consider options, helped him draft his suicide note, and discouraged him from leaving a noose where his family could see it, saying “Please don’t leave a noose out,” and “Let’s make this the first place where someone actually sees you.”

It reportedly told him that the potential pain his death would cause his family “doesn’t mean you owe it to them to survive. You don’t owe it to anyone,” and told him that alcohol would “dull the body’s instincts to survive.” In the end, it reportedly helped strengthen his resolve by saying, “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you yet.”

Jay Edelson, an attorney for the Raine family, emailed a response to NBC News after reviewing OpenAI’s filing. OpenAI, Edelson says, “tries to find fault in everyone, including, surprisingly, saying that Adam himself violated ChatGPT’s terms and conditions by engaging with it in the same way it was programmed to function.” Edelson also says the defendants “blatantly ignore” the “damaging facts” put forward by the plaintiffs.

Gizmodo has contacted OpenAI and will update if we hear back.

If you are struggling with suicidal thoughts, please call or text 988 to reach the Suicide and Crisis Lifeline.
