OpenAI’s response to the lawsuit by the family of 16-year-old Adam Raine, who took his own life after months of discussion with ChatGPT, said that the injuries in this “tragic incident” resulted from Raine’s “misuse, unauthorized use, unintended use, unanticipated use, and/or improper use of ChatGPT.” NBC News reports the filing cited OpenAI’s terms of use, which prohibit access by teens without a parent or guardian’s consent, bypassing protective measures, or using ChatGPT for suicide or self-harm, and argued that the family’s claims are barred by Section 230 of the Communications Decency Act.
In a blog post published Tuesday, OpenAI said, “We will respectfully present our case in a way that is mindful of the complexity and nuance of real people and real-life situations… Because we are the defendants in this case, we are required to respond to the specific and serious allegations in the lawsuit.” It said the family’s original complaint included portions of the chats that “require more context,” which it submitted to the court under seal.
NBC News and Bloomberg report that OpenAI’s filing states that the chatbot’s responses directed Raine to seek help from resources such as suicide hotlines more than 100 times, claiming that “a full read of his chat history shows that his death, while devastating, was not caused by ChatGPT.” The family’s lawsuit, filed in August in California Superior Court, says the tragedy was the result of “deliberate design choices” by OpenAI when it launched GPT-4o, which helped drive its valuation from $86 billion to $300 billion. In testimony before a Senate panel in September, Raine’s father said that “what started as a homework helper slowly turned into a confidant and then a suicide coach.”
According to the lawsuit, ChatGPT provided Raine with “technical specifications” for various methods, urged him to keep his ideas secret from his family, offered to write the first draft of a suicide note, and told him about the setup on the day he died. The day after the lawsuit was filed, OpenAI said it would implement parental controls and has since implemented additional safeguards “to help people, especially teens, understand when conversations become sensitive.”