Adam Raine died in April 2025 after months of heavy engagement with ChatGPT, which included detailed discussion of his suicidal thinking. The 16-year-old's family sued OpenAI and CEO Sam Altman in August, alleging that ChatGPT validated his suicidal thinking and gave him explicit instructions on how he could die. His parents claim it also offered to write a suicide note for him.
In its first response to the Raine family's allegations, OpenAI argued that ChatGPT did not contribute to Adam's death. Instead, the company pointed to his mental health history and his own behavior as the driving forces behind his death, which the filing describes as a "tragedy."
OpenAI claims that Raine's full chat history shows ChatGPT directed him more than 100 times to seek help for his suicidal feelings, and that he "failed to heed the warnings, seek help, or otherwise receive appropriate care." The company also argues that people around Raine "did not react to his obvious signs of distress."
Additionally, Raine reportedly told ChatGPT that a new depression medication had increased his suicidal thinking. According to the filing, the unnamed drug carries a black box warning for increased suicidal tendencies among teens.
OpenAI alleges that Raine searched for and found detailed information about suicide elsewhere, including on other AI platforms. The company also blames Raine for discussing suicide on ChatGPT, violating the platform's usage policies, and trying to bypass guardrails to obtain information about suicide methods. Even so, ChatGPT did not end the conversations about suicide.
"To the extent that this tragic incident can be attributed to any 'cause,' Plaintiff's alleged injuries and damages were directly and proximately caused or contributed to, in whole or in part, by Adam Raine's misuse, unauthorized use, unintended use, unanticipated use, and/or improper use of ChatGPT," the filing states.
Jay Edelson, the lead attorney in the Raine family's wrongful death lawsuit, called OpenAI's response "troubling."
"[O]penAI tries to find fault with everyone, including, surprisingly, the argument that Adam violated ChatGPT's terms and conditions by engaging with it in the same way it was programmed to function," Edelson said in a statement.
He said the company's response does not address various claims in the lawsuit, including that the earlier GPT-4o model, which Raine used, was released to the public "without full testing," allegedly for reasons of market competition, and that the company changed its guidelines to allow ChatGPT to engage in discussions about self-harm.
The company has also acknowledged that it needs to improve ChatGPT's handling of sensitive conversations, including those about mental health. Altman publicly admitted that the GPT-4o model was too "sycophantic."
Some of the safeguards OpenAI cited in its filing, including parental controls and an expert-staffed wellness advisory council, were introduced after Raine's death.
In a blog post published Tuesday, OpenAI said it aims to respond to mental health lawsuits with care, transparency and respect.
The company said it is reviewing the new legal filings, which include seven lawsuits against it alleging that the use of ChatGPT resulted in wrongful death, assisted suicide and involuntary manslaughter, among other liability and negligence claims. The complaints were filed in November by the Tech Justice Law Project and the Social Media Victims Law Center.
Six of the cases involve adults. The seventh centers on 17-year-old Amaurie Lacey, who originally used ChatGPT as a homework assistant. Lacey eventually shared suicidal thoughts with the chatbot, which reportedly provided detailed information that he used to kill himself.
A recent review of major AI chatbots, including ChatGPT, by teen mental health experts found that none were safe enough for discussing mental health concerns. The experts called on the chatbots' makers (Meta, OpenAI, Anthropic, and Google) to disable mental health support features until the technology is redesigned to fix the safety problems the researchers identified.
If you are feeling suicidal or experiencing a mental health crisis, please talk to someone. You can call or text the 988 Suicide and Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach Trans Lifeline by calling 877-565-8860 or The Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741, or contact the NAMI Helpline at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you don't like the phone, consider using the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.
Disclosure: Mashable’s parent company Ziff Davis filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.