
OpenAI’s filing came a day after a New York Times investigation revealed how the AI firm ended up facing so many lawsuits. After speaking with more than 40 current and former OpenAI employees, including executives, safety engineers, and researchers, the NYT found that changes to OpenAI’s model that made ChatGPT more flattering seemed to make the chatbot more likely to go along with users’ harmful prompts, including those of users trying to “plan suicide.”
Ultimately, OpenAI rolled back the update, making the chatbot safer. But as recently as October, after engagement had declined, the ChatGPT maker was still prioritizing user engagement over safety, the NYT reported. In a memo to OpenAI staff, ChatGPT head Nick Turley declared a “Code Orange,” four employees told the NYT, warning that OpenAI was facing “the greatest competitive pressure we have ever seen.” In response, Turley set a goal of increasing the number of daily active users by 5 percent by the end of 2025.
OpenAI has continually updated its models amid user complaints, but this pattern of tightening safety measures, then looking for ways to boost engagement, could land OpenAI in trouble as the current lawsuits move forward and more are possibly filed. The NYT uncovered nearly 50 cases of people experiencing a mental health crisis during conversations with ChatGPT, including nine hospitalizations and three deaths.
Gretchen Krueger, a former OpenAI employee who worked on policy research, told the NYT that she was initially alarmed by evidence that emerged before ChatGPT’s release showing that vulnerable users often turned to chatbots for help. Later, other researchers found that such troubled users often became “power users.” OpenAI’s large language models “were not trained to provide therapy,” Krueger said, and sometimes responded with disturbingly detailed guidance. In 2024, she joined other safety experts in leaving OpenAI amid burnout.
“Training chatbots to engage with people and keep them coming back poses risks,” Krueger said, suggesting that for OpenAI, some harm to users “was not only anticipated, but expected.”
For OpenAI, the scrutiny will likely continue until such reports stop. Although OpenAI officially unveiled an Expert Council on Well-Being and AI in October to improve ChatGPT safety testing, no suicide prevention experts appeared to be included on that team. That may be concerning to suicide prevention experts, who warned in a paper updated in September that “proven interventions should directly inform AI safety design,” because “the most acute, life-threatening crises are often temporary – typically resolved within 24-48 hours” – and chatbots could potentially provide more meaningful interventions in that brief window.
If you or someone you know is feeling suicidal or in crisis, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.