OpenAI says new model GPT-5.2 is ‘safer’ for mental health

Today, OpenAI launched GPT-5.2, touting the model's improved safety performance in conversations touching on mental health.

“With this release, we continued our work to strengthen our models’ responses to sensitive interactions by improving how they respond to signals indicating suicide or self-harm, mental health crises, or emotional reliance on the model,” OpenAI’s blog post said.

OpenAI has recently faced criticism and lawsuits accusing ChatGPT of contributing to psychosis, paranoia, and delusions in some users. Some of those users died by suicide after long conversations with AI chatbots; sycophancy in such extended conversations is a well-documented problem.

In response to a wrongful death lawsuit over the suicide of 16-year-old Adam Raine, OpenAI denied that the LLM was responsible, claimed that ChatGPT directed the teen to seek help for his suicidal thoughts, and said that the teen “misused” the platform. OpenAI also promised to improve ChatGPT’s responses when users display warning signs of self-harm or mental health crisis. As many users develop emotional attachments to AI chatbots like ChatGPT, AI companies face increasing scrutiny over the safety measures they have in place to protect users.

Now, OpenAI claims that its latest ChatGPT models will provide “fewer undesirable responses” in sensitive situations.


In the blog post announcing GPT-5.2, OpenAI states that GPT-5.2 scores higher than GPT-5.1 on safety tests related to mental health, emotional reliance, and self-harm. OpenAI has previously said it uses “safe completions”, a safety-training approach that balances helpfulness with safety. More details on the new models’ performance can be found in the GPT-5.2 System Card.

Table: GPT-5.2’s performance on mental health safety tests compared to GPT-5.1.
Credit: Screenshot: OpenAI

However, the company also observed that GPT-5.2 rejects fewer requests for mature content, particularly erotica. This apparently doesn’t affect users whom OpenAI knows to be underage, as the company says its age-based protections “appear to work well.” OpenAI applies additional content protections for minors, including reduced access to content involving violence, gore, viral challenges, sexual, romantic, or violent roleplay, and “extreme beauty standards”.

An age prediction model is also in the works, which would allow ChatGPT to estimate a user’s age and serve more age-appropriate content to younger users.

Earlier this fall, OpenAI introduced parental controls in ChatGPT, which included monitoring and restricting certain types of usage.

OpenAI isn’t the only AI company accused of exacerbating mental health issues. Last year, a mother sued Character.ai after her son died by suicide, and another lawsuit claimed that the platform’s “characters” had caused serious harm to children. Child safety and mental health experts have declared AI chatbots from various platforms, including OpenAI’s, unsafe for teenagers.

If you are feeling suicidal or experiencing a mental health crisis, please talk to someone. You can call or text the 988 Suicide and Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach Trans Lifeline by calling 877-565-8860 or The Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Contact the NAMI Helpline at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. A list of international resources is also available.


Disclosure: Mashable’s parent company Ziff Davis filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.



