A Research Leader Behind ChatGPT’s Mental Health Work Is Leaving OpenAI

The OpenAI research leader who helped shape ChatGPT’s responses to users experiencing mental health crises announced her departure from the company internally last month, WIRED has learned. Andrea Vallone, head of the safety research team known as Model Policy, will be leaving OpenAI at the end of the year.

OpenAI spokeswoman Kayla Wood confirmed Vallone’s departure. Wood said OpenAI is actively looking for a replacement and that, in the interim, Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems.

Vallone’s departure comes as OpenAI faces increasing scrutiny over how its flagship product responds to users in crisis. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT. Some of the suits claim that ChatGPT contributed to mental health breakdowns or encouraged suicidal ideation.

Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and to improve the chatbot’s responses. Model Policy is one of the teams spearheading that work, which culminated in an October report detailing the company’s progress and its consultations with more than 170 mental health experts.

In the report, OpenAI said that hundreds of thousands of ChatGPT users may show signs of psychosis, mania, or other acute mental distress each week, and that more than one million have “conversations that include clear indicators of possible suicidal planning or intent.” Through an update to GPT-5, OpenAI said it reduced undesired responses in these conversations by 65 to 80 percent.

“Over the past year, I led OpenAI’s research on a question that had almost no established precedent: how should models respond when faced with signs of emotional overdependence or early signs of mental health crisis?” Vallone wrote in a post on LinkedIn.

Vallone did not respond to WIRED’s request for comment.

Making ChatGPT enjoyable to chat with, without being overly flattering, is a central tension for OpenAI. The company is aggressively expanding ChatGPT’s user base, which now exceeds 800 million people per week, as it competes with AI chatbots from Google, Anthropic, and Meta.

After OpenAI released GPT-5 in August, some users pushed back, complaining that the new model felt surprisingly cold. In the latest update to ChatGPT, the company said it has significantly reduced sycophancy while maintaining the chatbot’s “warmth.”

Vallone’s exit follows the August reorganization of Model Behavior, another group focused on shaping ChatGPT’s responses to distressed users. Its former leader, Joanne Jang, left that role to start a new team exploring novel methods of human-AI interaction. The remaining Model Behavior employees moved under post-training lead Max Schwarzer.
