
OpenAI announced today that it is launching a new mental health-focused safety feature for adult ChatGPT users.
Starting today, ChatGPT users can add what the company calls “trusted contacts,” who can be notified if the AI’s automated systems and trained reviewers determine that the user has engaged in a discussion about self-harm.
The new feature comes amid growing scrutiny over the impact of AI and other digital platforms on mental health. Last year, OpenAI revealed that 0.07% of its weekly users displayed symptoms of a “mental health emergency related to psychosis or mania,” while 0.15% expressed risk of “suicidality or suicide,” and another 0.15% showed signs of “emotional dependence on AI.” Considering the company’s claim that about 10% of the world’s population, roughly 800 million people, uses ChatGPT weekly, those figures together could represent nearly three million users.
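The arithmetic behind that estimate is straightforward. Here is a rough back-of-the-envelope check; the 800-million weekly-user base is an inference from OpenAI’s 10% claim and a world population of about 8 billion, not a figure from the company’s announcement:

```python
# Back-of-the-envelope check of the "nearly three million" figure.
# Assumption: world population of ~8 billion, so 10% weekly usage
# implies roughly 800 million weekly ChatGPT users.
world_population = 8_000_000_000
weekly_users = 0.10 * world_population

# Rates OpenAI reported for its weekly users
rates = {
    "psychosis or mania emergency": 0.0007,  # 0.07%
    "suicidality or suicide risk": 0.0015,   # 0.15%
    "emotional dependence on AI": 0.0015,    # 0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{weekly_users * rate:,.0f} users per week")

total = weekly_users * sum(rates.values())
print(f"combined: ~{total:,.0f} users per week")  # ~2,960,000
```

Summing the three rates gives 0.37% of roughly 800 million users, or about 2.96 million people per week.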
The Trusted Contacts feature expands on ChatGPT’s existing parental safety notifications, which alert parents when a linked teen account shows signs of distress. Instagram introduced similar parental alerts earlier this year.
Now, OpenAI is extending these alerts to its adult users. The company said the feature was developed with guidance from mental health and suicide prevention practitioners, researchers, and organizations.
“Trusted Contacts is designed to encourage connection with someone the user already trusts,” the company said in its announcement. “It does not replace professional care or crisis services, and is just one of many layers of safeguards to support people in crisis.”
OpenAI said ChatGPT would still encourage users to contact a crisis hotline or emergency services if necessary.
Any user 18 years of age or older can enable this feature through ChatGPT’s settings. From there, users can nominate another adult to serve as their trusted contact by submitting details like the contact’s phone number and email address.
The trusted contact will then receive an invitation explaining the feature and will have one week to accept. If they decline, the user can nominate someone else in their place.
Once the feature is activated, OpenAI’s automated monitoring systems can flag when a user is discussing self-harm in a way that indicates a serious safety concern. The system will then notify the user that their trusted contact may have been alerted and encourage the user to reach out to that contact directly. It will also suggest some conversation starters.
The company said a small team of specially trained reviewers will assess the situation and determine whether it is appropriate to notify a trusted contact.
If OpenAI decides to send an alert, the trusted contact can receive it via email, text message, or in-app notification. The alert will briefly explain why it was sent and encourage the trusted contact to check in with the user. It will also include guidance on how to navigate those conversations.
OpenAI noted that the notifications will not include specific details or chat transcripts to protect user privacy.