OpenAI to alert 'Trusted Contact' if it detects potential self-harm risk
OpenAI has introduced a 'Trusted Contact' feature in ChatGPT that can alert a chosen person if conversations suggest serious self-harm or emotional distress
OpenAI has started rolling out a new ChatGPT safety feature called “Trusted Contact” to help users during moments of serious emotional distress or potential self-harm risk. The optional feature allows adult users to add one trusted person, such as a family member, friend, or caregiver, who may be notified if ChatGPT’s systems detect conversations suggesting a serious safety concern. The move comes amid growing scrutiny of how AI chatbots handle mental health risks.
Over the past year, OpenAI has faced criticism and lawsuits from families alleging that conversations with ChatGPT contributed to self-harm incidents or suicide.
A BBC report published in 2025 also highlighted instances where ChatGPT allegedly provided harmful responses related to suicide methods.
OpenAI said it has since improved how the chatbot handles sensitive conversations and escalates potential risks.
What is Trusted Contact in ChatGPT?
According to OpenAI, Trusted Contact is an optional safety tool for users aged 18 or older globally (19 or older in South Korea).
The feature allows users to nominate one adult who can be contacted if OpenAI’s monitoring systems detect conversations involving possible self-harm or suicide risk.
The company said the feature is designed to encourage real-world human connection during a crisis rather than replace professional mental healthcare or emergency services.
Users will continue to receive crisis helpline suggestions and prompts encouraging them to seek professional support when needed.
Trusted Contact expands on OpenAI’s existing parental safety alerts, which notify parents or guardians if linked teen accounts show signs of severe emotional distress.
With the new update, adult users can now choose to add a trusted person who may receive similar alerts in serious situations.
Users can add a Trusted Contact through ChatGPT settings. The selected person receives an invitation explaining their role and must accept within one week for the feature to become active.
If the invitation is declined, the user can choose another contact.
If ChatGPT’s automated systems later detect conversations that may indicate serious self-harm concerns, the platform informs the user that their Trusted Contact may be notified. The system also encourages users to reach out directly using suggested conversation starters. Before an alert is sent, OpenAI’s trained safety team reviews the case.
If a serious concern is confirmed, the Trusted Contact receives a limited alert through email, text, or the app. The notification does not include chat transcripts and only mentions that concerning self-harm discussions were detected, along with links to expert guidance.
Users can remove or edit their Trusted Contact at any time through settings, while Trusted Contacts can remove themselves through OpenAI’s help centre.
OpenAI said every serious safety notification is reviewed by trained staff, with the company aiming to complete reviews within one hour. The company also acknowledged that no automated system is perfect and alerts may not always fully reflect a user’s actual situation.
Other safety measures in ChatGPT
Alongside Trusted Contact, OpenAI said ChatGPT includes several safeguards for sensitive conversations:
Real-world support: ChatGPT may suggest contacting emergency services, helplines, mental health experts, or trusted people.
Improved responses: OpenAI said it worked with more than 170 mental health experts to improve how ChatGPT detects and responds to distress.
Break reminders: In some cases, ChatGPT may encourage users to take breaks after extended usage.
Blocking harmful requests: The chatbot is trained to refuse requests related to suicide or self-harm instructions and instead direct users to safer resources.
Other technology companies have also been expanding mental health and self-harm safety tools on their platforms.
In February, Meta announced a new Instagram feature that alerts parents if teens repeatedly search for suicide- or self-harm-related terms within a short period. The alerts work through Instagram’s parental supervision tools and are intended to help parents identify when teens may need additional support. Meta said Instagram already blocks harmful search results and redirects users to crisis helplines and support resources.
Meanwhile, in April, Google introduced updates to Gemini AI that help users connect to mental health support faster. If Gemini detects signs of emotional distress or self-harm risk, the chatbot can display crisis helpline options and direct links for calling, chatting, or texting support services directly from the conversation.