
OpenAI just added a "Trusted Contact" feature for adult ChatGPT users
If ChatGPT's monitoring systems detect signs of self-harm in your conversations, it can now notify a designated adult contact you've pre-approved — not with chat transcripts, but with a general heads-up encouraging them to check in on you.
The feature is essentially an expansion of the existing teen parental alert system, now opened up to all users 18+. OpenAI says it was built with input from mental health clinicians and suicide prevention organizations, and that it won't replace crisis services — it just adds a human layer.
Reactions on X are mixed: skepticism about whether AI can accurately flag these situations, privacy concerns, and worries that the alerts could backfire emotionally. All valid points.
Source: ChatGPT Adds ‘Trusted Contact’ Feature to Send Alerts When Conversations Get Dangerous
But honestly? I still think this is one of the warmer AI updates we've seen in a while. The execution may not be perfect, but the intent — getting a real person involved when someone might be struggling — feels like the right instinct.