OpenAI is reportedly developing a new safety system for its chatbot, ChatGPT, that could allow users to nominate a trusted person to be alerted when the system detects signs of a possible mental health crisis.
The feature, still in development, would apply only to adult users and would require them to voluntarily select a family member, friend, or other trusted contact. According to details shared by the company, the aim is to create a pathway for human support when conversations suggest a user may be in distress.
How the system is expected to work
The proposed system would rely on automated detection of emotional signals in chat conversations. These signals may include repeated expressions of distress, mentions of harmful intent, or patterns of language that suggest emotional instability.
OpenAI has not published the exact thresholds that would trigger an alert, and it remains unclear how the system would separate genuine risk from normal emotional conversations, such as venting or creative writing.
A spokesperson for the company was quoted in reports as saying the feature is intended to remain “opt-in and user-controlled,” meaning users would decide in advance whether to activate it and who would be contacted.
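OpenAI has not described the detection mechanics, so the sketch below is only a rough illustration of how an opt-in, threshold-based alert gate could be structured in Python. The distress cues, the threshold value, and the UserSafetySettings and ConversationMonitor names are all assumptions invented for this example; a real system would presumably use a learned classifier rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical distress cues for illustration; OpenAI has not published its signals.
DISTRESS_CUES = ("i can't go on", "no way out", "hopeless", "want to disappear")

@dataclass
class UserSafetySettings:
    """Opt-in settings: alerts can fire only if the user named a contact in advance."""
    trusted_contact: str | None = None  # e.g. an email address chosen by the user
    alerts_enabled: bool = False

@dataclass
class ConversationMonitor:
    settings: UserSafetySettings
    threshold: int = 3       # illustrative: N flagged messages before any alert
    flagged_count: int = 0

    def observe(self, message: str) -> bool:
        """Return True if this message should trigger a trusted-contact alert."""
        if any(cue in message.lower() for cue in DISTRESS_CUES):
            self.flagged_count += 1
        # Consent gate comes first: without opt-in, detection never leaves the session.
        if not (self.settings.alerts_enabled and self.settings.trusted_contact):
            return False
        return self.flagged_count >= self.threshold

# Usage: a user who opted in and registered a contact ahead of time.
settings = UserSafetySettings(trusted_contact="friend@example.com", alerts_enabled=True)
monitor = ConversationMonitor(settings)
for msg in ["I feel hopeless today", "There is no way out", "I can't go on"]:
    if monitor.observe(msg):
        print(f"Alert would be sent to {settings.trusted_contact}")
```

Checking consent before checking the threshold mirrors the "opt-in and user-controlled" framing quoted above: detection alone would never be enough to contact anyone.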
AI companies are facing growing scrutiny over how chatbots respond to users in vulnerable emotional states.
OpenAI has drawn particular attention, alongside other major AI firms including Anthropic, over concerns that conversational systems may sometimes reinforce harmful beliefs or fail to respond safely in sensitive exchanges.
The proposed feature has raised concerns among privacy advocates and users who treat ChatGPT as a private space for reflection or emotional expression.
Two key issues are consent and accuracy. Because the system would rely on automated interpretation of language, it could produce false alerts or misread context. OpenAI has acknowledged internally that defining reliable thresholds for “crisis detection” remains a technical challenge.
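The accuracy problem is easy to see with base-rate arithmetic. As a rough sketch (every rate below is an assumption for illustration, not an OpenAI figure), even a detector that is both sensitive and specific produces mostly false alarms when genuine crises are rare among conversations:

```python
# Illustrative base-rate arithmetic; every number here is an assumption.
base_rate = 0.001      # suppose 0.1% of conversations involve a genuine crisis
sensitivity = 0.95     # detector flags 95% of genuine crises
specificity = 0.99     # detector correctly clears 99% of ordinary conversations

true_alerts = base_rate * sensitivity                  # 0.00095
false_alerts = (1 - base_rate) * (1 - specificity)     # ~0.00999
precision = true_alerts / (true_alerts + false_alerts)

print(f"Share of alerts that reflect a genuine crisis: {precision:.1%}")  # ~8.7%
```

Under these assumed numbers, fewer than one alert in ten would correspond to a real crisis, which is why setting reliable thresholds is an open challenge rather than a settled detail.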
Another limitation is adoption. Since the system would require users to actively set up a trusted contact, its effectiveness may depend on whether people anticipate needing such support in advance.
For now, the “trusted contact” system remains a concept under development rather than a released feature. If implemented, it would be one of the most direct attempts by an AI company to connect chatbot interactions with real-world human support networks.
However, OpenAI has not confirmed a launch timeline.