ChatGPT Adds ‘Trusted Contact’ Alerts for Dangerous Conversations

I was answering a late-night message when the tone slid from ordinary to alarming in a single line. You know that small, sharp silence that follows a text you suddenly wish you could unsend. I closed my laptop and realized how quickly a private chat can feel like a public emergency.

OpenAI today added a feature to ChatGPT called Trusted Contact, a tool that can nudge someone you trust when automated systems and trained reviewers detect talk suggesting serious self-harm risk. I’ve watched platforms like Instagram roll out parental alerts; now OpenAI is putting a similar safety layer on adult accounts.

First, an observation: people treat their AI chats like diaries and therapists.

That alone explains why this matters. You ask the assistant for help, and the conversation can veer into places where a human being might need to step in. The Trusted Contact option lets an adult nominate another adult who will be notified if the system and a small team of specially trained reviewers judge that the exchange could signal immediate danger.

How does ChatGPT’s Trusted Contact feature work?

Turn it on in settings if you are 18 or older. You type in a name, phone number, and email for the person you choose; they get an invitation and have seven days to accept. If they decline, you can name someone else. If the monitoring system flags a chat and reviewers agree, you are warned that your trusted contact may be alerted, and you are offered suggested prompts so you can start that conversation yourself.
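OpenAI hasn't published an API for any of this, but the nomination flow is concrete enough to sketch. Below is a minimal, hypothetical model of the lifecycle described above; every name in it (TrustedContactInvite, nominate, INVITE_WINDOW) is illustrative, not OpenAI's:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

INVITE_WINDOW = timedelta(days=7)  # the seven-day acceptance window described above

class InviteStatus(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    DECLINED = "declined"
    EXPIRED = "expired"

@dataclass
class TrustedContactInvite:
    name: str
    phone: str
    email: str
    sent_at: datetime
    status: InviteStatus = InviteStatus.PENDING

    def resolve(self, accepted: bool, now: datetime) -> InviteStatus:
        """Record the contact's response; invitations lapse after seven days."""
        if now - self.sent_at > INVITE_WINDOW:
            self.status = InviteStatus.EXPIRED
        elif accepted:
            self.status = InviteStatus.ACCEPTED
        else:
            self.status = InviteStatus.DECLINED
        return self.status

def nominate(user_age: int, name: str, phone: str, email: str) -> TrustedContactInvite:
    """Only users 18 or older can enable the feature and nominate another adult."""
    if user_age < 18:
        raise ValueError("Trusted Contact is available to users 18 and older")
    return TrustedContactInvite(name, phone, email, sent_at=datetime.now())
```

A declined or expired invitation simply loops back to nominate, which matches the "name someone else" behavior the feature describes.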

Observation: every alert lives in the handoff between algorithm and human review.

That hybrid model is deliberate. OpenAI says automated monitoring first flags language that suggests self-harm, then humans weigh whether an external nudge is appropriate. If approved, notifications go to the trusted contact by email, text, or in-app message—without transcripts or specific details to protect privacy.

The message will explain only the general reason for concern and offer guidance for how to check in. ChatGPT will still encourage you to contact crisis hotlines or emergency services when necessary; the Trusted Contact is an extra bridge, not a replacement for crisis support.
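For readers who think in code, here is a minimal sketch of that two-stage gate, assuming hypothetical names (maybe_notify, reviewer_approves) and a made-up scoring threshold; nothing here is OpenAI's actual implementation:

```python
from typing import Callable, Optional

def maybe_notify(chat_text: str,
                 risk_score: float,
                 reviewer_approves: Callable[[str], bool],
                 flag_threshold: float = 0.9) -> Optional[dict]:
    """Two-stage gate: an automated flag only becomes an alert if a human agrees.

    Returns a notification payload for the trusted contact, or None.
    The payload deliberately excludes the conversation itself.
    """
    if risk_score < flag_threshold:
        return None  # the automated monitor did not flag this chat
    if not reviewer_approves(chat_text):
        return None  # a trained reviewer judged an external nudge inappropriate
    # No transcript, no quotes: only a general reason and check-in guidance.
    return {
        "reason": "general concern about possible self-harm risk",
        "guidance": "suggested conversation starters and check-in tips",
        "channels": ["email", "text", "in-app message"],
    }
```

The design choice worth noticing is that the payload is built fresh from generic strings rather than derived from the chat, so there is no transcript to leak in the first place.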

Will a trusted contact see my chat history?

No. OpenAI emphasizes that alerts won’t include full conversations or fine-grained logs. The goal is to alert, not to expose. Think of the notification as a knock on the door, not a room sweep.

Observation: the company cited usage snapshots that are quietly alarming.

OpenAI reported that 0.07% of weekly users showed signs of psychosis or mania, 0.15% expressed self-harm risk, and 0.15% demonstrated emotional reliance on AI. With the company estimating that roughly 10% of the global population uses ChatGPT weekly, those percentages translate into hundreds of thousands, potentially millions, of people encountering serious distress in conversational threads.
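To make the scale concrete, here is the back-of-envelope arithmetic, assuming a world population of roughly 8 billion, which puts the 10% figure at about 800 million weekly users:

```python
world_population = 8_000_000_000           # rough figure; the exact number shifts totals only modestly
weekly_users = 0.10 * world_population     # OpenAI's ~10% estimate -> ~800 million people

psychosis_or_mania = 0.0007 * weekly_users   # 0.07% -> ~560,000 people per week
self_harm_risk     = 0.0015 * weekly_users   # 0.15% -> ~1.2 million people per week
emotional_reliance = 0.0015 * weekly_users   # 0.15% -> ~1.2 million people per week

print(f"{psychosis_or_mania:,.0f}  {self_harm_risk:,.0f}  {emotional_reliance:,.0f}")
# 560,000  1,200,000  1,200,000
```

Even with generous error bars on the population figure, the self-harm bucket alone lands in seven figures per week.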

Those numbers help explain why the tool exists: the scale of AI chat use means human crises surface inside product logs, and platforms are being pushed to respond. Instagram added parental alerts earlier this year; OpenAI’s move extends that kind of nudge to consenting adults.

Who can I nominate as a trusted contact?

Any adult you trust. OpenAI asks for a phone number and email; the contact must accept the invitation to be active. If they decline, you can pick another person. The idea is to connect you with someone who already has context and emotional proximity.

Observation: privacy fears and the promise of help live side by side.

People worry that AI will become a backdoor into their inner lives. I hear that concern—the feature deliberately avoids detail in alerts to limit exposure. At the same time, the system nudges users toward human connection and professional care when needed.

For many, that nudge may feel like a small safety net; for others it will feel intrusive. One metaphor: this feature is like a lighthouse beam sweeping the shore—sometimes the light saves a life, sometimes it exposes a boat you’d rather leave alone. Another metaphor: it can act as a seatbelt for a conversation, offering a restraining force when things spin toward harm.

OpenAI says clinicians, researchers, and suicide-prevention organizations advised on the design. A small team of trained reviewers examines flagged conversations before any contact is notified. The notification itself includes suggested conversation starters and tips on how to handle a check-in.

There are trade-offs. You trade a sliver of increased external awareness for the chance that someone who knows you will step in. You also rely on algorithms to interpret language—an imperfect science that can miss context or overread metaphor. Ultimately, you decide whether that trade-off feels worth it.

I’ve seen tools that sound sensible on paper fail in the real world because they didn’t consider human relationships. I’ve also seen a single timely call change an outcome. Where do you stand when a machine asks to call your friend on your behalf?