OpenAI is enhancing ChatGPT's safety features to better handle users experiencing mental and emotional distress. The company is training its models to respond with empathy, offer support, and direct users to professional resources such as crisis hotlines. For cases involving potential harm to others, conversations are escalated for human review and may be reported to law enforcement, while self-harm cases are handled with privacy in mind. These improvements are being developed with input from medical professionals and mental health experts.
Summary written by gemini-2.5-flash-lite from 1 source.