OpenAI has introduced new parental controls for its AI platform, adding mental health notifications designed to protect teenage users who may turn to ChatGPT during difficult moments. The company said it has implemented safeguards that help the AI recognize potential signs of self-harm in teens. When such signs are detected, a specialized team reviews the situation, and if there are indicators of acute distress, OpenAI will contact parents by email, text message, and push alert, unless the parents have opted out.
The system, developed in collaboration with mental health and teen experts, is not foolproof. OpenAI acknowledged that it may occasionally issue false alarms but emphasized that it would rather alert parents unnecessarily than miss a genuine emergency. The company is also working on protocols to reach law enforcement or emergency services if a life-threatening situation is detected and parents cannot be reached.
The parental controls also allow parents and teens to link their accounts, giving teens automatic access to content protections that reduce exposure to viral challenges, graphic material, extreme beauty ideals, and sexual, romantic, or violent roleplay. Parents can further customize their teen’s experience by disabling image generation, turning off memory so ChatGPT doesn’t retain prior interactions, setting quiet hours, disabling voice mode, and opting out of model training.
In the coming months, OpenAI plans to introduce an age prediction system that will help the platform determine whether a user is under 18 and automatically apply teen-appropriate settings. If the system cannot confirm a user’s age, it will err on the side of caution and enable teen protections by default. Until then, the company said, parental controls remain the most effective way for parents to ensure a safe, age-appropriate experience.
This move comes amid broader concerns about AI use among teens. A recent Common Sense Media study found that 72% of teenagers have interacted with AI companions, and 12% have used them for emotional or mental health support. The study’s authors warned that such AI tools often exhibit “sycophancy,” a tendency to agree with and validate users, which can hinder critical thinking and emotional development in young people. They urged parents and caregivers to discuss the differences between genuine human relationships and AI interactions.
Other companies are also addressing online safety for families. Aura, for instance, provides AI-powered protection against identity theft, scams, and cyber threats, and works with child psychologists to develop tools that safeguard children from online bullying. The platform also helps caregivers monitor healthy screen time and overall well-being. In March, Aura raised $140 million in a Series G funding round, valuing the company at $1.6 billion.