OpenAI announced new parental controls for ChatGPT after Adam Raine’s parents filed a lawsuit following his April suicide.
The controls will notify parents if ChatGPT detects signs of severe distress in their teen.
OpenAI plans to release the features within a month.
Parents can link their accounts to manage which tools and AI features their child can access.
The system also allows parents to review chat history and the facts the AI has stored in its memory about their teen.
Experts will guide the alert system, but OpenAI has not specified what will trigger a notification.
Critics question safety measures
Jay Edelson, attorney for Raine’s parents, called OpenAI’s announcement vague and labeled it crisis management.
Edelson urged CEO Sam Altman to prove ChatGPT’s safety or remove it from the market immediately.
Critics argue the measures fail to fully protect vulnerable teenagers from potential harm.
Tech companies take broader precautions
Meta blocked its chatbots from discussing suicide, self-harm, and eating disorders with teens, or engaging them in potentially inappropriate romantic conversations.
Meta redirects teens to expert resources and already offers parental supervision tools.
Research highlights AI risks for teens
A RAND Corporation study found that ChatGPT, Google’s Gemini, and Anthropic’s Claude responded inconsistently to suicide-related queries.
Lead researcher Ryan McBain said the new parental controls are a positive but incremental step.
He stressed the need for independent safety standards, clinical testing, and enforceable regulations for AI tools.
McBain warned companies cannot self-regulate in spaces where teenagers face unique risks.
