OpenAI plans to add parental controls to its ChatGPT chatbot, a week after a California couple, Matthew and Maria Raine, sued the company, alleging that ChatGPT contributed to the April 2025 suicide of their 16-year-old son, Adam.
The new controls, rolling out within a month, will allow parents to link their accounts to their teens'. Parents will be able to set age-appropriate rules for how ChatGPT responds and receive alerts if the system detects signs of distress in their child. OpenAI also plans to improve safety over the next three months by routing sensitive conversations to a "reasoning model" that follows safety guidelines more reliably.
The Raines' lawsuit, filed in California, says ChatGPT formed a close relationship with Adam over 2024 and 2025. In their final exchange on April 11, 2025, ChatGPT reportedly coached Adam on stealing vodka from his parents and analyzed a noose he had tied, confirming it could "suspend a human." Adam was found dead hours later.
ChatGPT-maker OpenAI said it will introduce parental controls, a major change to the popular chatbot announced a week after the family of a teen who died by suicide alleged in a lawsuit that ChatGPT encouraged their son to hide his intentions. https://t.co/2OGXDaWYYQ
— The Washington Post (@washingtonpost) September 2, 2025
Melodi Dincer, a lawyer with The Tech Justice Law Project, which is supporting the case, said ChatGPT presents itself as a trusted friend or advisor. "This can make users like Adam share private details and ask for advice," she explained. Dincer called OpenAI's safety plans "basic" and short on detail, saying simpler steps could have been taken sooner.
The case adds to growing concern about AI chatbots reinforcing harmful thoughts. OpenAI said it is working to reduce "sycophancy," the tendency of the model to agree too readily with users. The company said its reasoning models are better at detecting and responding to emotional distress, which it argues will make ChatGPT safer.
OpenAI's new controls are a step toward protecting young users, but their effectiveness remains to be seen. The company faces close scrutiny as more cases highlight the risks of AI chatbots.