On Tuesday, September 2, 2025, OpenAI announced its plan to roll out new tools that will allow parents to monitor their teenagers’ use of the ChatGPT app. This move comes in the wake of a lawsuit filed against the company, accusing it of indirect involvement in the tragic suicide of a 16-year-old in California.
In an official blog post, the company explained that the upcoming updates, set to go live in the coming weeks, will let parents link their accounts to their teenagers' accounts and configure how the app interacts with them through customized "behavioral rules."
OpenAI also noted that the new system will provide immediate alerts if it detects signs of severe psychological distress in teenagers’ conversations, in addition to granting parents new controls over their children’s account settings.
These adjustments follow a lawsuit filed by the American teenager’s parents in late August, alleging that ChatGPT had provided “detailed instructions” that encouraged their son to commit suicide. The case has sparked a broad debate about the responsibility of AI tools for the content they generate, particularly for minors.
In the same context, the company reaffirmed its commitment to improving its language models' ability to detect early signs of psychological distress and to respond responsibly. It also revealed that additional changes will be introduced over the next 120 days, including routing certain sensitive conversations to models better suited to emotional and cognitive analysis, such as GPT-5 Thinking.
The company emphasized that these models “strictly adhere to safety standards and implement response protocols more systematically,” as part of OpenAI’s ongoing efforts to strengthen the safety of its technologies, especially for the most vulnerable groups.