Monday, September 29

OpenAI is rolling out parental controls for ChatGPT on the web and mobile on Monday, following a lawsuit by the parents of a teen who died by suicide after the artificial intelligence startup’s chatbot allegedly coached him on methods of self-harm.

The controls let parents and teens opt in to stronger safeguards by linking their accounts: one party sends an invitation, and parental controls activate only if the other accepts, the company said.

U.S. regulators are increasingly scrutinizing AI companies over the potential negative impacts of chatbots. In August, Reuters reported that Meta’s AI rules had allowed flirty conversations with kids.

Under the new measures, parents will be able to reduce exposure to sensitive content, control whether ChatGPT remembers past chats, and decide if conversations can be used to train OpenAI’s models, the Microsoft-backed company said on X.

Parents will also be allowed to set quiet hours that block access during certain times and disable voice mode as well as image generation and editing, OpenAI said. However, parents will not have access to a teen’s chat transcripts, the company added.

In rare cases where systems and trained reviewers detect signs of a serious safety risk, parents may be notified with only the information needed to support the teen’s safety, OpenAI said, adding that parents will also be informed if a teen unlinks the accounts.

OpenAI, which has about 700 million weekly active users for its ChatGPT products, is building an age prediction system to help it predict whether a user is under 18 so that the chatbot can automatically apply teen-appropriate settings.

Meta also announced new teen safeguards for its AI products last month. The company said it would train systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and would temporarily restrict access to certain AI characters.


© 2025 The News Singapore. All Rights Reserved.