ChatGPT will soon add parental controls for teenagers who use the platform. OpenAI, the company behind the artificial intelligence chatbot, said Tuesday that the move is part of a 120-day plan to add safeguards for users who turn to ChatGPT during times of mental and emotional distress. The news comes a week after California parents filed a wrongful death lawsuit against OpenAI, claiming ChatGPT is to blame for their 16-year-old son's death by suicide.

ChatGPT offers a new set of safeguards for users experiencing mental distress

OpenAI shared a 120-day plan on Tuesday that aims to help ChatGPT models “recognize and respond to signs of mental and emotional distress, guided by expert input.” New safeguards will include expanding interventions to more people in crisis, making it easier to reach emergency services and get help from experts, enabling connections to trusted contacts and strengthening protections for teens, the company said in a news release.

The company said it will receive input from its “Expert Council on Well-Being,” which comprises experts in youth development, mental health and human-computer interaction. It will also work with its “Global Physician Network,” a group of over 250 physicians it will consult to develop new safeguards.

“While the council will advise on our product, research, and policy decisions, OpenAI remains accountable for the choices we make,” the company wrote.

ChatGPT adds parental controls

OpenAI also acknowledged the important role of artificial intelligence chat models in teenagers’ lives: “Many young people are already using AI. They are among the first ‘AI natives,’ growing up with these tools as part of daily life, much like earlier generations did with the internet or smartphones,” the company wrote. “That creates real opportunities for support, learning, and creativity, but it also means families and teens may need support in setting healthy guidelines that fit a teen’s unique stage of development.”

Starting in October, parents will be able to link their account with their teenager’s account via email invitation, provided the teen is over 13. Parents will also be able to control how ChatGPT responds to their teen with age-appropriate model behavior rules, which the company said “are on by default.” In addition, they will be able to disable features such as memory and chat history, and receive notifications “when the system detects their teen is in a moment of acute distress.”

OpenAI’s wrongful death lawsuit

Last week, OpenAI was hit with its first wrongful death lawsuit, filed by California parents who say ChatGPT helped lead to their 16-year-old son’s death by suicide. The lawsuit states that the chatbot discouraged Adam Raine from seeking human connection and offered him guidance on writing a suicide note and setting up a noose, according to NBC News. ChatGPT also shared a suicide hotline number several times, but those safeguards were easily bypassed, Raine’s parents said.

Shortly after the lawsuit was filed, OpenAI said ChatGPT already had safeguards in place, including giving empathetic responses and rerouting conversations to human reviewers.

“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” the company wrote in an Aug. 26 news release. “We’re strengthening these mitigations so they remain reliable in long conversations, and we’re researching ways to ensure robust behavior across multiple conversations.”

Jay Edelson, the lead counsel for the Raine family, said OpenAI only made “vague promises” and that OpenAI CEO Sam Altman “should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market,” per NBC News.

“Don’t believe it: this is nothing more than OpenAI’s crisis management team trying to change the subject,” Edelson added.

Altman acknowledged that users have developed “different and stronger” attachments to AI chatbots compared with other technologies.

“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” he tweeted in August. “Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way.”