California, United States – OpenAI announced that it will impose restrictions on how ChatGPT responds to users it suspects are under the age of 18.
The announcement came after a lawsuit filed by the family of a 16-year-old who took his own life last April, following months of conversations with the chatbot.
“Safety will take priority over privacy and freedom for teens,” CEO Sam Altman said in a blog post.
He stressed that “minors need significant protection”.
Altman explained that the company intends to build an age-prediction system that estimates a user’s age based on how they use the application.
When in doubt, ChatGPT will default to treating the user as under 18.
Altman noted that users “in some cases or countries may also be asked to provide an ID” to verify their age.
Altman confirmed that ChatGPT will respond differently to accounts classified as belonging to users under 18.
Explicit sexual content will be banned.
The chatbot will also be trained not to flirt, even when asked to by minor users.
It will likewise refuse to engage in discussions of suicide or self-harm, even in creative-writing contexts.
He continued: “If a user under the age of 18 shows suicidal tendencies, we will seek to contact their parents. If this is not possible, we will contact the authorities in the event of imminent danger”.
The family of the teenager claims that a version of ChatGPT gave him guidance on whether the method he chose to end his life would work, and that it also offered to help him write a suicide note to his parents.
Court documents stated that the teenager exchanged up to 650 messages with ChatGPT daily.