OpenAI announced today that it is developing "different ChatGPT experiences" tailored to teenagers, a move that highlights the growing scrutiny of AI chatbots' impact on young people's mental health.
The teen model is part of a broader safety push at the company, which reportedly faced a lawsuit from the family of a teenager who died by suicide, alleging ChatGPT lacked adequate protections. The changes include age-prediction technology meant to keep users under 18 out of the standard version of ChatGPT. According to the announcement, if the system cannot confidently estimate someone's age, ChatGPT will automatically default to the under-18 experience.
In a blog post, OpenAI CEO Sam Altman said the company prioritizes safety ahead of privacy and freedom for teens, arguing that minors need significant protection when using a new and powerful technology.
New features in ChatGPT's teen mode
OpenAI said the teen version will come with stricter built-in restrictions, including:
- Content filters: No flirtatious dialogue or discussions of self-harm, even in fictional or creative contexts.
- Crisis response: If a teen expresses suicidal ideation, OpenAI may attempt to alert the teen's parents and, in an emergency, contact the authorities.
- Parental controls: Parents can link their accounts, shape how ChatGPT responds, and enforce "blackout hours" during which the app cannot be used.
(Disclosure: CNET's parent company Ziff Davis filed a lawsuit against ChatGPT maker OpenAI in April, alleging it infringed Ziff Davis' copyrights in training and operating its AI systems.)
The bigger picture
OpenAI's announcement came hours before a Senate hearing in Washington, D.C., examining the potential threats AI poses to young people. Lawmakers have been pushing for teen safety measures following lawsuits accusing AI platforms of worsening mental health struggles or providing harmful health advice.
OpenAI's approach echoes earlier moves by companies like Google, which launched YouTube Kids under critical and regulatory pressure. Altman's blog post frames the change as part of a broader balancing act between safety, privacy and freedom. He argues that adults should be treated "like adults" with fewer restrictions, while teens need heightened protection, even if that means tradeoffs on privacy, such as requiring ID.
The company said it will launch the teen-focused experience by the end of this year. But history shows that savvy teenagers often find workarounds to gain unrestricted access. Whether these guardrails will be enough to protect young people who have grown up comfortable with the technology remains an open question.
If you feel that you or someone you know is in immediate danger, call 911 (or your country's local emergency number) or go to an emergency room for immediate help. Explain that it is a psychiatric emergency and ask for someone trained in these situations. If you're struggling with negative or suicidal thoughts, resources are available to help. In the US, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988.