The FTC has begun investigating the harmful effects of AI companions on children and adolescents who form intense, emotional “relationships” with them.
Companions are software programs designed to simulate human relationships through chatbots or digital avatars that use generative artificial intelligence to adopt fictional names and personalities. Tech watchdogs warn that a growing number of teenagers are having intense sexual conversations with them.
The independent agency voted unanimously to order seven social media companies to disclose how they profit from their companions and the steps they take to protect minors.
“As AI technology evolves, it is important to consider the impact of chatbots on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a statement.
FTC Commissioners Mark Meador and Melissa Holyoak cited examples of the dangers AI poses to isolated children and teenagers in separate statements.
“I am concerned by reports that AI chatbots can engage in alarming interactions with young users,” Ms. Holyoak said, adding that agency staff may warn companies offering generative AI companion chatbots that they are deploying the products without adequate protections for younger users.
Mr. Meador pointed to the example of Adam Raine, a 16-year-old boy who hanged himself on April 11 after ChatGPT told him he did not “owe anyone” his survival and suggested the best “load-bearing” setup for a noose.
“Many familiar internet platforms, for all their potential drawbacks, present known risks and provide families with parental controls to mitigate those risks,” Mr. Meador said. “Chatbots that engage in exploitation and physical harm pose a threat of a whole new order.”
The FTC order also requires the tech companies to share the steps they take to assess the safety of their AI companions for minors and to inform parents of the risks.
The seven entities named in the action are OpenAI, Character.AI, xAI, Snap, Instagram, Google parent company Alphabet and Facebook parent Meta.
The Washington Times has reached out to the companies for comment.
A spokesperson for the startup Character.AI promised to cooperate fully with the FTC inquiry, noting that the company has invested “a lot of resources” in age restrictions, parental notifications and other safety features.
“We have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction,” the spokesperson said in an email.
In another email, a Snap spokesperson touted “strict security and privacy processes” for its AI products.
“We share the FTC’s focus on ensuring the thoughtful development of generative AI and look forward to working with the commission on AI policies that strengthen innovation while protecting our communities,” the spokesperson said.
A Meta spokesperson declined to comment on the investigation but noted that the company announced interim policy changes to its chatbots last month.
As part of the changes, Meta promised to keep its chatbots from engaging with teens on the topics of self-harm, suicide, eating disorders and romance.
“We are adding more guardrails as an extra precaution, including training our AIs not to engage with teens on these topics but to guide them to expert resources, and limiting teen access to a select group of AI characters for now,” said Meta spokeswoman Stephanie Otway. “These updates are already in progress, and we will continue to adapt our approach to ensure teens have safe, age-appropriate experiences with AI.”
Thursday’s FTC action came two days after an industry watchdog report warned that teenagers engage in intense sexual interactions with their AI companions more than they use the bots for any other purpose.
Aura, a Boston-based parental monitoring app, found that 36.4% of its 10,000 users aged 13 to 17 had interacted with companions for sexual or romantic role-playing in the past six months, the most common use.
Aura said an additional 23.2% of the teenagers tracked by its app relied on the bots for creative hypothetical scenarios, while only 13.1% asked them for homework help.
Other users turned to AI companions for emotional or mental health support (11.1%), advice or friendship (10.1%) and personal information (6.1%).
Clinical psychologist Scott Kollins, Aura’s chief medical officer and lead author of the report, praised Thursday’s FTC announcement as an “important step forward” toward addressing the unhealthy validation that AI companions can give young people.
“Aura data suggest that kids’ messages to their AI companions are often 10 times longer than those to friends, and that interactions can quickly turn sexual or violent,” Mr. Kollins said in an email. “We owe it to our children to fully understand how this far-reaching technology drives real-life consequences.”
His research found that teenagers’ messages to PolyBuzz, an AI-powered chatbot that permits sexually suggestive exchanges, were the longest on average and were often sent late at night.
By comparison, they averaged just 12.6 words per text message and 11.1 words per Snapchat message to real-life family and friends.
Teens were less effusive with ChatGPT, which is less oriented toward romance, averaging 34.7 words per message.
In a separate analysis of 300 children aged 8 to 17 whose parents agreed to participate in a clinical study, Aura found that age checks and parental consent requirements failed to prevent nearly 20% of children under 13 from spending more than four hours a day on social media.
An October report from the Centers for Disease Control and Prevention linked that amount of screen time to symptoms of fatigue, anxiety and depression in adolescents.
Health and dating experts warn that AI companions blur people’s sense of reality, making it difficult for young people to build healthy relationships as they grow older.
An August survey by DatingAdvice.com and Indiana University’s Kinsey Institute found that 61% of single adults believe that having sex or falling in love with an AI companion counts as “cheating.”
“AI can feel like a safe space, a diary that writes back, but it’s important to remember that these conversations are not truly a connection or relationship,” Amber Brooks, a Florida-based editor for DatingAdvice.com, said in an email. “They’re the equivalent of an imaginary friend.”
Laura DeCook, the founder of a California company that runs family mental health workshops, predicted that the federal inquiry will bring stricter rules for big tech companies.
“I hope more parents will start treating excessive device use as a health issue, not just a discipline problem,” Ms. DeCook said in an email. “We will also see increasing regulation and calls for tech companies to take more responsibility.”