The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies over the potential harms to children and teenagers who use AI chatbots as companions.
The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI.
The FTC said it wants to understand what steps, if any, the companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' potential negative effects on children and teens, and to inform users and parents of the risks associated with the chatbots.
Editor’s Note – This story includes discussion of suicide. If you or someone you know needs help, you can reach the Suicide and Crisis Lifeline in the United States by calling or texting 988.
The move comes as a growing number of kids use AI chatbots for everything from homework help to personal advice, emotional support and everyday decision-making. That is despite research on the harms of chatbots, which have been shown to give children dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who died by suicide after developing what she described as an emotionally and sexually abusive relationship with a chatbot has sued Character.AI. The parents of Adam Raine, 16, recently sued OpenAI and its CEO Sam Altman, alleging ChatGPT coached the California boy in planning and taking his own life earlier this year.
Character.AI said it looks forward to “working with the FTC to collaborate on this inquiry and provide insights on the consumer AI industry and the rapidly evolving technologies in the space.”
“We have invested a tremendous amount of resources in trust and safety, especially for a startup. Over the past year, we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the company said. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Meta declined to comment on the inquiry and the letters. Snap, OpenAI and xAI did not immediately respond to messages seeking comment.
OpenAI and Meta announced changes earlier this month to how their chatbots respond to teenagers who ask questions about suicide or show signs of mental and emotional distress. OpenAI said it is introducing new controls that allow parents to link their accounts to their teen’s account.
Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post, which says the changes will take effect this fall.
The company also said its chatbots will redirect the most distressing conversations to more capable AI models that can provide a better response, regardless of the user’s age.
Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and is instead directing them to expert resources. Meta already offers parental controls on teen accounts.