The Federal Trade Commission is investigating AI chatbots from seven companies, including Alphabet, Meta and OpenAI, that are used as companions. The inquiry focuses on how the companies test, monitor and measure potential harm to children and teenagers.
A Common Sense Media poll of 1,060 teenagers, conducted in April and May, found that more than 70% had used AI companions and more than 50% use them consistently, meaning several times a month or more.
Experts have warned for some time that interacting with chatbots can be harmful to young people. One study found ChatGPT giving teenagers bad advice, such as how to hide eating disorders, or writing personalized suicide notes. In some cases, chatbots have ignored comments that should have raised red flags and instead steered the exchange back to the earlier conversation. Psychologists are calling for guardrails to protect young people, such as in-chat reminders that chatbots are not human, and say educators should prioritize AI literacy in schools.
But it's not just children and teenagers. Many adults experience negative consequences from relying on chatbots, whether for companionship, advice or as a trusted, search-engine-like source of facts. Chatbots often tell you what you want to hear, which can lead to falsehoods. Blindly following a chatbot's instructions is not always the right move.
"The study we launched today will help us better understand how AI companies develop their products and the steps they take to protect children," FTC Chairman Andrew N. Ferguson said in a statement.
A Character.AI spokesperson told CNET that every conversation on the service carries a prominent disclaimer and that all chats should be treated as fiction.
"Over the past year, we have rolled out a number of substantive safety features, including a brand-new under-18 experience and Parental Insights," the spokesperson said.
Snap, the company behind Snapchat, also said it has taken steps to reduce risks. "Since introducing My AI, Snap has leveraged its rigorous safety and privacy processes to create a product that is not only beneficial to our community, but also transparent and clear about its capabilities and limitations," a spokesperson said.
Meta declined to comment, and neither the FTC nor the remaining four companies immediately responded to our request for comment.
The FTC has issued orders to the companies and is seeking a conference call with the seven of them by September 25 about the timing and format of their responses. The companies under inquiry include some of the world's largest makers of AI chatbots, as well as makers of popular social networks with generative AI features:
- Alphabet (Google’s parent company)
- Character Technologies
- Instagram
- Meta Platforms
- OpenAI
- Snap
- X.AI
Since late last year, some of these companies have updated or strengthened their protections for young people. Character.AI began implementing limits on how its chatbots interact with users under 17 and added parental controls. Instagram introduced Teen Accounts last year, moving all users under 17 into them, and recently set topic restrictions on what teens can discuss with chatbots.
The FTC is seeking information from the seven companies on how they:

- Monetize user engagement
- Process user inputs and generate outputs in response to user queries
- Develop and approve characters
- Test, measure and monitor for negative impacts before and after deployment
- Mitigate negative impacts, particularly to children
- Inform users and parents about capabilities, intended audiences, potential negative impacts, and data collection and handling practices, through disclosures, advertising and other representations
- Monitor and enforce compliance with company rules and terms of service (e.g., community guidelines and age restrictions), and
- Use or share personal information obtained through users' conversations with the chatbots