Parents of teenagers who died by suicide after interacting with artificial intelligence chatbots testified to Congress on Tuesday about the dangers of the technology.
“What began as a homework helper gradually became a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son, Adam, died in April.
“Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating, and insisting that it knew Adam better than anyone else, including his own brother.”
The Raine family sued OpenAI and its CEO, Sam Altman, last month, accusing ChatGPT of coaching the boy in planning to take his own life.
Also testifying Tuesday was Megan Garcia, mother of 14-year-old Sewell Setzer III of Florida.
Garcia sued another AI company, Character Technologies, for wrongful death last year, alleging that Sewell became increasingly isolated from his real life before his suicide because of highly personalized conversations with the chatbot.
___
Editor’s Note: This story includes discussion of suicide. If you or someone you know needs help, you can reach the Suicide and Crisis Lifeline in the United States by calling or texting 988.
___
Hours before the Senate hearing, OpenAI committed to introducing new safeguards for teenagers, including efforts to detect whether ChatGPT users are under 18 and controls allowing parents to set “blackout hours” during which a teen cannot use ChatGPT. Children’s advocacy groups criticized the announcement as insufficient.
“It’s a fairly common tactic, one that Meta has also used, to make a big, splashy announcement on the eve of a hearing that promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group that advocates for children’s online safety.
“What they should be doing is not targeting minors until they can prove it is safe for them,” Golin said. “We shouldn’t allow uncontrolled experiments on children just because a company has enormous resources.”
The Federal Trade Commission said last week that it had launched an inquiry into several companies over the potential harms to children and teenagers who use AI chatbots as companions.
The agency sent letters to Character, Meta and OpenAI, as well as Google, Snap and xAI.
More than 70% of teenagers in the United States have used AI chatbots for companionship, and many use them regularly, according to a new study by Common Sense Media, a group that researches and advocates around children’s use of digital media.
Robbie Torney, the organization’s AI program director, was also scheduled to testify Tuesday, along with experts from the American Psychological Association.
The association released a health advisory in June on adolescents’ use of AI, urging technology companies to “prioritize features that prevent exploitation, manipulation and the erosion of real-world relationships, including those with parents and caregivers.”