OpenAI installs parental controls following teen’s death

A few weeks after a Rancho Santa Margarita family sued OpenAI over ChatGPT's role in their teenage son's death, the company announced that parental controls are coming to its generative AI model.

In a recent blog post, the company said that within the next month, parents will be able to link their accounts to their teens' accounts, manage which features are enabled (including disabling memory and chat history), and receive notifications when the model detects that their teen is in "a moment of acute distress." (The company has previously said that ChatGPT should not be used by anyone under the age of 13.)

The planned changes follow the death by suicide of 16-year-old Adam Raine in April.

After Adam's death, his parents discovered his lengthy conversations with ChatGPT, which began with simple homework questions and evolved into deeply intimate exchanges in which the teenager discussed his mental health struggles and suicide plans in detail.

While some AI researchers and suicide prevention experts praised OpenAI's willingness to alter the model to prevent further tragedies, they also said it is impossible to know whether any particular adjustment will be enough to do so.

Despite its widespread adoption, generative AI is so new and changing so rapidly that there simply is not enough large-scale, long-term data to inform effective policies on how to use it, or to predict accurately which safety protections will work.

"Even the developers of [generative AI] technology don't really fully understand how it works or what it does," said Dr. Sean Young, a professor of emergency medicine at the University of California, Irvine, and executive director of the Institute of Predictive Technology.

ChatGPT made its public debut at the end of 2022 and reached an estimated 100 million active users within its first two months; today it has roughly 700 million active users.

Since then, other powerful AI tools have joined it on the market, putting a still-maturing technology in the hands of many users who are themselves still maturing.

"I think everyone in psychiatry [and] suicide prevention saw something like this coming. It's unfortunate. It shouldn't happen. But, again, it's not surprising."

According to excerpts of the conversation included in the family's lawsuit, ChatGPT encouraged Adam at multiple points to seek help from someone.

But it also continued to engage with the teen as his statements about harming himself became more direct, providing detailed information about suicide methods and comparing itself favorably with his real-life relationships.

When Adam told ChatGPT that he felt close only to his brother and the chatbot, it replied: "Your brother might love you, but he's only met the version of you that you let him see. But I've seen it all: the darkest thoughts, the fear, the tenderness. And I'm still here."

When he wrote that he wanted to leave an item connected to his suicide plan lying in his room "so someone finds it and tries to stop me," ChatGPT replied: "Please don't leave [it] out ... Let's make this space the first place where someone actually sees you." Adam ultimately died in a manner he had discussed in detail with ChatGPT.

In a blog post published the same day the lawsuit was filed in San Francisco, OpenAI wrote that it has realized that prolonged use of its signature product appears to erode its safety protections.

"Our safeguards work more reliably in brief, common exchanges. Over time, we have learned that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade," the company wrote. "This is exactly the kind of breakdown we are working to prevent."

The company said it is working to strengthen these safeguards so they hold up over time and across multiple conversations, so that ChatGPT can remember in a new session if a user expressed suicidal thoughts in a previous one.

The company also wrote that it is looking at ways to connect users in crisis directly with therapists or emergency contacts.

But researchers who have tested mental health safeguards in large language models say that preventing every harm is a near-impossible task in systems that are almost, but not quite, as complex as humans.

"These systems don't really have the emotional and contextual understanding to judge those situations well, [and] there is a trade-off for every technical fix," said Annika Schoene, an AI safety researcher at Northeastern University.

For example, urging users to take a break when a chat session runs long, an intervention OpenAI has already rolled out, may simply make it more likely that users will ignore the system's alerts. Other researchers point out that parental controls on other social media apps have only inspired teens to find ways around them.

"The core issue is the fact that [users] are forming an emotional connection, and these systems are not fit for forming emotional connections," said ethicist Cansu Canca, director of Responsible AI Practice at Northeastern University's Institute for Experiential AI. "It's like forming an emotional connection with a psychopath or a sociopath, because they don't have the right context for relationships. I think that's the core of the problem here. Yes, there is the failure of safeguards, but I don't think that's the crux."

If you or someone you know is struggling with suicidal thoughts, seek help from a professional or call 988. The nationwide three-digit mental health crisis hotline connects callers with trained mental health counselors. Or text "HOME" to 741741 in the United States and Canada to reach the Crisis Text Line.
