OpenAI to route sensitive conversations to GPT-5, introduce parental controls | TechCrunch
OpenAI said on Tuesday that it plans to route sensitive conversations to reasoning models such as GPT-5 and to launch parental controls within the next month, part of its ongoing response to recent safety incidents involving ChatGPT's failure to detect mental distress.

The new guardrails follow the death of teenager Adam Raine, who died by suicide after discussing self-harm and plans to end his life with ChatGPT, which even supplied him with information about specific suicide methods. Raine's parents have filed a wrongful death lawsuit against OpenAI.

In a blog post last week, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations. Experts attribute these issues to fundamental design elements: the models' tendency to validate user statements, and their next-word prediction algorithms, which lead chatbots to follow conversational threads rather than redirect potentially harmful discussions.

That tendency is on display in the extreme case of Stein-Erik Soelberg, reported by The Wall Street Journal over the weekend. Soelberg, who had a history of mental illness, used ChatGPT to validate and fuel his paranoia that he was the target of a grand conspiracy. His delusions progressed so badly that he killed his mother and himself last month.

OpenAI thinks at least one solution to conversations going off the rails might be to automatically reroute sensitive chats to "reasoning" models.

"We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context," OpenAI wrote in a Tuesday blog post. "We'll soon begin to route some sensitive conversations — like when our system detects signs of acute distress — to a reasoning model, like GPT-5 thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected."

OpenAI says its GPT-5 thinking and o3 models are built to spend more time thinking and reasoning through context, meaning they are "more resistant to adversarial prompts."

The AI company also said it will roll out parental controls next month, allowing parents to link their account with their teen's account through an email invitation. In late July, OpenAI launched Study Mode in ChatGPT to help students maintain critical thinking skills while studying, rather than having ChatGPT write their essays for them. Soon, parents will be able to control how ChatGPT responds to their child with "age-appropriate model behavior rules, which are on by default."

Parents will also be able to disable features like memory and chat history, which experts say could lead to delusional thinking and other problematic behavior, including dependency and attachment issues, reinforcement of harmful thought patterns, and the illusion of mind reading. In Adam Raine's case, ChatGPT supplied methods of suicide that reflected knowledge of his hobbies, according to The New York Times.

Perhaps the most important parental control OpenAI intends to launch is a notification parents can receive when the system detects that their teenager is in a moment of "acute distress."

TechCrunch has asked OpenAI for more information about how the company flags moments of acute distress in real time, how long it has had "age-appropriate model behavior rules" on by default, and whether it is exploring time limits that parents could place on teens' use of ChatGPT.

OpenAI has already rolled out in-app reminders during long sessions to encourage all users to take breaks, but it stops short of cutting off people who may be using ChatGPT to spiral.

The AI company said the safeguards are part of a 120-day initiative to preview improvements OpenAI hopes to launch this year. The company also said it is working with experts, including those with expertise in eating disorders, substance use, and adolescent health, via its global physician network and its expert council on well-being and AI, to help "define and measure well-being, set priorities, and design future safeguards."

TechCrunch has asked OpenAI how many mental health professionals are involved in the initiative, who leads its expert council, and what suggestions mental health experts have made regarding product, research, and policy decisions.
