OpenAI Rolls Out Teen Safety Features Amid Growing Scrutiny


OpenAI announced new teen safety features for ChatGPT on Tuesday, part of an ongoing effort to address concerns about how minors interact with the chatbot. The company is building an age-prediction system that estimates whether a user is under 18 and routes them to an “age-appropriate” system that blocks graphic content. If the system detects that a user is considering suicide or self-harm, it will contact the user’s parents. In cases of imminent danger, if the parents cannot be reached, the system may contact the authorities.

In a blog post about the announcement, CEO Sam Altman wrote that the company is trying to balance freedom, privacy, and teen safety.

“We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best, and we want to be transparent in our intentions.”

While OpenAI tends to prioritize privacy and freedom for its adult users, the company says that for teens it puts safety first. By the end of September, it will roll out parental controls that let parents link their children’s accounts to their own, manage conversations, and disable features. According to the company’s blog post, parents can also receive notifications when the system detects that their teen is in severe distress, and they can set the hours of the day during which their children can use ChatGPT.

The move comes as disturbing headlines continue to surface about people dying by suicide or committing violence against family members after prolonged conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI companies to hand over information about how their technology affects children, according to Bloomberg.

Meanwhile, OpenAI is still under a court order requiring it to retain consumer chats indefinitely, a fact the company is deeply unhappy about. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should be breached only in the most extreme situations.


From the sources I have spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but it can quickly spiral into something disastrously sycophantic. It is certainly good that companies like OpenAI are taking steps to protect minors. At the same time, absent federal regulation, there is still nothing forcing these companies to do the right thing.

In a recent interview, Tucker Carlson pressed Altman to say exactly who is making the decisions that affect the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “I think I’m the one you should hold accountable for those calls,” Altman added. “Like, I’m the public face. Ultimately, I’m the one who can overrule one of those decisions, or our board can.”
