
OpenAI’s ex-policy lead criticizes the company for ‘rewriting’ its AI safety history | TechCrunch

Miles Brundage, a high-profile former OpenAI policy researcher, took to social media on Wednesday to criticize OpenAI for “rewriting the history” of its deployment approach to potentially risky AI systems.

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a “continuous path” that requires “iteratively deploying and learning” from AI technologies.

“In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT-2,” OpenAI wrote. “We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system.”

But Brundage contends that GPT-2 did, in fact, warrant abundant caution at the time of its release, and that this was “100% consistent” with OpenAI’s iterative deployment strategy today.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote in a post on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage, who joined OpenAI as a research scientist in 2018, was the company’s head of policy research for several years. On OpenAI’s “AGI readiness” team, he had a particular focus on the responsible deployment of language generation systems such as OpenAI’s AI chatbot platform ChatGPT.

GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems powering ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text on a level sometimes indistinguishable from that of humans.

While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially refused to release GPT-2’s source code, opting instead to give selected news outlets limited access to a demo.

The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there was no evidence the model could be abused in the ways OpenAI described. The Gradient, an AI-focused publication, went so far as to publish an open letter asking OpenAI to release the model, arguing it was too technologically important to hold back.

OpenAI eventually released a partial version of GPT-2 six months after the model’s unveiling, followed by the full system several months after that. Brundage thinks this was the right approach.

“What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous?” he said in a post on X. “What’s the evidence this caution was ‘disproportionate’ ex ante? Ex post, it prob[ably] would have been OK, but that doesn’t mean it was responsible to YOLO it [sic] given the info at the time.”

Brundage fears that OpenAI’s aim with the document is to set up a burden of proof where “concerns are alarmist” and “you need overwhelming evidence of imminent dangers to act on them.” This, he argues, is a “very dangerous” mentality for advanced AI systems.

“If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by dismissing caution in this way,” Brundage added.

OpenAI has historically been accused of prioritizing “shiny products” at the expense of safety, and of rushing product releases to beat rival companies to market. Last year, OpenAI dissolved its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals.

Competitive pressures will only intensify. Chinese AI lab DeepSeek captured the world’s attention with its openly available R1 model, which matched OpenAI’s o1 “reasoning” model on a number of key benchmarks. OpenAI CEO Sam Altman has admitted that DeepSeek has narrowed OpenAI’s technological lead, saying that OpenAI would “pull up some releases” to compete better.

There is a lot of money on the line. OpenAI loses billions of dollars a year, and the company has reportedly projected that its annual losses could reach $14 billion by 2026. A faster product release cycle could benefit OpenAI’s bottom line in the near term, but possibly at the expense of long-term safety. Experts like Brundage question whether the trade-off is worth it.
