OpenAI research lead Noam Brown thinks AI ‘reasoning’ models could’ve arrived decades ago | TechCrunch

Noam Brown, who leads AI reasoning research at OpenAI, says that "reasoning" AI models like OpenAI's o1 could have arrived 20 years earlier had researchers "known [the right] methods and algorithms."

"This research direction was neglected," Brown said during a panel at Nvidia's GTC conference in San Jose on Wednesday. "I noticed over the course of my research that, well, something was missing [in AI]."

Brown is one of the main architects behind o1, an AI model that uses a technique called test-time inference to "think" before responding to a query. Test-time inference means applying additional computation while running the model, to drive a form of "reasoning." So-called reasoning models are often more accurate and reliable than traditional models, particularly in fields such as mathematics and science.
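To illustrate the general idea of spending extra compute at inference time, here is a minimal sketch of one common strategy: sampling several candidate answers and taking a majority vote (often called best-of-N or self-consistency). The `generate_answer` function is a hypothetical stand-in for a call to any language model; o1's actual reasoning mechanism has not been published, so this is only an illustration of the broader technique, not OpenAI's method.

```python
import random
from collections import Counter


def generate_answer(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model completion.

    In practice this would call a language model; here it simulates a
    noisy solver that returns the correct answer most of the time.
    """
    return "42" if random.random() < 0.7 else str(random.randint(0, 100))


def answer_with_test_time_compute(prompt: str, n_samples: int = 16) -> str:
    """Spend extra inference-time compute: sample N candidate answers
    and return the most common one (majority vote)."""
    candidates = [generate_answer(prompt) for _ in range(n_samples)]
    most_common_answer, _count = Counter(candidates).most_common(1)[0]
    return most_common_answer


if __name__ == "__main__":
    # More samples means more inference-time compute and, typically, a
    # more reliable answer, at the cost of latency.
    print(answer_with_test_time_compute("What is 6 * 7?", n_samples=16))
```

The trade-off this sketch makes explicit is the one Brown describes: accuracy improves as more computation is spent per query, rather than by training a larger model.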

However, Brown stressed that pre-training (training ever-larger models on ever-larger datasets) is not entirely "dead." AI labs, including OpenAI, once invested most of their efforts in scaling up pre-training. Now, according to Brown, they split their time between pre-training and test-time inference, approaches he described as complementary.

During the panel, Brown was asked whether academia could ever hope to run experiments on the scale of an AI lab like OpenAI, given institutions' general lack of access to computing resources. He admitted that this has become harder in recent years as models have grown larger, but said academics can still make an impact by exploring areas that require less compute, such as model architecture design.

"[T]here is an opportunity for collaboration between the frontier labs [and academia]," Brown said. "Certainly, the frontier labs are looking at academic publications and thinking carefully about whether they make a convincing case that, if scaled up further, this would be very effective. If there is a convincing argument in the paper, we will investigate it in these labs."

Brown's comments came as the Trump administration makes deep cuts to scientific grant-making. AI experts, including Nobel laureate Geoffrey Hinton, have criticized the cuts, saying they may threaten AI research efforts at home and abroad.

Brown pointed to AI benchmarking as an area where academia can have a significant impact. "The state of benchmarks in AI is really bad, and that doesn't require a lot of compute to do," he said.

As we have written before, today's popular AI benchmarks tend to test esoteric knowledge and give scores that correlate poorly with proficiency on tasks most people care about. That has led to widespread confusion about models' capabilities and improvements.

