The AI leaders bringing the AGI debate down to Earth | TechCrunch

During a recent dinner with San Francisco business leaders, a comment I made chilled the room. I didn't think I had asked my dining companions anything resembling a faux pas: simply whether they believed today's AI could someday achieve human-like intelligence (i.e., AGI) or beyond.

It's a more controversial question than you might think.

In 2025, there is no shortage of tech executives offering the bull case that the large language models (LLMs) behind chatbots such as ChatGPT and Gemini could attain human-level or even superhuman intelligence in the near term. These executives argue that highly capable AI will bring about widespread, widely distributed societal benefits.

For example, Anthropic CEO Dario Amodei wrote in an essay that exceptionally powerful AI could arrive as early as 2026 and be "smarter than a Nobel Prize winner across most relevant fields." Meanwhile, OpenAI CEO Sam Altman claimed his company knows how to build "superintelligent" AI, and predicted it may "massively accelerate scientific discovery."

However, not everyone finds these optimistic claims convincing.

Another group of AI leaders is skeptical that today's LLMs can reach AGI, much less superintelligence, barring some novel innovations. These leaders have historically kept a low profile, but more of them have spoken up recently.

In a piece this month, Hugging Face co-founder and chief science officer Thomas Wolf called some parts of Amodei's vision "wishful thinking at best." Informed by his PhD research in statistical and quantum physics, Wolf believes that Nobel Prize-level breakthroughs don't come from answering known questions, which is something AI excels at, but rather from asking questions no one has thought to ask.

In Wolf's view, today's LLMs aren't up to that task.

"I'd love to see this 'Einstein model' out there, but we need to dive into the details of how to get there," Wolf told TechCrunch in an interview. "That's where it starts to be interesting."

Wolf said he wrote the piece because he felt there was too much hype about AGI, and not enough serious evaluation of how to actually get there. He believes that, as things stand, AI could well transform the world in the near future without achieving human-level intelligence or superintelligence.

Much of the AI world is captivated by the promise of AGI. Those who don't believe it's possible are often labeled "anti-technology," or otherwise dismissed as bitter or misinformed.

Some might peg Wolf as a pessimist for this view, but he considers himself an "informed optimist": someone who wants to push AI forward without losing grasp of reality. He's certainly not the only AI leader with conservative predictions about the technology.

Google DeepMind CEO Demis Hassabis has reportedly told staff that, in his opinion, the industry could be as much as a decade away from developing AGI, noting there are many tasks AI simply can't do today. Meta's chief AI scientist Yann LeCun has also expressed doubts about the potential of LLMs. Speaking at Nvidia GTC on Tuesday, LeCun said the idea that LLMs could achieve AGI was "nonsense," and called for entirely new architectures to serve as the bedrock for superintelligence.

Former OpenAI lead researcher Kenneth Stanley is one of the people working out the details of how to build advanced AI with today's models. He is now an executive at Lila Sciences, a new startup that raised $200 million in venture capital to unlock scientific innovation through automated labs.

Stanley spends his days trying to get original, creative ideas out of AI models, a subfield of AI research called open-endedness. Lila Sciences aims to create AI models that can automate the entire scientific process, including the very first step: arriving at really good questions and hypotheses that would ultimately lead to breakthroughs.

"I kind of wish I had written [Wolf's] essay, because it really reflects my feelings," Stanley said in an interview with TechCrunch. "What [he] noticed was that being knowledgeable and skilled did not necessarily lead to having really original ideas."

Stanley believes that creativity is a key step along the path to AGI, but notes that building a "creative" AI model is easier said than done.

Optimists like Amodei point to methods such as AI "reasoning" models, which use more computing power to check their work and answer certain questions more consistently, as evidence that AGI isn't terribly far away. But coming up with original ideas and questions may require a different kind of intelligence altogether, Stanley says.

"If you think about it, reasoning is almost the opposite of [creativity]," Stanley added. "Reasoning models say, 'Here's the goal of the problem, let's go directly toward that goal,' which basically stops you from being opportunistic and seeing things outside of that goal, so that you can then diverge and have lots of creative ideas."

To design truly intelligent AI models, Stanley suggests we need to algorithmically replicate humans' subjective taste for promising new ideas. Today's AI models perform quite well in academic domains with clear-cut answers, such as math and programming. But Stanley points out that it's much harder to design an AI model for more subjective tasks that require creativity, where there isn't necessarily a "correct" answer.

"People shy away from [subjectivity] in science; the word is almost toxic," Stanley said. "But there's nothing to prevent us from dealing with subjectivity [algorithmically]. It's just part of the data stream."

Stanley said he's glad the field of open-endedness is getting more attention now, with dedicated research efforts at Lila Sciences, Google DeepMind, and the AI startup Sakana tackling the problem. He's starting to see more people talk about creativity in AI, but he thinks there's still a lot more work to be done.

Wolf and LeCun would probably agree. Call them the AI realists, if you will: leaders approaching AGI and superintelligence with serious, grounded questions about their feasibility. Their goal isn't to dismiss advances in the AI field. Rather, it's to kick-start a big-picture conversation about what stands between today's AI models and AGI or superintelligence, and then to go after those blockers.
