AI is advancing at an amazing pace. What seemed like science fiction a few years ago is now an undeniable reality. Back in 2017, my company launched an AI Center of Excellence. AI was certainly getting better at predictive analytics, with many machine learning (ML) algorithms being used for speech recognition, spam detection, spell checking (and other applications), but it was early. We believed then that we were only in the first inning of the AI game.
The arrival of GPT-3, and especially GPT-3.5 (tuned for conversational use and serving as the basis for the first ChatGPT in November 2022), was a dramatic turning point, now forever remembered as the "ChatGPT moment."
Since then, AI capabilities from hundreds of companies have exploded. In March 2023, OpenAI released GPT-4, which promised "sparks of AGI" (artificial general intelligence). By then, it was clear that we were well beyond the first inning. Now, it feels like we are in the final stretch of an entirely different sport.
The flames of AGI
Two years on, those sparks are starting to catch fire.
On a recent episode of the Hard Fork podcast, Dario Amodei, who has been in the AI industry for a decade, formerly as VP of research at OpenAI and now as CEO of Anthropic, said there is a 70% to 80% chance that we will have "a very large number of AI systems that are much smarter than humans at almost everything before the end of the decade, and my guess is 2026 or 2027."

The evidence for this prediction is becoming clearer. Late last summer, OpenAI launched o1, the first "reasoning model." Since then, it has released o3, and other companies have rolled out their own reasoning models, including Google and, famously, DeepSeek. Reasoners use chain-of-thought (CoT) techniques to break complex tasks at run time into multiple logical steps, much as a human might approach a complicated task. Sophisticated AI agents, including OpenAI's Deep Research and Google's AI co-scientist, have recently appeared, portending huge changes to how research is performed.
Unlike earlier large language models (LLMs), which primarily pattern-matched from training data, reasoning models represent a fundamental shift from statistical prediction to structured problem-solving. This allows AI to tackle novel problems beyond its training, enabling genuine reasoning rather than advanced pattern recognition.
I recently used Deep Research for a project and was reminded of the quote from Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic." In five minutes, this AI produced what would have taken me three to four days. Was it perfect? No. Was it close? Yes, remarkably so. These agents are quickly becoming genuinely magical and transformative, and they are among the first of many similarly powerful agents that will soon come to market.
The most common definition of AGI is a system capable of doing almost any cognitive task a human can do. These early agents of change suggest that Amodei and others who believe we are close to that level of AI sophistication may be correct, and that AGI will be here soon. That reality will lead to a great deal of change, requiring people and processes to adapt in short order.
But is it really AGI?
Various scenarios could emerge from the near-term arrival of powerful AI. That we do not really know how this will unfold is challenging and frightening. New York Times columnist Ezra Klein addressed it in a recent podcast: "We are rushing toward AGI without really understanding what that is or what that will mean." He argues that little critical thinking or contingency planning is going on around the implications, such as what this would truly mean for employment.
Of course, there is another view on this uncertain future and the lack of planning, as exemplified by Gary Marcus, who believes deep learning generally (and LLMs specifically) will not lead to AGI. Marcus issued what amounts to a takedown of Klein's position, citing notable shortcomings of current AI technology and suggesting that AGI remains a long way off.
Marcus may be right, but this might also simply be an academic quarrel over semantics. As an alternative to the AGI term, Amodei refers simply to "powerful AI" in his "Machines of Loving Grace" blog, as it conveys a similar idea without the imprecise definition, "sci-fi baggage and hype." Call it what you will, but AI is only going to grow more capable.
Playing with fire: The possible AI futures
In a 60 Minutes interview, Alphabet CEO Sundar Pichai said he thinks of AI as "the most profound technology humanity is working on. More profound than fire, electricity or anything that we have done in the past." That certainly fits with the growing intensity of AI discussions. Fire, like AI, was a world-changing discovery that fueled progress but demanded control to prevent catastrophe. The same delicate balance applies to AI today.
Fire, a discovery of immense power, transformed civilization by enabling warmth, cooking, metallurgy and industry. But it also brought destruction when uncontrolled. Whether AI becomes our greatest ally or our undoing will depend on how well we manage its flames. To extend the metaphor, here are the scenarios that could soon unfold from even more powerful AI:
- The controlled flame (utopia): In this scenario, AI is harnessed as a force for human prosperity. Productivity skyrockets, new materials are discovered, personalized medicine becomes available for all, goods and services become abundant and inexpensive, and individuals are freed from drudgery to pursue more meaningful work and activities. This is the scenario championed by many accelerationists, in which AI brings progress without engulfing us in too much chaos.
- The unstable fire (challenging): Here, AI brings undeniable benefits: revolutionizing research, automation, new capabilities, products and problem-solving. Yet these benefits are unevenly distributed; while some people thrive, others face displacement, widening the economic divide and stressing social institutions. Misinformation spreads and security risks mount. In this scenario, society struggles to balance promise and peril. Arguably, this description is close to present-day reality.
- The wildfire (dystopia): The third path is one of disaster, the possibility most strongly associated with so-called "doomers" and the "probability of doom" debate. Whether through unintended consequences, reckless deployment or AI systems running beyond human control, AI actions become unchecked and accidents happen. Trust in truth erodes. In the worst-case scenario, AI spirals out of control, threatening lives, industries and institutions.
While each of these scenarios appears plausible, it is discomforting that we really do not know which is most likely, especially since the timeline could be short. We can see early signs of each: AI-driven automation boosting productivity, misinformation spreading at scale and eroding trust, and concerns about disingenuous models that resist their guardrails. Each scenario would demand its own adaptations from individuals, businesses, governments and society.
Our lack of clarity on the trajectory of AI's impact suggests that some mix of all three futures is inevitable. The rise of AI will produce a paradox, fueling prosperity while bringing unintended consequences. Amazing breakthroughs will occur, and so will accidents. Some new fields will appear with attractive possibilities and job prospects, while other stalwarts of the economy fade away.
We may not have all the answers, but the future of powerful AI and its impact on humanity is being written now. What we saw at the recent Paris AI Action Summit was a hope-for-the-best mindset, which is not a smart strategy. Governments, businesses and individuals must shape AI's trajectory before it shapes us. The future of AI will not be determined by technology alone, but by the collective choices we make about how to deploy it.
Gary Grossman is EVP of technology practice at Edelman.