In a policy paper published Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said the U.S. should not pursue a Manhattan Project-style push to develop AI systems with "superhuman" intelligence, also known as AGI.
The paper, titled "Superintelligence Strategy," asserts that an aggressive U.S. bid for exclusive control of superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.
"[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the co-authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
The paper comes just a few months after a U.S. congressional commission proposed a "Manhattan Project-style" effort to fund AGI development, modeled on America's atomic bomb program of the 1940s. U.S. Secretary of Energy Chris Wright recently said the U.S. is at "the start of a new Manhattan Project" on AI while standing in front of a supercomputer site alongside OpenAI co-founder Greg Brockman.
The Superintelligence Strategy paper challenges the idea, championed in recent months by U.S. policy and industry leaders, that a government-backed program pursuing AGI is the best way to compete with China.
In the eyes of Schmidt, Wang, and Hendrycks, the U.S. finds itself in a standoff not unlike mutually assured destruction. Just as global powers do not seek monopolies over nuclear weapons — which could trigger a preemptive strike from an adversary — Schmidt and his co-authors argue the U.S. should be cautious about racing toward dominance in extremely powerful AI systems.
While likening AI systems to nuclear weapons may sound extreme, world leaders already consider AI a decisive military advantage. The Pentagon says AI is already helping speed up the military's kill chain.
Schmidt et al. introduce a concept they call Mutual Assured AI Malfunction (MAIM), in which governments could proactively disable threatening AI projects rather than wait for adversaries to weaponize AGI.
Schmidt, Wang, and Hendrycks propose that the U.S. focus on deterring other countries from creating superintelligent AI. The co-authors argue the government should "expand [its] arsenal of cyberattacks to disable threatening AI projects" controlled by other nations, and should restrict adversaries' access to advanced AI chips and open-source models.
The co-authors identify a dichotomy that has played out in the AI policy world. On one side are the "doomers," who believe catastrophic outcomes from AI development are a foregone conclusion and advocate for countries slowing AI progress. On the other are the "ostriches," who believe nations should accelerate AI development and essentially just hope it all works out.
The paper proposes a third way: a measured approach to developing AGI that prioritizes defensive strategies.
That stance is particularly notable coming from Schmidt, who has previously been vocal about the need for the U.S. to compete aggressively with China in developing advanced AI systems. Just a few months ago, Schmidt argued that DeepSeek marked a turning point in the AI race between the United States and China.
The Trump administration seems set on pushing ahead with America's AI development. However, as the co-authors point out, U.S. decisions around AGI do not exist in a vacuum.
As the world watches the U.S. push the limits of AI, Schmidt and his co-authors suggest it may be wiser to take a defensive approach.