Part 2: From AGI to Superintelligence

Commentary on and summary of Leopold Aschenbrenner's text exploring the possibility of an “intelligence explosion” resulting from the development of Artificial General Intelligence (AGI).

It is argued that AGI could automate AI research, leading to rapid progress toward superintelligence that would far surpass human capabilities.

This exponential progress is described through analogies with the evolution of nuclear weapons, from the atomic bomb to the hydrogen bomb. Possible limitations to this scenario, such as the availability of computational resources and complementarities with human research, are discussed, but the text concludes that a rapid transition to superintelligence remains plausible, with potentially transformative consequences for science, technology, economics, and geopolitics.

The text emphasizes the urgency of preparing for the implications of this possible future.

The Intelligence Explosion

The text posits that once we achieve AGI, progress will not stop. Automating AI research could quickly lead to AI systems far superior to humans. This is described as an “intelligence explosion.”

“Once we have AGI, we’ll go one more turn, or two or three more, and AI systems will become superhuman, vastly superhuman.”

AI Research Automation

The task of an AI researcher (read, experiment, interpret, repeat) is highly susceptible to automation.

“We don’t need robotics, we don’t need a lot of things, for AI to automate AI research.”
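
To make the automation claim concrete, here is a toy sketch (not from Aschenbrenner's text) of the read/experiment/interpret/repeat loop as code. All names, data structures, and the scoring scheme are invented for illustration; the point is simply that every step consumes and produces digital artifacts, which is why no robotics is required.

```python
import random

def read(papers):
    # "Read": pick the most promising idea from the literature so far.
    return max(papers, key=lambda p: p["score"])

def experiment(idea):
    # "Experiment": try a perturbation of the idea and measure the outcome.
    return {"idea": idea["idea"] + "+tweak",
            "score": idea["score"] + random.uniform(-0.1, 0.2)}

def interpret(result, papers):
    # "Interpret": the new result joins the literature for the next pass.
    papers.append(result)
    return papers

papers = [{"idea": "baseline", "score": 1.0}]
for _ in range(10):  # "repeat"
    papers = interpret(experiment(read(papers)), papers)

print(read(papers))  # best idea found so far
```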

Accelerating Algorithmic Progress

Automating AI research could accelerate algorithmic progress exponentially. Millions of automated AI researchers, working at superhuman speeds, could compress decades of progress into a single year.

“Automated AI research could accelerate algorithmic progress, leading to 5+ OOMs of effective computational gains in one year.”
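
As a rough illustration of what these figures mean (an OOM is one order of magnitude, i.e. a 10× factor of effective compute), the following back-of-the-envelope arithmetic uses the text's own numbers; it is not a forecast:

```python
baseline_ooms_per_year = 0.5     # recent trend in algorithmic progress (per the text)
accelerated_ooms_per_year = 5.0  # claimed pace with automated AI researchers

print(f"Trend pace: {10 ** baseline_ooms_per_year:.1f}x effective compute per year")           # ~3.2x
print(f"Accelerated pace: {10 ** accelerated_ooms_per_year:,.0f}x effective compute per year")  # 100,000x

# Years of trend progress compressed into one accelerated year:
print(accelerated_ooms_per_year / baseline_ooms_per_year)  # 10.0, i.e. a decade or more
```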

Comparison with the Atomic Bomb and the Hydrogen Bomb

The text draws an analogy with the development of nuclear weapons: just as the hydrogen bomb dramatically multiplied the destructive power of the atomic bomb, superintelligence will multiply the impact of AGI.

The Power of Superintelligence

Superintelligence is described as a force that could transform society across many domains, from scientific research to economics and warfare.

“By applying superintelligence to R&D in other fields, explosive progress would expand from just ML research; they would soon solve robotics, make dramatic leaps in other fields of science and technology within years, and an industrial explosion would follow.”

Possible Bottlenecks

The paper acknowledges potential limitations such as the availability of computing power, complementarity with humans, inherent limits to algorithmic progress, and the difficulty of finding new ideas. However, it argues that none of these obstacles are sufficient to completely stop the intelligence explosion.

“There are several plausible bottlenecks, including limited computation for experiments, complementarities with humans, and algorithmic progress becoming more difficult, which I will address, but none seem sufficient to definitively slow things down.”

Robotics

The paper argues that robotics will not be a major obstacle. It sees robotics primarily as a machine learning algorithms problem, which superintelligence could readily solve.

“It is becoming increasingly clear that robotics is an ML algorithms problem.”

A Moment of Transformation

The paper argues that the period of intelligence explosion and post-superintelligence will be extremely volatile, tense and dangerous.

“The intelligence explosion and the immediate aftermath of superintelligence will be one of the most volatile, tense, dangerous and savage periods in human history.”

Key Ideas and Important Facts

  • Exponential Acceleration: Algorithmic progress, once automated, is expected to accelerate dramatically from ~0.5 OOMs/year to 5+ OOMs/year.
  • Millions of Automated AI Researchers: The ability to run millions of copies of automated AI researchers is anticipated, possibly at speeds 100 times faster than humans (see the arithmetic sketch after this list).
  • Superhuman Advantages: Automated AI researchers would have superhuman advantages, such as the ability to read all ML papers, learn from every experiment, work tirelessly, and coordinate efficiently.
  • The Computing Bottleneck: Limited computing power for experiments is seen as the most significant obstacle, although it is argued that AI can use computing more efficiently.
  • Transformation in Multiple Domains: Superintelligence would have an impact on science, technology, economics, and warfare, among others.
  • Importance of Sequence: It is highlighted that the automation of AI research may arrive before other dangerous AI capabilities, such as those enabling the development of biological weapons.
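
The scale behind the second bullet can be checked with simple arithmetic, using the text's illustrative round figures (these are the essay's numbers, not independent estimates):

```python
copies = 1_000_000   # "millions" of automated-researcher instances running in parallel
speedup = 100        # each working ~100x faster than a human researcher

print(f"{copies * speedup:,} human-researcher-year equivalents per calendar year")
# -> 100,000,000: vastly more research effort than today's human ML community
```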

The text paints a picture of a rapid and imminent transformation through the intelligence explosion, presenting both the potential for enormous advances and significant risks. The analogy with the atomic bomb and the hydrogen bomb, together with the urgency of confronting this possible “chain reaction,” underscores the importance of preparing for a future dominated by superintelligence.

Part 3: Racing to the Trillion-Dollar Cluster