Part 1: From GPT-4 to AGI: Counting the OOMs

In this post I continue my commentary, turning to the section "From GPT-4 to AGI: Counting the OOMs" by Leopold Aschenbrenner, published on his website https://situational-awareness.ai/

The text predicts the arrival of Artificial General Intelligence (AGI) by 2027, based on the rapid progress of language models such as GPT. This progress is analyzed through three factors: the increase in computing power, algorithmic improvements, and the removal of artificial limitations on the models. It projects exponential growth in the capability of these AIs, going from the level of a high school student to that of an expert, eventually automating AI research itself. Finally, it discusses the uncertainty around data availability and the possibility that progress could stall.

Based on the analysis I have reviewed, the development of AGI by 2027 is not only plausible, it could arrive sooner than many expect. The case rests on three key areas: computational power, algorithmic efficiency, and the "unhobbling" that makes current AI models more useful. However, like any technological advance, AGI is not without risks. In this article, I will look at how these advances could transform our lives, but also at the dangers that could arise.

Extrapolating the future: OOMs and AGI

The concept of "OOMs" (orders of magnitude) is essential to understanding how quickly we are progressing towards AGI. An OOM is a tenfold increase in effective computing capacity, and roughly one is being added year after year. As we count these OOMs across areas such as algorithmic efficiency and processing power, it becomes clear that we are approaching significant inflection points. If this pace holds, we could see a qualitative leap similar to the one between GPT-2 and GPT-4. By 2027, it is not difficult to imagine AI models reaching the level of a human AI researcher or engineer.
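
To make the arithmetic concrete, here is a minimal sketch of what "counting the OOMs" means. The per-year growth rates are illustrative assumptions chosen for the example, not figures taken from the essay.

```python
# Minimal sketch of "counting the OOMs" of effective compute.
# The per-year growth rates below are illustrative assumptions, not the essay's figures.
import math

years = 4                                # e.g. from 2023 to 2027
compute_growth_per_year = 10 ** 0.5      # assume ~0.5 OOM/year from larger training runs
algo_efficiency_per_year = 10 ** 0.5     # assume ~0.5 OOM/year from algorithmic progress

total_factor = (compute_growth_per_year * algo_efficiency_per_year) ** years
total_ooms = math.log10(total_factor)

print(f"Effective compute multiplier over {years} years: {total_factor:,.0f}x")
print(f"That is about {total_ooms:.1f} orders of magnitude (OOMs)")
```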

But here a problem arises: advances in computing power and algorithmic efficiency are not guaranteed, and the path is not a straight one. Although experts estimate that AGI could become a reality within this decade, it is not clear whether technological progress will actually keep up the expected pace. If it slows down, the implications of AGI could be different from what is anticipated, with unexpected risks.

The qualitative leap: GPT-4 at the human level?

To understand how AGI might come to resemble human intelligence, it is useful to compare the AI models we already know. GPT-2, for example, resembled a preschooler, with limited capabilities in language understanding and generation. With GPT-3, things improved: models began to handle more complex tasks, such as correcting grammar and doing basic arithmetic, approaching the level of a primary school child. GPT-4 goes a step further: it can reason about complex topics, write sophisticated code, and pass university exams.

However, what we are looking at now is a scenario where this progression not only improves the ability to understand and generate text, but may also allow these models to become autonomous "working agents." This transition could have a drastic impact on the labor market, and not necessarily a positive one. If AGI becomes such an advanced tool, it could replace humans in many professional fields, raising concerns about mass automation and job losses.

The "unhobbling" of AGI: greater control?

Unhobbling is a crucial component in the development of AGI: tweaks that unlock the latent potential of AI models, making them more useful and versatile. Techniques such as reinforcement learning from human feedback (RLHF) or chain-of-thought (CoT) prompting help models tackle more complex tasks, such as making decisions autonomously and using tools in more elaborate contexts.
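
As an illustration of what an "unhobbling" tweak can look like in practice, here is a minimal sketch of chain-of-thought prompting. The `call_model` function is a hypothetical placeholder standing in for whatever LLM API is used, not any specific provider's interface.

```python
# Minimal sketch of chain-of-thought (CoT) prompting as an "unhobbling" technique.
# `call_model` is a hypothetical placeholder, not a real library call.

def call_model(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its completion."""
    raise NotImplementedError("Connect this to an LLM provider of your choice.")

def direct_answer(question: str) -> str:
    # Baseline: ask for the answer directly.
    return call_model(f"Question: {question}\nAnswer:")

def chain_of_thought_answer(question: str) -> str:
    # "Unhobbled" version: ask the model to reason step by step before answering,
    # which tends to help on multi-step problems.
    prompt = (
        f"Question: {question}\n"
        "Think through the problem step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )
    return call_model(prompt)
```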

But here, too, risks arise. AGI could become capable of making decisions on its own, without human intervention. This kind of autonomy raises an ethical and safety dilemma: if a machine can reason and make decisions in real time, who is responsible for its actions? How can we ensure that it does not make decisions that harm humanity, whether through error or poor design? The risks of these machines getting out of control or being manipulated become serious, and the implications of allowing an intelligent system to act unchecked could be catastrophic.

The “data wall” and its implications

One of the main obstacles on the road to AGI is what is known as the "data wall." Current AI models learn from information available on the internet, but that information is finite. Companies are working on ways to overcome this limitation, for example by generating synthetic data or by having models learn from their own outputs, but these methods are still in their early stages.
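
To illustrate one of the approaches mentioned above, here is a rough sketch of a synthetic-data loop in which a model's own outputs are kept only if they pass an automatic check. Both `generate_candidates` and `verify` are hypothetical placeholders for illustration, not any particular lab's pipeline.

```python
# Rough sketch of synthetic data generation with automatic filtering.
# Both helpers are hypothetical placeholders for illustration only.

def generate_candidates(model, prompt: str, n: int) -> list[str]:
    """Placeholder: sample n candidate answers from the model."""
    raise NotImplementedError

def verify(prompt: str, candidate: str) -> bool:
    """Placeholder: automatic check, e.g. run unit tests or verify a math answer."""
    raise NotImplementedError

def build_synthetic_dataset(model, prompts: list[str], n: int = 8) -> list[tuple[str, str]]:
    dataset = []
    for prompt in prompts:
        for candidate in generate_candidates(model, prompt, n):
            if verify(prompt, candidate):  # keep only outputs that pass the check
                dataset.append((prompt, candidate))
    return dataset
```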

The problem of finite data could affect not only the models' ability to learn, but also the biases they incorporate. If AIs learn only from a limited amount of data, or from low-quality data, their decisions could be biased or misinformed. This adds a significant risk: machines would not only replicate human biases but could even amplify them, affecting sectors such as justice, politics, and healthcare.

The risks are as great as the rewards

While AGI has the potential to profoundly transform technology, science, and society, the risks associated with its development should not be underestimated. As we approach 2027, the advancement of AI could improve our lives in impressive ways, but it also confronts us with an uncertain future where machines could act unpredictably and autonomously.

The development of AGI is not only a technical challenge, but also an ethical one. We need to question how we will manage this technology, what controls we will implement, and how we can ensure that we do not fall into the risks of uncontrolled automation. While advanced AI could be a powerful tool for humanity, if these advances are not managed correctly, the risks could outweigh the rewards.

Part 2: From AGI to Superintelligence: the Intelligence Explosion