Since the 1950s, the idea of Artificial General Intelligence (AGI) has captured the imagination of scientists and enthusiasts alike. Now, with the development of OpenAI's Q* project, we are at an inflection point. Are we living the prequel to a reality that had only been imagined in works such as '2001: A Space Odyssey'?

OpenAI's Q*: A prequel to '2001: A Space Odyssey'?
The depiction of HAL 9000 in Stanley Kubrick and Arthur C. Clarke's '2001: A Space Odyssey' gave us an early glimpse of what an AGI could be. Today, the progress of OpenAI's Q* project looks like a first step toward the realization of an artificial intelligence with capabilities comparable to those of humans. This project could represent a preliminary stage on the way to the kind of AGI that HAL 9000 personified.
Theoretical framework of AGI since the 1950s
Aspirations toward AGI began in the 1950s, with researchers such as Herbert A. Simon and Marvin Minsky believing in the possibility of machines capable of performing any human job within a few decades. Over the years, this theoretical framework has evolved from neural network models to formal systems for deliberative reasoning and mathematical definitions of adaptive general intelligence.
Coincidences between Q* and the theoretical framework of the AGI
The development of Q* at OpenAI reflects several aspects of these early visions. Although the information available on Q* is limited, its apparent ability to solve grade-school math problems with a novel approach suggests a move towards the adaptability and versatility characteristic of AGI. This focus on improving the reasoning skills of AI models aligns with AGI's original goals of replicating complex human cognition.
Q* is a milestone
The Q* project marks a milestone in the history of artificial intelligence. While we have not yet reached a full AGI, initiatives such as Q* indicate that we are moving toward realizing a vision that began more than half a century ago. We are on the cusp of a new chapter in AI, one that could be remembered as the beginning of the true age of AGI.
The global race for AGI and the ideologies that drive it
1. The ideological struggle in the race to AGI:
In the current context, as in the film '2001: A Space Odyssey', where an artificial intelligence sits at the center of a human and technological drama, OpenAI finds itself in a significant internal struggle. On the one hand, effective altruists (EAs) adhere to the organization's original nonprofit mission, while on the other hand, a group driven by the commercial potential of ChatGPT and other AI developments seeks to accelerate its growth and application. This divide reflects an ideological war that extends beyond OpenAI, resonating on social media, in internet forums, and in the minds of the founders, researchers, and politicians shaping the future of AI.
2. Safety and ethics in artificial intelligence:
AI safety, a concern that dates back to the days of Isaac Asimov, has advanced significantly in the last ten years. With growing fears of a rogue AI that could end civilization, figures such as Nick Bostrom and Eliezer Yudkowsky have advocated for strict regulations. In parallel, the growth of the ideology of Effective Altruism (EA) has put AI safety into focus, especially in relation to the destructive potential of AGI.
3. Effective accelerationism and the global competition for AGI:
Meanwhile, effective accelerationism (e/acc) is an emerging philosophy that challenges AI safety concerns, focusing instead on accelerating technological growth and societal change. Proponents of e/acc, such as Guillaume Verdon, argue that AI should be developed in an open and accessible way, without excessive safety restrictions, believing that this would lead to emerging forms of consciousness and wider varieties of sentience. This approach clashes directly with the more cautious, regulated view of AI proposed by EAs and other proponents of AI safety.
4. Reflections on the future of AI:
Looking at these factions and their quasi-religious beliefs about AI and AGI, the future appears uncertain. No one knows how the next decade will play out for AI. This leads us to a crucial question: is it better to remain flexible and avoid dogma in a field as rapidly changing and potentially transformative as AI? History shows that the balance between bold innovation and informed prudence has always been a delicate line, especially in a field as powerful and fast-moving as artificial intelligence.
Analogies with the film:
In the race towards AGI, we are seeing a reflection of the tensions and expectations that '2001: A Space Odyssey' raised decades ago. OpenAI's Q* could be just one step in this long odyssey, one that takes us through a maze of technological innovations, ethical dilemmas, and global competition. The race to AGI isn't just a technology race; it is a race of ideas, values, and visions for the future of humanity.
Recent Developments in the Search for AGI
One of the most intriguing developments in the pursuit of AGI is OpenAI's recent Q* project. This project, revealed following an internal controversy that led to the brief ouster of Sam Altman as CEO of OpenAI, has raised expectations of a significant move towards AGI. The concern expressed by OpenAI researchers about the potential threats of a "powerful artificial intelligence system" underscores the scale and ethical implications of this endeavor.
Defining AGI has been a challenge in itself. Several experts have offered definitions, generally seeing it as an artificial intelligence capable of performing a wide range of tasks, similar to the human brain. OpenAI describes AGI as "highly autonomous systems that outperform humans in most economically valuable jobs." A team at DeepMind, in an effort to pin down these concepts, has proposed a framework for classifying the capabilities and behaviors of AGI models. It aims to establish a common language for measuring progress, comparing approaches, and assessing risks.
In their classification of AGI levels, the DeepMind team identified several levels, from "Level 0, No AI" to "Level 5, Superhuman." Current general-purpose models such as ChatGPT, Bard, and Llama 2 have only reached "Level 1, Emerging." This indicates that, despite the advances, we are still far from reaching a full AGI. Narrow systems are rated separately: assistants like Siri, Alexa, and Google Assistant sit at "Level 2, Competent," while DeepMind's AlphaFold, at "Level 5, Superhuman," represents the most advanced point on this spectrum for its specific task.
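The taxonomy above can be sketched as a simple data structure. This is an illustrative sketch, not DeepMind's official benchmark: the level names follow their published framework, but the threshold comments are paraphrased and the mapping of example systems simply mirrors the summary in this article.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """DeepMind's proposed performance levels for AI systems."""
    NO_AI = 0        # e.g. a calculator: no learned intelligence
    EMERGING = 1     # comparable to an unskilled human
    COMPETENT = 2    # at least median skilled-adult performance
    EXPERT = 3       # top decile of skilled adults
    VIRTUOSO = 4     # top percentile of skilled adults
    SUPERHUMAN = 5   # outperforms all humans at the task

# Illustrative mapping taken from the article's summary; note the
# framework also distinguishes narrow systems (Siri, AlphaFold)
# from general-purpose ones (ChatGPT, Bard, Llama 2).
EXAMPLES = {
    "ChatGPT": AGILevel.EMERGING,
    "Bard": AGILevel.EMERGING,
    "Llama 2": AGILevel.EMERGING,
    "Siri": AGILevel.COMPETENT,
    "AlphaFold": AGILevel.SUPERHUMAN,
}

def at_least(level: AGILevel) -> list[str]:
    """Return the example systems rated at or above a given level."""
    return [name for name, lv in EXAMPLES.items() if lv >= level]

print(at_least(AGILevel.COMPETENT))  # ['Siri', 'AlphaFold']
```

Encoding the levels as an `IntEnum` makes the ordering explicit, which is exactly the point of the framework: a common scale for comparing progress across very different systems.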
These advances in the definition and classification of AGI are crucial to understanding its trajectory and realistic expectations. As we get closer to achieving an AGI, ethical challenges and security considerations become increasingly important. In this race towards AGI, just like in '2001: A Space Odyssey', we not only face a journey of technological discovery but also a journey of introspection and definition of our values and visions for the future.
References:
- "History and Evolution of AGI: Tracing its Development from Theoretical Concept to Current State – Just Think AI". Retrieved on [date].
- "Report: OpenAI board's ouster of Sam Altman followed potential AGI breakthrough – SiliconANGLE". Retrieved on [date].
- "Artificial general intelligence – Wikipedia". Retrieved on [date].
- "The ideologies fighting for the soul (and future) of AI" by Charlie Guo.
- Additional information on AI safety and the ideologies at play in the development of AGI.
