Is Artificial General Intelligence Closer Than We Think?

Q* may have exhibited a level of reasoning previously unseen in the AI sector.

Will the development of human-like intelligence pose a threat?

In the aftermath of the OpenAI board's decision to fire Sam Altman, speculation swirled as to what the cause—explained only as Altman being "not consistently candid in his communications"—could possibly be.

Tensions between the non-profit and for-profit aims of the organization were likely a part of the issue. The board's remit was to foster the development of safe artificial general intelligence (AGI), while the for-profit arm was naturally focused on maximizing value for shareholders. Developments such as the recent announcement of the GPT Store, while good for profits, may have led the board to fear that Altman was moving too fast and taking unnecessary risks.

But another explanation has since emerged: OpenAI may have made significant progress toward artificial general intelligence.

What Is AGI?

Artificial General Intelligence refers to AI systems that can understand, learn, and apply knowledge across a wide range of tasks, closely replicating human intelligence. By contrast, "narrow" artificial intelligence is designed for specific tasks, even if the results can be impressive (as we've seen with GPT-4).

Characteristics of AGI include:

  1. Generalization: The ability to apply knowledge and skills learned in one domain to perform tasks in unrelated areas.
  2. Learning Ability: AGI is not pre-programmed for specific tasks, but has the capacity to learn and improve its performance through experience and exposure to new information.
  3. Flexibility: AGI can adapt to changing environments and tasks, displaying problem-solving abilities in novel situations without explicit programming.
  4. Self-awareness: Some definitions of AGI include self-awareness or "consciousness", implying an understanding of its own existence, thoughts, and experiences.

Q*: OpenAI Pushes The Boundaries Further

Sources at OpenAI have mentioned the existence of Q* ("Q-star"), an internal project that appears to exhibit characteristics of AGI. The system requires large amounts of computing resources, but it can reportedly solve certain mathematical problems that demand a high level of reasoning: a key test for AGI that previous models have failed. While the problems solved so far are fairly simple, this milestone is said to have been a factor in the board's decision to fire Altman.

Members of the OpenAI forum debated the implications of the Q* program, expressing a mixture of intense interest and skepticism.

Screenshot from OpenAI forum

At present, we have very little to go on, other than the unofficial reports of Q* successfully solving grade school math problems. However, if true, this could be a significant development and a step on the path to true AGI.

Subscribe to our newsletter and follow us on Twitter.
