OpenAI's Recent Discovery Raises Concerns Before CEO's Return

OpenAI researchers warn of a new AI breakthrough's potential risks.

Is Terminator already just around the corner?

In a dramatic turn of events at OpenAI, researchers alerted the board of directors to a significant advance in AI that they warned could threaten humanity. The revelation came just before CEO Sam Altman's temporary removal, according to insiders.


The confidential letter and the AI breakthrough were pivotal in the decision to temporarily remove Altman, a key figure in generative AI. During this period, more than 700 employees threatened to resign in support of Altman, many indicating they would follow him to Microsoft. The board's concerns included the premature commercialization of AI advances before their consequences were fully understood.

OpenAI, upon inquiry, neither confirmed nor denied the specifics but acknowledged an internal project called Q* (Q-Star) and a letter sent to the board beforehand. Mira Murati, the company's chief technology officer, alerted staff to the media reports without commenting on their accuracy.

Q*: A Step Towards Artificial General Intelligence

Q*, a project at OpenAI, is believed to be a significant leap towards artificial general intelligence (AGI): systems that can match human-level intelligence across a wide range of activities. Q*'s current capabilities reportedly include solving basic mathematical problems, a milestone in AI development.

Progress in mathematics, a field where there is typically a single correct answer (unlike generative tasks such as copywriting, which admit many valid outputs), signals a shift towards AI with human-like reasoning abilities. Such capabilities could revolutionize scientific research and extend well beyond current generative AI's strengths in language and writing.

Safety Concerns And Ethical Implications

In their letter, the researchers highlighted both the promise and the risks of this new AI capability, though the specific safety concerns were not detailed. The fear that highly intelligent machines might act against human interests is a long-standing debate in computer science.

An internal team, the "AI scientist" group, has been exploring ways to enhance AI's reasoning abilities and its application in scientific work.


Sam Altman's Vision And OpenAI's Journey

Altman's leadership has been instrumental in the rapid growth of ChatGPT and in attracting significant investment and resources from Microsoft, propelling OpenAI towards AGI. At a recent global summit, he hinted at imminent major advances in AI, emphasizing their significance in OpenAI's journey.

Altman's return, following a brief absence, marks a crucial moment for OpenAI as it navigates the complex landscape of groundbreaking AI developments and their broader implications.
