Musk To Open-Source Grok Amid OpenAI Legal Battle

Musk's move to open-source Grok could ultimately mean that control over powerful AI models is lost.

What unintended consequences could open-sourcing a powerful AI application have?

Elon Musk is upset that OpenAI has gone against its original principles and created a for-profit arm. Not only has he started a lawsuit against the company, but now, he has committed to open-sourcing his own AI platform, Grok, as early as this week.


"Microsoft Subsidiary"

Musk recently filed the lawsuit against OpenAI, which is backed by Microsoft to the tune of billions of dollars, after the company abandoned its "open" roots by adding a profit-making division. This, he says, effectively makes OpenAI little more than a subsidiary of Microsoft.

"To this day, OpenAI’s website continues to profess that its charter is to ensure that AGI 'benefits all of humanity.' In reality, however, OpenAI has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft."

However, emails disclosed by OpenAI show that Musk, who co-founded the AI giant, was originally in favor of the plan.

Nonetheless, Musk has long maintained that AI should be open source, suggesting that AI technology should not solely be held by profit-making tech giants. He is now putting his money where his mouth is by opening Grok up and allowing anyone to use and build on it.

Grok was released last year with two main promises: to be more accurate than its competitors, which Musk claims have allowed political correctness to take precedence over truthful and useful answers, and to provide real-time information thanks to its integration with Twitter/X.

Controversial Approach

Completely open-sourcing a powerful AI platform is not without its risks. The technology can be used for a wide range of applications, good and evil. Already, AI is being used to manufacture and disseminate misinformation on an industrial scale, which is a particular problem ahead of key elections in the US, UK, and other countries this year.


The concern is that making the model widely available for use and modification would effectively mean it could no longer be controlled. It could be used by terrorists to create new chemical or biological weapons, or even developed into an artificial general intelligence. At present, the world lacks regulatory frameworks for AI, which has developed incredibly fast in recent months.

