China-West Cooperation Identifies AI "Red Lines"

Researchers agree that no AI system should copy or improve itself without explicit human approval.

Will researchers stick to the "red lines" they have agreed to?

AI researchers from Chinese and Western organizations have warned that the risks posed by artificial intelligence demand global cooperation, just as the risk of nuclear conflict did during the Cold War.

A gathering of international AI experts at the International Dialogue on AI Safety in Beijing recently established a series of "red lines" for AI development, particularly regarding the creation of bioweapons and the launching of cyber attacks. The presence of Chinese government officials indicates implicit official support for the forum and its outcomes.

AGI Self-Enhancement Ruled Out

Experts discussed threats related to the development of artificial general intelligence (AGI): AI systems that match or surpass human capabilities. Such systems would rival the best human minds and could quickly lead to the creation of artificial superintelligence.


Notable signatories to the agreement included Geoffrey Hinton and Yoshua Bengio, recipients of the Turing Award for their pioneering work on neural networks, often revered as "godfathers" of AI; Stuart Russell, a renowned computer science professor at the University of California, Berkeley; and Andrew Yao, a prominent figure in China's computer science landscape.

"A central aspect of the discussion centered on the red lines that powerful AI systems must not cross, and governments worldwide should enforce during AI development and deployment," noted Bengio.

As autonomous systems grow in number and capability, the statement asserts that no AI system should copy or enhance itself without explicit human consent and assistance, nor undertake actions that unreasonably amplify its power and influence.

Furthermore, the scientists agreed that no AI system should substantially enhance any actor's ability to design weapons of mass destruction, contravene the biological or chemical weapons conventions, or execute autonomous cyber attacks resulting in severe financial losses or equivalent harm.

Existential Risks

The researchers cautioned that collective action on AI safety is vital to prevent "catastrophic or even existential risks to humanity within our lifetimes."

In the midst of the Cold War, international scientific and governmental coordination played a crucial role in preventing thermonuclear catastrophe. Once again, humanity must coordinate to avert potential catastrophe stemming from unprecedented technology.

The Beijing assembly reflects the mounting pressure from the academic community for tech entities and governments to collaborate meaningfully on AI safety, particularly by bridging the divide between the world's two technology powerhouses, China and the US.

US President Joe Biden and Chinese President Xi Jinping discussed AI safety during their November meeting and committed to establishing a dialogue on the matter. Moreover, leading AI companies worldwide have engaged in discussions with Chinese AI experts in recent months.

At UK Prime Minister Rishi Sunak's AI safety summit in November, 28 nations, including China, along with prominent AI firms, made broad commitments to cooperate on addressing the existential risks posed by advanced AI.

