AI Tech Giants Urge UK To Expedite Safety Evaluations

The UK has ambitions to become a leader in the field of AI, but it needs to speed up its safety checks to make that happen.

What is AISI's role in AI safety testing?

In a bid for accelerated approval processes, leading artificial intelligence (AI) corporations are pressing the UK to hasten its safety assessments for AI technologies. This move has sparked a debate around the UK's ambition to spearhead the regulation of this rapidly evolving field.

Collaboration And Conflict

In November, industry behemoths such as OpenAI, Google DeepMind, Microsoft, and Meta pledged to subject their latest generative AI models to scrutiny by the newly established AI Safety Institute (AISI) of Britain. This commitment was aimed at refining these models should any technological shortcomings be identified. However, with the tech firms now seeking greater clarity on the evaluation criteria, duration, and response mechanisms of AISI's safety checks, tensions have surfaced. These companies assert that there is no legal mandate to alter or postpone product launches based on AISI's findings.

Ian Hogarth, AISI's chair, underscored in a recent LinkedIn post the consensus among companies on the necessity of governmental model testing prior to market release, highlighting ongoing collaborative assessments. The UK government, emphasizing its strategy for continuous model access for preemptive testing, has vowed to communicate outcomes to developers, expecting them to proactively mitigate identified risks before product introduction.

Ian Hogarth on LinkedIn: AI Safety Institute: third progress report
The AI Safety Institute has been in operation for almost eight months. We now have over 168 cumulative years of frontier AI experience and have now started…

The Push For Binding Regulations

This disagreement underscores the limitations of voluntary agreements in governing the rapid innovation in technology. Recent government statements have pointed towards the necessity for "future binding requirements" to hold AI developers accountable for system safety. This reflects a broader ambition by Prime Minister Rishi Sunak for the UK to play a pivotal role in addressing AI's potential existential threats, including cybersecurity vulnerabilities and the misuse in creating bioweapons.

Testing And Technological Safeguards

AISI has embarked on evaluating both existing and forthcoming AI models, like Google's Gemini Ultra, focusing on misuse risks such as cybersecurity. Utilizing expertise from the National Cyber Security Centre, efforts have concentrated on identifying vulnerabilities to "jailbreaking" and "spear-phishing" attacks, alongside developing automated reverse engineering tools to decipher models' functionalities. The government has allocated £1 million towards these testing capabilities, demonstrating a significant investment in AI safety.

Google DeepMind has acknowledged the importance of their partnership with AISI, highlighting efforts to refine AI model evaluations and establish industry best practices. This collaboration is seen as crucial for ensuring the long-term robustness and safety of AI technologies, marking a critical step towards managing the complex challenges posed by AI's advancement.


Subscribe to our newsletter and follow us on X/Twitter.

Great! You’ve successfully signed up.

Welcome back! You've successfully signed in.

You've successfully subscribed to REX Wire.

Success! Check your email for magic link to sign-in.

Success! Your billing info has been updated.

Your billing was not updated.