The United Kingdom will work with France, Germany, Canada and other middle powers to strengthen UK AI security and shape global testing standards, Technology Secretary Liz Kendall said.
Kendall said the UK’s AI Security Institute had drawn interest from several countries after evaluating Anthropic’s Claude Mythos model earlier this month.
She said the evaluation showed the model could exploit security weaknesses, highlighting the need for stronger testing standards and international cooperation.
“Many other countries are extremely interested in working with us to learn some of the lessons from what AISI is doing,” Kendall told reporters Tuesday.
The UK plans to publish best-practice guidance on AI model evaluations in July, when AI Security Institutes hold their next meeting.
The guidance will likely focus on how governments and researchers can test advanced AI systems for security risks before wider deployment.
The AI Security Institute operates under Kendall’s department and has agreements with OpenAI, Anthropic and other leading AI companies to evaluate their models.
The institute assesses potential risks, including cyber capabilities, misuse concerns and broader safety issues.

Kendall said cooperation with peer countries should not weaken the UK’s “deep, close and enduring relationship” with the United States.
She noted, however, that five US-based hyperscale companies control about 70% of global AI compute power.