US and UK sign deal to steer global AI safety standards


The US and UK have agreed to a shared approach to AI safety, partnering on research, safety evaluations, and guidance.

A new Memorandum of Understanding (MOU) will see the two countries work together to develop tests for advanced AI models, systems, and agents, following commitments made at the AI Safety Summit in November last year.

The UK and US AI Safety Institutes said they plan to carry out at least one joint testing exercise on a publicly accessible model. They also intend to build a shared pool of expertise by exploring personnel exchanges between the two institutes.

US secretary of commerce Gina Raimondo said the deal is a major step toward fostering closer cooperation between the UK and US on the use of AI technologies.

“AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” she said.

“Our partnership makes clear that we aren’t running away from these concerns – we’re running at them.

“Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance. By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future.”

The partnership takes effect immediately, and the two countries said they have also committed to developing similar agreements with other nations to promote AI safety around the world.

The UK's AI Safety Institute was launched in January and became the first state-backed organization focused on advanced AI safety in the public interest. However, the institute concentrates on longer-term existential threats from AI rather than its more immediate dangers.

As a result, the government has come under widespread criticism over fears that it’s ignoring the more immediate issues and that its proposed 'light touch' legislation gives too much power to the industry.

In the US, meanwhile, the government launched its own AI Safety Institute Consortium (AISIC) in February 2024. The consortium brings together more than 200 member companies and AI organizations, and aims to develop guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.

"We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives," said secretary of state for science, innovation and technology Michelle Donelan.

"The work of our two nations in driving forward AI safety will strengthen the foundations we laid at Bletchley Park in November, and I have no doubt that our shared expertise will continue to pave the way for countries tapping into AI’s enormous benefits safely and responsibly."

Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai, welcomed the announcement, but warned that the collaboration between the US and UK should aim to incorporate more global stakeholders to ensure international alignment on AI safety. 

“By pooling expertise and aligning testing approaches, the UK and US are rising to meet one of the defining technological challenges of our era. Such bilateral cooperation is essential in ensuring governance frameworks keep pace with AI capabilities,” she said.

“However, for efforts like these to be effective globally, they must include the full range of stakeholder voices at the cutting edge of AI development and deployment, particularly those in Europe.

“The EU AI Act has already established the bloc as a pace-setter in pragmatic AI governance that balances innovation and risk mitigation.”

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.