UK government signs up Anthropic to improve public services
The deal comes as part of a new AI plan that includes rebranding the AI Safety Institute as the UK AI Security Institute


The UK government has signed a memorandum of understanding with Anthropic to explore how the company's Claude AI assistant could be used to improve access to public services.
According to a statement from government officials, the aim is to advance best practices for the responsible deployment of frontier AI capabilities across the public sector while fostering close cooperation between government and leading AI innovators.
The agreement comes amid the recasting of the UK’s AI Safety Institute as the UK AI Security Institute. The government said it will also look to sign further agreements with leading AI companies.
The collaboration with Anthropic will look at how AI can transform public services and improve the lives of citizens, as well as how the technology can be used to drive new scientific research.
"AI has the potential to transform how governments serve their citizens. We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents," said Dario Amodei, CEO and co-founder of Anthropic.
"We will continue to work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure secure deployment."
Government eyes closer ties with Anthropic
The collaboration will also draw on Anthropic's recently released Economic Index, which uses anonymized conversations on Claude.ai to understand AI's effects on labor markets and the economy over time. This will provide insights to help the UK adapt its workforce and innovation strategies for an AI-enabled future.
There are also plans for collaboration on securing the supply chain for advanced AI and the UK’s future infrastructure, while Anthropic said it will provide tools to support the UK’s startup community, universities, and other organizations.
In a statement, Anthropic said it will work closely with the new UK AISI, adding that it is committed to developing 'robust safeguards' and driving responsible and secure deployment.
Similarly, the government revealed it will prioritize responsiveness to public needs, privacy preservation, and building trust as core principles guiding the development and implementation of its AI-enabled solutions.
The AI Security Institute, however, will now have less of an emphasis on privacy issues.
Instead, it will concentrate on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, along with how it can be used to carry out cyber attacks and enable crimes such as fraud and child sexual abuse.
"The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change," said secretary of state for science, innovation, and technology Peter Kyle.
"The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.