UK government signs up Anthropic to improve public services
The deal comes as part of a new AI plan that includes rebranding the AI Safety Institute as the UK AI Security Institute


The UK government has signed a memorandum of understanding with Anthropic to explore how the company's Claude AI assistant could be used to improve access to public services.
According to a statement from government officials, the aim is to advance best practices for the responsible deployment of frontier AI capabilities across the public sector while fostering close cooperation between government and leading AI innovators.
The agreement comes amid the recasting of the UK’s AI Safety Institute as the UK AI Security Institute. The government said it will also look to sign further agreements with leading AI companies.
The collaboration with Anthropic will look at how AI can transform public services and improve the lives of citizens, as well as how the technology can be used to drive new scientific research.
"AI has the potential to transform how governments serve their citizens. We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents," said Dario Amodei, CEO and co-founder of Anthropic.
"We will continue to work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure secure deployment."
Government eyes closer ties with Anthropic
The collaboration will also draw on Anthropic's recently released Economic Index, which uses anonymized conversations on Claude.ai to understand AI's effects on labor markets and the economy over time. This will provide insights to help the UK adapt its workforce and innovation strategies for an AI-enabled future.
There are also plans for collaboration on securing the supply chain for advanced AI and the UK’s future infrastructure, while Anthropic said it will provide tools to support the UK’s startup community, universities, and other organizations.
In a statement, Anthropic said it will work closely with the new UK AISI, adding that it is committed to developing 'robust safeguards' and driving responsible and secure deployment.
Similarly, the government revealed it will prioritize responsiveness to public needs, privacy preservation, and building trust as core principles guiding the development and implementation of its AI-enabled solutions.
The AI Security Institute, however, will now have less of an emphasis on privacy issues.
Instead, it will concentrate on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, along with how it can be used to carry out cyber attacks and enable crimes such as fraud and child sexual abuse.
"The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change," said secretary of state for science, innovation, and technology Peter Kyle.
"The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.