UK government signs up Anthropic to improve public services
The deal comes as part of a new AI plan that includes rebranding the AI Safety Institute as the UK AI Security Institute
The UK government has signed a memorandum of understanding with Anthropic to explore how the company's Claude AI assistant could be used to improve access to public services.
According to a statement from government officials, the aim is to advance best practices for the responsible deployment of frontier AI capabilities across the public sector while fostering close cooperation between government and leading AI innovators.
The agreement comes amid the recasting of the UK’s AI Safety Institute as the UK AI Security Institute. The government said it will also look to sign further agreements with leading AI companies.
The collaboration with Anthropic will examine how AI can transform public services and improve citizens' lives, as well as how the technology can be used to drive new scientific research.
"AI has the potential to transform how governments serve their citizens. We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents," said Dario Amodei, CEO and co-founder of Anthropic.
"We will continue to work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure secure deployment."
Government eyes closer ties with Anthropic
The collaboration will also draw on Anthropic's recently released Economic Index, which uses anonymized conversations on Claude.ai to understand AI's effects on labor markets and the economy over time. This will provide insights to help the UK adapt its workforce and innovation strategies for an AI-enabled future.
There are also plans for collaboration on securing the supply chain for advanced AI and the UK’s future infrastructure, while Anthropic said it will provide tools to support the UK’s startup community, universities, and other organizations.
In a statement, Anthropic said it will work closely with the new UK AISI, adding that it is committed to developing 'robust safeguards' and driving responsible and secure deployment.
Similarly, the government revealed it will prioritize responsiveness to public needs, privacy preservation, and building trust as core principles guiding the development and implementation of its AI-enabled solutions.
The AI Security Institute, however, will now place less emphasis on privacy issues.
Instead, it will concentrate on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, along with how it can be used to carry out cyber attacks and enable crimes such as fraud and child sexual abuse.
"The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change," said secretary of state for science, innovation, and technology Peter Kyle.
"The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies - are protected from those who would look to use AI against our institutions, democratic values, and way of life."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
