UK government signs up Anthropic to improve public services
The deal comes as part of a new AI plan that includes rebranding the AI Safety Institute as the UK AI Security Institute
The UK government has signed a memorandum of understanding with Anthropic to explore how the company's Claude AI assistant could be used to improve access to public services.
According to a statement from government officials, the aim is to advance best practices for the responsible deployment of frontier AI capabilities across the public sector while fostering close cooperation between government and leading AI innovators.
The agreement comes amid the recasting of the UK’s AI Safety Institute as the UK AI Security Institute. The government said it will also look to sign further agreements with leading AI companies.
The collaboration with Anthropic will look at how AI can transform public services and improve the lives of citizens, as well as how the technology can be used to drive new scientific research.
"AI has the potential to transform how governments serve their citizens. We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents," said Dario Amodei, CEO and co-founder of Anthropic.
"We will continue to work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure secure deployment."
Government eyes closer ties with Anthropic
The collaboration will also draw on Anthropic's recently released Economic Index, which uses anonymized conversations on Claude.ai to understand AI's effects on labor markets and the economy over time. This will provide insights to help the UK adapt its workforce and innovation strategies for an AI-enabled future.
There are also plans for collaboration on securing the supply chain for advanced AI and the UK’s future infrastructure, while Anthropic said it will provide tools to support the UK’s startup community, universities, and other organizations.
In a statement, Anthropic said it will work closely with the new UK AISI, adding that it is committed to developing 'robust safeguards' and driving responsible and secure deployment.
Similarly, the government revealed it will prioritize responsiveness to public needs, privacy preservation, and building trust as core principles guiding the development and implementation of its AI-enabled solutions.
The AI Security Institute, however, will now have less of an emphasis on privacy issues.
Instead, it will concentrate on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, along with how it can be used to carry out cyber attacks and enable crimes such as fraud and child sexual abuse.
"The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change," said secretary of state for science, innovation, and technology Peter Kyle.
"The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies - are protected from those who would look to use AI against our institutions, democratic values, and way of life."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
