UK and US reject Paris AI summit agreement as “Atlantic rift” on regulation grows
The UK believes the declaration is too vague, while the US wants to put American interests first
The UK and US have refused to sign an international agreement on inclusive and sustainable AI at this week's global summit in Paris.
The UK said its decision was based on concerns that the declaration lacked 'practical clarity' on global governance and didn't address issues of national security.
Meanwhile, US vice president JD Vance told delegates to the summit that over-regulation of AI could hold the industry back.
"The Trump administration believes that AI will have countless revolutionary applications in economic innovation, job creation, national security, health care, free expression and beyond, and to restrict its development now will not only unfairly benefit incumbents in this space, it would mean paralyzing one of the most promising technologies we have seen in generations," he said.
Vance's speech was very much in line with the Trump policy of 'America First'. He beat the drum for the US AI industry and issued a warning to nations aiming to rein it in.
"The Trump administration is troubled by reports that some foreign governments are considering tightening screws on US tech companies with international footprints," he said.
"America cannot and will not accept that, and we think it's a terrible mistake."
Vance's stance has been widely criticized by rights groups.
Alexandra Reeve Givens, CEO of the Center for Democracy and Technology (CDT), said that for AI to benefit both nations and individuals, robust safeguards and accountability must be implemented.
"When AI is being used to determine who gets a job, who gets a loan, who gets access to government services, the stakes are too high to shrug off the risks."
Meanwhile, Dr Andrew Bolster, senior research and development manager for data science at Black Duck, warned that the lack of transatlantic agreement might leave organizations deploying AI with a tricky course to steer.
"This growing Atlantic AI rift is a wake-up call for any organization looking to deploy or operate global AI solutions," he said.
"The regulatory landscape is not as settled as it may seem, and while alignment to existing principles such as GDPR, the California Consumer Privacy Act (CCPA) - and its amendment, the California Privacy Rights Act (CPRA) - or Australia’s Privacy Act may stand you in good stead, that is no guarantee of continued operations."
61 nations still signed the AI agreement
Despite the UK and US's refusal, the agreement was signed by 61 other countries, including Canada, Japan, India, and China, as well as European nations.
They pledged to focus on ensuring that AI is 'open, inclusive, transparent, ethical, safe, secure and trustworthy'.
The agreement also stresses the importance of strengthening international coordination in AI governance and preventing market monopolization, along with working to keep AI sustainable.
During the summit, European Commission president Ursula von der Leyen announced InvestAI, a €200 billion plan to advance AI research and infrastructure across the bloc. The initiative will finance four AI gigafactories across the EU, focused on training models for complex applications such as medicine and science.
"This unique public-private partnership, akin to a CERN for AI, will enable all our scientists and companies – not just the biggest – to develop the most advanced very large models needed to make Europe an AI continent," she said.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
