The EU just shelved its AI liability directive
Industry stakeholders claim the decision could “clean up the patchwork of AI regulation”
The European Commission has shelved plans to impose civil liability rules on enterprises using harmful AI systems in a move critics have described as a “strategic mistake”.
First proposed in 2022, the AI Liability Directive aimed to overhaul existing rules on harmful AI systems and protect consumers.
However, the publication of the Commission’s final work program shows plans to introduce the rules will now be scrapped, noting that “no foreseeable agreement” has been reached by lawmakers.
The documents add that the Commission will “assess whether another proposal should be tabled or another type of approach should be chosen”.
The move comes in the wake of the AI Action Summit, held in Paris, which saw industry stakeholders come together to discuss the future of AI innovation both across the European Union (EU) and globally.
During the summit, US vice president JD Vance voiced concerns over the EU’s supposedly heavy-handed regulatory approach to the technology. Vance urged European enterprises and lawmakers to view the “new frontier of AI with optimism and not trepidation”.
“We want to embark on the AI revolution before us with the spirit of openness and collaboration, but to create that kind of trust we need international regulatory regimes that foster creation,” he told attendees.
The liability directive was originally tabled alongside the EU AI Act, but has since taken a backseat amid the push to impose the landmark legislation.
Some EU lawmakers have voiced their disapproval. According to reports from Euronews, Axel Voss, the EU Parliament’s lead representative for developing liability rules, described the move as a “strategic mistake”.
Voss told the publication the decision will lead to “legal uncertainty, corporate power imbalances, and a Wild West approach to AI liability that only benefits big tech”.
"The reality now is that AI liability will be dictated by a fragmented patchwork of 27 different national legal systems, suffocating European AI startups and SMEs,” he added.
Liability directive move could “clean up” patchwork AI regulation
Peter van der Putten, director of Pegasystems’ AI Lab and assistant professor at Leiden University, said that while the move may raise consumer protection concerns, the “impact may be relative” given new regulations such as the EU AI Act.
“The idea was that if a customer, citizen or business was claiming to have suffered harm, they wouldn’t have to prove in-depth causality between the AI system and the damage caused,” he explained.
“This would be more on the public or private organization operating the AI system (and/or underlying vendors).”
Ultimately, consumers and entities will still be protected against AI-related harms through the legislation, he noted, and the decision to shelve the proposals will create a more aligned regulatory environment.
“So whilst it is tempting to frame this all as a move towards less consumer protection in the global AI rat race, it can also just be seen as a sensible move to clean up the patchwork of AI regulation a bit, and not incite all kinds of litigation that in the end could either be resolved by existing regulation, or would likely not have been successful for claimants anyway,” van der Putten said.
Betting on a blended approach

The decision to withdraw the AI Liability Directive is being read by critics as the EU retreating on consumer protection in the face of AI companies.
While the EU AI Act contains protections for citizens and measures to monitor and control the harms of AI model deployment, it does not provide a direct route for consumers making claims against AI developers for damages such as algorithmic bias.
The AI Liability Directive was specifically designed to set out such a route, establishing concrete law on civil liability relating to AI and assisting consumers in making claims.
Timing is everything when it comes to the optics of a decision like this. In dropping the AI Liability Directive just one day after JD Vance’s warning that “excessive regulation” could kill AI innovation, the Commission could invite unwanted suggestions that it’s moving in lock-step with US approaches on AI.
But there’s every indication that this is less a reactive decision and more a pragmatic move by the EU to maintain a handle on the AI sector. If it doesn’t keep its seat at the table by supporting EU-based AI developers and attracting investment from US tech giants, the EU Commission could lose any leverage it has over AI safety altogether.
The EU isn’t naïve and Vance’s statements on AI regulation wouldn’t have come as a surprise to anyone at the Paris Summit. As EU member states like France move to make the most of their established AI talent and the Commission gets more ambitious with its backing for AI infrastructure via its InvestAI initiative, there will be more pragmatic decisions to come.
Attracting US investment, alongside home-grown talent, will be a necessity for the time being.
For the EU Commission’s part, it has stated that it saw “no foreseeable agreement” on the terms of the directive, adding “the Commission will assess whether another proposal should be tabled or another type of approach should be chosen”.
As yet, an alternative approach has not been officially put forward.
Ultimately, the EU may have made the bet that any reduced consumer power in the short term can be balanced out by regional success at AI advancement. If it can carve out a space for innovative AI that doesn’t infringe on inviolable rights, it could beat the US at its own game.
But it still has significant ground to make up, especially in comparison to the US and China.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.