The EU just shelved its AI liability directive
Industry stakeholders claim the decision could “clean up the patchwork of AI regulation”


The European Commission has shelved plans to impose civil liability rules on enterprises using harmful AI systems in a move critics have described as a “strategic mistake”.
First proposed in 2022, the AI Liability Directive aimed to overhaul existing rules on harmful AI systems and protect consumers.
However, the Commission’s newly published final work program shows the plans will now be scrapped, noting that “no foreseeable agreement” has been reached by lawmakers.
The documents add that the Commission will “assess whether another proposal should be tabled or another type of approach should be chosen”.
The move comes in the wake of the AI Action Summit, held in Paris, which saw industry stakeholders come together to discuss the future of AI innovation both across the European Union (EU) and globally.
During the summit, US vice president JD Vance voiced concerns over the EU’s supposedly heavy-handed regulatory approach to the technology. Vance urged European enterprises and lawmakers to view the “new frontier of AI with optimism and not trepidation”.
“We want to embark on the AI revolution before us with the spirit of openness and collaboration, but to create that kind of trust we need international regulatory regimes that foster creation,” he told attendees.
The liability directive was originally tabled alongside the EU AI Act, but has since taken a backseat amid the push to impose the landmark legislation.
Some EU lawmakers have voiced their disapproval. According to reports from Euronews, Axel Voss, the EU Parliament’s lead representative for developing liability rules, described the move as a “strategic mistake”.
Voss told the publication the decision will lead to “legal uncertainty, corporate power imbalances, and a Wild West approach to AI liability that only benefits big tech”.
"The reality now is that AI liability will be dictated by a fragmented patchwork of 27 different national legal systems, suffocating European AI startups and SMEs,” he added.
Liability directive move could “clean up” patchwork AI regulation
Peter van der Putten, director of Pegasystems’ AI Lab and assistant professor at Leiden University, said that while the move may raise consumer protection concerns, the “impact may be relative” given new regulations such as the EU AI Act.
“The idea was that if a customer, citizen or business was claiming to have suffered harm, they wouldn’t have to prove in-depth causality between the AI system and the damage caused,” he explained.
“This would be more on the public or private organization operating the AI system (and/or underlying vendors).”
Ultimately, consumers and entities will still be protected against AI-related harms through the legislation, he noted, and the decision to shelve the proposals will create a more aligned regulatory environment.
“So whilst it is tempting to frame this all as a move towards less consumer protection in the global AI rat race, it can also just be seen as a sensible move to clean up the patchwork of AI regulation a bit, and not incite all kinds of litigation that in the end could either be resolved by existing regulation, or would likely not have been successful for claimants anyway,” van der Putten said.
Betting on a blended approach

The decision to withdraw the AI Liability Directive is being read by critics as the EU retreating on consumer protection in the face of pressure from AI companies.
While the EU AI Act contains protections for citizens and measures to monitor and control the harms of AI model deployment, it does not provide a direct route for consumers making claims against AI developers for damages such as algorithmic bias.
The AI Liability Directive was specifically designed to set out such a route, establishing concrete law on civil liability relating to AI and assisting consumers in making claims.
Timing is everything when it comes to the optics of a decision like this. In dropping the AI Liability Directive just one day after JD Vance’s warning that “excessive regulation” could kill AI innovation, the Commission could invite unwanted suggestions that it’s moving in lock-step with US approaches on AI.
But there’s every indication that this is less a reactive decision than a pragmatic move by the EU to maintain a handle on the AI sector. If it doesn’t keep its seat at the table by supporting EU-based AI developers and attracting investments from US tech giants, the EU Commission could lose any leverage it has over AI safety altogether.
The EU isn’t naïve and Vance’s statements on AI regulation wouldn’t have come as a surprise to anyone at the Paris Summit. As EU member states like France move to make the most of their established AI talent and the Commission gets more ambitious with its backing for AI infrastructure via its InvestAI initiative, there will be more pragmatic decisions to come.
Attracting US investment, alongside home-grown talent, will be a necessity for the time being.
For its part, the European Commission has stated that it saw “no foreseeable agreement” on the terms of the directive, adding that it “will assess whether another proposal should be tabled or another type of approach should be chosen”.
As yet, an alternative approach has not been officially put forward.
Ultimately, the EU may have made the bet that any reduced consumer power in the short term can be balanced out by regional success at AI advancement. If it can carve out a space for innovative AI that doesn’t infringe on inviolable rights, it could beat the US at its own game.
But it still has significant ground to make up, especially in comparison to the US and China.
