The EU just shelved its AI liability directive
Industry stakeholders claim the decision could “clean up the patchwork of AI regulation”
The European Commission has shelved plans to impose civil liability rules on enterprises using harmful AI systems in a move critics have described as a “strategic mistake”.
First proposed in 2022, the AI Liability Directive aimed to overhaul existing rules on harmful AI systems and protect consumers.
However, the publication of the Commission’s final work program shows plans to introduce the rules will now be scrapped, noting that “no foreseeable agreement” has been reached by lawmakers.
The documents add that the Commission will “assess whether another proposal should be tabled or another type of approach should be chosen”.
The move comes in the wake of the AI Action Summit, held in Paris, which saw industry stakeholders come together to discuss the future of AI innovation both across the European Union (EU) and globally.
During the summit, US vice president JD Vance voiced concerns over the EU’s supposedly heavy-handed regulatory approach to the technology. Vance urged European enterprises and lawmakers to view the “new frontier of AI with optimism and not trepidation”.
“We want to embark on the AI revolution before us with the spirit of openness and collaboration, but to create that kind of trust we need international regulatory regimes that foster creation," he told attendees.
The liability directive was originally tabled alongside the EU AI Act, but has since taken a backseat amid the push to pass the landmark legislation.
Some EU lawmakers have voiced their disapproval. According to reports from Euronews, Axel Voss, the EU Parliament’s lead representative for developing liability rules, described the move as a “strategic mistake”.
Voss told the publication the decision will lead to “legal uncertainty, corporate power imbalances, and a Wild West approach to AI liability that only benefits big tech”.
"The reality now is that AI liability will be dictated by a fragmented patchwork of 27 different national legal systems, suffocating European AI startups and SMEs,” he added.
Liability directive move could “clean up” patchwork AI regulation
Peter van der Putten, director of Pegasystems’ AI Lab and assistant professor at Leiden University, said that while the move may raise consumer protection concerns, the “impact may be relative” given new regulations such as the EU AI Act.
“The idea was that if a customer, citizen or business was claiming to have suffered harm, they wouldn’t have to prove in-depth causality between the AI system and the damage caused,” he explained.
“This would be more on the public or private organization operating the AI system (and/or underlying vendors).”
Ultimately, consumers and entities will still be protected against AI-related harms through the legislation, he noted, and the decision to shelve the proposals will create a more aligned regulatory environment.
“So whilst it is tempting to frame this all as a move towards less consumer protection in the global AI rat race, it can also just be seen as a sensible move to clean up the patchwork of AI regulation a bit, and not incite all kinds of litigation that in the end could either be resolved by existing regulation, or would likely not have been successful for claimants anyway,” van der Putten said.
Betting on a blended approach

The decision to withdraw the AI Liability Directive is being read by critics as the EU retreating on consumer protection in the face of AI companies.
While the EU AI Act contains protections for citizens and measures to monitor and control the harms of AI model deployment, it does not provide a direct route for consumers making claims against AI developers for damages such as algorithmic bias.
The AI Liability Directive was specifically designed to set out such a route, establishing concrete law on civil liability relating to AI and assisting consumers in making claims.
Timing is everything when it comes to the optics of a decision like this. In dropping the AI Liability Directive just one day after JD Vance’s warning that “excessive regulation” could kill AI innovation, the Commission could invite unwanted suggestions that it’s moving in lock-step with US approaches on AI.
But there’s every indication that this is less a reactive decision than a pragmatic move by the EU to keep a handle on the AI sector. If it doesn’t keep its seat at the table by supporting EU-based AI developers and attracting investment from US tech giants, the EU Commission could lose any leverage it has over AI safety altogether.
The EU isn’t naïve and Vance’s statements on AI regulation wouldn’t have come as a surprise to anyone at the Paris Summit. As EU member states like France move to make the most of their established AI talent and the Commission gets more ambitious with its backing for AI infrastructure via its InvestAI initiative, there will be more pragmatic decisions to come.
Attracting US investment, alongside home-grown talent, will be a necessity for the time being.
For the EU Commission’s part, it has stated that it saw “no foreseeable agreement” on the terms of the directive, adding “the Commission will assess whether another proposal should be tabled or another type of approach should be chosen”.
As yet, an alternative approach has not been officially put forward.
Ultimately, the EU may have made the bet that any reduced consumer power in the short term can be balanced out by regional success at AI advancement. If it can carve out a space for innovative AI that doesn’t infringe on inviolable rights, it could beat the US at its own game.
But it still has significant ground to make up, especially in comparison to the US and China.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.