The EU just shelved its AI liability directive
Industry stakeholders claim the decision could “clean up the patchwork of AI regulation”
The European Commission has shelved plans to impose civil liability rules on enterprises using harmful AI systems in a move critics have described as a “strategic mistake”.
First proposed in 2022, the AI Liability Directive aimed to overhaul existing rules on harmful AI systems and protect consumers.
However, the publication of the Commission’s final work program shows the plans will now be scrapped, with the document noting that “no foreseeable agreement” has been reached by lawmakers.
The documents add that the Commission will “assess whether another proposal should be tabled or another type of approach should be chosen”.
The move comes in the wake of the AI Action Summit, held in Paris, which saw industry stakeholders come together to discuss the future of AI innovation both across the European Union (EU) and globally.
During the summit, US vice president JD Vance voiced concerns over the EU’s supposedly heavy-handed regulatory approach to the technology. Vance urged European enterprises and lawmakers to view the “new frontier of AI with optimism and not trepidation”.
“We want to embark on the AI revolution before us with the spirit of openness and collaboration, but to create that kind of trust we need international regulatory regimes that foster creation,” he told attendees.
The liability directive was originally tabled alongside the EU AI Act, but has since taken a backseat amid the push to impose the landmark legislation.
Some EU lawmakers have voiced their disapproval. According to reports from Euronews, Axel Voss, the EU Parliament’s lead representative for developing liability rules, described the move as a “strategic mistake”.
Voss told the publication the decision will lead to “legal uncertainty, corporate power imbalances, and a Wild West approach to AI liability that only benefits big tech”.
"The reality now is that AI liability will be dictated by a fragmented patchwork of 27 different national legal systems, suffocating European AI startups and SMEs,” he added.
Liability directive move could “clean up” patchwork AI regulation
Peter van der Putten, director of Pegasystems’ AI Lab and assistant professor at Leiden University, said that while the move may raise consumer protection concerns, the “impact may be relative” given new regulations such as the EU AI Act.
“The idea was that if a customer, citizen or business was claiming to have suffered harm, they wouldn’t have to prove in-depth causality between the AI system and the damage caused,” he explained.
“This would be more on the public or private organization operating the AI system (and/or underlying vendors).”
Ultimately, consumers and entities will still be protected against AI-related harms through the legislation, he noted, and the decision to shelve the proposals will create a more aligned regulatory environment.
“So whilst it is tempting to frame this all as a move towards less consumer protection in the global AI rat race, it can also just be seen as a sensible move to clean up the patchwork of AI regulation a bit, and not incite all kinds of litigation that in the end could either be resolved by existing regulation, or would likely not have been successful for claimants anyway,” van der Putten said.
Betting on a blended approach

The decision to withdraw the AI Liability Directive is being read by critics as the EU retreating on consumer protection under pressure from AI companies.
While the EU AI Act contains protections for citizens and measures to monitor and control the harms of AI model deployment, it does not provide a direct route for consumers making claims against AI developers for damages such as algorithmic bias.
The AI Liability Directive was specifically designed to set out such a route, establishing concrete law on civil liability relating to AI and assisting consumers in making claims.
Timing is everything when it comes to the optics of a decision like this. In dropping the AI Liability Directive just one day after JD Vance’s warning that “excessive regulation” could kill AI innovation, the Commission could invite unwanted suggestions that it’s moving in lock-step with US approaches on AI.
But there’s every indication that, rather than being a reactive decision, this is a pragmatic move by the EU to maintain a handle on the AI sector. If it doesn’t keep its seat at the table by supporting EU-based AI developers and attracting investment from US tech giants, the EU Commission could lose any leverage it has over AI safety altogether.
The EU isn’t naïve, and Vance’s statements on AI regulation wouldn’t have come as a surprise to anyone at the Paris summit. As EU member states like France move to make the most of their established AI talent and the Commission gets more ambitious with its backing for AI infrastructure via its InvestAI initiative, there will be more pragmatic decisions to come.
Attracting US investment, alongside home-grown talent, will be a necessity for the time being.
For the EU Commission’s part, it has stated that it saw “no foreseeable agreement” on the terms of the directive, adding “the Commission will assess whether another proposal should be tabled or another type of approach should be chosen”.
As yet, an alternative approach has not been officially put forward.
Ultimately, the EU may have made the bet that any reduced consumer power in the short term can be balanced out by regional success at AI advancement. If it can carve out a space for innovative AI that doesn’t infringe on inviolable rights, it could beat the US at its own game.
But it still has significant ground to make up, especially in comparison to the US and China.
MORE FROM ITPRO
- A big enforcement deadline for the EU AI Act just passed
- Why regulatory uncertainty is holding back AI adoption
- AI is going to be a legal nightmare for years to come

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.