EU AI Act risks collapse if consensus not reached, experts warn

Key questions over the scope and application of the EU AI Act remain unanswered and could create uncertainty among businesses, according to legal experts.

David Dumont, partner at Hunton Andrews Kurth, said there are several elements around which no common ground has yet been established as EU lawmakers approach a critical juncture in negotiations.

In particular, rules pertaining to the development and use of foundation models and AI-related harms have not been agreed upon, which could create confusion for businesses and hamper innovation.

“AI presents challenging legal questions and there are still key elements on which the Council and the European Parliament have not found common ground,” he said, “such as the rules around foundation models and prohibited AI uses.”

Dumont warned that, given the critical stage of negotiations, failure to reach agreement on these aspects of the legislation would make the bill unlikely to pass before next year’s European elections, a significant setback in the union’s attempts to push through the sweeping regulations.

“December 6th is a key date for the EU AI Act as it is the last political trilogue negotiation currently scheduled,” he said. “If the EU legislative bodies do not succeed in reaching an agreement at this meeting, it will likely be difficult to pass the EU AI Act before next year’s European Parliament elections.”

“This would cause a significant delay and possibly result in the EU losing its first-mover advantage in regulating AI.”

EU AI Act could “stifle innovation”

Industry stakeholders have repeatedly hit out at EU lawmakers over the proposed legislation, with some arguing the bill could stifle innovation unless it delivers a balanced regulatory approach to artificial intelligence. 

Particular concerns have been raised that the proposals could impede smaller enterprises and academics from conducting vital research into the technology.

Naveen Rao, VP of generative AI at Databricks, warned that reactionary legal efforts to combat AI risks could end up “overregulating AI” and harming European efforts to drive AI innovation.

“Any new regulation must not be at the cost of stifling smaller startups and academic researchers from being able to do their work and research,” he said. “The more we understand these models, the more we can share ideas on how to safely shape a future with AI.”

His comments follow similar concerns raised in recent weeks about the potential impact on smaller firms. In November, tech policy group DigitalEurope said EU-based startups could be “regulated out of existence” by the legislation.

Rao further warned that the EU must avoid a “rigid, one-size-fits-all approach” to AI regulation and instead focus on a more flexible framework that enables innovation while balancing potential risks.

“I believe it will be critical for regulators to clearly mark the distinction between AI developers and AI deployers,” he said. “The developer creates the original AI, the deployer puts it into use.”

This distinction between AI development and deployment has become a political flashpoint in recent weeks amidst member state concerns over the scope of the EU AI Act and its potential impact on individual tech ecosystems.

In October, France, Germany, and Italy, three of the EU’s leading economies, published a joint paper pledging their support for “mandatory self-regulation” of foundation models through the creation of voluntary codes of conduct.

The joint paper specifically called on EU lawmakers to distinguish between regulating the use of AI tools in society and the regulation of AI technologies themselves.

Speaking to Reuters at the time, German digital affairs minister Volker Wissing said lawmakers need to “regulate the applications and not the technology” if European economies are to compete on a global scale.

EU AI Act a repeated source of controversy

The EU AI Act has been a source of controversy in the global technology industry in recent months over claims that the regulations could negatively impact European tech ecosystems. 

The act itself aims to curb the potential risks and harms associated with generative AI, such as misinformation, bias, and its use by cyber criminals to launch sophisticated attacks against enterprises.

Under the act, lawmakers will categorize models based on their risk factor. This ranges from ‘minimal’ through ‘high risk’ to ‘unacceptable’, and takes into account the potential for AI models to cause harm to both individuals and broader society.

Rao emphasized that “AI models on their own” are not necessarily ‘high risk’, but noted that developers do have a responsibility to curtail potential harms.

“AI models on their own are not ‘high risk’ and, in fact, hold the potential to bring much good,” he said. “However, there will always be those who use new technology irresponsibly, or actively pursue nefarious intent.”

“That is not to say that developers have no responsibility. They should carry out risk assessment and mitigation, for instance, as well as clear documentation around data sources, impact assessments around possible bias, and so on.”

EU AI Act may harm open source industry

Stringent rules proposed for open source developers under the EU AI Act have also been a key flashpoint throughout the legislative process in 2023.

Open source advocates have repeatedly warned that the regulations will harm innovation in the European open source ecosystem.

A key focus has centered around whether research and testing of AI models could be interpreted as “commercial activity” under the legislation, and therefore subject to stricter rules.

In July, a consortium of companies including GitHub, Hugging Face, and Creative Commons called for greater flexibility on the testing of open source AI models.

The group called for EU lawmakers to include clearer definitions of AI components, as well as more leeway for open source AI research.

Ross Kelly
News and Analysis Editor
