Tech industry stakeholders have warned the European Union (EU) risks regulating AI startups “out of existence” if the upcoming EU AI Act comes into force in its current form.
In a joint statement, tech policy group DigitalEurope said key concerns remain over the EU’s proposed regulation of AI, chiefly how the rules will affect the use of foundation models by organizations across the union.
Under the EU AI Act, lawmakers will categorize models based on their risk factor, ranging from ‘minimal’ through ‘high risk’ to ‘unacceptable’. This takes into account the potential for AI models to cause harm to individuals or broader society.
This means startups developing foundation models deemed high risk will be subject to additional requirements, such as regular reporting on data and algorithm quality, potentially resulting in higher costs and a slower pace of development.
Limiting the ability of organizations, particularly startups, to harness foundation models could stifle long-term innovation across the union and put European firms at a disadvantage against global competitors, the policy group argued.
“For Europe to become a global digital powerhouse, we need companies that can lead on AI innovation also using foundation models and GPAI,” the statement reads.
“As European digital industry representatives, we see a huge opportunity in foundation models, and new innovative players emerging in this space, many of them born here in Europe. Let’s not regulate them out of existence before they get a chance to scale, or force them to leave.”
The open letter notes that only 8% of European companies currently use AI in daily operations, and that just 3% of the world’s AI unicorns come from the union.
Limiting the use of foundation models through the AI Act could damage future opportunities for newcomers to the AI industry, DigitalEurope said.
“Europe’s competitiveness and financial stability highly depend on the ability of European companies and citizens to deploy AI in key areas like green tech, health, manufacturing or energy.”
Data compiled by the European Commission shows that an SMB launching a single AI-enabled product on the market could face compliance costs of around €300,000 under the AI Act.
Critics argue that the financial burdens placed on smaller enterprises could become untenable if the AI Act goes through in its current form.
Hefty criticism for the EU AI Act
This pushback against the EU’s landmark AI regulation is the latest in a long-running back and forth between regulators and industry stakeholders.
The act has been subject to intense criticism in recent months amidst claims that it could harm innovation across Europe during a critical period in the development of the global AI industry.
The open source community, in particular, has hit out at the regulation. In September 2022, the Brookings think tank published a report which heavily criticized the proposals, warning that the act would seriously undermine open source AI development and harm developers.
In July 2023, a consortium of major tech companies including GitHub and Hugging Face published a policy paper warning that the EU AI Act would harm both open source AI development and broader industry goals.
The policy paper called for more concise definitions of AI components and greater support for open source development of AI models.
Community concerns centered specifically around whether research and testing of AI models will be interpreted as “commercial activity” under the act, and therefore subject to more stringent regulation.
Grumblings among EU member states with a vested interest in supporting their own technology industries have also emerged in recent weeks.
On 20 November, three of the EU’s largest economies – France, Germany, and Italy – signed a joint agreement calling for the “mandatory self-regulation” of foundation models.
A joint paper from the trio called for the establishment of voluntary codes of conduct and a revised focus on distinguishing between regulating the use of AI tools in society and regulating AI technologies themselves.
Experts told ITPro the move could create friction between member states and EU lawmakers.
DigitalEurope welcomed the agreement as a positive step towards limiting requirements on foundation models to transparency standards.
“The AI Act does not have to regulate every new technology, and we strongly support the regulation’s original scope focusing on high-risk uses, not specific technologies.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.