France, Germany, and Italy align themselves on AI regulation, but the EU may not like it

A CGI render of the EU flag shown as 12 gold stars hovering and creating a ripple effect in a wave of blue data
(Image credit: Getty Images)

An agreement on AI regulation between France, Germany, and Italy could cause friction with EU lawmakers over the scope of legislation and throw the EU AI Act into disarray, experts have told ITPro.  

In a joint paper seen by Reuters, the three largest EU economies expressed their support for the “mandatory self-regulation” of foundation models through the establishment of voluntary codes of conduct.

The paper details obligations that would require companies to produce model cards for each foundation model, explaining the model’s function as well as its capabilities and limitations.

Initially, no sanctions would be imposed on offending parties, according to the paper, but a penalty system could be set up if violations persist after an unspecified time frame has elapsed.

The focus of the paper and its signatories appears to be on distinguishing between regulating the uses of AI tools in society and regulating the AI technologies themselves.

Germany’s digital affairs minister, Volker Wissing, told Reuters “[w]e need to regulate the applications and not the technology if we want to play in the top AI league worldwide”.

The concern around stifling AI development is central to the three nations’ opposition to stronger overarching regulations on foundation models moving forward. 

But this may come at the cost of creating divisions with EU lawmakers as the union continues with plans to introduce the EU AI Act.

A growing divide on AI regulation

Harry Borovick, general counsel at Luminance, told ITPro this agreement may put ongoing discussions in jeopardy at a time when decisive regulatory action is needed.

“As European policymakers continue to debate the final details of the EU AI Act, a new agreement by three of Europe’s largest economies to support a code of conduct for foundation models has the potential to throw regulatory discussions into disarray.”

“For now, the agreement comes as a pro-business move that has little teeth – the code of conduct is ‘voluntary’ and therefore leaves plenty of latitude for businesses to determine their approach to AI governance,” Borovick added.

“But it certainly has the potential to cause fractures within the EU bloc and delay discussion at a time when, more than ever, decisive regulatory action is needed as the rate of AI innovation continues to outpace regulation.”

Previous consensus on AI Act is gone

A tiered approach was previously agreed during a political trilogue on the EU’s AI Act, under which the most powerful models, those that could pose a risk to public health, safety, and human rights, would face stricter controls around their data governance and risk mitigation practices.

The paper proposes a different interpretation of what the AI Act is for, however. 

“We underline that the AI Act regulates the application of AI and not the technology as such,” the paper states. “The inherent risks lie in the application of AI systems rather than in the technology itself.”

This represents a retreat from the tiered position previously agreed upon, with France, Germany, and Italy trying to tone down language they fear may inhibit innovation. 

In an interview with Sifted, Cédric O, the former digital affairs minister of France and now an adviser to AI firm Mistral, warned the AI Act could kill the company.

O argues that regulations on the development of foundation models will place additional administrative burdens on European startups that don’t have the resources to remain compliant.

The outcome Paris, Berlin, and Rome are warning against is one in which startup growth is fundamentally impaired, potentially halting further AI development or even driving relocation outside the EU.

This move from dominant members of the EU tracks closely with a similar position taken up by the UK. 

The coveted position of world leader in artificial intelligence has led the UK to adopt a ‘pro-innovation’ approach that Prime Minister Rishi Sunak said will see the country “not rush” to regulate AI.

The US also recently announced its own AI regulations, which will require more transparency from companies developing models that could pose a threat to national security.

Borovick noted the timing of this announcement suggests France, Germany, and Italy are becoming increasingly concerned that their respective technology industries may be hamstrung by cumbersome regulations amidst a period of heightened global competition in the AI space. 

“Interestingly, the agreement comes hot off the heels of the AI Safety Summit hosted at Bletchley Park, suggesting that Europe’s economic powerhouses have been spurred into action to protect their status as hotbeds for AI-based startups and innovation,” he said. 

“The proposed approach by Germany, France and Italy aligns far more closely with that of the UK, with improved focus on regulating the applications of AI rather than the technology itself.”

Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing which led to him developing a particular interest in IT regulation, industrial infrastructure applications, and machine learning.