The second enforcement deadline for the EU AI Act is approaching – here’s what businesses need to know about the General-Purpose AI Code of Practice

General-purpose AI model providers will face heightened scrutiny


The second major enforcement deadline for the EU AI Act is approaching, meaning big tech firms will face a greater degree of scrutiny over AI model safety.

From August 2nd, new governance rules for general-purpose AI (GPAI) models take effect, accompanied by a voluntary Code of Practice that providers can sign to demonstrate compliance.

The deadline represents the second major enforcement date for the landmark legislation this year, following on from a February deadline which focused primarily on prohibited use cases.

Enza Iannopollo, VP principal analyst at Forrester, said that while the onus will be placed on providers, enterprise end-users will also likely feel the impact of the new rules.

“Whilst the first regulatory milestone on 2nd February focused on requirements, including those on prohibited use cases, this second deadline expands accountability and enforcement as it introduces critical provisions regarding general-purpose AI (GPAI) models,” she explained.

“Providers of generative AI models are directly responsible for meeting these new rules, however it’s worth noting that any company using genAI models and systems — those directly purchased from genAI providers or embedded in other technologies — will feel the impact of these requirements on their value chain and on their third-party risk management practices.” 

What the GPAI code of practice means for businesses

The GPAI code of practice will enforce more robust guardrails for training AI models, according to EU lawmakers, and is built on three key pillars: transparency, safety and security, and copyright.

The first of these is transparency, meaning AI model providers are required to document and disclose their training processes and share information on their models with regulators.
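
For illustration, that disclosure obligation lends itself to machine-readable records. Below is a minimal sketch of what such documentation might look like in Python; the field names and values are hypothetical, not an official schema from the code:

```python
import json

# Hypothetical model documentation record. The field names are
# illustrative only, not an official schema from the Code of Practice.
model_doc = {
    "model_name": "example-gpai-7b",
    "provider": "Example AI Ltd",
    "training_data_sources": ["licensed-corpus-v2", "public-domain-texts"],
    "training_compute_flops": 2.1e24,
    "known_limitations": ["may hallucinate facts", "English-centric"],
    "regulator_contact": "compliance@example.com",
}

# Persist the record so it can be shared with regulators on request.
with open("model_documentation.json", "w") as f:
    json.dump(model_doc, f, indent=2)
```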

Safety and security form the second pillar, addressing whether GPAI models pose risks to the public or to enterprises. Under the new rules, providers are required to assess and document potential harms and take appropriate action to mitigate them.

Dirk Schrader, resident CISO (EMEA) and VP of security research at Netwrix, said the security considerations in the act are welcome and help create a more aligned approach to AI-related security risks.

“One of the most significant anticipated successes of the Act is the standardization of AI security across the European Union, creating a harmonized, EU-wide security baseline,” he said.

“A key strength of the proposed regulations is their emphasis on a security-by-design ethos, mandating a lifecycle approach that integrates security considerations from the outset and throughout an AI system's operational life.”

Security considerations do raise questions over compliance, however. Simply put, there isn’t yet a solid baseline for enterprises to work from on AI-related security risks.

“The Act is the first major law to call out protections against data poisoning, model poisoning, adversarial examples, confidentiality attacks, and model flaws,” he said.

“The real compliance burden will be determined by technical specifications that don't yet exist, as these will define the practical meaning of 'appropriate level of cybersecurity' and may evolve rapidly as AI threats mature.”
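
While those specifications take shape, teams can still put basic controls in place. As one hedged example, a simple integrity check over training files offers a minimal line of defence against the data-poisoning risks the Act names; the manifest format here is a hypothetical sketch, not a prescribed control:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest: dict[str, str], data_dir: Path) -> list[str]:
    """Compare current file hashes against a trusted manifest recorded
    at ingestion time. Returns files that are missing or have changed,
    a basic signal of possible tampering with the training corpus."""
    suspect = []
    for name, expected_hash in manifest.items():
        path = data_dir / name
        if not path.exists() or sha256_of(path) != expected_hash:
            suspect.append(name)
    return suspect
```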

Rules pertaining to copyright make up the third pillar of the code, and this has been a major point of contention in recent months. Under the code, signatories must ensure training data is lawfully sourced.
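
In practice, lawful sourcing implies filtering what goes into a training corpus. A minimal sketch, assuming each document carries license and opt-out metadata (the allowlist and field names below are hypothetical):

```python
# Hypothetical license allowlist; obligations under the code are broader,
# but a corpus filter of this shape is one plausible building block.
ALLOWED_LICENSES = {"public-domain", "cc-by-4.0", "provider-licensed"}

def lawfully_sourced(docs: list[dict]) -> list[dict]:
    """Keep only documents with an allowed license and no rights-holder
    opt-out flag in their metadata."""
    return [
        doc for doc in docs
        if doc.get("license") in ALLOWED_LICENSES
        and not doc.get("opted_out", False)
    ]

corpus = [
    {"id": 1, "license": "cc-by-4.0", "opted_out": False},
    {"id": 2, "license": "unknown"},
    {"id": 3, "license": "public-domain", "opted_out": True},
]
print(lawfully_sourced(corpus))  # only document 1 survives the filter
```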

A host of major tech companies have agreed to the code of practice, most recently Google and OpenAI. Some, however, have taken a harder stance.

Earlier this month, Meta revealed it won’t sign the code of practice, citing what it described as “legal uncertainties”.

In a LinkedIn post clarifying the company’s stance on the code, Meta’s chief global affairs officer Joel Kaplan said the code will introduce measures which “go far beyond the scope of the AI Act”.

"Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it," he said.

The risks of non-compliance

Organizations that fail to comply with the EU AI Act face serious repercussions, and while the new code of practice is voluntary, Iannopollo said it’s crucial that enterprises operating in the region pay close attention to the enforcement deadline.

“Like it or not, the EU AI Act will contribute to shape AI risk management and AI governance practices of most global companies,” she said. “Its requirements may not be perfect, but they are the only binding set of rules on AI with global reach, and it represents the only realistic option of trustworthy AI and responsible innovation.

“It’s crucial that companies operating AI technology in the EU, or using AI-generated insights within the EU market, pay attention to this enforcement milestone.”

The EU AI Act contains “significant fines” for non-compliance, including up to 7% of a company’s global annual turnover. Iannopollo noted that while not all of the authorities responsible for enforcement are up and running yet, some, including the EU AI Office, already are.

“Companies, make no mistake: there will be action in the next few months,” she said.



Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.