The second enforcement deadline for the EU AI Act is approaching – here’s what businesses need to know about the General-Purpose AI Code of Practice
General-purpose AI model providers will face heightened scrutiny
 
 
The second major enforcement deadline for the EU AI Act is approaching, meaning big tech firms will face a greater degree of scrutiny over AI model safety.
From August 2nd, new governance rules for general-purpose AI (GPAI) models come into force, with a voluntary Code of Practice giving providers a route to demonstrating compliance.
The date marks the landmark legislation's second major enforcement milestone this year, following a February deadline that focused primarily on prohibited use cases.
Enza Iannopollo, VP principal analyst at Forrester, said that while the onus will be placed on providers, enterprise end-users will also likely feel the impact of the new rules.
“Whilst the first regulatory milestone on 2nd February focused on requirements, including those on prohibited use cases, this second deadline expands accountability and enforcement as it introduces critical provisions regarding general-purpose AI (GPAI) models,” she explained.
“Providers of generative AI models are directly responsible for meeting these new rules, however it’s worth noting that any company using genAI models and systems — those directly purchased from genAI providers or embedded in other technologies — will feel the impact of these requirements on their value chain and on their third-party risk management practices.”
What the GPAI code of practice means for businesses
The GPAI code of practice sets out more robust guardrails for developing and training AI models, according to EU lawmakers, and is built on three key pillars.
The first of these is transparency: AI model providers are required to document and disclose their training processes and share information on their models with regulators.
Safety and security form another key focus of the code, which examines whether GPAI models pose risks to the public or to enterprises. Under the new rules, providers must assess and document potential harms and take appropriate action to reduce any risks.
Dirk Schrader, resident CISO (EMEA) and VP of security research at Netwrix, said the Act's security considerations are welcome and help create a more aligned approach to AI-related security risks.
“One of the most significant anticipated successes of the Act is the standardization of AI security across the European Union, creating a harmonized, EU-wide security baseline,” he said.
“A key strength of the proposed regulations is their emphasis on a security-by-design ethos, mandating a lifecycle approach that integrates security considerations from the outset and throughout an AI system's operational life.”
Security considerations do raise questions over compliance, however. Simply put, there isn’t a solid baseline for enterprises to work from with regard to AI-related security risks at this stage.
“The Act is the first major law to call out protections against data poisoning, model poisoning, adversarial examples, confidentiality attacks, and model flaws,” Schrader said.
“The real compliance burden will be determined by technical specifications that don't yet exist, as these will define the practical meaning of 'appropriate level of cybersecurity' and may evolve rapidly as AI threats mature.”
Rules pertaining to copyright are also outlined in the code of practice, an area that has been a major point of contention in recent months. Under the code, for example, signatories must ensure training data is sourced lawfully.
A host of major tech companies have agreed to the code of practice, most recently Google and OpenAI. Some, however, have taken a harder stance.
Earlier this month, Meta revealed it won’t sign the code of practice, citing concerns over “legal uncertainties”.
In a LinkedIn post clarifying the company’s stance on the code, Meta’s chief global affairs officer Joel Kaplan said the code will introduce measures which “go far beyond the scope of the AI Act”.
"Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it," he said.
The risks of non-compliance
Organizations that fail to comply with the EU AI Act face serious repercussions, and while the new code of practice is voluntary, Iannopollo said it’s crucial that enterprises operating in the region pay close attention to the enforcement deadline.
“Like it or not, the EU AI Act will contribute to shape AI risk management and AI governance practices of most global companies,” she said. “Its requirements may not be perfect, but they are the only binding set of rules on AI with global reach, and it represents the only realistic option of trustworthy AI and responsible innovation.
“It’s crucial that companies operating AI technology in the EU, or using AI-generated insights within the EU market, pay attention to this enforcement milestone.”
The EU AI Act contains “significant fines” for non-compliance, including penalties of up to 7% of a company’s global turnover. Iannopollo noted that not all of the authorities responsible for enforcement are up and running yet, though some, including the EU AI Office, already are.
“Companies, make no mistake: there will be action in the next few months,” she said.
MORE FROM ITPRO
- The EU just shelved its AI liability directive
- How the EU AI Act compares to other international regulatory approaches
- Everything you need to know about the EU AI Act

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.