A big enforcement deadline for the EU AI Act just passed – here's what you need to know
Fines for noncompliance are still some way off, but firms should prepare now for significant changes


The first requirements of the EU AI Act have officially come into effect, and experts have warned that firms should accelerate preparations for the next batch of deadlines.
Passed in March last year, the EU’s landmark legislation saw its first elements come into effect on 2 February 2025, bringing with them a series of rules and regulations that AI developers and deployers must adhere to.
The EU AI Act takes a risk-based approach to assessing the potential impact of AI systems, classifying them as minimal, limited, high, or unacceptable risk. High-risk systems, for example, are those defined as posing a potential threat to life, human rights, or financial livelihood.
The riskiest of these systems are in the crosshairs following the introduction of the new rules this month.
Speaking to ITPro ahead of the deadline, Enza Iannopollo, principal analyst at Forrester, said lawmakers specifically chose to target the most dangerous AI use cases with the first round of rules.
“Requirements enforced on this deadline focus on AI use-cases the EU considers pose the greatest risk to core Union values and fundamental rights, due to their potential negative impacts,” Iannopollo said.
“These rules are those related to prohibited AI use-cases, along with requirements related to AI literacy. Organizations that violate these rules could face severe fines — up to 7% of their global turnover — so it’s crucial that requirements are met effectively,” she added.
Iannopollo noted that fines will not be issued immediately, however, as details of sanctions are still a work in progress and the authorities in charge of enforcement are not yet in place.
While there may not be any big fines in the headlines in the next few months, Iannopollo said this is still an important milestone.
Tim Roberts, UK country co-leader at AlixPartners, said the first set of compliance obligations will act similarly to GDPR, mainly in that they will apply to any organization doing business with AI models in Europe.
With this in mind, it’s critical that companies are aware of this first batch of rules, even if they are not EU-based.
“Naturally, this also reignites the debate about striking the right balance between innovation and regulation. But instead of seeing them as opposing forces, it’s more useful to think of them as two things we need to get right in parallel … because regulation can be a facilitator of innovation - not a blocker,” Roberts said.
“The speed at which AI is advancing has caused discomfort for some consumers, but strong safeguards can build trust and create a thriving (and fairer) environment for greater business innovation.
“The EU AI Act is an important first step in this journey, and its success will depend on how well it is applied and how well it evolves, with the end goal being smarter regulation that drives businesses to continue pushing boundaries for the benefit of all.”
EU AI Act: Firms should tighten up risk assessments
Due to the global reach of the Act and the fact that requirements span the entire AI value chain, Iannopollo said enterprises must ensure they adhere to the regulation.
“The EU AI Act will have a significant impact on AI governance globally. With these regulations, the EU has established the ‘de facto’ standard for trustworthy AI and AI risk management,” she added.
To prepare for the rules, enterprises are advised to begin refining risk assessment practices to ensure they’ve classified AI use cases in line with the designated risk categories contained in the Act.
Systems that would fall within the ‘prohibited’ category need to be switched off immediately.
“Finally, they need to be prepared for the next key deadline on 2 August,” Iannopollo said. “By this date, the enforcement machine and sanctions will be in better shape, and authorities will be much more likely to sanction firms that are not compliant. In other words, this is when we will see a lot more action.”

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.