How will the EU AI Act affect businesses?

Chamber of the European Parliament in Strasbourg, which has just voted to pass the landmark EU AI Act.
(Image credit: Getty Images)

The EU AI Act has officially passed a vote in the European Parliament in a move that will radically affect how AI is used not just in Europe, but across the globe.

From developer to deployer, the act will establish the “first-ever legal framework on AI,” granting EU regulators and watchdogs an enhanced level of oversight and scope for governance. 

While other countries and regions have been flirting with regulatory possibilities, the EU has taken the plunge - and in doing so, has set a major precedent for organizations globally, according to Enza Iannopollo, principal analyst at Forrester.

“The adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated,” Iannopollo said. 

The act works on what regulators call a “risk-based approach”, classifying AI systems as posing minimal, limited, high, or unacceptable risk. To be deemed high-risk, an AI system must pose a threat to safety, financial livelihoods, or fundamental rights.

Systems judged to pose “unacceptable risks” go a step further and will be prohibited outright.

This puts enterprise-level AI users and developers firmly in the crosshairs, as the EU clearly identifies employee recruitment, credit scoring, financial services, and critical infrastructure as potentially high-risk applications.

High-risk platforms will face the greatest level of scrutiny stipulated in the act’s terms before they can reach the open market. 

They will have to undergo adequate risk and conformity assessments, ensure the utmost quality of training datasets to minimize risks and discrimination, and log activity clearly and consistently to ensure traceability.

The EU itself is also entitled to detailed documentation in order to complete compliance assessments, while the companies that deploy AI will be able to access “clear and adequate” information about the systems they use. 

“The goal is to enable institutions to exploit AI fully, in a safer, more trustworthy, and inclusive manner,” Iannopollo said. 

EU AI Act: How will businesses be affected?

Once businesses have gathered themselves after the shock of the early vote - originally slated for a month later - they will need to ensure compliance with the new rules.

Generally speaking, the act will affect AI developers at the source by ensuring high-risk systems are adhering to a strict level of transparency and guidelines.

Big-name AI companies like OpenAI will obviously be liable, and by extension all enterprise-level customers rolling out tools built on other companies' models will also face increased scrutiny.

Though it's unlikely that many organizations will be using AI systems that fall foul of the prohibitions, Gartner analyst Nader Henein said the prospect of a €35 million penalty is sure to put a lot of companies on edge.

“Someone is going to have to stand in front of the board and say we are not in the line of fire because the size of the liability is just unacceptable,” Henein told ITPro.


What will be most important for organizations to understand, Henein added, will be the significance of their AI usage decisions. Whether or not a company is making AI, it will still retain a level of responsibility for usage.

“Where organizations have a bit of a misunderstanding is that you're not just responsible for the AI you build, you're also responsible for the capabilities you buy,” Henein said.

“The deployer has a lot of responsibility; you’re ultimately bringing this into your organization and letting it loose on your data,” he added.

What businesses can do to prepare

Henein highlighted several issues that businesses would be wise to keep in mind in the wake of the act.

The first is speed. Unlike GDPR, which gave organizations around two years to reach compliance, the EU AI Act will enforce its rules on prohibited AI systems after just six months.

Organizations will need to get a grip on the regulations “fairly fast,” Henein said, and be mindful of their timelines. 

Businesses also need to assess two key aspects of their AI usage: a process of discovery for existing systems, and a process of planning with regard to future adoption.

Henein mentioned the fact that myriad vendors have been building AI capabilities into their SaaS platforms and on-prem platforms for quite some time now without organizations “asking questions.”

Now, these businesses will need to look at what their existing AI usage through these platforms comprises, even if this is the first time they’re finding out about it.

At the same time, businesses will need to be careful about future AI adoption in ensuring that it adheres to the new guidelines. 

“You need to put in the controls … to ensure that you understand when you're adopting capabilities that are AI powered, and assess them accordingly,” Henein said. 

How will the EU AI Act affect UK firms?

Owing to its scale, the remit of the EU AI Act won’t be limited to the bounds of the EU’s physical territory, and it will inevitably force businesses outside of Europe into a level of compliance, Iannopollo said.

“The extra territorial effect of the rules, the hefty fines, and the pervasiveness of the requirements across the ‘AI value chain’ mean that most global organizations using AI must – and will – comply with the Act,” she said.

The UK won't be spared this reality. Any company in the UK seeking to do business internationally, Iannopollo said, will have to comply with the EU act just the same as their counterparts in the US or Asia. 

Companies in the UK might have felt the impact less abruptly had the Conservative government managed to make any progress on AI legislation.

There is a UK AI bill currently set to undergo its second reading in March, though the industry has been vocal about the need for a speedier process.

Even Microsoft, a major OpenAI investor, has called for a greater level of regulation in the UK, expressing the need for clearer safety frameworks.

“Despite the aspiration of becoming the ‘center of AI regulation’, the UK has produced little so far when it comes to mitigating AI risks effectively,” Iannopollo said.

“Companies in the UK will have to face two very different regulatory environments to start with.”

“Over time, at least some of the work UK firms undertake to be compliant with the EU AI Act will become part of their overall AI governance strategy, regardless of UK specific requirements – or lack thereof.”

The US has been similarly slow off the mark compared to the EU, with only a bill of rights aimed at guiding AI policy rather than providing enforceable legal guardrails.

“Like it or not, with this regulation, the EU establishes the ‘de facto’ standard for trustworthy AI, AI risk mitigation, and responsible AI. Every other region can only play catch-up,” Iannopollo added.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.