How the EU AI Act compares to other international regulatory approaches


With the recent passage of the EU AI Act, businesses around the world will begin preparing for enforcement of the new rules in the coming months.

The act introduces regulations designed to prohibit the most dangerous AI systems outright, as well as demand compliance from companies producing or using AI models that carry high levels of risk.

Analysts and industry stakeholders alike have hailed the legislation as a landmark moment in the regulation of AI technologies, but the move from EU lawmakers raises questions about what efforts are being made elsewhere in the world.

According to Enza Iannopollo, principal analyst at Forrester, other regions will only be able to “catch up” in the wake of such major legislation. To what extent other regions will be able to catch up, however, remains to be seen.

The international regulatory approach to AI has certainly been mixed so far, but that’s not to say that countries outside of Europe aren’t steadily making progress on their own forms of legislation.

How has the US approached AI regulation?

In the grand scheme of things, the US made some fairly early headway on AI regulation with the publication of its ‘Blueprint for an AI Bill of Rights’ in October 2022. The document was non-binding, however, intended only to guide AI policy.

Through the blueprint, the White House Office of Science and Technology Policy identified five principles that it felt should guide the design, use, and deployment of AI systems and technologies.

In May 2023, the White House expanded on these plans with the release of an updated national AI research and development strategic plan to further guide investment in the technology.

It is important to consider the internal, state-driven side of US AI regulation, however. Individual states have a degree of autonomy within the federal system, which has led some to implement their own rules.

“There is a significant amount of work at US federal and state level regarding AI regulation and trustworthy AI,” Tom Whittaker, senior associate at law firm Burges Salmon, told ITPro.

“A number of states have already introduced sector-specific regulation, or have developed proposals,” he added.

Several states have developed and rolled out various pieces of AI legislation within their own jurisdictions over the past few years, including in California, Colorado, New York, and Texas.

In California, Connecticut, and Louisiana, state agencies have been directed to analyze AI systems and issue reports to governors about problematic AI effects, while Illinois and Maryland have adopted legislation ensuring individuals know when and how an AI system is being used.


Perhaps most notably, President Biden delivered an ‘Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’ directing various bodies, such as the National Institute of Standards and Technology (NIST) and the National Security Council (NSC), to develop standards for AI.

Though experts at the time expressed uncertainty about how effective the executive order would be, TrustPath CEO Ivan Ivankovic reiterated the need to avoid drawing too many comparisons with EU regulations.

Legislative approaches to AI regulation will, he noted, be tailored to the individual needs of each respective nation or region. There are also considerations at play regarding competition with other regional players, and the US approach will take these issues into account.

“It’s not a question of who is ahead or doing a better job, particularly as this is such a new area which is developing all the time … the EU and the US are taking different approaches,” Ivankovic said.

“What is important is that both the US and the EU are introducing AI regulations to ensure we get safer AI, with an emphasis on greater transparency in AI systems to foster trust,” he added.

Most recently, according to documents seen by Bloomberg, the US proposed a resolution at the United Nations (UN) promoting “safe, secure and trustworthy” AI systems and encouraging General Assembly members to support “responsible and inclusive” AI.

How has the UK approached AI regulation so far?

The UK has been relatively slow on regulation, with largely only consultation guidance to go on and a government white paper expressing the UK’s desire to balance risk aversion with support for innovation.

There is a proposed AI bill in the works, though it has so far only made it as far as a second reading in the House of Lords, slated for late March.

The industry has been clear in its desire for a quicker regulatory process in the UK, with companies like Microsoft openly calling for a greater level of regulation in the country.

Industry analysts have pointed toward a degree of fence-sitting in the UK’s approach to AI regulation. The Conservative government has been keen to emphasize its willingness to sit somewhere between the EU and US approaches, but this has raised concerns.

Some industry stakeholders went so far as to suggest the UK was falling behind on the topic of regulation in mid-2023.

The UK will likely need to move more quickly if it’s going to establish any sort of AI framework within its own territory, especially with the EU’s AI Act now well on its way to being enforced.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.