Is the UK falling behind the EU on AI regulation?


The EU has pushed a key bill seeking to regulate the use of artificial intelligence (AI) into the latter stages of the legislative process, in another move that outpaces the UK’s efforts on AI.

On 14 June, MEPs voted by an overwhelming majority to move the AI Act closer to being signed into law across the bloc.

The Artificial Intelligence Act (‘AI Act’) is a wide-ranging series of controls for the implementation of AI technology, centered on a risk-based approach.

UK government officials have become more vocal on the issue of AI in recent weeks, but have not pushed forward any legislation to curb the use of the technology.


Under the EU’s measures, companies seeking to use AI would be required to provide transparency around data used to train models and to clearly label AI-generated content.

The draft bill sets out a framework for risk assessment of systems, with high-risk systems defined as those that pose a “high risk to the health and safety or fundamental rights of natural persons”.

Companies that develop these systems will be required to register them in an EU database for public oversight.

Those that do not comply with the requirements of the act could be hit with fines of up to 4% of their annual worldwide turnover or €20 million ($21.6 million), whichever is higher.

The Financial Times reported that a senior British official described the EU’s regulation as “draconian”, while OpenAI CEO Sam Altman warned his company could leave the EU over the law before walking back the threat.

In contrast to the EU’s strict risk-based approach, the UK whitepaper is based on a framework of five ‘principles’ for AI:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

It lays out no corresponding measures that could be taken against non-compliant companies, nor any plans for statutory controls over AI.

The UK whitepaper specifically states that the government will not initially seek to regulate AI on a statutory basis, owing to concerns that doing so could hinder innovation in the field.

Some in the industry have hailed this as a positive move for the UK AI landscape, as firms may be more willing to invest in the region if they believe it provides a better environment for profitable development.

“The key is to strike the balance between ensuring fairness and making AI possible to use and experiment with – which this pro-innovation approach supports,” said Iván de Prado Alonso, head of AI at Freepik Company.

“I expect a warm welcoming from technology vendors across the UK as the government encourages creating an AI-enabled nation.”

AI regulation: Clashing approaches to risk

As with the GDPR, firms seeking to operate within the EU will still need to abide by the AI Act, which could limit the freedom that the UK government is at pains to provide.

Industry consultation indicated that small businesses could face significant time and financial burdens from compliance requirements if regulators are “not proportionate and aligned in their regulation of AI”.

In place of initial legislation, existing regulators such as Ofcom and the Digital Markets Unit will be asked to implement controls in a manner deemed proportionate.

Setting out the exact powers each regulator will have is likely to push any such legislation well into the next parliament, and the two years it took for the Competition and Markets Authority’s (CMA) Digital Markets Unit to get off the ground cast a shadow over these proposals.

The decentralized approach, based on an evolving framework that favors innovation over caution, could also allow the UK to adapt more easily to the changing state of the AI market.

First proposed in April 2021, the AI Act has been rewritten repeatedly to keep pace with developments in areas such as generative AI.

AI regulation: Banned technologies

The UK whitepaper acknowledged specific risks posed by generative AI and deepfakes, such as the potential for a criminal to generate false compromising images of an individual to damage their reputation, or for large language models (LLMs) like ChatGPT to generate disinformation.

It did not go as far as providing risk guidelines with pre-defined uses or technologies in mind, an approach that could prove vital for its continued relevance as new technologies emerge.

The draft document for the AI Act, in comparison, includes a list of technologies deemed to carry an unacceptable risk, such as systems designed to subliminally influence individuals and the use of AI for real-time biometric identification.

In this regard, it presents a more complete overview of AI regulation than the UK whitepaper, one that businesses can use to inform their decisions going forward.

The UK’s approach could give AI developers more freedom to try systems out, but it has also left firms guessing as to which regulations they may have to follow down the line.

While outright bans on some uses of AI may be seen as restrictive, this is an example of the EU having delivered a clear answer to the ethical and regulatory questions that are being asked today while the UK is still in the consultation stage.

The conservative EPP Group sought to quash the biometrics ban through a last-minute amendment to the bill, but failed to muster the votes necessary to pass it.


It remains to be seen whether the ban will stick, and the bill does reference “certain limited exceptions” for the technology, which may leave the door open for use by EU member states.

Police use of real-time biometrics technology such as live facial recognition (LFR) has long been a point of controversy. 

In 2022, a University of Cambridge study called for the UK to ban the technology in public spaces, citing instances of the Metropolitan Police Service (MPS) using LFR in which “minimum ethical and legal standards” were not met.

Human rights groups have strongly objected to the use of LFR in recent years, arguing that the technology entrenches racial biases and violates individual privacy to an unacceptable degree.

“With such a persistently inhospitable environment towards people fleeing wars and conflict or in search of a better life, it is vital that the European Parliament doesn’t dismiss the harms of racist AI systems,” said Mher Hakobyan, advocacy advisor on AI regulation at Amnesty International.

“Lawmakers must ban racist profiling and risk assessment systems, which label migrants and asylum seekers as ‘threats’; and forecasting technologies to predict border movements and deny people the right to asylum.”

A cross-party group of UK lawmakers called for CCTV devices from two Chinese companies to be banned in July 2022, due to alleged links with Uyghur internment camps and reported use of facial-recognition technology to detect Uyghur individuals.

During a trip to the US in June, UK Prime Minister Rishi Sunak attempted to distance the UK’s approach to AI from that of its closest neighbors.

Sunak has voiced ambitions for the UK to become a global leader in AI, and the newly announced Atlantic Declaration will see the UK and US work closely on AI safety.

The UK is also set to host the first international summit on AI safety later this year.

AI regulation: Common ground on sandboxes

Both the EU and UK approaches favor the adoption of trial environments, or ‘sandboxes’, for AI testing and deployment.

Following a recommendation by Sir Patrick Vallance, the UK whitepaper committed to providing businesses with a regulatory sandbox for AI, a measure the EU has also promoted.

It is hoped that this will allow AI products and services to reach the market faster, while highlighting hurdles to innovation and emerging tools that deserve greater regulatory attention.

EU digital technology industry organization Digital Europe published a report based on results from nine European startups and SMEs. 

A key finding was that respondents supported sandboxes and would favor a continuous sandboxing process in which to test their products.

Rory Bathgate
Features and Multimedia Editor
