UK AI regulation: Lawmakers reportedly eye a tighter approach


The UK is beginning to develop tighter legislation on generative AI in a departure from its previous light-touch approach, according to two people briefed on the plans. 

Per the Financial Times, individuals briefed on the matter said new UK AI legislation could mandate that companies “developing the most sophisticated models” share information with the government.

One of the two sources said this legislation may require companies developing models to share their algorithms with the government and provide evidence of their safety testing practices. 

They added that the Department for Science, Innovation and Technology (DSIT) is “developing its thinking” on legislation related to AI and generative AI, while another said that impending rules would apply to the large language models (LLMs) behind the applications, rather than the applications themselves.

Reports of the change in approach contrast sharply with the UK government’s stance on AI regulation to date, which has set out no hard-and-fast requirements for AI companies to share model details.

Most notably, prime minister Rishi Sunak confidently backed a pro-innovation approach to AI in October 2023, arguing against any “rush to regulate.”

Many in the industry, including tech powerhouse Microsoft, have since called for more clarity and scope in the UK’s regulatory framework on AI. These tentative developments, though unconfirmed, could be a move towards easing industry concerns.  

“After months of going backwards and forwards on a position to legislate AI it will be a relief to many businesses to finally get some clarity on where the UK is headed,” Matt Worsfold, partner at Ashurst Risk Advisory, told ITPro.

Changes to UK AI legislation have not been confirmed on any official level. 

A departure from UK AI plans to date

The UK has been reluctant to clamp down on AI, citing the need to preserve sector agility. Michelle Donelan, secretary of state for Science, Innovation, and Technology, has previously stated that the UK government’s aim is to ensure no unnecessary barriers are placed on businesses and that the UK continues to be a “world leader in both AI safety and AI development.”

There has been disagreement within parliament over the speed and intensity of AI regulation, with a House of Lords committee having suggested that the UK could miss out on the ‘AI goldrush’ were it to regulate too much or operate over-cautiously.

An AI Regulation Bill, currently at the committee stage in the House of Lords, would introduce regulatory principles covering safety, transparency, and governance, as well as create a body called the ‘AI Authority.’

The most significant word from the UK government on generative AI to date came in its AI white paper, ‘A pro-innovation approach to AI regulation,’ published in March 2023. The document proposed a sector-specific approach in lieu of an overarching legislative framework for the technology, while acknowledging that new AI regulators may be necessary down the line.

The proposed changes, if realized, would therefore mark a shift toward a more hands-on government approach to AI in the UK. If such regulations are passed, entire AI models would come under scrutiny rather than individual use cases, with certain companies required to divulge proprietary model information.

How the UK handles AI regulation compared to the EU AI Act is a significant factor. Worsfold told ITPro that it would be worth paying attention to whether the UK begins to mirror legislation in the EU. 

“Given many UK businesses will be caught in the EU AI Act’s extra-territorial reach, it will be interesting to see how closely the UK aligns or whether it follows a principle-based approach,” Worsfold said.

The suggested mandate forcing developers of sophisticated models to share information about their algorithms certainly echoes the EU’s current position. The EU AI Act compels businesses to meet transparency and governance requirements for AI, particularly for systems deemed high-risk.

Those developing the models considered riskiest to citizens’ rights will be forced to carry out risk assessments, maintain a dialogue with regulators, and issue assurances over the quality of the data used to train their AI. UK regulators have already fired the starting pistol on investigating the use of data for AI training, with the Information Commissioner’s Office (ICO) set to issue guidance on the legality of generative AI training in relation to UK GDPR.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.