Industry calls for tighter AI controls amid UK’s ‘agile’ approach to regulation


With the UK government having revealed earlier this month that it has no plans for new AI legislation, industry figures are calling for tighter controls.

The second reading of the Artificial Intelligence (Regulation) Bill is set to take place in March, while the Financial Conduct Authority, Bank of England, Ofcom, and other regulators are due to deliver their official approaches to AI by the end of April.

In its response to a white paper consultation earlier this month, the government said it didn't intend to create a new AI regulatory body, instead proposing 'more agile AI regulation'.

Speaking at the time, technology secretary Michelle Donelan said the government wants to avoid placing burdens on business that could stifle innovation, and to make the UK more agile than competitor nations.

"The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development," she said.

"AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely."

However, many within the tech industry are concerned that the government is focusing too heavily on long-term existential threats, such as AI-powered weapons systems and biological weapons, rather than on the more immediate dangers the technology poses.

"It is important that the government creates a stringent regulatory framework for AI to protect against the threats imposed, rather than waiting to see what happens,” said Oseloka Obiora, CTO of security firm RiverSafe. “It is about balancing a cautious approach to mitigate cyber risks, with ensuring businesses are empowered to continue driving innovation."

Other experts have expressed concerns that the proposals give too much power to the AI companies themselves.

Dr Andrew Rogoyski, director of innovation and partnerships at the University of Surrey’s Institute for People-Centred AI, said the government could risk opening the door to industry domination by foreign companies.

"The government response recognizes that 'many regulators in the UK can struggle to enforce existing rules on those actors designing, training, and developing the most powerful general-purpose AI systems', highlighting the lack of sovereign control the UK has over the big players in AI, which are predominantly based in the US or China," he said.

"Any one of the large US AI companies spends more on R&D than the combined UK government and industry R&D funding," he added. "The UK needs to spend substantially more just to stay in the game."

The next stage in the process, due by the end of April, will include reports from various regulators on their efforts to pull together robust safeguard proposals.

After this, the government will consider whether new legislation is needed to address any gaps, and whether the regulators' powers should be expanded.

Henry Balani, global head of industry and regulatory affairs at Encompass Corporation, said this will be a critical stage and one that could set the tone for any future AI regulation.

"As the country's key decision-makers continue to assess the impact and potential of AI, it is important they consider effective regulation without restricting innovation, taking a proactive approach that enables the technology available to be utilized safely and to potential," he said.

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.