The UK government’s AI goals are being stifled by ‘apocalyptic concerns’ over safety

The UK could miss out on the ‘AI goldrush’ because of an over-cautious attitude to the technology, a House of Lords committee has warned.

A report from the Communications and Digital Committee concluded that the government’s approach to AI and large language models (LLMs) has become too focused on a narrow view of AI safety.

Echoing widespread unease, the committee said the government’s “apocalyptic concerns about threats to human existence” are exaggerated. More pressing, it said, are near-term security risks such as cyber attacks, child sexual exploitation material, terrorist content, and disinformation.

"The rapid development of AI Large Language Models is likely to have a profound effect on society, comparable to the introduction of the internet,” said committee chair Baroness Stowell of Beeston.

“That makes it vital for the government to get its approach right and not miss out on opportunities – particularly not if this is out of caution for far-off and improbable risks.

"We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical. We must avoid the UK missing out on a potential AI goldrush."

The report warned that unless the UK takes action to prioritize open competition and transparency, a small number of tech firms could rapidly consolidate control of a critical market and stifle new players. This could leave the UK failing to keep pace with competitors, losing international influence, and becoming strategically dependent on overseas tech firms for critical technology.

"One lesson from the way technology markets have developed since the inception of the internet is the danger of market dominance by a small group of companies,” Baroness Stowell said. 

“The government must ensure exaggerated predictions of an AI driven apocalypse, coming from some of the tech firms, do not lead it to policies that close down open-source AI development or exclude innovative smaller players from developing AI services.

"We must be careful to avoid regulatory capture by the established technology companies in an area where regulators will be scrabbling to keep up with rapidly developing technology."

The committee also took a critical position on tech firms’ use of data without permission or compensation, stating that the government “cannot sit on its hands” while LLM developers exploit the works of rightsholders.

"LLMs rely on ingesting massive datasets to work properly but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege. This is an issue the government can get a grip of quickly and it should do so," Baroness Stowell said.

The report outlined ten recommendations for the government aimed at boosting opportunities, addressing risks, and supporting effective regulatory oversight.

These include beefing up the UK’s computing infrastructure, increasing support for AI startups, improving skills, and exploring the creation of an ‘in-house’ sovereign LLM.

The report echoed widely expressed concerns aired at the time of the UK’s AI Safety Summit last November, when the Center for Data Innovation think tank warned that the summit’s focus on the ‘existential risk’ of AI was misguided.

"Regrettably, the UK has opted for an expedient but misguided path by emphasizing its role in preventing existential risks from AI rather than putting its considerable research capabilities and global soft power behind the common-sense, outcomes-oriented, and pro-innovation approach for AI it laid out earlier this year," wrote the center's Daniel Castro and Hodan Omaar.

"As policymakers plan... future meetings, they should reconsider how to ensure their safety initiatives do not negatively impact the rapid adoption of beneficial uses of AI. One can hope they might even go so far as to focus on how to accelerate AI innovation and adoption."

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.