Concern over AI harms prompts calls for UK incident reporting system
The UK needs a central, up-to-date picture of problems as they emerge, according to a thinktank
The Centre for Long-Term Resilience (CLTR), a UK thinktank focused on long-term crises and extreme risks, is calling on the government to introduce a new AI incident reporting regime.
Describing what it calls "a concerning gap in the UK’s regulatory plans", CLTR said the system should monitor the real-world safety risks AI is creating, along with how it is regulated and deployed.
It should coordinate responses to major incidents where speed is critical, investigate their root causes, and spot early warning signs of larger-scale harms that could arise in future.
This information could then be used to help the AI Safety Institute and Central AI Risk Function carry out risk assessments.
"[the Department for Science, Innovation and Technology] DSIT lacks a central, up-to-date picture of these types of incidents as they emerge," the report warned.
"Though some regulators will collect some incident reports, we find that this is not likely to capture the novel harms posed by frontier AI. DSIT should priorities ensuring that the UK government finds out about such novel harms not through the news, but through proven processes of incident reporting."
Without such a system, DSIT could fail to address problems with foundation models, such as bias, discrimination, or misaligned agents; incidents arising from the government’s own use of AI in public services; and the misuse of AI systems, for instance in disinformation campaigns or even biological weapon development.
To get things going, the government should start with simple steps such as expanding the Algorithmic Transparency Recording Standard (ATRS) to include a framework for reporting public sector AI incidents. These incidents could be fed directly to a government body, and possibly shared with the public.
It should also commission UK regulators and consult experts to identify the most concerning gaps in safety coverage, and ensure that high-priority incidents are acted on.
In addition, the thinktank said the government should build capacity within DSIT to monitor, investigate, and respond to incidents, perhaps by creating a pilot AI incident database.
"This could comprise part of DSIT’s ‘central function’, and begin the development of the policy and technical infrastructure for collecting and responding to AI incident reports," the authors said.
Veera Siivonen, CCO and partner at AI governance firm Saidot, agreed with the recommendations, but warned that while implementing them would be an important first step, there is still a long way to go.
"As the UK hurtles towards a general election, the next government’s AI policy will be the cornerstone for economic growth. However, this requires precision in navigating the balance between regulation and innovation, providing guardrails without narrowing the industry’s potential for experimentation," Siivonen said.
"The incoming UK government should provide certainty and understanding for enterprises with clear governance requirements, while monitoring and mitigating the most likely risks,” she added.
“By integrating a variety of AI governance strategies with centralized incident reporting, the UK can harness the economic potential of AI, ensuring that it benefits society while protecting democratic processes and public trust."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.