UK spies need AI to counter cyber threats, says GCHQ report

A new intelligence report commissioned by the Government Communications Headquarters (GCHQ) claims that UK spies will need to use artificial intelligence (AI) to counter adversaries who use the technology for attacks in cyber space and on the political system.

The report by the Royal United Services Institute (Rusi) think tank highlights three ways in which intelligence agencies could deploy AI: for intelligence analysis, for cyber security, and to automate administrative organisational processes.

However, authors Alexander Babuta, Marion Oswald and Ardi Janjeva also warned that AI is no substitute for human judgement, and advised that updated guidance is needed to address privacy and human rights concerns.

“Systems that attempt to ‘predict’ human behaviour at the individual level are likely to be of limited value for threat assessment purposes,” they wrote. “Nevertheless, the use of AI systems to collate information from multiple sources and flag significant data items for human review is likely to improve the efficiency of analysis tasks focused on individual subjects.”

The report, titled “Artificial Intelligence and UK National Security: Policy Considerations”, argues that the implementation of AI will help the UK protect itself from physical, digital and political security threats.

“Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities. In time, other threat actors, including cybercriminal groups, will also be able to take advantage of these same AI innovations,” it warned.

The report follows research predicting that AI spending in the healthcare and pharmaceutical industries will grow from $463 million in 2019 to more than $2 billion over the next five years.

UK police began using AI-based facial recognition technology in January this year, a move that has been criticised over its potential to infringe human rights and exhibit racial bias.

Metropolitan police commissioner Cressida Dick told reporters that “the best way to ensure that the police use new and emerging tech in a way that has the country’s support is for the government to bring in an enabling legislative framework that is debated through Parliament, consulted on in public and which will outline the boundaries for how the police should or should not use tech”.

It remains unclear whether UK intelligence agencies will call for similar legal guidelines.

Sabina Weston

Having only graduated from City University in 2019, Sabina has already demonstrated her abilities as a keen writer and effective journalist. Currently a content writer for Drapers, Sabina spent a number of years writing for ITPro, specialising in networking and telecommunications, as well as charting the efforts of technology companies to improve their inclusion and diversity strategies, a topic close to her heart.

Sabina has also held a number of editorial roles at Harper's Bazaar, Cube Collective, and HighClouds.