Generative AI poses a “major risk” to UK national security, report warns amid radicalization fears


Radicalization through the use of generative AI represents a “major risk to UK national security”, experts have told ITPro.

Josh Boer, director at tech consultancy VeUP, said the use of generative AI tools for nefarious purposes poses a serious threat to the country, adding that the technology could be exploited by cyber criminals to inflict significant harm.

Boer’s comments come in the wake of a report by Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, which warned the technology could be used to fuel radicalization.

Writing in the Daily Telegraph, Hall described his conversation with a chatbot which “did not stint in its glorification of Islamic State”.

The chatbot in question was created using Character.ai, a platform that allows users to create chatbots and train them to adopt specific personalities.

Software like this currently operates with little legislative oversight, meaning extremists can easily train chatbots to radicalize online communities, the report warned.

Jake Moore, global security advisor at ESET, told ITPro the risk of generative AI being used for harmful purposes, such as misinformation, radicalization, or cyber crime, should be a key concern for AI developers moving forward.

Moore said developers should focus heavily on “baking in the right level of principles” in AI platforms to reduce long-term risks.

“The majority of AI is still taught by the building blocks it was designed from and therefore, the right tweaks can be adopted to steer the outputs away from becoming a beast,” he said.

“Legislation is difficult with this constantly evolving technology but a basic structure designed to reduce the risk of recruiting extremists doesn’t have to be problematic.”

To prevent extremists from bending AI to their own ends, models need to be trained at an algorithmic level to resist certain forms of interaction, Moore added.
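Neither Hall’s report nor Moore spells out an implementation, but the general pattern Moore gestures at, screening a chatbot’s output before it reaches the user, can be illustrated with a minimal, hypothetical Python sketch. The keyword list and function names below are invented for illustration; a production guardrail would rely on a trained moderation classifier rather than string matching.

```python
# Hypothetical sketch only: this is not how any named product works.
# It illustrates an output-side guardrail that screens a model's reply
# before it is shown to the user.

# Invented, illustrative list; real systems use trained classifiers,
# not keyword matching.
DISALLOWED_KEYWORDS = ["glorify", "martyrdom", "join our cause"]

def is_unsafe(reply: str) -> bool:
    """Naive placeholder for a safety classifier: flags replies
    containing any disallowed keyword."""
    lowered = reply.lower()
    return any(keyword in lowered for keyword in DISALLOWED_KEYWORDS)

def guarded_reply(model_reply: str) -> str:
    """Pass the model's reply through only if it clears the screen,
    otherwise substitute a refusal."""
    if is_unsafe(model_reply):
        return "I can't discuss that topic."
    return model_reply

if __name__ == "__main__":
    print(guarded_reply("Here's a summary of today's tech news."))
```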

The criminal applications of generative AI

AI isn't just gaining traction as a tool for extremists, either. Over the last year, concerns have been rising over the use of generative AI among cyber criminals, some of whom are using the technology to support operations, fine-tune attack methods, and target a growing number of organizations globally.


Generative AI is being used to create ransomware and other malware, as well as to generate convincing phishing content.

A few months ago, for example, threat actors used AI to generate deepfake videos of several celebrities in an attempt to lure users into fraudulent purchases.

Cyber criminals are constantly developing new ways to incorporate AI into their operations, and regulators will have to take clear action to curb the rise of AI-enabled cyber crime.

“The issue is how to address this issue without stifling innovation,” Boer told ITPro.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.