Can generative AI change security?


Artificial intelligence (AI) has moved from being a sci-fi staple to a tool in widespread use. Headlines today are dominated by news of generative AI services such as ChatGPT and Bard, the latest and greatest large language models from OpenAI and Google. These promise users a crack at incredibly capable text generation and interpretation based on vast training data, using nothing more complicated than a browser and a keyboard.




As powerful tools for text generation, large language models also carry the risk of empowering threat actors. With more people accessing these systems than ever before, security firms are assessing how chatbots could bolster malicious activity.

In this episode, Jane and Rory speak to Hanah Darley, head of threat research at cybersecurity firm Darktrace, about the potential misuse of generative AI models and the role the technology can play as part of a wider AI defence arsenal at the enterprise level.


“You could do a myriad of things, you could say ‘can you please craft for me a very, very well-written email targeted towards a banker who regularly reads the Wall Street Journal, and can you include specific details’ because ultimately, language modelling is designed off of training data.”

“Our statistics internally found that actually, relying on malicious links had decreased from about 22% of the phishing emails we saw to about 14%. But the average linguistic complexity of phishing emails jumped or increased by about 17% since the outset of ChatGPT.”

“I think in general, it's a great thing that legislators are not only thinking about but actually applying ethical concerns as well as controls onto technologies. Because ultimately, that's how you keep social accountability. I think there is a danger in placing all of your hopes and dreams in legislation, and kind of regulation as the way to go.”

Read the full transcript here.



Rory Bathgate

Rory Bathgate is a staff writer at ITPro covering the latest news on UK networking and data protection, privacy and compliance. He can sometimes be found on the ITPro Podcast, swapping a keyboard for a microphone to discuss the latest in tech trends.

In his free time, Rory enjoys photography, video editing, and graphic design alongside good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory took an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, after four years in student journalism. You can contact Rory on LinkedIn.