Can generative AI change security?
The latest developments in artificial intelligence could empower threat actors, but firms should carefully consider whether the technology fits their stack before diving in themselves

Artificial intelligence (AI) has moved from being a sci-fi staple to a tool in widespread use. Headlines today are dominated by news of generative AI services such as ChatGPT and Bard, the latest and greatest large language models from OpenAI and Google respectively. These promise users a crack at incredibly capable text generation and interpretation based on vast training data, using nothing more complicated than a browser and a keyboard.
As powerful tools for text generation, large language models also carry the risk of empowering threat actors. With more people than ever publicly accessing these systems, security firms are assessing how chatbots could bolster malicious activity.
In this episode, Jane and Rory speak to Hanah Darley, head of threat research at cybersecurity firm Darktrace, about the potential misuse of generative AI models and the role the technology can play as part of a wider AI defence arsenal at the enterprise level.
Highlights
“You could do a myriad of things, you could say ‘can you please craft for me a very, very well-written email targeted towards a banker who regularly reads the Wall Street Journal, and can you include specific details’ because ultimately, language modelling is designed off of training data.”
“Our statistics internally found that actually, relying on malicious links had decreased from about 22% of the phishing emails we saw to about 14%. But the average linguistic complexity of phishing emails jumped or increased by about 17% since the outset of ChatGPT.”
“I think in general, it's a great thing that legislators are not only thinking about but actually applying ethical concerns as well as controls onto technologies. Because ultimately, that's how you keep social accountability. I think there is a danger in placing all of your hopes and dreams in legislation, and kind of regulation as the way to go.”
Read the full transcript here.
Footnotes
- What is generative artificial intelligence (AI)?
- What is ChatGPT and what does it mean for businesses?
- OpenAI launches ChatGPT API for businesses at competitive price
- What is phishing?
- What good AI cyber security software looks like in 2022
- Why AI and machine learning are vital cybersecurity tools for 2022
Rory Bathgate is a staff writer at ITPro covering the latest news on UK networking and data protection, privacy and compliance. He can sometimes be found on the ITPro Podcast, swapping a keyboard for a microphone to discuss the latest in tech trends.
In his free time, Rory enjoys photography, video editing and graphic design alongside good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory took an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, after four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.