Tools like ChatGPT will boost cyber crime and cyber security equally


You can’t have escaped the hype surrounding ChatGPT. For once, it might even be deserved, though it’s still early days.

For sure, you can ask it to produce a blog post on almost any subject, in any style, and get a surprisingly good read out of it. Unless, that is, you know the subject really well, in which case you’ll spot the errors within.

Of course, most people are unlikely to be experts in the field of whatever news article, blog piece, or social media post they’re reading. This means they will take it at face value and believe every word. The days of rapid-fire, automated, fake news content production are truly with us. As someone who depends on two things for my living – a certain degree of expertise and an ability to write about it in an informative, accurate, and understandable way – that’s a challenging thought.

ChatGPT is a scary thing. I am sure that it will be used to provide content that is published without “AI-attribution” but rather presented as being the output of an actual writer. In fact, we know this is the case already, given CNET confirmed it had been publishing AI-generated news stories, with human editorial oversight, for some months as an ongoing trial. 

At the time of writing, that trial has been paused following media interest once the AI story creation was discovered. I have even received new contracts that specifically forbid the creation of content using ChatGPT or similar tools. To be honest, the idea hadn’t even occurred to me before reading that contract. Others, however, may not have the same ethical considerations as I do when it comes to content creation.

This brings me nicely to why I, a primarily security-focused contributor, am talking about ChatGPT in the first place. Cyber criminals have already thought about how to exploit the tool to quickly create everything from hacking tools to phishing campaigns and even, some security experts have noted, chatbots that can make a decent fist of pretending to be young women on dating sites. 

Given you can ask ChatGPT to create code (and workable, executable code at that), this shouldn’t come as any great surprise. Nor should the “blame” for this be laid at the feet of ChatGPT or OpenAI. Bad people will use good tools to create malicious elements. That’s a fact of life and exactly what they’ve been doing.

Researchers at Check Point, who monitor the shady world of underground cyber crime forums, spotted one distributor of Android malware publishing code that he, or rather ChatGPT, had created, and that could steal files online. They also saw another piece of malware that could install a malicious backdoor on a PC. One rookie developer claimed that the first Python script they’d ever created, written mostly by ChatGPT, was used to encrypt files. The Check Point researchers say this could easily be adapted to encrypt an entire PC without user interaction, as part of a ransomware attack, for example.
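Check Point hasn’t published the rookie developer’s actual script, but the point about how little code such a thing takes is easy to illustrate. The sketch below is a hypothetical, deliberately toy example: a symmetric XOR transform of a file’s bytes, the kind of “basic” encryption routine the researchers describe. The function and variable names are my own, not anything from the research.

```python
import os

def xor_transform(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; running it twice restores the original
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
plaintext = b"quarterly-report.xlsx contents"

ciphertext = xor_transform(plaintext, key)
assert ciphertext != plaintext                      # file is now unreadable
assert xor_transform(ciphertext, key) == plaintext  # same operation decrypts
```

Wrap that in a loop over a directory tree and you have, in a dozen lines, the core of what Check Point warned could be adapted into a ransomware component, which is exactly why "basic" is cold comfort.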

The only saving grace, at least for now, is that all this code, the researchers say, is pretty basic. That won’t remain the case for long. When it comes to phishing campaigns, OpenAI has already implemented controls in ChatGPT meant to prevent such requests from being actioned. These, I am told, have been bypassed by determined researchers, and even some journalists, so one has to assume cyber criminals can be added to the list.

If this research wasn’t worrying enough, CyberArk researchers have also been looking at the potential for ChatGPT malware creation. They say it can be used to create polymorphic malware. In simple terms, that’s malware that changes its code from attack to attack to evade detection by antivirus defenses, rendering signature-based protections useless. The CyberArk researchers were able to get around the ChatGPT filters by sticking to a number of set constraints. It’s still early days here, as well, and while the code might not trigger initial detection, other tools will find the malware once the machine is infected.
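Why polymorphism defeats signature matching is worth spelling out. The toy sketch below, which is my own illustration and not CyberArk’s code, re-encodes an identical payload with a fresh random key on every build: the behaviour is unchanged, but the bytes on disk, and therefore any hash-based signature, differ every time.

```python
import hashlib
import os

def pack(payload: bytes) -> bytes:
    # Re-encode the same payload with a fresh random key on every build.
    # A real packer would prepend a decoder stub; here we just prepend the key.
    key = os.urandom(16)
    encoded = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return key + encoded

def unpack(blob: bytes) -> bytes:
    key, encoded = blob[:16], blob[16:]
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(encoded))

payload = b"identical malicious logic"
build_a, build_b = pack(payload), pack(payload)

# Identical behaviour once unpacked...
assert unpack(build_a) == unpack(build_b) == payload
# ...but different bytes, so a hash signature of one build misses the other
assert hashlib.sha256(build_a).hexdigest() != hashlib.sha256(build_b).hexdigest()
```

This is also why, as the article notes, behaviour-based tools can still catch such malware after infection: the signatures change, but what the code actually does once unpacked stays the same.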

Ian Hirst, a partner in the Cyber Threat Services division at Gemserv, points out it’s not all negative in the world of ChatGPT and cyber security. “ChatGPT has the potential to be a powerful tool for cyber security,” he says. “The AI could be used to monitor chat conversations for suspicious activity and flag them for further investigation, helping to detect and prevent cyber crime. ChatGPT also could rapidly and effectively assist with cyber incident management, with chat conversations that provide guidance on how to contain and mitigate the impact of a cyber attack.”
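To make Hirst’s monitoring suggestion concrete, here is a minimal sketch of the idea, assuming a rule-based flagger rather than the AI-driven analysis he describes. The patterns and messages are entirely hypothetical; a production system would use a trained model, not a handful of regexes.

```python
import re

# Hypothetical indicators of social-engineering attempts, for illustration only
SUSPICIOUS_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bwire\s+transfer\b",
    r"\bgift\s*cards?\b",
    r"\bpassword\b",
)]

def flag_messages(messages):
    """Return the messages matching any suspicious pattern, for human review."""
    return [m for m in messages if any(p.search(m) for p in SUSPICIOUS_PATTERNS)]

chat = [
    "Hey, how was your weekend?",
    "Urgent: the CEO needs you to buy gift cards right now",
    "Can you confirm your password for the audit?",
]
flagged = flag_messages(chat)
assert flagged == chat[1:]  # the two social-engineering messages are surfaced
```

The design choice mirrors Hirst’s point: the tool doesn’t block anything itself, it surfaces suspicious conversations so a human can investigate.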

Davey Winder

Davey is a three-decade veteran technology journalist specialising in cybersecurity and privacy matters and has been a Contributing Editor at PC Pro magazine since the first issue was published in 1994. He's also a Senior Contributor at Forbes, and co-founder of the Forbes Straight Talking Cyber video project that won the ‘Most Educational Content’ category at the 2021 European Cybersecurity Blogger Awards.

Davey has also picked up many other awards over the years, including the Security Serious ‘Cyber Writer of the Year’ title in 2020. As well as being the only three-time winner of the BT Security Journalist of the Year award (2006, 2008, 2010) Davey was also named BT Technology Journalist of the Year in 1996 for a forward-looking feature in PC Pro Magazine called ‘Threats to the Internet.’ In 2011 he was honoured with the Enigma Award for a lifetime contribution to IT security journalism which, thankfully, didn’t end his ongoing contributions - or his life for that matter.

You can follow Davey on Twitter @happygeek, or email him at