What is hackbot as a service and are malicious LLMs a risk?


The explosion of interest in AI since 2022 has not been limited to those with pure intentions, with many in the sector now warning that threat actors are adopting malicious LLMs or ‘hackbots’ via subscription.

Threat actors have been just as eager to leverage AI tools in their attack chains as defenders have in their security stack, raising the importance of a concrete strategy for AI threats among security teams.

In a report released in January 2024, the UK’s National Cyber Security Centre (NCSC) claimed AI “will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years”.

Hackers have already been using LLMs to refine social engineering attacks, matching the tone and style of an executive for phishing or using deepfake attacks to circumvent identity systems.

Vasu Jakkal, corporate vice president of security at Microsoft, took to the stage at RSA Conference 2024 to warn that AI is already being used to crack passwords, linking the availability of the technology to a 10x increase in identity-based attacks.

Experts have also suggested that chatbots could be used to create bespoke malware strains. Publicly available models like ChatGPT and Gemini have anti-blackhat guardrails built in to prevent them from being used to produce malicious content, but hackers have been able to bypass many of these protections through sophisticated prompt engineering techniques.
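To illustrate the principle, consumer chatbot services typically screen requests with a moderation layer before the model responds. The sketch below is a minimal, hypothetical example of such a check using OpenAI’s moderation endpoint and its official Python SDK; the actual guardrail stacks behind ChatGPT or Gemini are not public, so this only outlines the general approach.

```python
# A minimal sketch of a pre-prompt guardrail check, assuming the openai
# Python SDK (v1+) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def is_flagged(prompt: str) -> bool:
    """Return True if the moderation endpoint flags the prompt."""
    response = client.moderations.create(input=prompt)
    return response.results[0].flagged

if __name__ == "__main__":
    for prompt in (
        "Summarise this quarterly sales report.",
        "Write ransomware that encrypts a victim's files.",
    ):
        verdict = "blocked" if is_flagged(prompt) else "allowed"
        print(f"{verdict}: {prompt}")
```

Prompt engineering bypasses work precisely because checks like this classify surface wording: reframing a malicious request behind an innocuous pretext can slip it past the filter.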

But recent research suggests that publicly available LLMs are largely unable to exploit vulnerabilities, with only OpenAI’s GPT-4 having been able to produce exploits for known flaws. These limitations appear to have fed into the production of bespoke, malicious chatbots designed specifically to assist threat actors with their nefarious activities.

WormGPT and FraudGPT – the birth of hackbot as a service

These tools are being advertised on marketplaces and forums across the dark web, and are available for threat actors to rent as and when they need them to enhance their attacks, spawning the hackbot as a service model.

Cyber security specialists Trustwave SpiderLabs published a blog in August 2023 outlining the rise of malicious LLMs being pushed on underground message boards across the dark web.

One such malicious LLM, WormGPT, was first spotted being touted on popular hacking platforms hosted on the dark web in June 2021, according to Trustwave’s research. Another is FraudGPT, which threat researchers at Netenrich first discovered circulating on Telegram in July 2023.

Both of these tools allow attackers to design assets used in social engineering attacks such as phishing emails, deepfakes, and voice cloning, but their creators claim their real value comes in exploiting vulnerabilities.


Hackers can feed code pertaining to a specific vulnerability into these malicious models, which in theory could produce a number of proof of concept (PoC) exploits for an attacker to try.
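The same feed-code-to-a-model pattern has a legitimate, defensive mirror image, which is part of why the claim is plausible. A minimal sketch, assuming the openai Python SDK and an illustrative model name, might ask a general-purpose LLM to review a snippet for vulnerable patterns rather than to weaponise them:

```python
# A hedged sketch of the code-review flavour of this pattern, used
# defensively. Assumes the openai Python SDK (v1+); the model name and
# snippet are illustrative.
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
def get_user(cursor, username):
    # Classic SQL injection: untrusted input interpolated into the query
    cursor.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cursor.fetchone()
'''

review = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a security reviewer. Identify vulnerabilities "
                       "in the supplied code and suggest fixes. Do not write "
                       "exploit code.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)
print(review.choices[0].message.content)
```

A hackbot essentially drops the “do not write exploit code” constraint; the workflow is otherwise much the same.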

Speaking to ITPro, Jack Peters, customer solutions architect at cloud services company M247, describes the recent surge in these types of tools and how they can be used to assist in cyber attacks.

“Using the same user-friendly prompts akin to other generative AI chatbots, ‘FraudGPT’ and other tools are flooding the dark web, allowing hackers to take similar shortcuts to create malware, malicious code, and phishing emails to steal data and create havoc for businesses”, Peters explained.

“As with any sophisticated language model, one of FraudGPT’s biggest strengths is its ability to produce convincing emails and documents in order to gain access to a business’ systems.”

These tools are available through underground marketplaces on the dark web where hackers can pay for a monthly license to use the hackbot. In this sense, they are similar to the ransomware as a service (RaaS) model, which experts have directly linked to the diversified ransomware industry that plagues businesses today.

The research from Trustwave listed the price of a monthly subscription to FraudGPT at between $90 and $200, while the first version of WormGPT was available for €100 per month.

Several new malicious LLMs have entered the market since the arrival of WormGPT, including BlackHatGPT, XXXGPT, WolfGPT, and more, forming a new segment of the cyber black market.

Blackhat alternatives to ChatGPT, but are they really worth it?

Trustwave’s research attempted to test the efficacy of these tools by comparing the outputs of the hackbots to those produced by a legitimate chatbot.

The results showed that, with the right prompts, ChatGPT could be made to produce some Python malware. But users first had to claim the code would be used for white hat purposes, and the output still required further tweaking before it could be deployed.

Ultimately, ChatGPT was able to produce a malicious Python script to the same requirements as WormGPT, though it appended a disclaimer urging the user to “use this script responsibly”.

In a similar fashion, ChatGPT can also create very realistic exchanges that could be used for phishing, but the prompts required to produce these outputs need to be very specific. In most circumstances, ChatGPT will refuse to comply with malicious requests.

As such, these malicious chatbots may simply offer cyber criminals an easier route to using AI in their attacks than spending time jailbreaking their own instance of ChatGPT or crafting the phishing pages and malware they require by hand.

Etay Maor, senior director of security strategy at Cato Networks, tells ITPro he had doubts about the coding capabilities of these tools early on, citing the Python code included in the listing for WormGPT.

“I’m not impressed with the example that they gave here of a Python script that knows how to create a DDoS attack," Maor says. "Looking at this, it is pretty lame, I don’t know who it’s supposed to impress”.

Camden Woollven, group head of AI at GRC International Group, expresses similar skepticism and argues that the adversarial capabilities of these tools do not appear to be vastly superior to anything proficient hackers can already achieve.

“From what I can tell, these tools seem to be fairly rudimentary – generating simple malware code, phishing schemes, stuff a decent hacker could pull off without an AI assistant,” Woollven tells ITPro.

“There’s an element of ‘overhype’ at play, as seems to be the case with anything AI-related.”

Cyber criminals themselves have expressed their discontent with the capabilities of these tools. Reviews for WormGPT contain frequent complaints about the tool’s limited functionality, with some threat actors even accusing the developer behind the hackbot of operating a scam.

One review warns would-be users that the tool is "just an old cheap version of ChatGPT", adding that it was just as restrictive as the original version of the chatbot.

Hackbot as a service as a shortcut for cyber attacks

Despite their limited malicious credentials, Maor believes the real threat posed by these rudimentary hackbots is that they can significantly reduce the time and work a threat actor needs to put in to launch a high volume of effective attacks.

Even if the success rate of attacks based on hackbot outputs is the same as, or lower than, that of attacks crafted entirely by humans, the improved consistency and volume that AI enables could nevertheless drive up threat actors’ overall success. Hackbot as a service can therefore be a tempting value proposition for attackers looking for a leg up in an increasingly competitive space.

Maor says he is particularly concerned that hackbot as a service models could lower the barrier to entry for cyber crime for individuals who would have previously lacked the digital literacy to launch a cyber attack.

Simple attacks are likely to become easier for scammers to pull off, while skilled hackers could see the scope of their capabilities increase dramatically when paired with malicious LLMs.

“Yes, it does lower the barrier for threat actors to do more and it makes life easier for those who are already proficient,” Maor explains.

This is a nascent market with new threats appearing all the time. In March, Maor published a blog post detailing a callout from a Russian group looking for engineers with a foundation in machine learning (ML) and AI to help them develop a malicious LLM.

Maor provides two reasons why he thinks the LLM in question, xGPT, could pose a far more serious threat to enterprises than WormGPT or similar ChatGPT variants:

“First, this is a known threat actor that has already sold credentials and access to US government entities, banks, mobile networks, and other victims,” he tells ITPro. “Second, it looks like they are not trying to just connect to an existing LLM but rather develop a solution of their own.”

As the hackbot as a service underground matures, businesses will need to carefully assess the protections they have in place. Strong AI security systems may become more necessary to counter the threat posed by corrupted LLMs, and controls such as identity management tools will be a boon against tailored social engineering attempts.
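As one small, concrete example of such a control, the sketch below flags sender domains that look confusingly similar to a trusted domain, a common signal in tailored phishing. The allowlist and similarity threshold here are hypothetical, and real email security gateways layer this kind of check with SPF, DKIM, DMARC, and sender reputation data.

```python
# An illustrative lookalike-domain check using only the standard library.
# TRUSTED_DOMAINS and the 0.8 threshold are assumptions for the example.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "examplecorp.co.uk"}  # hypothetical allowlist

def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that nearly match a trusted domain without being one."""
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

for domain in ("example.com", "examp1e.com", "totally-unrelated.net"):
    print(domain, "->", "suspicious" if is_lookalike(domain) else "ok")
```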

The efficacy of these tools is still up for debate, but as security teams have seen with ever more advanced strains of ransomware, there is always the potential for criminals to innovate at a pace approaching that of legitimate developers.

Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.