The rise of GhostGPT – Why cybercriminals are turning to generative AI

GhostGPT is not a general-purpose AI tool – it has been explicitly repurposed for criminal activity


While many businesses are still trying to understand how to use generative artificial intelligence (AI) to drive productivity and efficiency, malicious actors have moved rapidly. Their approach is not theoretical; it's increasingly practical and dangerously effective. One of the clearest examples of this shift is GhostGPT, an AI-powered chatbot that was discovered in late 2024 and is already reshaping the cyber threat landscape.

GhostGPT is not a general-purpose AI tool. It has been explicitly developed, or more likely repurposed, for criminal activity. Unlike public-facing large language models (LLMs) such as ChatGPT, which are constrained by security safeguards and ethical restrictions, GhostGPT operates free from such boundaries. It is widely believed to be a “wrapper” around a jailbroken LLM or an open-source model that has had its safety features stripped out. This enables it to respond freely to prompts for malware, phishing content, and attack strategies, effectively putting offensive cyber capabilities in the hands of anyone with a web browser and an illicit link.

More concerning still is the fact that GhostGPT deliberately avoids logging user interactions. This makes attribution extremely difficult and adds a further layer of anonymity for cybercriminals. Notably, mainstream tools like OpenAI’s ChatGPT are bound by usage policies and traceability, whereas GhostGPT is being marketed and seemingly used as a ‘black box’ for illegal digital activity.

Phishing in seconds

One of the key threats presented by GhostGPT is its ability to produce high volumes of convincing phishing content in just seconds. This is not limited to generic spam. GhostGPT can create personalized email messages that mimic internal tone, corporate templates, and even the linguistic quirks of specific individuals. Where earlier phishing attempts relied on crude templates and were betrayed by clumsy spelling errors, generative AI enables far more persuasive messaging, tailored to the target and delivered at unprecedented speed.

According to the UK government’s Cyber Security Breaches Survey 2024, phishing remains the most commonly identified type of cyber-attack affecting British organizations. Among those that detected a breach or attack in the past 12 months, 84% of businesses and 83% of charities identified phishing attacks. The report notes that phishing is particularly disruptive due to its sheer volume and the investigative effort required to respond.

Cyber experts also now suggest that attacks on critical national infrastructure have become so relentless that it is no longer a question of if such attacks will happen, but when.

Add tools like GhostGPT to the equation, and the scale and sophistication of these campaigns are likely to increase sharply.

In parallel, GhostGPT can also be used to create highly realistic fake login portals. These spoofed web pages, generated in response to basic prompts, are nearly indistinguishable from genuine ones, especially when paired with email lures or SMS phishing (smishing) tactics. Once victims enter their credentials, attackers can gain access to critical systems or sell the data on underground markets.

Lowering attack barriers

Perhaps even more worrying is GhostGPT’s ability to generate malicious code. It allows users to request ransomware samples, write scripts to exfiltrate data, or even build polymorphic malware, a type of software that continually changes its code to evade detection. Polymorphic malware has been around for decades, but its creation previously required genuine technical expertise. Now, with AI’s help, that barrier has been drastically lowered.

Cybersecurity specialists have long warned of the risks associated with AI-generated malware. A 2023 study by IBM’s X-Force team demonstrated that LLMs can be prompted to create viable malicious code with only a few lines of instruction, even on public models with supposed safeguards. GhostGPT, lacking any ethical brakes, removes those barriers entirely.

Attacks – now with detailed instructions

Beyond content and code generation, GhostGPT also offers step-by-step attack advice. Security researchers have observed it providing detailed instructions for setting up command-and-control infrastructure, bypassing endpoint detection systems, and exploiting specific software vulnerabilities. While such information has long been accessible via dark web forums, the difference here is the ease of access and contextualization. Instead of searching static posts, users can ask GhostGPT direct questions and receive real-time responses adapted to their goals.

This development fundamentally changes the economics of cybercrime. In the past, launching sophisticated attacks required coordination, specialized knowledge, and sometimes a team of actors. Now, with a tool like GhostGPT, a lone individual with a limited technical background can initiate campaigns that previously required weeks of preparation.

For UK organizations, particularly small and medium-sized businesses (SMEs) with limited internal cybersecurity resources, the risks are significant. According to the Department for Science, Innovation and Technology’s 2024 Cyber Security Breaches Survey, 32% of businesses reported being attacked at least once in the previous 12 months. As threat actors continue to adopt AI tools, the real figure may rise considerably, especially if firms are slow to adapt.

What to do

So, what can be done? While no single technology can neutralize the threat, there are measures that organizations can adopt to reduce their exposure.

First, the basics matter more than ever: regular software patching, multi-factor authentication (MFA), and employee awareness training are all essential. Phishing emails may be growing more sophisticated, but with proper training, staff can become correspondingly better at spotting them.
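To make the MFA point concrete, here is a minimal sketch of how time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, gate a login. It uses the open-source pyotp library purely for illustration; the login flow and account handling are hypothetical simplifications, not a production design.

```python
# A minimal sketch of TOTP-based MFA using the open-source pyotp library.
import pyotp

# In a real system the secret is generated once at enrolment and stored
# server-side; it is created inline here purely for illustration.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok: bool, otp_code: str) -> bool:
    """Grant access only when both factors check out (hypothetical flow)."""
    if not password_ok:
        return False
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(otp_code, valid_window=1)

print(login(password_ok=True, otp_code=totp.now()))  # True: correct code
print(login(password_ok=True, otp_code="000000"))    # False: password alone fails
```

The point the sketch illustrates is simple: even a perfectly crafted AI-generated phishing email that harvests a password does not, on its own, unlock an MFA-protected account.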

Beyond these fundamentals, it’s increasingly important to deploy AI-enhanced defensive tools. Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR) systems are capable of identifying anomalous behaviors that signal compromise, even if the initial attack evades traditional defenses. DNS filtering, too, can reduce exposure to malicious links embedded in phishing emails or messaging apps.
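As an illustration of the DNS filtering idea, the sketch below refuses to resolve hostnames that match a blocklist before any connection is ever attempted. The blocklist entries are invented placeholders, not real threat intelligence, and a production filter would sit at the resolver and draw on continuously updated feeds rather than a hard-coded set.

```python
# A minimal sketch of the core DNS-filtering idea: refuse to resolve
# blocklisted domains before any connection is made.
import socket

# Invented placeholder entries, not real threat intelligence.
BLOCKLIST = {"login-m1crosoft-verify.example", "secure-parcel-refund.example"}

def filtered_resolve(hostname: str) -> str | None:
    """Resolve a hostname unless it, or a parent domain, is blocklisted."""
    parts = hostname.lower().split(".")
    # Check the full name and every parent, so "portal.bad.example"
    # is caught by a blocklist entry for "bad.example".
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            print(f"Blocked DNS lookup for {hostname}")
            return None
    return socket.gethostbyname(hostname)

print(filtered_resolve("login-m1crosoft-verify.example"))  # Blocked -> None
```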

Threat intelligence is also crucial. As tools like GhostGPT proliferate, staying ahead of the curve requires real-time awareness of tactics, techniques, and procedures (TTPs) used by attackers. Security providers and their channel partners must be capable of feeding this intelligence into automated systems that can act in near real time.
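For a simplified picture of what feeding intelligence into automated systems can look like, the sketch below pulls a newline-delimited indicator feed and flags matching destinations in an outbound connection log. The feed URL and log format are hypothetical placeholders; real integrations typically consume STIX/TAXII feeds from a provider and work against a SIEM’s native log schema.

```python
# A minimal sketch of consuming a threat-intelligence feed automatically.
# The feed URL and log format are hypothetical placeholders.
import requests

FEED_URL = "https://intel.example.com/indicators.txt"  # hypothetical feed

def load_indicators() -> set[str]:
    """Fetch a newline-delimited list of known-bad domains and IPs."""
    response = requests.get(FEED_URL, timeout=10)
    response.raise_for_status()
    return {line.strip() for line in response.text.splitlines() if line.strip()}

def flag_suspicious(destinations: list[str], indicators: set[str]) -> list[str]:
    """Return outbound destinations that match a known indicator."""
    return [dest for dest in destinations if dest in indicators]

indicators = load_indicators()
for hit in flag_suspicious(["198.51.100.23", "updates.vendor.example"], indicators):
    print(f"ALERT: outbound connection to known-bad destination {hit}")
```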

A shift in the cyber threat landscape

The emergence of GhostGPT signals a shift in the cyber threat landscape. Generative AI is no longer the exclusive domain of innovation labs or marketing departments; it has been weaponized. As this technology becomes more accessible, the lines between state-backed threats, organized cybercrime, and amateur experimentation will continue to blur.

For the UK channel community, this is both a challenge and an opportunity. Clients will increasingly look to service providers not just for protection, but for clarity. Understanding how tools like GhostGPT work and how to defend against them will become a differentiator. As ever, those who stay informed will be best placed to lead.

Ryan Estes
Intrusion analyst, WatchGuard Technologies

Ryan is an intrusion analyst at WatchGuard Technologies, working primarily within the malware attestation team, WatchGuard’s malware analysis service. Previously, he performed analyses for DNSWatch, WatchGuard's DNS-filtering service, before pivoting to endpoint-related research. Beyond malware analysis, Ryan created and maintains WatchGuard's Ransomware Tracker, which highlights his research on ransomware and the groups that operate it.