Hackers are using a new AI chatbot to wage cyber attacks: GhostGPT lets users write malicious code, create malware, and craft phishing emails – and it costs just $50 to use
Researchers warn GhostGPT could help hackers wage more sophisticated attacks


Hackers are using an uncensored chatbot dubbed GhostGPT to help write malware, highlighting how AI can be twisted to serve "illegal activities".
That's according to Abnormal Security, which laid out details of GhostGPT in a blog post, saying the chatbot lacks the guardrails of standard AI tools such as ChatGPT, making it a helpful tool for cyber criminals.
It's not the first hackbot-as-a-service, however. WormGPT arrived in 2023 offering a similar chatbot subscription service for writing phishing emails and business email compromise attacks.
That, Abnormal Security noted, was followed by WolfGPT and EscapeGPT, suggesting GhostGPT is a sign malicious actors see value in AI helping them commit cyber crime.
The security company explained that GhostGPT was specifically designed for cyber crime purposes and that enterprises should be wary of its potential looking ahead.
"It likely uses a wrapper to connect to a jailbroken version of ChatGPT or an open source large language model (LLM), effectively removing any ethical safeguards," the blog post explained.
"By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems."
What can GhostGPT do?
Abnormal shared a screenshot of an advertisement for the GhostGPT service that claimed the chatbot was fast and easy to use, offered uncensored responses, and had a strict no-logs policy, saying "protecting our users' privacy is our top priority."
Abnormal noted that GhostGPT was marketed for coding, malware creation, and exploit development, but could also be used to write material for business email compromise (BEC) scams. The advertisement noted its various features make GhostGPT "a valuable tool for cybersecurity and various other applications."
"While its promotional materials mention 'cybersecurity' as a possible use, this claim is hard to believe, given its availability on cyber crime forums and its focus on BEC scams," the Abnormal post added.
"Such disclaimers seem like a weak attempt to dodge legal accountability — nothing new in the cybercrime world."
Indeed, when Abnormal's researchers asked GhostGPT to write a phishing email, it produced a template that could be used to trick victims.
Easy access for all hackers
GhostGPT is accessible as a Telegram bot, making it easy for attackers to make use of without having technical skills or taking the time to set up their own systems, Abnormal noted.
"Because it’s available as a Telegram bot, there is no need to jailbreak ChatGPT or set up an open source model," the blog post noted. "Users can pay a fee, gain immediate access, and focus directly on executing their attacks."
A report in Dark Reading noted that prices for GhostGPT were relatively cheap, too: $50 for a week, $150 for a month, and $300 for three months.
Fresh challenge for security
By lowering the barrier to entry for would-be hackers, such chatbots make it easier for cyber criminals without extensive skills to attack anyone — potentially posing a real challenge for personal and organizational security.
Chatbots also make it faster and easier to launch cyber crime campaigns by enabling threat actors to create more effective malware, realistic-looking phishing emails, and so on.
"With its ability to deliver insights without limitations, GhostGPT serves as a powerful tool for those seeking to exploit AI for malicious purposes," Abnormal said.
As cyber criminals shift to AI, so too must security professionals, Abnormal said, since tools like GhostGPT will make it easier to slip phishing emails and malware past traditional filters.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.