Hackers are using a new AI chatbot to wage cyber attacks: GhostGPT lets users write malicious code, create malware, and curate phishing emails – and it costs just $50 to use
Researchers warn GhostGPT could help hackers wage more sophisticated attacks


Hackers are using an uncensored chatbot dubbed GhostGPT to help write malware, highlighting how AI can be twisted toward "illegal activities".
That's according to Abnormal Security, which laid out details of GhostGPT in a blog post, saying the chatbot lacks the guardrails of standard AI tools such as ChatGPT, making it a helpful tool for cyber criminals.
It's not the first hackbot-as-a-service, however. WormGPT arrived in 2023, offering a similar chatbot subscription service for writing phishing emails and crafting business email compromise attacks.
That, Abnormal Security noted, was followed by WolfGPT and EscapeGPT, suggesting GhostGPT is a sign malicious actors see value in AI helping them commit cyber crime.
The security company explained that GhostGPT was specifically designed for cyber crime purposes and that enterprises should be wary of its potential looking ahead.
"It likely uses a wrapper to connect to a jailbroken version of ChatGPT or an open source large language model (LLM), effectively removing any ethical safeguards," the blogpost explained.
"By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems."
What can GhostGPT do?
Abnormal shared a screenshot of an advertisement for the GhostGPT service that claimed the chatbot was fast and easy to use, offered uncensored responses, and had a strict no-logs policy, saying "protecting our users' privacy is our top priority."
Abnormal noted that GhostGPT was marketed for coding, malware creation, and exploit development, but could also be used to write material for business email compromise (BEC) scams. The advertisement noted its various features make GhostGPT "a valuable tool for cybersecurity and various other applications."
"While its promotional materials mention "cybersecurity" as a possible use, this claim is hard to believe, given its availability on cyber crime forums and its focus on BEC scams," the Abnormal post added.
"Such disclaimers seem like a weak attempt to dodge legal accountability — nothing new in the cybercrime world."
Indeed, when Abnormal's researchers asked GhostGPT to write a phishing email, it produced a template that could be used to trick victims.
Easy access for all hackers
GhostGPT is accessible as a Telegram bot, making it easy for attackers to use without technical skills or the time needed to set up their own systems, Abnormal noted.
"Because it’s available as a Telegram bot, there is no need to jailbreak ChatGPT or set up an open source model," the blog post noted. " Users can pay a fee, gain immediate access, and focus directly on executing their attacks."
A report in DarkReading noted that prices for GhostGPT were relatively cheap, too: $50 for a week, $150 for a month, and $300 for three months.
Fresh challenge for security
By lowering the barrier to entry for would-be hackers, such chatbots make it easier for cyber criminals without extensive skills to attack anyone — posing a real challenge for personal and organizational security.
Chatbots also make it faster and easier to launch cyber crime campaigns by enabling threat actors to create more effective malware, realistic-looking phishing emails, and so on.
"With its ability to deliver insights without limitations, GhostGPT serves as a powerful tool for those seeking to exploit AI for malicious purposes," Abnormal said.
As cyber criminals shift to AI, so too must security professionals, Abnormal says, because tools like GhostGPT will make it easier to slip phishing emails and malware past traditional filters.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.