Hackers are using a new AI chatbot to wage cyber attacks: GhostGPT lets users write malicious code, create malware, and curate phishing emails – and it costs just $50 to use
Researchers warn GhostGPT could help hackers wage more sophisticated attacks


Hackers are using an uncensored chatbot dubbed GhostGPT to help write malware, highlighting how AI can be twisted toward "illegal activities".
That's according to Abnormal Security, which laid out details of GhostGPT in a blog post, saying the chatbot lacks the guardrails of standard AI tools such as ChatGPT, making it a helpful tool for cyber criminals.
It's not the first hackbot-as-a-service, however. WormGPT arrived in 2023 offering a similar chatbot subscription service for writing phishing emails and business email compromise attacks.
That, Abnormal Security noted, was followed by WolfGPT and EscapeGPT, suggesting GhostGPT is a sign malicious actors see value in AI helping them commit cyber crime.
The security company explained that GhostGPT was specifically designed for cyber crime purposes and that enterprises should be wary of its potential looking ahead.
"It likely uses a wrapper to connect to a jailbroken version of ChatGPT or an open source large language model (LLM), effectively removing any ethical safeguards," the blogpost explained.
"By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems."
What can GhostGPT do?
Abnormal shared a screenshot of an advertisement for the GhostGPT service that claimed the chatbot was fast and easy to use, offered uncensored responses, and had a strict no-logs policy, saying "protecting our users' privacy is our top priority."
Abnormal noted that GhostGPT was marketed for coding, malware creation, and exploit development, but could also be used to write material for business email compromise (BEC) scams. The advertisement noted its various features make GhostGPT "a valuable tool for cybersecurity and various other applications."
"While its promotional materials mention "cybersecurity" as a possible use, this claim is hard to believe, given its availability on cyber crime forums and its focus on BEC scams," the Abnormal post added.
"Such disclaimers seem like a weak attempt to dodge legal accountability — nothing new in the cybercrime world."
Indeed, when Abnormal's researchers asked GhostGPT to write a phishing email, it produced a template that could be used to trick victims.
Easy access for all hackers
GhostGPT is accessible as a Telegram bot, making it easy for attackers to make use of without having technical skills or taking the time to set up their own systems, Abnormal noted.
"Because it’s available as a Telegram bot, there is no need to jailbreak ChatGPT or set up an open source model," the blog post noted. "Users can pay a fee, gain immediate access, and focus directly on executing their attacks."
A report in Dark Reading noted that prices for GhostGPT were relatively cheap, too: $50 for a week, $150 for a month, and $300 for three months.
Fresh challenge for security
By lowering the barrier to entry for would-be hackers, such chatbots make it easier for cyber criminals without extensive skills to attack anyone — posing a real challenge for personal and organizational security.
Chatbots also make it faster and easier to launch cyber crime campaigns by enabling threat actors to create more effective malware, realistic-looking phishing emails, and so on.
"With its ability to deliver insights without limitations, GhostGPT serves as a powerful tool for those seeking to exploit AI for malicious purposes," Abnormal said.
As cyber criminals shift to AI, Abnormal argues, security professionals must do the same, because tools like GhostGPT will make it easier to slip phishing emails and malware past traditional filters.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.