AI is now a ‘standard part of the attacker toolkit’
Cyber attacks are increasing in scale, intensity, and velocity thanks to AI, forcing defenders to react faster than ever before
AI tools are now a “standard part of the attacker toolkit”, according to a senior Forescout exec, and the shift is changing the game for defenders and cyber criminals alike.
Rik Ferguson, VP of security intelligence at Forescout, told assembled media that cyber criminals are now flocking to AI in increasing numbers as threat groups look to supercharge attacks – and increasingly use commercial AI models.
Ferguson’s comments come in the wake of new research from Forescout which highlighted significant improvements in AI’s potential for offensive cyber activities.
Analysis conducted by the cybersecurity firm revealed marked improvements in AI-powered vulnerability detection over the last year. Testing of 50 AI models in mid-2025 showed over half (55%) failed at basic vulnerability research, while a follow-up study published this month showed all models excelled on this front.
Notably, the use of AI to exploit vulnerabilities also improved significantly. As ITPro reported this week, Forescout’s VP of research Daniel dos Santos warned this could prompt an explosion of vulnerabilities in the near future.
While the study shows AI is raising the stakes for defenders, Ferguson said it also highlighted changing cyber criminal approaches to the technology.
Speaking during a roundtable session at the company’s Vedere Labs research hub in Eindhoven, Ferguson said the cyber criminal community is essentially going “mainstream” when it comes to AI adoption.
Previously, hackers had flocked to dedicated underground LLMs such as WormGPT when using AI for nefarious purposes. Yet popular models by mainstream providers are now front of mind for many.
“When it comes to the criminal community, the behaviour there is changing around AI,” Ferguson said. “We used to talk about … underground LLMs and criminal AI offerings. Things like WormGPT was definitely one that got a lot of coverage.”
Ferguson added that these have now been “largely abandoned” and replaced with mainstream models.
“Commercial models have largely replaced them using either jailbreaks, so a wrapper around a commercial model including a jailbreak, or local open source deployments of AI models being sold, or stolen subscriptions from other people are also commonly traded on underground forums,” he explained.
Ferguson pointed to Forescout research which found Anthropic’s Claude model is now ranked a “preferred tool” for threat actors. Observations from posts on underground forums showed Claude is highly sought-after, with newer ChatGPT models losing traction due to stronger guardrails.
Both of the aforementioned providers are aware of – and cracking down on – misuse of their AI tools by cyber criminals.
In September last year, Anthropic warned that hackers had “weaponized” its AI tools to wage cyber attacks against organizations across a range of industries, banning accounts in response.
OpenAI, meanwhile, released a similar report in late 2024 detailing how cyber criminals were using its solutions to cause havoc. The company said at the time it had introduced new guardrails to prevent misuse and had disrupted 20 operations using the chatbot for criminal purposes.
Coming around to AI
Perception of AI and its potential benefits among cyber criminals has also been changing, Ferguson noted. As with mainstream enterprise adoption, initial skepticism has given way to adoption of the technology.
The Forescout exec noted that, last year, conversations on criminal forums were “mostly laughing at AI and telling people not to use AI”. This has all changed as the technology has advanced, however.
“The opposite is now true,” he told assembled media. “AI is recommended, and more experienced forum members are offering this knowledge transfer, skills transfer, not just recommending to use it, but recommending which one to use, how to use it, [and] offering tutorials.”
“It has become a standard part of the attacker toolkit, and Claude is very much the preferred model used by attackers currently.”
Several studies from cybersecurity firms over the last 18 months have highlighted the growing use, and popularity, of AI in criminal operations. Trend Micro, for example, found threat actors were using these tools to summarize threat intelligence reports and reverse engineer malware.
Research from Kaseya in March also warned that “AI-generated phishing became the baseline” for hackers in 2025, with criminals using the technology to curate more convincing phishing emails.
Raising the stakes for defenders
The use of AI by threat actors, particularly agentic AI, raises serious questions over operational capability and readiness for cybersecurity professionals, Ferguson noted.
AI-powered and agentic support for cyber crime activities increases both velocity and scale, and statistics from Forescout back this up. The median time to hand-off by initial access brokers (IABs) once inside a network now stands at 22 seconds, for example.
In 2022, this stood at over eight hours, and Ferguson noted that the hand-off is now fully automated.
Agentic AI also adds a new dynamic in terms of continuous risks. AI agents don’t take lunch breaks and don’t sleep – meaning enterprises in the future could essentially face non-stop threats.
“AI is not constrained by the way we consider the world. So all of the things that we understand about how do I get attacked, how do attacks happen, may not apply going forwards,” he said. “It’s not just about speed, scale, and access, we actually have to rethink how we classify and understand attacks.”
“We saw it in very early days [with] voice cloning, deepfakes, phishing stuff, but we’re [now] seeing autonomous reconnaissance work happening, we’re seeing autonomous lateral movement already happening, [the] matching of vulnerabilities to live targets in real time,” Ferguson added.
“And of course, no human in the loop means no lunch breaks.”
This last aspect of AI in offensive cyber operations has a direct impact on attribution, Ferguson noted.
Historically, security researchers could theorize about the source of an attack based on the time of day a particular incident occurred. Automation and the potential for continuous, on-demand attacks will make attribution like this far more difficult in the future.
“You want to know where the attack is coming from? Look at what time does it start, what time does it end, and where’s the lunch break? Oh, that fits the China time zone, that fits the Russia time zone,” he noted. “It was one of the indicators that we could use.”
“If it becomes 24/7, 365, not only is that much more difficult to defend against, it’s actually much more difficult to attribute using those characteristics.”
Playing by the rules
Looking ahead, defenders and attackers will be pitting agents against each other, and Ferguson noted it’s already happening in some instances.
Organizations are using agents to automate device isolation and quarantine practices, for example, while bots are being used in threat hunting and asset monitoring.
The challenging part, however, comes in terms of usage policies. Defenders work within tightly regulated boundaries, and rightly so. Attackers don't play by the rules in this regard, and that will be the challenge moving forward.
“It’s not an unbalanced equation,” he said. “We are all using it. The big thing for us as defenders, and every practitioner out there, is that we have to justify everything we do.
“We actually care if something goes wrong. Attackers don’t care if something goes wrong, they just move on.”

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.