CrowdStrike says AI is officially supercharging cyber attacks: Average breakout times hit just 29 minutes in 2025, 65% faster than in 2024 – and some attacks take just seconds

Cyber criminals are actively exploiting AI systems and injecting malicious prompts into legitimate generative AI tools

Cyber attack concept image showing orange glowing alert symbol imposed over digital systems mesh.
(Image credit: Getty Images)

AI is expanding enterprise attack surfaces at alarming speed, according to new research from CrowdStrike, with AI-enabled attacks surging 89% over the last year and systems now a top target for cyber criminals.

Findings from CrowdStrike's 2026 Global Threat Report show cyber criminals are actively exploiting AI systems themselves, injecting malicious prompts into legitimate generative AI tools at more than 90 organizations to generate commands for credential and cryptocurrency theft.

Threat actors have also been observed exploiting vulnerabilities in AI development platforms to establish persistence and deploy ransomware, while others have established malicious AI servers impersonating trusted services to intercept sensitive data.

"As AI is embedded into development pipelines, SaaS platforms, and operational workflows, AI systems themselves become part of the attack surface," said CrowdStrike CEO George Kurtz.

"Adversaries exploited legitimate AI tools by injecting malicious prompts that generated unauthorized commands. As innovation accelerates, exploitation follows."

Prompt injection attacks gain traction

Researchers warned that cyber criminals are increasingly experimenting with prompt injection techniques to interfere with AI-enabled security workflows.

In one case, hackers embedded hidden prompt content within a phishing email to confuse or disrupt AI-based email triage, making it more likely that the message would evade detection.

"Though these techniques have not yet demonstrated consistent effectiveness at scale, they illustrate how attackers may seek to manipulate AI systems indirectly by targeting their inputs rather than exploiting the systems themselves," the researchers warned.

Attacks speeds are surging

AI is also speeding up attacks, CrowdStrike found, with the average breakout time falling to just 29 minutes in 2025, 65% faster than in 2024.

The fastest observed breakout took just 27 seconds, and in one case data exfiltration began within four minutes of initial access.

“This is an AI arms race. Breakout time is the clearest signal of how intrusion has changed. Adversaries are moving from initial access to lateral movement in minutes,” said Adam Meyers, head of counter adversary operations at CrowdStrike.

AI is compressing the time between intent and execution, while turning enterprise AI systems into targets. Security teams must operate faster than the adversary to win.”

State-sponsored hackers are getting in on the act

Notably, the use of AI among state-sponsored hackers surged by 89%. CrowdStrike warned that the Russian state-linked group Fancy Bear was observed deploying LLM-enabled malware last year to automate reconnaissance and document collection.

Punk Spider, the group behind Akira ransomware, was also observed using AI-generated scripts to accelerate credential dumping and erase forensic evidence.

Meanwhile, North Korea-linked incidents rose by more than 130%. Activity by Famous Chollima more than doubled, with the group using AI-generated personas to scale insider operations – a common tactic employed by North Korean-linked groups over the last two years.

FOLLOW US ON SOCIAL MEDIA

Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.

You can also follow ITPro on LinkedIn, X, Facebook, and BlueSky.

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.