CrowdStrike says AI is officially supercharging cyber attacks: Average breakout times hit just 29 minutes in 2025, 65% faster than in 2024 – and some attacks take just seconds
Cyber criminals are actively exploiting AI systems and injecting malicious prompts into legitimate generative AI tools
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
You are now subscribed
Your newsletter sign-up was successful
AI is expanding enterprise attack surfaces at alarming speed, according to new research from CrowdStrike, with AI-enabled attacks surging 89% over the last year and systems now a top target for cyber criminals.
Findings from CrowdStrike's 2026 Global Threat Report show cyber criminals are actively exploiting AI systems themselves, injecting malicious prompts into legitimate generative AI tools at more than 90 organizations to generate commands for credential and cryptocurrency theft.
Threat actors have also been observed exploiting vulnerabilities in AI development platforms to establish persistence and deploy ransomware, while others have established malicious AI servers impersonating trusted services to intercept sensitive data.
"As AI is embedded into development pipelines, SaaS platforms, and operational workflows, AI systems themselves become part of the attack surface," said CrowdStrike CEO George Kurtz.
"Adversaries exploited legitimate AI tools by injecting malicious prompts that generated unauthorized commands. As innovation accelerates, exploitation follows."
Prompt injection attacks gain traction
Researchers warned that cyber criminals are increasingly experimenting with prompt injection techniques to interfere with AI-enabled security workflows.
In one case, hackers embedded hidden prompt content within a phishing email to confuse or disrupt AI-based email triage, making it more likely that the message would evade detection.
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
"Though these techniques have not yet demonstrated consistent effectiveness at scale, they illustrate how attackers may seek to manipulate AI systems indirectly by targeting their inputs rather than exploiting the systems themselves," the researchers warned.
Attacks speeds are surging
AI is also speeding up attacks, CrowdStrike found, with the average breakout time falling to just 29 minutes in 2025, 65% faster than in 2024.
The fastest observed breakout took just 27 seconds, and in one case data exfiltration began within four minutes of initial access.
“This is an AI arms race. Breakout time is the clearest signal of how intrusion has changed. Adversaries are moving from initial access to lateral movement in minutes,” said Adam Meyers, head of counter adversary operations at CrowdStrike.
“AI is compressing the time between intent and execution, while turning enterprise AI systems into targets. Security teams must operate faster than the adversary to win.”
State-sponsored hackers are getting in on the act
Notably, the use of AI among state-sponsored hackers surged by 89%. CrowdStrike warned that the Russian state-linked group Fancy Bear was observed deploying LLM-enabled malware last year to automate reconnaissance and document collection.
Punk Spider, the group behind Akira ransomware, was also observed using AI-generated scripts to accelerate credential dumping and erase forensic evidence.
Meanwhile, North Korea-linked incidents rose by more than 130%. Activity by Famous Chollima more than doubled, with the group using AI-generated personas to scale insider operations – a common tactic employed by North Korean-linked groups over the last two years.
FOLLOW US ON SOCIAL MEDIA
Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.
You can also follow ITPro on LinkedIn, X, Facebook, and BlueSky.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
-
Using AI to generate passwords is a terrible idea, experts warnNews Researchers have warned the use of AI-generated passwords puts users and businesses at risk
-
Harnessing AI to secure the future of identityIndustry Insights Channel partners must lead on securing AI identities through governance and support
-
‘They are able to move fast now’: AI is expanding attack surfaces – and hackers are looking to reap the same rewards as enterprises with the technologyNews Potent new malware strains, faster attack times, and the rise of shadow AI are causing havoc
-
CISA’s interim chief uploaded sensitive documents to a public version of ChatGPT – security experts explain why you should never do thatNews The incident at CISA raises yet more concerns about the rise of ‘shadow AI’ and data protection risks
-
AI is “forcing a fundamental shift” in data privacy and governanceNews Organizations are working to define and establish the governance structures they need to manage AI responsibly at scale – and budgets are going up
-
Supply chain and AI security in the spotlight for cyber leaders in 2026News Organizations are sharpening their focus on supply chain security and shoring up AI systems
-
Trend Micro issues warning over rise of 'vibe crime' as cyber criminals turn to agentic AI to automate attacksNews Trend Micro is warning of a boom in 'vibe crime' - the use of agentic AI to support fully-automated cyber criminal operations and accelerate attacks.
-
NCSC issues urgent warning over growing AI prompt injection risks – here’s what you need to knowNews Many organizations see prompt injection as just another version of SQL injection - but this is a mistake


