Agentic AI could be a blessing and a curse for cybersecurity
A new report warns that hackers using agentic AI systems could revolutionize the global threat landscape
Agentic AI systems will “further revolutionize cyber criminal tactics,” according to new research from Malwarebytes.
In its 2025 State of Malware report, the security firm warned that businesses need to be prepared for AI-powered ransomware attacks. The firm specifically highlighted the threat posed by malicious AI agents that can reason, plan, and use tools autonomously.
The report claimed that up until this point, the impact of generative AI tools on cyber crime has been relatively limited. This is not because they cannot be used offensively, however. There have been notable examples of generative AI being used to generate phishing content and even produce exploits in limited cases.
But for the most part, these tools have been used offensively to increase the efficiency of attacks rather than to introduce new capabilities or alter the underlying tactics used by hackers.
But this could all be about to change in 2025, according to Malwarebytes, which argued that agentic AI could help attackers to not only scale up the volume and efficiency of their attacks, but also strategize on how to compromise victims.
“With the expected near-term advances in AI, we could soon live in a world where well-funded ransomware gangs use AI agents to attack multiple targets at the same time,” Malwarebytes warned.
“Malicious AI agents might also be tasked with searching out and compromising vulnerable targets, running and fine-tuning malvertising campaigns, or determining the best method for breaching victims.”
Use of offensive agentic AI could be years away
That isn’t to say agentic AI does not have defensive applications, and Malwarebytes noted that agentic AI could be used to address cybersecurity skills gaps that plague the industry.
As these systems become more capable, security teams will increasingly be able to hand off parts of their workload to autonomous agents that can act on them with minimal oversight.
“It is not far-fetched to imagine agents being tasked with looking out for supply-chain vulnerabilities, keeping a running inventory of internet-facing systems and ensuring they’re patched, or monitoring a network overnight and responding to suspicious EDR alerts,” the report argued.
ReliaQuest, which claimed to have launched the first autonomous AI security agent in September 2024, recently said its agent is capable of processing security alerts 20 times faster than traditional methods with 30% greater accuracy at picking out genuine threats.
Speaking to ITPro, Sohrob Kazerounian, distinguished AI researcher at AI security specialists Vectra AI, acknowledged the efficiency increases generative AI has already unlocked for threat actors, but agreed the more interesting shift will come in the future as they experiment with AI agents.
“In the near term, we will see attackers focus on trying to refine and optimize their use of AI. This means using generative AI to research targets and carry out spear phishing attacks at scale. Furthermore, attackers, like everyone else, will increasingly use generative AI as a means of saving time on their own tedious and repetitive actions,” he explained.
“But, the really interesting stuff will start happening in the background, as threat actors begin experimenting with how to use LLMs to deploy their own malicious AI agents that are capable of end-to-end autonomous attacks.”
But Kazerounian said the reality of cyber criminals integrating AI agents into their operations is still years away, as it will require a significant amount of fine-tuning and troubleshooting before these systems reach true efficacy.
“While threat actors are already in the experimental phase, testing how far agents can carry out complete attacks without requiring human intervention, we are still a few years away from seeing these types of agents being reliably deployed and trusted to carry out actual attacks,” he argued.
“While such a capability would be hugely profitable in terms of time and cost of attacking at scale, autonomous agents of this sort would be too error-prone to trust on their own.”
Regardless, Kazerounian said the industry should be getting ready for this eventuality, as it will require significant changes to the traditional approach to threat detection.
“Nevertheless, in the future we expect threat actors will create Gen AI agents for various aspects of an attack – from research and reconnaissance, flagging and collecting sensitive data, to autonomously exfiltrating that data without the need for human guidance. Once this happens, without signs of a malicious human on the other end, the industry will need to transform how it spots the signs of an attack.”

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.