Agentic AI could be a blessing and a curse for cybersecurity
A new report warns hackers using agentic AI systems could revolutionize the global threat landscape


Agentic AI systems will “further revolutionize cyber criminal tactics,” according to new research from Malwarebytes.
In its 2025 State of Malware report, the security firm warned that businesses need to be prepared for AI-powered ransomware attacks. The firm specifically highlighted the threat posed by malicious AI agents that can reason, plan, and use tools autonomously.
The report claimed that up until this point, the impact of generative AI tools on cyber crime has been relatively limited. This is not because they cannot be used offensively, however. There have been notable examples of generative AI being used to generate phishing content and even produce exploits in limited cases.
But in the main, their use for offensive purposes has been in increasing the efficiency of attacks rather than introducing new capabilities or altering the underlying tactics used by hackers.
This could all be about to change in 2025, however, according to Malwarebytes, which argued that agentic AI could help attackers not only scale up the volume and efficiency of their attacks, but also strategize on how to compromise victims.
“With the expected near-term advances in AI, we could soon live in a world where well-funded ransomware gangs use AI agents to attack multiple targets at the same time,” Malwarebytes warned.
“Malicious AI agents might also be tasked with searching out and compromising vulnerable targets, running and fine-tuning malvertising campaigns, or determining the best method for breaching victims.”
Use of offensive agentic AI could be years away
That isn’t to say agentic AI lacks defensive applications, and Malwarebytes noted that the technology could help address the cybersecurity skills gaps that plague the industry.
As these systems become more capable, security teams will increasingly be able to hand off parts of their workload to autonomous agents that can act on them with minimal oversight.
“It is not far-fetched to imagine agents being tasked with looking out for supply-chain vulnerabilities, keeping a running inventory of internet-facing systems and ensuring they’re patched, or monitoring a network overnight and responding to suspicious EDR alerts,” the report argued.
ReliaQuest, which claimed to have launched the first autonomous AI security agent in September 2024, recently said its agent is capable of processing security alerts 20 times faster than traditional methods with 30% greater accuracy at picking out genuine threats.
Speaking to ITPro, Sohrob Kazerounian, distinguished AI researcher at AI security specialists Vectra AI, acknowledged the efficiency increases generative AI has already unlocked for threat actors, but agreed the more interesting shift will come in the future as they experiment with AI agents.
“In the near term, we will see attackers focus on trying to refine and optimize their use of AI. This means using generative AI to research targets and carry out spear phishing attacks at scale. Furthermore, attackers, like everyone else, will increasingly use generative AI as a means of saving time on their own tedious and repetitive actions,” he explained.
“But, the really interesting stuff will start happening in the background, as threat actors begin experimenting with how to use LLMs to deploy their own malicious AI agents that are capable of end-to-end autonomous attacks.”
But Kazerounian said the reality of cyber criminals integrating AI agents into their operations is still years away, as it will require a significant amount of fine-tuning and troubleshooting before these systems reach true efficacy.
“While threat actors are already in the experimental phase, testing how far agents can carry out complete attacks without requiring human intervention, we are still a few years away from seeing these types of agents being reliably deployed and trusted to carry out actual attacks,” he argued.
“While such a capability would be hugely profitable in terms of time and cost of attacking at scale, autonomous agents of this sort would be too error-prone to trust on their own.”
Regardless, Kazerounian said the industry should be getting ready for this eventuality, as it will require significant changes to the traditional approach to threat detection.
“Nevertheless, in the future we expect threat actors will create Gen AI agents for various aspects of an attack – from research and reconnaissance, flagging and collecting sensitive data, to autonomously exfiltrating that data without the need for human guidance. Once this happens, without signs of a malicious human on the other end, the industry will need to transform how it spots the signs of an attack.”

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.