Agentic AI carries huge implications for security teams - here's what leaders should know
AI agents should be subject to the same scrutiny as any internal user
The odds that you've not heard the terms ‘agentic AI’ or ‘AI agents’ in the last 12 months are low. The approach, which sees AI models perform tasks autonomously, has become one of the most sought-after force multipliers in the tech sector.
The idea of ‘intelligent agents’ within the field of AI dates back to the 1990s, and “agentic AI” was already a term being used by AI enthusiasts in 2018. It wasn’t until an X post published at the start of 2024 by Andrew Ng, co-founder of Coursera, however, that the term began to gain real traction.
“AI agentic workflows will drive massive AI progress [in 2024] — perhaps even more than the next generation of foundation models,” Ng wrote in his post. “This is an important trend, and I urge everyone who works in AI to pay attention to it.”
Fast forward 19 months, and the latest Technology Pulse Poll from Ernst & Young LLP, which surveyed more than 500 technology leaders, found that 48% of companies had already deployed agentic AI. Conducted in April 2025, the poll reveals not only an upsurge in interest around agentic AI but also the speed at which it’s being adopted, with 50% of respondents saying that more than half of all AI deployments in the next 24 months will be autonomous. And while the pace of agentic AI adoption delivers benefits, it has also introduced a new layer of threats.
"Agentic AI shifts the paradigm from ‘AI as a tool’ to ‘AI as a teammate.’ And that AI teammate is equipped to act independently based on its goals,” Emanuela Zaccone, staff product manager in AI for Cybersecurity at Sysdig, tells ITPro. “While we’re still just beginning to scratch the surface of all the benefits agentic AI can afford to us, it also introduces new security risks.”
Agentic AI brings new risks
Giving AI tools the scope to act independently creates a new set of challenges for companies to consider, given the unprecedented autonomy being granted to these systems.
“While they are trained for specific scenarios, AI agents can also make decisions, remember context and call in external tools,” explains Andre Baptista, an ethical hacker and co-founder of Ethiack. “Their ability to access sensitive data, execute unintended actions, or traverse trust boundaries without human intervention may create additional attack surfaces for threat actors to exploit.”
Baptista also believes there is a significant privacy risk if the data used to train the underlying models is collected with insufficient user consent or anonymization.
“Agentic AI is also increasing the percentage of code generated and deployed autonomously, and this may ultimately increase the risk of data leakage, unauthorised access or misaligned objectives,” Baptista tells ITPro. “These risks are serious in their own right, but they’re made more acute because many legacy security systems were never designed to manage them."
Andrey Slastenov, head of security at Gcore, agrees, explaining that agentic AI has a “radical impact” on your company’s cyber threat vectors, whilst also introducing new ways for attackers to operate.
"AI agents automate processes and improve efficiency but also introduce new vulnerabilities that can be exploited by attackers,” Slastenov explains. “With AI-driven attacks such as automated hacking and malicious code generation growing more sophisticated, it's crucial for companies to evolve their security strategies to keep pace. Securing AI agents requires moving beyond static defenses to more dynamic, AI-enhanced systems.”
Going beyond code and infrastructure
Some security experts believe that without visibility into where agentic AI systems operate, companies cannot establish proper controls over what data is being accessed and used.
“This not only complicates things for companies trying to gain customer trust with transparency and accountability, but also puts a major strain on existing regulatory frameworks struggling to keep up,” Greg Notch, chief security officer at Expel tells ITPro. “Further, what data is accessed, when, and by whom, is made all the more complicated when fleets of AI agents are involved. When these “digital assistants” can act on someone’s behalf, identity governance and access management become all the more critical.”
It gets more complicated. Agentic AI systems can act on behalf of LLMs – which might themselves be directed by other agents – making accountability, visibility and decision-making increasingly difficult.
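One illustrative way to keep that chain traceable – a rough sketch rather than anything the experts quoted here prescribe – is to attach a record of the human principal and every intermediate agent to each action, so nothing runs under an anonymous shared identity. The agent names and identifiers below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DelegationRecord:
    """Attribution for one agent action: the human principal and the chain of agents."""
    human_principal: str          # the accountable person who initiated the work
    agent_chain: list[str]        # every agent the request passed through, in order
    action: str                   # the operation the final agent is about to perform
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_action(principal: str, agent_chain: list[str], action: str) -> DelegationRecord:
    # Each downstream agent appends itself to the chain rather than acting
    # under a shared, anonymous service identity.
    rec = DelegationRecord(principal, list(agent_chain), action)
    print(f"[{rec.timestamp}] {rec.human_principal} -> {' -> '.join(rec.agent_chain)}: {rec.action}")
    return rec

# Hypothetical example: a user's assistant delegates to an email agent
record_action("j.smith@example.com", ["assistant-agent", "email-agent"], "send_summary_report")
```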
“Agentic AI-powered security solutions aren’t just about protecting code or infrastructure; they’re about protecting decision-making itself,” explains Zaccone. “If not built with the right guardrails and the best datasets, agentic AI could make unauthorized choices, misinterpret context, or be manipulated by threat actors. The stakes are always high in cybersecurity, but they’re even higher now."
Rik Ferguson, VP of security intelligence at Forescout, says it’s vital to have full visibility into what AI agents are connected to, what data they can access, and which actions they are authorized to take.
“Security teams should treat AI agents like they would any third-party tool or internal user,” Ferguson says. “That means clearly defined permissions, monitored activity, and strong boundaries. Without that, you are opening the door to prompt injection, credential misuse, data leaks and supply chain compromises."
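In practice, treating an agent like an internal user can start with a deny-by-default allowlist sitting between the agent and its tools. The sketch below is purely illustrative – the agent and tool names are invented – but it shows the shape of the control Ferguson describes.

```python
# A minimal, deny-by-default allowlist; agent and tool names here are invented examples.
AGENT_PERMISSIONS = {
    "support-agent": {"search_kb", "draft_reply"},
    "finance-agent": {"read_invoice"},
}

class PermissionDenied(Exception):
    pass

def call_tool(agent_id: str, tool_name: str, run_tool, *args, **kwargs):
    """Execute a tool only if this specific agent is explicitly allowed to use it."""
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if tool_name not in allowed:
        # An unlisted agent/tool pair is treated as a policy violation, not a default grant.
        raise PermissionDenied(f"{agent_id} is not authorised to call {tool_name}")
    return run_tool(*args, **kwargs)

# e.g. call_tool("support-agent", "draft_reply", draft_reply_fn, ticket_id=42)
```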
Baking in accountability
Without the correct procedures in place for accountability and decision-making, things can go awry very quickly. Jeff Schuman, head of brand at Mimecast, has first-hand experience of what can occur when agentic AI goes wrong.
"I’ve seen what happens when an AI agent becomes compromised,” Schuman says. “It started small, just responding to queries like any other assistant. But because it appeared trusted, systems kept handing it more access. Eventually, it was quietly collecting sensitive information and sending it out, no alarms, no suspicion, just quiet exploitation wrapped in a veneer of legitimacy.”
Some AI agents are designed to explain their activity if asked by an IT admin, or even to produce documentation detailing each step they’ve taken. But agents can also make decisions for opaque reasons and, even with oversight, may introduce unexpected risks.
A prompt doesn’t always look like an attack vector, but that’s exactly what it can be, says Camden Woollven, group head of AI product marketing at GRC International Group. “All it takes is a carefully worded input, and suddenly the agent’s sharing credentials or pushing data where it shouldn’t go – and no one spots it because there’s no proper audit trail,” Woollven tells ITPro.
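One way to close that audit gap – sketched here as a hypothetical illustration, not a tool Woollven recommends – is to write an append-only record of every agent action before it runs, so unexpected destinations and data flows leave a trail that can be reviewed.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit.jsonl"   # hypothetical append-only log file

def audit(agent_id: str, action: str, detail: dict) -> None:
    """Append a structured record of an agent action before the action is executed."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "detail": detail,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")

# e.g. log before an agent exports data, so unexpected destinations stand out on review
audit("assistant-agent", "export_data", {"destination": "sftp://partner.example.com"})
```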
“Most companies don’t even know which agents have access to what,” Woollven says. He adds that by allowing agents to act on behalf of users, but without any of the usual controls, leaders leave the systems open to abuse. This is a view shared by Inesa Dagyte, head of information security at Oxylabs, who believes human accountability must be baked in as standard.
"Deploying an AI agent to work unattended is not just technically reckless due to its potential for error and hallucination; it is organizationally and ethically dangerous,” Dagyte tells ITPro. “It can create the perfect accountability black hole, where a machine, which cannot be held responsible, is blamed for failures, allowing human decision-makers to escape the consequences
“A human must make the final, high-impact decisions,” Dagyte continues. “This is non-negotiable, not only because a person can supply business context that an AI will never understand, but because proper security requires accountability, and accountability requires a person, not a process."

Dan Oliver is a writer and B2B content marketing specialist with years of experience in the field. In addition to his work for ITPro, he has written for brands including TechRadar, T3 magazine, and The Sunday Times.