Agentic AI poses major challenge for security professionals, says Palo Alto Networks’ EMEA CISO
Runtime security and employee oversight are necessary to achieve success with AI agents, according to Haider Pasha


Agentic AI projects are likely to fail at a rate far higher than currently predicted and present a major challenge to cybersecurity operations, according to an information security expert.
Haider Pasha, EMEA CISO at Palo Alto Networks, told ITPro that the benefits of agentic AI would be outweighed by the risks if chief information security officers (CISOs) don’t employ strict strategic and technical controls over the technology’s deployment.
Gartner predicts 40% of agentic AI projects will fail by 2027 and Pasha said this wasn’t surprising.
“I actually think it’s low, personally I think it's going to be a lot higher because, like gen AI that we saw from the MIT report, I think a couple of years from now we're going to see a lot more agentic projects that will fail.
“Because the governance, and the security, the tools, the processes and the mindset shift towards really controlling what that system is supposed to do, all of those things will not necessarily be looked after.”
This is partly due to persistent confusion over the exact definition of agentic AI, Pasha explained, with executive interest in the technology being driven by internet hype or newspapers rather than clear business use cases.
He pointed to findings that 93% of enterprises are looking to adopt agent-based systems, per MuleSoft and Deloitte Digital’s 2025 Connectivity Benchmark Report, as evidence of this interest.
Early evidence suggests that agentic AI could deliver tangible benefits for businesses, with Capgemini having recently projected the technology could deliver $450 billion in economic value.
But Pasha cautioned businesses against diving headfirst into agentic AI adoption without a clear idea of how it could benefit them or what impact it would have on their cybersecurity.
“I think, as CISOs, we need to do a better job of removing the hype and really un-teaching and teaching the board about what cybersecurity and cyber resilience really means in the age of agentic AI, generative AI, and so on,” Pasha said.
Organizations looking to deploy AI agents at scale will need to secure them at runtime, he advised, with security leaders examining agentic dependencies such as API calls, MCP connections, and SDKs.
“What you ultimately need to do, at a behavioral level, is treat it like an intern,” Pasha told ITPro.
“That’s the simplest way I’m explaining it to my peers: I’m saying ‘Look, treat it like an intern. What level of privileges would you give an intern? How do you secure the identity, the access, the device that’s being used, the workload it can access, the tools it can call?’”
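The "treat it like an intern" principle amounts to deny-by-default, least-privilege scoping of an agent's identity. A minimal sketch of what that could look like in code, assuming a hypothetical policy object (the names `AgentPolicy` and `is_allowed` are illustrative, not any vendor's API):

```python
# Hypothetical sketch: scoping an AI agent's privileges like an intern's.
# Every tool call, API call, and spend must be explicitly permitted.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Least-privilege profile bound to a single agent identity."""
    agent_id: str
    allowed_tools: frozenset = frozenset()
    allowed_apis: frozenset = frozenset()
    max_spend_usd: float = 0.0  # hard cap on financially impactful actions

def is_allowed(policy, tool=None, api=None, spend_usd=0.0):
    """Deny by default: anything not explicitly granted is refused."""
    if tool is not None and tool not in policy.allowed_tools:
        return False
    if api is not None and api not in policy.allowed_apis:
        return False
    return spend_usd <= policy.max_spend_usd

# An "intern-level" agent that may search flights but never book or spend.
intern = AgentPolicy(
    agent_id="booking-agent-01",
    allowed_tools=frozenset({"search_flights"}),
    allowed_apis=frozenset({"flights.example.com"}),
    max_spend_usd=0.0,
)

print(is_allowed(intern, tool="search_flights"))  # True
print(is_allowed(intern, tool="issue_refund"))    # False
```

The design choice mirrors conventional identity and access management: the enforcement point sits outside the agent, so even a compromised or confused agent cannot grant itself new privileges.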
This can be achieved at the network layer using firewalls or at the code level – with developer code within agents probed for vulnerabilities.
It’s this functionality that Palo Alto Networks sought through its recent acquisition of Protect AI, Pasha said, along with “red teaming on demand” in which runtime security analysis can be carried out on a tool while it’s still being coded.
AI agents are full of vulnerabilities
All of this will be necessary to overcome inherent limitations within agentic AI systems such as memory misuse, Pasha explained.
This is a term for an attack in which hackers with admin-level privileges to an agentic system poison its fundamental instructions.
For example, an attacker could alter an agentic flight booking system to always provide specific users flights for free, then book a chartered flight to Dubai without the system flagging the journey as suspicious.
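The flight-booking scenario can be illustrated with a toy sketch, assuming a simple key-value store stands in for the agent's long-term memory (all names here are invented for illustration; a real attack would target an actual agent memory backend):

```python
# Hypothetical illustration of "memory misuse": an attacker with admin-level
# access poisons an agent's persistent instructions so later bookings are free.

memory = {"pricing_rule": "charge standard fare"}

def book_flight(user, destination, fare):
    # The agent consults its long-term memory before every action.
    if memory["pricing_rule"] == "vip users fly free" and user in memory.get("vips", set()):
        fare = 0.0
    return {"user": user, "destination": destination, "charged": fare}

# The attacker rewrites the stored instruction and adds themselves as a "VIP".
memory["pricing_rule"] = "vip users fly free"
memory["vips"] = {"attacker"}

booking = book_flight("attacker", "Dubai", 4200.0)
print(booking["charged"])  # 0.0 -- the poisoned rule silently waives the fare

# A runtime guardrail that diffs memory against a trusted baseline can flag this.
BASELINE = {"pricing_rule": "charge standard fare"}
tampered = {k for k in BASELINE if memory.get(k) != BASELINE[k]}
print(sorted(tampered))  # ['pricing_rule']
```

The point of the sketch is that the booking itself looks legitimate to the agent; only an external integrity check on the memory store reveals the tampering.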
“I think things like that will actually happen, you will start to see things like memory misuse, tool misuse, or prompt injections,” he said.
“All the vulnerabilities that large language models have been having for the last two years embedded on top of the agentic actions, I think, is going to become a lot more complicated for CISOs to secure.”
Other risks Pasha identified include ‘objective drift’, in which agents gradually divert from their intended actions, and data vulnerabilities posed by shadow AI applications.
Palo Alto Networks found the average organization currently runs 66 generative AI applications. In response, the company has released AI Access Security, a next-generation firewall feature that allows administrators to identify and control user access to AI apps on a corporate network.
On a fundamental level, effective controls against these risks don’t require brand new technologies.
While both Gartner and Pasha note that leaders could look to police agents with yet more agents, he also pointed out that CISOs and field CISOs have been discussing how to secure cloud computing for 15 years, endpoints for 25 years, and data for even longer.
“The basics haven’t changed,” he said.
“The governance you build to do something like agentic AI actually doesn't change. You have to think about secure by design, you have to think about having a committee that can tell you what types of guardrails you should be putting in, in order to secure the wider ecosystem.”
And there’s evidence this ecosystem will only become larger and more complex in the coming years.
Pasha noted that machine identities already outnumber humans by more than 80:1, per CyberArk, and echoed Microsoft’s prediction that billions of agents will be deployed in the next few years.
“We’ve reached this inflection point when it comes to identity, which is the whole reason why we are in the process now of closing the acquisition of CyberArk, to help organizations focus on securing identity whether it’s a human, or if it’s a machine, or if it’s an agent,” he said.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.