Enterprises are worried about agentic AI security risks – Gartner says the answer is just adding more AI agents
Not content with deploying agents for frontline operations, some enterprises might double down with ‘guardian agents’ to monitor their bot-based workforces
With enterprises ramping up the use of AI agents, new research suggests many might turn to the technology itself to establish guardrails.
Analysis from Gartner shows ‘guardian agents’ will account for between 10% and 15% of the broader agentic AI market by 2030. These agents are designed specifically to support and mediate interactions with other AI agents, the consultancy explained.
“They function as both AI assistants, supporting users with tasks like content review, monitoring and analysis, and as evolving semi-autonomous or fully autonomous agents, capable of formulating and executing action plans as well as redirecting or blocking actions to align with predefined agent goals,” according to Gartner.
The rise of these guardian agents comes amid growing interest in agentic AI, the consultancy found.
In a poll of CIOs and IT leaders, 24% of respondents said they have already deployed “a few” AI agents, meaning fewer than a dozen. Just 4% revealed they’d deployed more than that, the survey found.
The poll found that 50% of respondents were currently researching or experimenting with the technology, however, underlining the growing interest among tech leaders. An additional 17% said they plan to deploy AI agents by the end of 2026.
Avivah Litan, VP Distinguished Analyst at Gartner, said the projected uptake of AI agents means many enterprises need to implement robust guardrails. With this in mind, deploying agents designed specifically for governance-related tasks could be the go-to approach.
“Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails,” Litan said.
“Guardian agents leverage a broad spectrum of agentic AI capabilities and AI-based, deterministic evaluations to oversee and manage the full range of agent capabilities, balancing runtime decision making with risk management.”
Agentic AI security threats are looming
According to Gartner polling, agents will likely be deployed across a wide range of business functions in the year ahead - particularly in areas such as IT, accounting, and human resources.
But while these agents are designed to support workers and drive productivity, there are key security considerations that tech leaders need to be wary of.
“As use-cases for AI agents continue to grow, there are several threat categories impacting them, including input manipulation and data poisoning, where agents rely on manipulated or misinterpreted data,” the consultancy said.
Credential hijacking was identified as a major threat faced by enterprises deploying AI agents, while agent interaction with “fake or criminal websites and sources” could result in poisoned actions.
“The rapid acceleration and increasing agency of AI agents necessitates a shift beyond traditional human oversight,” said Litan.
“As companies move towards complex multi-agent systems that communicate at breakneck speed, humans cannot keep up with the potential for errors and malicious activities.
“This escalating threat landscape underscores the urgent need for guardian agents, which provide automated oversight, control, and security for AI applications and agents.”
What CIOs need to consider when using ‘guardian agents’
Gartner said CIOs, security leaders, and AI practitioners should focus on three distinct types of ‘guardian agents’ to improve safety and security.
These include ‘reviewers’, which could be used to identify and review AI-generated outputs and content for “accuracy and acceptable use”.
‘Monitor’ agents are designed to observe and track AI and agentic actions on behalf of human workers, while ‘protectors’ can adjust or block actions and permissions during operations.
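As a purely illustrative sketch, the three roles Gartner describes could be wired together as a simple oversight pipeline. Every name and rule below is hypothetical, not drawn from Gartner or any vendor's product:

```python
# Hypothetical sketch of the three guardian-agent roles:
# a reviewer checks AI-generated output, a monitor records every
# action for audit, and a protector blocks disallowed actions.

BLOCKED_ACTIONS = {"delete_records", "transfer_funds"}  # illustrative policy

def reviewer(output: str) -> bool:
    """Review AI-generated content for acceptable use (toy rule)."""
    return "confidential" not in output.lower()

def monitor(log: list, agent: str, action: str) -> None:
    """Record each agent action so humans can audit it later."""
    log.append((agent, action))

def protector(action: str) -> bool:
    """Allow only actions that fall within the predefined goals."""
    return action not in BLOCKED_ACTIONS

audit_log: list = []
for agent, action, output in [
    ("billing-bot", "send_invoice", "Invoice #1042 attached"),
    ("hr-bot", "delete_records", "Purging employee files"),
]:
    monitor(audit_log, agent, action)
    if not protector(action):
        print(f"{agent}: action '{action}' blocked")
    elif not reviewer(output):
        print(f"{agent}: output flagged for review")
    else:
        print(f"{agent}: action '{action}' allowed")
```

In this toy setup the monitor always records, while the protector has the final say on whether an action runs, mirroring the distinction Gartner draws between passive observation and active blocking.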
“Guardian agents will manage interactions and anomalies no matter the usage type,” the consultancy said. “This is a key pillar of their integration, since Gartner predicts that 70% of AI apps will use multi-agent systems by 2028.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.