Enterprises are adopting agents faster than they can secure and govern them – experts warn it’s a disaster waiting to happen
Identity systems developed for human interaction fail to cope with the new demands
The use of AI agents is spiralling out of control, as they're rushed into deployment more quickly than organizations can govern them.
According to new research from Ping Identity, identity systems originally designed for human interaction are now being pushed to operate continuously.
The firm warned this is placing huge strain on existing models while creating gaps in governance, visibility, and accountability at the point when decisions are executed.
“Enterprises are deploying autonomous AI faster than they can govern it,” warned Andre Durand, CEO and founder of Ping Identity. “Identity remains foundational, but in an agentic environment it must operate continuously. Control must be enforced at the moment an action occurs.”
Researchers noted that agents are creating a new class of identity risk in environments where they operate autonomously across enterprise systems.
By combining individually legitimate permissions in unintended ways, they can generate actions that bypass controls and that can't be fully traced or governed.
Some of the biggest challenges include a lack of visibility when it comes to delegation, along with sub-agent spawning, where agent chains become untraceable and break auditability.
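The auditability problem can be illustrated with a minimal sketch. This is a hypothetical example, not taken from the Ping Identity research: each agent mints a fresh credential for its sub-agent that records only the immediate parent, so after a single hop the chain back to the originating human principal is gone.

```python
from uuid import uuid4

def spawn(parent_credential: dict, task: str) -> dict:
    """Spawn a sub-agent credential that records only its direct parent."""
    return {
        "subject": uuid4().hex,  # opaque machine identity
        "task": task,
        # Only the immediate parent is carried forward; the original
        # human principal and any intermediate hops are not recorded.
        "delegated_by": parent_credential["subject"],
    }

# A human principal delegates a task, which is delegated again.
root = {"subject": "alice", "task": "close-quarter", "delegated_by": None}
hop1 = spawn(root, "reconcile-ledger")
hop2 = spawn(hop1, "export-records")

# An auditor inspecting hop2 sees only hop1's opaque ID;
# "alice" appears nowhere in the credential.
assert hop2["delegated_by"] == hop1["subject"]
assert "alice" not in hop2.values()
```

Reconstructing who ultimately authorised hop2's actions requires correlating logs across every intermediate agent, which is exactly the visibility gap the researchers describe.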
Meanwhile, AI agents are bypassing human decision-makers in ways that aren't planned for in OAuth and OIDC models.
Other issues include context leakage across systems where there's no continuous re-evaluation of authorization, and new questions around permission inheritance, liability, and enforcement in agent-to-agent interactions.
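The distinction between static and continuous authorization can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not Ping Identity's implementation: a static model checks scopes once, when a token is issued, while a continuous model consults current policy at the moment each agent action executes.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AgentToken:
    agent_id: str
    scopes: set
    issued_at: float = field(default_factory=time.time)

# Static model: scopes were fixed at issuance, so any action within
# them later succeeds with no further evaluation.
def static_authorize(token: AgentToken, action: str) -> bool:
    return action in token.scopes

# Continuous model: current policy is consulted per action, so
# revocations and context changes take effect immediately.
def continuous_authorize(token: AgentToken, action: str, policy: dict) -> bool:
    allowed = policy.get(token.agent_id, set())
    return action in allowed

# Current policy no longer grants the payment permission...
policy = {"agent-1": {"read:invoices"}}
# ...but the agent still holds a token issued when it did.
token = AgentToken("agent-1", {"read:invoices", "pay:invoices"})

assert static_authorize(token, "pay:invoices")          # stale grant still works
assert not continuous_authorize(token, "pay:invoices", policy)  # blocked at action time
```

The gap between the two checks is the window in which an autonomous agent can act on permissions nobody would grant it today.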
IAM strategies need overhauling
Despite these risks, most identity and access management (IAM) approaches remain centered on human users and static access decisions, leaving organizations unprepared to govern autonomous systems.
“These trends reflect a broader shift in identity requirements,” said Martin Kuppinger, founder of KuppingerCole Analysts, which carried out the research.
“As autonomous agents become more prevalent, organizations will need to extend identity and authorization models to maintain control, accountability, and trust across increasingly dynamic environments.”
According to IBM’s 2025 Cost of a Data Breach report, 13% of organizations have experienced AI-related security breaches, and 97% lack adequate access controls for AI systems.
The Ping Identity report echoes recent research from SANS, which found that non‑human and AI identities are multiplying faster than organizations can secure them.
More than three-quarters of organizations said they were seeing growth in the use of non-human identities (NHIs) such as service accounts, API keys, automation bots, and workload identities – but that governance was failing to keep pace.
Among security leaders whose organizations already use AI agents that require credentials, 5% told the SANS researchers they didn’t even know whether agentic AI was running in their environment.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
