Enterprises are adopting agents faster than they can secure and govern them – experts warn it’s a disaster waiting to happen

Identity systems developed for human interaction fail to cope with the new demands


The use of AI agents is spiralling out of control, as they're rushed into deployment more quickly than organizations can govern them.

According to new research from Ping Identity, identity systems originally designed for human interaction are now being pushed to operate continuously.

The firm warned this is placing huge strain on existing models while creating gaps in governance, visibility, and accountability at the point when decisions are executed.

“Enterprises are deploying autonomous AI faster than they can govern it,” warned Andre Durand, CEO and founder of Ping Identity. “Identity remains foundational, but in an agentic environment it must operate continuously. Control must be enforced at the moment an action occurs.”

Researchers noted that agents are creating a new class of identity risk in environments where they operate autonomously across enterprise systems.

By combining individually legitimate permissions in unintended ways, they can generate actions that bypass controls and that can't be fully traced or governed.
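The risk described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not any vendor's actual IAM logic: a static per-action check approves each permission in isolation, so nothing flags the sequence that, taken together, exfiltrates data.

```python
# Hypothetical sketch: each permission check passes in isolation,
# but the combined sequence amounts to data exfiltration that no
# single check was designed to catch. Permission names are invented.

AGENT_PERMISSIONS = {"read:customer_records", "send:external_email"}

def is_allowed(action: str) -> bool:
    """Static, per-action check -- the model most IAM systems use today."""
    return action in AGENT_PERMISSIONS

# An autonomous agent chains two individually legitimate actions:
actions = ["read:customer_records", "send:external_email"]

# Every step is approved on its own...
assert all(is_allowed(a) for a in actions)

# ...yet nothing evaluates the combination: customer data read in
# step one can leave the organisation in step two. Catching this
# would require policy that reasons over sequences, not single grants.
```

The point is that the gap sits between grants, not inside any one of them, which is why per-permission auditing alone cannot trace it.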

Some of the biggest challenges include a lack of visibility when it comes to delegation, along with sub-agent spawning, where agent chains become untraceable and break auditability.

Meanwhile, AI agents are bypassing human decision-makers in ways that aren't planned for in OAuth and OIDC models.

Other issues include context leakage across systems where there's no continuous re-evaluation of authorization, and new questions around permission inheritance, liability, and enforcement in agent-to-agent interactions.
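One way to picture "continuous re-evaluation of authorization" is a check that runs at the moment of each action rather than once at session start. The sketch below is a minimal illustration under invented names and thresholds (`ContinuousAuthorizer`, a 60-second token lifetime, a placeholder policy), not a real OAuth or OIDC extension.

```python
import time

# Hypothetical sketch of continuous authorization: instead of trusting
# a token issued once at session start, policy is re-evaluated every
# time the agent attempts an action. All names here are illustrative.

class ContinuousAuthorizer:
    def __init__(self, max_token_age_s: float = 60.0):
        self.max_token_age_s = max_token_age_s
        self.revoked: set[str] = set()

    def authorize(self, agent_id: str, action: str,
                  token_issued_at: float) -> bool:
        # Re-check at the moment the action occurs, not just at login:
        if agent_id in self.revoked:
            return False  # revocation takes effect immediately
        if time.time() - token_issued_at > self.max_token_age_s:
            return False  # stale grants are not honoured
        return self.policy_allows(agent_id, action)

    def policy_allows(self, agent_id: str, action: str) -> bool:
        # Placeholder for a real policy engine (role/attribute checks,
        # delegation depth, agent-to-agent context, and so on).
        return action.startswith("read:")
```

Contrast this with the static model the report criticises, where a token minted at the start of a session remains valid for every subsequent action regardless of how context has changed.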

IAM strategies need to be overhauled

Despite these risks, most identity and access management (IAM) approaches remain centered on human users and static access decisions, leaving organizations unprepared to govern autonomous systems.

“These trends reflect a broader shift in identity requirements,” said Martin Kuppinger, founder of KuppingerCole Analysts, which carried out the research.

“As autonomous agents become more prevalent, organizations will need to extend identity and authorization models to maintain control, accountability, and trust across increasingly dynamic environments.”

According to IBM’s 2025 Cost of a Data Breach report, 13% of organizations have experienced AI-related security breaches, and 97% lack adequate access controls for AI systems.

The Ping Identity report echoes recent research from SANS, which found that non‑human and AI identities are multiplying faster than organizations can secure them.

More than three-quarters of organizations said they were seeing growth in the use of non‑human identities (NHIs) such as service accounts, API keys, automation bots, and workload identities – but that governance was failing to keep pace.

Of the three-quarters of organizations already using AI agents that require credentials, 5% of security leaders told the SANS researchers they didn't even know whether agentic AI was running in their environment.


Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.