Over two-thirds of workers can’t identify actions taken by AI agents – and lax access controls are to blame

The rapid adoption of AI agents is outpacing access controls, credential hygiene, and identity attribution


More than two-thirds of organizations can’t clearly distinguish between human and AI agent activity, as identity and access management (IAM) models fail to keep up with the pace of change.

A new survey from the Cloud Security Alliance (CSA), commissioned by Aembit, has found that while 73% of organizations expect AI agents to become vital within the next year, 68% can’t accurately identify AI agent activity compared to human activity.

“AI agents are already embedded within enterprise environments, and as these systems take on more autonomous roles, organizations must address new challenges around identity and access,” said Hillary Baron, AVP of research at the Cloud Security Alliance.

“The survey data indicates that existing IAM approaches were not designed for autonomous agents, and are showing strain as deployments scale.”

AI agents are in wide use across enterprise workflows, deployed as task automation agents (67%), research agents (52%), developer-assist agents (50%), and security or monitoring agents (50%). Most deployments now go beyond isolated test settings, with 85% of organizations using them in production environments – making it harder to maintain consistent identity governance and permission boundaries.

In fact, AI agents often exist in an identity gray area: 52% of organizations use workload identities, 43% rely on shared service accounts, and 31% allow agents to operate under human user identities. Without a defined taxonomy, the researchers found, this can mean that AI agents inherit permissions beyond their intended role.
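The difference between these identity models can be made concrete in code. The sketch below is a minimal, hypothetical illustration of the first approach – a distinct workload identity per agent with explicitly scoped, deny-by-default permissions; the `AgentIdentity` type and scope names are invented for this example and do not refer to any specific IAM product:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each AI agent gets its own workload identity
# with explicit scopes, rather than a shared service account or a
# borrowed human identity. Activity is then attributable per agent.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str                 # unique per agent, for attribution
    owner_team: str               # who is accountable for this agent
    scopes: frozenset = field(default_factory=frozenset)

def is_allowed(identity: AgentIdentity, action: str) -> bool:
    """Deny by default: the agent may only perform actions in its scopes."""
    return action in identity.scopes

research_agent = AgentIdentity(
    agent_id="agent-research-01",
    owner_team="security",
    scopes=frozenset({"read:tickets", "read:docs"}),
)

print(is_allowed(research_agent, "read:docs"))      # True
print(is_allowed(research_agent, "write:tickets"))  # False
```

A shared service account, by contrast, would reuse one `AgentIdentity` across many agents, which is exactly what makes the activity indistinguishable.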

Nearly three-quarters (74%) of respondents said that agents often receive more access than necessary, and 79% believe they create new access pathways that are difficult to monitor. More than half (52%) said agents, at least occasionally, inherit access originally intended for humans or other systems.

While 57% reported moderate or high confidence in identity scoping, 33% said they did not know how often AI agent credentials are rotated, 32% weren’t certain how much time is required to implement and maintain authentication or credential handling for a typical AI agent, and only 22% said access frameworks were applied very consistently to AI agents.

Responsibility for AI agent identity is spread across departments: security leads at 28%, followed by development/engineering (21%) and IT (19%). Only 9% identify IAM teams as the primary owner.

Where identity-level IAM controls aren't yet consistently embedded for AI agents, many organizations are relying on governance mechanisms to manage risk. Disabling identities or revoking tokens (49%) are the most common techniques, while 42% reported terminating the compute environment where an agent runs. Only 33% reported removing or modifying access policies in real time.
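The containment techniques the survey describes differ in bluntness, and a toy sketch makes the contrast clear. Everything below – the token registry, the policy store, and the function names – is hypothetical illustration, not a real API: revoking a token is a hard stop, while modifying the access policy narrows what a still-running agent can do.

```python
import time

# Hypothetical sketch of two containment options: revoking an agent's
# token (hard stop) vs. tightening its policy in real time (softer
# control). In-memory dicts stand in for an identity provider.

tokens = {
    "tok-123": {"agent": "agent-research-01", "expires": time.time() + 3600},
}
policies = {"agent-research-01": {"read:docs", "read:tickets"}}

def revoke_token(token_id: str) -> None:
    """Hard stop: the agent's credential is no longer accepted anywhere."""
    tokens.pop(token_id, None)

def restrict_policy(agent: str, scope: str) -> None:
    """Softer control: drop one permission while the agent keeps running."""
    policies[agent].discard(scope)

def token_valid(token_id: str) -> bool:
    entry = tokens.get(token_id)
    return entry is not None and entry["expires"] > time.time()

restrict_policy("agent-research-01", "read:tickets")
print(policies["agent-research-01"])  # {'read:docs'}

revoke_token("tok-123")
print(token_valid("tok-123"))  # False
```

Terminating the compute environment, the technique 42% reported, sits outside the identity layer entirely – which is part of why only a third of organizations can modify access policies in real time.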

"AI agents are inheriting human permissions, operating under shared accounts, and expanding the attack surface in ways that existing IAM tools weren’t designed to handle,” said David Goldschlag, co-founder and CEO at Aembit.

“The survey makes the stakes clear: Agentic autonomy without identity-level access controls is a risk organizations can’t afford to ignore.”

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.