How the explosion in machine identities is changing cyber defense

The rapid adoption of machine identities such as AI agents is creating new vulnerabilities and fresh challenges for IT teams


Machine identities are everywhere in modern IT: AI agents trigger workflows, APIs exchange data and cloud services communicate across distributed systems.

According to research published in February 2026 by Obsidian Security, machine identities, including API keys, service accounts and certificates, now outnumber human identities by more than 100 to one in many enterprise environments. In some sectors, this is approaching 500 to one, as cloud-native architectures and AI workloads expand.

Obsidian’s research revealed that 79% of organizations expect the number of machine identities to increase in the coming year, with 16% anticipating growth of between 50% and 150%. Big tech agrees: Microsoft predicts that 1.3 billion AI agents will be deployed by 2028.

This trend renders traditional perimeter-based defenses, designed for a world where users logged in from known locations, inadequate.

“Organizations have to have a mature enough posture to be able to account for non-human identities,” says Nicole Carignan, senior vice president, Security and AI Strategy and field CISO at Darktrace. She explains that when attackers target AI systems, they often do so by exploiting the identities that control non-human actors like bots and agents, so organizations need strong systems in place to track and secure these identities and their permissions.

Machine identities are becoming the norm

These identities are increasingly responsible for routine operations. Microservices authenticate with each other, AI models access data pipelines and automation tools deploy infrastructure through APIs. Each interaction represents a potential security risk if identity and access controls are poorly managed.

“As organizations increase their use of AI agents, sometimes knowingly and sometimes unknowingly, that naturally increases the attack surface,” says Wil Rockall, cybersecurity partner at Deloitte. “Agents represent a new class of identity,” he adds, noting that these digital actors blur the boundaries between traditional identity categories.

“An AI agent sits at the intersection of human identity, device identity and application identity,” he explains. “It behaves a bit like a human because it’s not deterministic, but it’s automated and can move very quickly.”

This combination of automation and unpredictability creates scope for new vulnerabilities. If agents are poorly governed or given excessive permissions, they can perform actions at a scale far greater than humans.

Scott McKinnon, UK & Ireland CISO at Palo Alto Networks, says organizations are entering a new phase of AI adoption.

“We’re seeing a fundamental shift from AI giving answers to taking actions,” he says. “Those connectors serve as its eyes, ears, hands and legs, giving the model power to interact with external systems.” He warns that while traditional security focuses on stopping malicious files, the rise of agentic AI tools that can read, write and move data is creating a new blind spot for organizations.

Machine-driven breaches

The growth in machine identities is creating significant operational challenges. Obsidian reports that 50% of organizations experienced security breaches linked to compromised machine identities in the past year, with API keys and TLS certificates among the most common attack vectors.

The report also highlights that only 12% of organizations have fully automated lifecycle management for machine identities, leaving the majority dependent on manual tracking or ad-hoc processes.

This creates opportunities for attackers. If credentials are stolen or misused, malicious activity can look like legitimate machine traffic.

“Security was built around human behaviour – logins, sessions and intent,” says Amir Khayat, CEO and co-founder of Vorlon, a cybersecurity firm. “That framing collapses when software becomes the primary actor.”

In machine-driven environments, compromise is harder to detect. “AI agents authenticate once and operate continuously, moving across systems through APIs and integrations that most security teams never inspect,” Khayat says. “The biggest shift is that compromise no longer announces itself. It blends into normal system activity.”

Recent incidents illustrate the risk. In the past few years, several major technology companies have been hit by attacks exploiting stolen machine credentials, including automation tokens used in development pipelines. In cases such as the Salesloft breach, attackers bypassed traditional controls by impersonating legitimate services rather than targeting individual users.

Best practices for machine identities

To counter this threat, many organizations are adopting zero trust security architectures, which assume that no user or device should be trusted by default. That means constantly verifying identity and behaviour rather than relying on a single authentication event. It is also about tightly controlling those identities with permissions limited to specific tasks and timeframes.

Short-lived credentials, certificate-based authentication and just-in-time access are increasingly regarded as best practice.
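To make the idea concrete, here is a minimal sketch of a short-lived, scope-limited credential for a machine identity, using a simple HMAC-signed token. The scheme, key handling and identity names are illustrative assumptions, not a description of any specific vendor's implementation; production systems would use a standard such as JWT or mTLS certificates issued by a secrets manager.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-secret"  # illustrative only; real keys come from a secrets manager

def issue_token(identity: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a credential tied to one identity, one scope and a short expiry."""
    claims = {"sub": identity, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    """Reject the credential if the signature, expiry or scope check fails."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

# A build agent gets a token valid only for deploying to staging
token = issue_token("build-agent-07", scope="deploy:staging")
print(verify_token(token, "deploy:staging"))     # True
print(verify_token(token, "deploy:production"))  # False: scope not granted
```

Because the token expires in minutes and names a single permitted action, a stolen copy is far less useful to an attacker than a long-lived API key with broad access.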

Behavioral analytics are also increasingly playing a part in keeping machine identities secure. According to Obsidian Security, point-in-time inventories of machine identities are insufficient, because they cannot detect when credentials are used in unexpected ways, such as accessing unusual systems or operating outside normal patterns.
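The behavioural approach can be sketched in a few lines: learn which systems each machine identity normally touches, then flag access that falls outside that baseline. This is a toy illustration of the principle, with hypothetical service and system names; real products build far richer statistical profiles.

```python
from collections import defaultdict

class MachineIdentityBaseline:
    """Toy behavioural baseline for machine identities: record which systems
    each identity normally accesses, then flag deviations. Illustrative only."""

    def __init__(self):
        self.seen = defaultdict(set)  # identity -> set of systems it has touched

    def learn(self, identity: str, system: str) -> None:
        """Record an observation during the baselining period."""
        self.seen[identity].add(system)

    def is_anomalous(self, identity: str, system: str) -> bool:
        """An unknown identity, or a known one touching a new system, is flagged."""
        return system not in self.seen[identity]

baseline = MachineIdentityBaseline()
for system in ["billing-db", "invoice-api"]:
    baseline.learn("svc-invoicing", system)

print(baseline.is_anomalous("svc-invoicing", "billing-db"))  # False: normal access
print(baseline.is_anomalous("svc-invoicing", "hr-records"))  # True: outside baseline
```

A static inventory would show "svc-invoicing" as a valid, correctly credentialed identity in both cases; only comparison against its usual behaviour reveals the second access as suspicious.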

However, fully autonomous cyber defense remains controversial. “Autonomous defence is inevitable,” says Khayat. “Machines move faster than humans can respond.” But he warns that automation without sufficient context can unleash new risks. “When automation runs on fragmented data it either disrupts legitimate operations or misses real attacks,” he says. “The prerequisite is a clear behavioural baseline across agents, data flows and integrations.”

Most organizations are currently adopting a hybrid model where AI systems handle detection and initial response, while human analysts retain decision-making authority for complex incidents.

Bringing agents into an organization’s network can drive big productivity gains and form the basis of new services. However, they pose real dangers for organizations that fail to take the security implications seriously. Machine identities can accumulate permissions over time, operate without clear ownership and become invisible within complex infrastructure.

The future of cybersecurity may ultimately depend on the ability to treat machines not just as tools, but as digital actors whose identities, behaviour and privileges must be governed as carefully as those of human users.

Justin Pugsley
Freelance writer

Justin Pugsley is a freelance writer with decades of experience in the business and tech spaces. He has previously contributed to the Financial Times and Thomson Reuters among other publications, and has extensive experience researching and writing for consultancies, asset managers and professional services firms.