Legacy IAM was built for humans — and AI agents now outnumber them 82 to 1

Active Directory, LDAP, and early PAM were created for humans; machines and automated workloads were the exception. Today, machine identities outnumber people 82 to 1, and the human-first identity model is breaking at machine speed.

AI agents are the fastest-growing and least-governed class of these machine identities – and they don't just authenticate, they act. ServiceNow alone spent approximately $11.6 billion on security acquisitions in 2025 – a sign that identity, not the model, is becoming the control plane for enterprise AI risk.

CyberArk's 2025 research confirms what security teams and AI builders have long suspected: machine identities now outnumber human identities by a wide margin. Microsoft Copilot Studio users created more than 1 million AI agents in a single quarter, up 130% from the previous period. Gartner estimates that by 2028, 25% of enterprise breaches will be caused by AI agent misuse.

Why do legacy architectures fail at machine scale?

Builders do not create shadow agents or over-permissioned service accounts out of negligence. They do it because cloud IAM is slow, security reviews don't map cleanly onto agent workflows, and production pressure prioritizes speed over precision. Static credentials become the path of least resistance – until they become a breach vector.

Gartner analysts explained the main problem in a report published in May: "Traditional IAM approaches, designed for human users, fail to meet the unique needs of machines such as devices and workloads."

Their research identifies why retrofitting fails: "Retrofitting human IAM approaches to fit machine IAM use cases leads to fragmented and ineffective management of machine identities, violates regulatory mandates and exposes the organization to unnecessary risks."

The governance gap is stark. CyberArk's 2025 Identity Security Landscape survey of 2,600 security decision makers reveals an alarming disconnect: although machine identities now outnumber humans 82 to 1, 88% of organizations still define "privileged user" as referring to human identities only. Meanwhile, 42% of machine identities have access to sensitive data – a higher rate than their human counterparts.

That 42% figure represents automated processes with access to millions of API keys, service accounts, and crown jewels – all governed by policies designed for human employees.

The visibility gap compounds the problem. A Gartner survey of 335 IAM leaders found that IAM teams are responsible for only 44% of an organization's machine identities, meaning most operate outside of security's line of sight. Without a cohesive machine IAM strategy, Gartner warns, "organizations run the risk of compromising the security and integrity of their IT infrastructure."

The Gartner Leaders Guide explains why legacy service accounts pose a systemic risk: They persist even after the workloads they support disappear, leaving orphan credentials with no clear owner or lifecycle.

In many of the enterprise breaches investigated in 2024, the attackers did not compromise the model or endpoint. They reused long-lived API keys associated with abandoned automation workflows – keys that no one realized were still active because the agent that created them no longer existed.

CrowdStrike CTO Elia Zaitsev explained in a recent VentureBeat interview why attackers have shifted away from the endpoint and toward identity: "Cloud, identity and remote management tools and valid credentials are where the adversary is moving, because it's much harder to operate unrestricted on the endpoint. Why try to bypass and contend with a sophisticated platform like CrowdStrike on the endpoint when you can just log in as an admin user?"

Why does agentic AI break identity assumptions?

The emergence of AI agents requiring their own credentials has introduced a range of machine identities that legacy systems never anticipated or were designed for. Gartner researchers specifically call out agentic AI as an important use case: "AI agents require credentials to interact with other systems. In some instances, they use delegated human credentials, while in others, they work with their own credentials. These credentials must be implemented carefully to adhere to the principle of least privilege."

The researchers also cite the Model Context Protocol (MCP) as an example of this challenge – the same protocol security researchers have flagged for its lack of built-in authentication. MCP doesn't just lack native authentication; it collapses traditional identity boundaries by allowing agents to traverse data and tools without a stable, auditable identity surface.

Governance problems multiply when organizations deploy multiple GenAI tools at once. Security teams need visibility into which AI integrations have agentic capabilities – the ability to execute tasks, not just generate text – and whether those capabilities have been provisioned with appropriate controls.

Platforms that integrate identity, endpoint, and cloud telemetry are emerging as the only viable way to detect agent abuse in real time. Fragmented point tools cannot keep pace with machine-speed lateral movement.

Machine-to-machine interactions are already operating at a scale and speed that human governance models were never designed to handle.

Getting ahead of dynamic service identity changes

Gartner's research points to dynamic service identity as the way forward: short-lived, tightly scoped, policy-driven credentials that significantly reduce the attack surface. Because of this, Gartner advises security leaders: "Move to a dynamic service identity model instead of defaulting to the legacy service account model. Dynamic service identity does not require creating separate accounts, thus reducing management overhead and attack surface."
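As a rough illustration (not drawn from Gartner's report), the dynamic service identity model boils down to minting a short-lived, tightly scoped credential per task instead of maintaining a standing account. A minimal Python sketch, with hypothetical scope names and TTL values:

```python
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class DynamicCredential:
    """A short-lived, tightly scoped credential for one workload task."""
    subject: str                  # workload identity, not a shared account
    scopes: frozenset             # least-privilege permissions for this task
    expires_at: float             # hard expiry; nothing to rotate or orphan
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, scope: str) -> bool:
        """Valid only if unexpired and the scope was granted at issue time."""
        return time.time() < self.expires_at and scope in self.scopes


def issue(subject: str, scopes: set, ttl_seconds: int = 300) -> DynamicCredential:
    """Mint a credential scoped to one task; it self-destructs at expiry."""
    return DynamicCredential(subject, frozenset(scopes), time.time() + ttl_seconds)


# An agent gets exactly the access its current task needs, for five minutes.
cred = issue("invoice-agent", {"billing:read"}, ttl_seconds=300)
```

In production this role is played by cloud-native mechanisms such as AWS STS-issued role credentials or bound Kubernetes service account tokens; the point of the sketch is that expiry and scope are properties of the credential itself, not of a separate account that someone must remember to deprovision.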

The end state is just-in-time access and zero standing privileges, enforced across the entire identity attack chain.

Practical steps security and AI builders can take today

Organizations taking ownership of agentic identity are treating it as a collaboration problem between security teams and AI builders. Based on Gartner’s Leaders Guide, OpenID Foundation guidance, and vendor best practices, these are the priorities emerging for enterprises deploying AI agents.

  • First, perform a comprehensive discovery and audit of every account and credential. Establish a baseline of how many accounts and credentials are in use across the IT estate. CISOs and security leaders told VentureBeat that these audits routinely uncover six to ten times more identities than the security team knew about beforehand. One hotel chain found it had been tracking only one-tenth of its machine identities before an audit.

  • Build and tightly manage an agent inventory before production. Staying on top of this ensures that AI builders know what they're deploying and security teams know what they need to track. Where those functions fail to intersect, shadow agents slip through governance. A shared registry should record ownership, permissions, data access, and API connections for each agent identity before it reaches production.

  • Adopt dynamic service identities. Transition from static service accounts to cloud-native options like AWS IAM roles, Azure managed identities, or Kubernetes service accounts. These identities are short-lived, tightly scoped, and policy-driven. The goal is to shrink the attack surface while still giving AI builders the credentials they need to ship.

  • Replace static secrets with just-in-time credentials. Integrate just-in-time credential provisioning, automated secret rotation, and least-privilege defaults into CI/CD pipelines and agent frameworks. These are fundamental elements of zero trust that belong at the core of development pipelines. Experienced security leaders tell VentureBeat not to rely on perimeter security for any AI devops workflow or CI/CD process; identity-first zero trust is what actually secures AI builders' workflows.

  • Establish auditable delegation chains. When agents spawn sub-agents or call external APIs, authorization chains become harder to trace. Keep a human accountable for every service, including AI agents, and track behavioral baselines and drift in real time to maintain that accountability.

  • Deploy continuous monitoring. Monitor every use of machine credentials, with observability as a first-class goal and zero-trust principles as the guide. Auditing at this level helps detect anomalies such as unauthorized privilege escalation and lateral movement.

  • Evaluate posture management. Assess potential exploitation paths, blast radius, and any shadow administrator access. This includes removing unnecessary or stale access and fixing misconfigurations attackers can exploit.

  • Start implementing agent lifecycle management. Every agent requires a human owner, whether it runs standalone or inside an agentic workflow. When AI builders move on to new projects, their agents should trigger the same offboarding workflow as departing employees. Orphaned agents with standing privileges become breach vectors.

  • Prefer integrated platforms over point solutions. Fragmented tools create fragmented visibility. Platforms that unify identity, endpoint, and cloud security give AI builders self-service visibility while giving security teams cross-domain detection.
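The inventory, ownership, and lifecycle steps above can be sketched as a shared agent registry. This is an illustrative Python data model, not any vendor's schema; the field names and the offboarding logic are assumptions about what such a registry would minimally track:

```python
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """One registry entry, created before the agent reaches production."""
    agent_id: str
    owner: str                    # the accountable human, as for any employee
    permissions: set = field(default_factory=set)
    active: bool = True


class AgentRegistry:
    """Shared inventory that ties every agent identity to a human owner."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def offboard_owner(self, owner: str) -> list[str]:
        """When a builder departs, their agents go through offboarding too."""
        retired = []
        for rec in self._agents.values():
            if rec.owner == owner and rec.active:
                rec.active = False
                rec.permissions.clear()   # no standing privileges left behind
                retired.append(rec.agent_id)
        return retired

    def orphans(self, current_staff: set) -> list[str]:
        """Active agents whose owner is gone: the breach vectors to hunt first."""
        return [rec.agent_id for rec in self._agents.values()
                if rec.active and rec.owner not in current_staff]
```

The design choice worth noting is that offboarding is keyed on the owner, not the agent: tying agents to the employee lifecycle is what prevents the orphaned-credential scenario described earlier, where keys stayed live because the workload that created them had disappeared.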

The gap is expected to increase in 2026

There is a growing gap between what AI builders deploy and what security teams can control. Every major technology shift has produced its own generation of security breaches and forced its own industry-wide reckoning. Just as hybrid-cloud misconfigurations, shadow AI, and API sprawl continue to challenge security leaders and the AI builders they support, the gap between what organizations can control and what determined adversaries can exploit in machine identity will widen through 2026.

The 82-to-1 ratio is not stable; it is accelerating. Organizations that continue to rely on human-first IAM architectures are not simply accepting technical debt; they are building security models that grow weaker with every new agent deployed.

Agentic AI doesn't break security because it's intelligent – it breaks security because it mints identities faster than governance can keep up. For many organizations, turning one of their most glaring security weaknesses into a strength begins with recognizing that perimeter-based, legacy identity security is no match for the intensity, speed, and scale of the machine-on-machine attacks that are the new normal and will only grow in 2026.


