Enterprise identity was built for humans — not AI agents

Gemini Generated Image w9k5pmw9k5pmw9k5
Presented by 1Password


Adding agentic capabilities to enterprise environments fundamentally reshapes the threat model by introducing a new class of actor into detection systems. The problem: AI agents are taking actions within sensitive enterprise systems, logging in, retrieving data, calling LLM tools, and often executing workflows without the visibility or control that traditional identity and access systems were designed to enforce.

AI tools and autonomous agents are spreading across enterprises faster than security teams can handle or control them. Also, most identity systems still assume static users, long-running service accounts, and coarse role assignments. They were not designed to represent agents working in delegated human authority, short-term execution contexts, or strict decision cycles.

As a result, IT leaders need to step back and rethink the trust layer. This change is not theoretical. NIST’s Zero Trust Architecture (SP 800-207) clearly states that “All subjects – including applications and non-human entities – are considered untrusted until authenticated and authorized.”

In an agentic world, this means AI systems must have their own clear, verifiable identity, rather than operating through inherited or shared credentials.

"Enterprise IAM architectures are built to assume that all system identities are human, meaning they rely on consistent behavior, clear intent, and direct human accountability to enforce trust." says Nancy Wang, CTO of 1Password and venture partner at Felicis. “Agent systems break those assumptions. The AI agent is not a user you can train or periodically review. It is software that can be copied, forked, scaled horizontally, and run in tight execution loops across multiple systems. If we continue to treat agents like humans or static service accounts, we lose the ability to clearly represent who they are working for, their What right does one have, and how long should that right last?

How AI agents turn development environments into security risk zones

One of the first places to break down these identity-related assumptions is the modern development environment. Integrated developer environments (IDEs) have evolved from a simple editor to an orchestrator capable of reading, writing, executing, fetching, and configuring systems. With an AI agent at the center of the process, prompt injection change is not just an abstract possibility; They become a tangible risk.

Since traditional IDEs were not designed with AI agents as a core component, adding aftermarket AI capabilities creates new types of risks that traditional security models were not built with in mind.

For example, AI agents inadvertently violate trust boundaries. Seemingly harmless README may contain hidden instructions that trick the assistant into exposing credentials during standard analysis. Project material from untrusted sources may alter agent behavior in unexpected ways, even if the material bears no obvious resemblance to a signal.

Input sources now extend beyond files that are intentionally played. Documentation, configuration files, file names, and tool metadata are all consumed by agents as part of their decision-making processes, influencing the way they interpret a project.

Trust is lost when agents act without intent or accountability

When you add highly autonomous, deterministic agents operating with elevated privileges with the ability to read, write, execute, or reconfigure the system, the threat increases. These agents have no context, no ability to determine whether a request for authentication is legitimate, who submitted that request, or what boundaries should be placed around that action.

"In the case of agents, you cannot assume that they have the ability to make accurate decisions, and they certainly lack a moral code," Wang says. "Every one of their actions needs to be precisely controlled, and access to sensitive systems and what they can do within them needs to be more clearly defined. The difficult thing is that they are taking continuous action, hence there is a need to continuously curb them."

Where traditional IAM fails with agents

Traditional identity and access management systems operate on several core assumptions that agentic AI violates:

Static privilege models fail with autonomous agent workflows: Traditional IAM allows permissions based on roles that remain relatively stable over time. But agents perform a series of actions that require different privilege levels at different moments. At least privilege can no longer be a set-it-and-forget-it configuration. Now it should be implemented dynamically with automatic expiration and refresh mechanisms with each action.

Human accountability for software agents breaks down: Legacy systems assume that each identity is tied to a specific person who can be held responsible for actions taken, but agents blur this line entirely. It is no longer clear when an agent acts, under whose authority, an agent is acting, which is already a tremendous vulnerability. But when that agent is copied, modified, or left in use long after its original purpose has been fulfilled, the risks increase manifold.

Behavior-based detection fails with continuous agent activity: While human users follow recognizable patterns, such as logging in during business hours, accessing familiar systems, and taking actions that align with their work tasks, agents work continuously across multiple systems simultaneously. This not only multiplies the potential for damage to the system, but also causes legitimate workflows to be flagged as suspicious for traditional anomaly detection systems.

Agent identity is often invisible to traditional IAM systems: Traditionally, IT teams could more or less configure and manage the identities active in their environment. But agents can dynamically create new identities, work through existing service accounts, or leverage credentials in ways that make them invisible to traditional IAM tools.

"This is the entire context part, the intention behind an agent, and traditional IAM systems have no ability to manage this," Wang says. "This convergence of different systems makes the challenge broader than mere identification, requiring context and observation to understand not only who did what, but why and how."

Rethinking security architecture for agentic systems.

Securing agentic AI requires rethinking enterprise security architecture from the ground up. Several major changes are necessary:

Identity as a control plane for AI agents: Rather than treating identity as a security component among many, organizations should recognize it as the fundamental control plane for AI agents. Major security vendors are already moving in this direction, with identity becoming integrated into every security solution and stack.

Context-aware access as a requirement for agentic AI: Policies should be much more detailed and specific, defining not only what an agent can access, but also under what circumstances. This means considering who invoked the agent, what device it is running on, what time constraints apply, and what specific actions are allowed within each system.

Zero-knowledge credential handling for autonomous agents: One promising approach is to keep credentials out of the agents’ view altogether. Using technologies such as agentic autofill, credentials can be injected into the authentication flow without agents seeing them in plain text, similar to how password managers work for humans, but extended to software agents.

Auditability Requirements for AI Agents: Traditional audit logs that track API calls and authentication events are inadequate. Agent auditability requires knowing who the agent is, under whose authority he or she acts, what scope of authority was granted, and the full range of actions taken to complete the workflow. This mirrors the detailed activity logging used for human employees, but must be optimized for software units performing hundreds of actions per minute.

Enforcing boundaries of trust between humans, agents, and systems: Organizations need clear, enforceable limits that define what an agent can do when called by a specific person on a particular device. This requires separating intent from execution: understanding what the user wants to achieve from the agent and what the agent actually does.

The future of enterprise security in an agentic world

As agentic AI becomes embedded in everyday enterprise workflows, the security challenge is not whether organizations will adopt agents; The point is whether the systems that control access can evolve to keep pace.

Blocking AI at the perimeter is unlikely to happen on a large scale, but neither will expanding legacy identity models. What is needed is a shift towards identity systems that can account for context, delegation and accountability between both humans, machines and AI agents in real time.

“Stepping up for agents in production won’t come from smart models alone,” Wang says. “This will come from predictable authorization and enforceable trust boundaries. Enterprises need identity systems that can clearly represent who an agent is acting for, what he or she is allowed to do, and when that authority expires. Without this, autonomy becomes an unmanaged risk. With that, agents become ungovernable.”


Sponsored articles are content produced by a company that is either paying for the post or that has a business relationship with VentureBeat, and they are always clearly marked. Contact for more information sales@venturebeat.com.



<a href

Leave a Comment