Human-centric IAM is failing: Agentic AI requires a new identity control plane

Identity is the new control plane

The race to deploy agentic AI is on. Across the enterprise, systems that can plan, take action, and collaborate across business applications promise unprecedented efficiency. But in the rush to automate, a critical component is being overlooked: scalable security. We are creating a workforce of digital employees without giving them a secure way to log in, access data, and do their work, and that gap invites catastrophic risk.

The fundamental problem is that traditional identity and access management (IAM), designed for humans, breaks down at agentic scale. Controls like static roles, long-lived passwords, and one-time approvals are useless when non-human identities can outnumber human identities ten to one. To harness the power of agentic AI, identity must evolve from a simple login gatekeeper into a dynamic control plane for your entire AI operation.

“The fastest path to responsible AI is to avoid real data. Use synthetic data to prove value, then earn the right to touch the real thing.” – Shawn Kanungo, keynote speaker, innovation strategist, and bestselling author of The Bold Ones

Why is your human-centric IAM a sitting duck?

Agentic AI doesn’t just use software; it behaves like a user. It authenticates to systems, assumes roles, and calls APIs. If you treat these agents as mere features of an application, you invite invisible, unaudited privileged actions. A single over-permissioned agent can exfiltrate data at machine speed or trigger erroneous business processes, with no one the wiser until it’s too late.

The static nature of legacy IAM is the main vulnerability. You cannot pre-define a fixed role for an agent whose tasks and required data access may change daily. The only way to keep access decisions accurate is to shift policy enforcement from one-time grants to continuous, runtime evaluation.
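To make that shift concrete, here is a minimal Python sketch of runtime evaluation: instead of a role checked once at login, policy is consulted on every call. The task-to-dataset policy table and function names are illustrative assumptions, not any particular product’s API.

```python
from datetime import datetime, timezone

# Hypothetical policy table mapping an agent's current task to the
# datasets it may touch for that task; names are purely illustrative.
TASK_POLICY = {
    "invoice-reconciliation": {"erp.invoices", "erp.payments"},
    "customer-triage": {"crm.tickets"},
}

def authorize_call(agent_id: str, task: str, dataset: str) -> bool:
    """Evaluate policy at call time, not at login time.

    A legacy role check runs once and caches "allowed" indefinitely;
    here every data access re-asks the question with current context.
    """
    decision = dataset in TASK_POLICY.get(task, set())
    print(f"{datetime.now(timezone.utc).isoformat()} "
          f"agent={agent_id} task={task} dataset={dataset} allow={decision}")
    return decision

# The same agent gets different answers as its task changes:
authorize_call("agent-42", "invoice-reconciliation", "erp.invoices")  # True
authorize_call("agent-42", "customer-triage", "erp.invoices")         # False
```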

Prove value before production data

Kanungo’s guidance provides a practical on-ramp. Start with synthetic or masked datasets to validate agent workflows, scopes, and guardrails. Once your policies, logs, and break-glass paths hold up in this sandbox, you can give agents access to real data with confidence and clear audit evidence.
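As a toy illustration of the masked-data side of that sandbox, the sketch below deterministically pseudonymizes PII fields so workflows see realistic-shaped records without real values. The field names are hypothetical, and real synthetic-data pipelines are considerably more sophisticated.

```python
import hashlib

def mask_record(record: dict, pii_fields=("name", "email", "ssn")) -> dict:
    """Deterministically pseudonymize PII fields so agent workflows see
    realistic-shaped records without ever touching real values."""
    masked = dict(record)
    for field in pii_fields:
        if field in masked:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:10]
            masked[field] = f"{field}-{digest}"
    return masked

print(mask_record({"name": "Ada Lovelace", "email": "ada@example.com",
                   "balance": 120.50}))
# Non-PII fields like "balance" pass through untouched.
```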

Building an identity-centric operating model for AI

Securing this new workforce requires a change in mindset. Every AI agent should be treated as a first-class citizen within your identity ecosystem.

First, each agent needs a unique, verifiable identity. This is not just a technical ID; it should be tied to a human owner, a specific business use case, and a software bill of materials (SBOM). The era of shared service accounts is over; they are the equivalent of handing the master keys to a faceless crowd.
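A minimal sketch of what such a first-class identity record might carry. The field names are assumptions; map them to whatever your identity provider or workload-identity scheme (SPIFFE-style IDs, for example) actually supports.

```python
from dataclasses import dataclass
from uuid import uuid4

@dataclass(frozen=True)
class AgentIdentity:
    """One verifiable identity per agent workload, never shared."""
    agent_id: str           # unique, machine-verifiable identifier
    human_owner: str        # an accountable person, not a team alias
    business_use_case: str  # why this agent exists
    sbom_ref: str           # pointer to the software bill of materials

def issue_identity(owner: str, use_case: str, sbom_ref: str) -> AgentIdentity:
    # Mint a fresh identity; in production this would come from your IdP.
    return AgentIdentity(f"agent-{uuid4()}", owner, use_case, sbom_ref)

ident = issue_identity("m.owner@example.com",
                       "invoice-reconciliation",
                       "sbom://registry/agents/invoice-recon@1.4.2")
print(ident.agent_id, "owned by", ident.human_owner)
```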

Second, replace set-and-forget roles with session-based, risk-aware permissions. Access should be granted just in time, limited to the immediate task and the minimum required datasets, then automatically revoked when the task completes. Think of it as giving an agent the key to one room for one meeting, not the master key to the entire building.
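A sketch of that meeting-room key in Python: a grant scoped to specific datasets with a short TTL, checked at every use so expiry acts as automatic revocation. The TTL and dataset names are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class SessionGrant:
    """A scoped, expiring grant: the key to one room for one meeting."""
    agent_id: str
    datasets: frozenset      # the minimum datasets for the immediate task
    expires_at: datetime

def grant_for_task(agent_id: str, datasets: set, ttl_minutes: int = 15) -> SessionGrant:
    # Scope and lifetime are fixed at grant time; nothing is permanent.
    return SessionGrant(agent_id, frozenset(datasets),
                        datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes))

def is_allowed(grant: SessionGrant, dataset: str) -> bool:
    # Expiry is checked on every use, so revocation happens automatically.
    return (datetime.now(timezone.utc) < grant.expires_at
            and dataset in grant.datasets)

g = grant_for_task("agent-42", {"erp.invoices"})
print(is_allowed(g, "erp.invoices"))  # True, until the grant expires
print(is_allowed(g, "crm.tickets"))   # False: outside the scoped datasets
```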

Three pillars of a scalable agent security architecture

Context-aware authorization at the core. Authorization can no longer be a one-time yes or no at the door; it should be an ongoing conversation. The system must evaluate context in real time. Is the agent’s attestation current? Is it requesting only the data its purpose requires? Is the access occurring during normal operating windows? This dynamic evaluation enables both security and speed.
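Those three context questions translate naturally into an evaluation function. The sketch below assumes you can obtain an attestation result, the agent’s declared purpose, and an operating window from elsewhere in your platform; all three signal sources are hypothetical here.

```python
from datetime import datetime, timezone

def context_aware_authorize(attestation_valid: bool,
                            declared_purpose: str,
                            requested_purpose: str,
                            window_hours: tuple = (6, 22)) -> bool:
    """Re-ask the three context questions on every request."""
    # Is the agent's attestation current?
    if not attestation_valid:
        return False
    # Is it requesting data only for its declared purpose?
    if requested_purpose != declared_purpose:
        return False
    # Is the access occurring during normal operating windows?
    start, end = window_hours
    return start <= datetime.now(timezone.utc).hour < end

print(context_aware_authorize(True, "customer-triage", "customer-triage"))
```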

Purpose-bound data access at the edge. The last line of defense is the data layer itself. By embedding policy enforcement directly into the data query engine, you can enforce row-level and column-level security based on the agent’s declared intent. A customer service agent should be automatically blocked from running queries designed for financial analysis. Purpose binding ensures that data is not merely accessed by authorized identities but used for its intended purpose.
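Here is a toy version of purpose binding at the query layer: the declared purpose selects both the visible columns and a row filter, and an unknown purpose is denied by default. The policy table and purposes are invented for illustration; real engines enforce this inside the query planner.

```python
# Column- and row-level policy keyed by declared purpose; all names invented.
PURPOSE_POLICY = {
    "customer-service": {
        "columns": {"customer_id", "ticket_status", "last_contact"},
        "row_filter": lambda row: row.get("region") == "assigned",
    },
    "financial-analysis": {
        "columns": {"customer_id", "invoice_total", "payment_terms"},
        "row_filter": lambda row: True,
    },
}

def purpose_bound_query(rows: list, purpose: str) -> list:
    """Return only the columns and rows the declared purpose permits;
    an unknown purpose is denied by default."""
    policy = PURPOSE_POLICY.get(purpose)
    if policy is None:
        return []
    return [{k: v for k, v in row.items() if k in policy["columns"]}
            for row in rows if policy["row_filter"](row)]

rows = [{"customer_id": 1, "ticket_status": "open",
         "invoice_total": 900, "region": "assigned"}]
print(purpose_bound_query(rows, "customer-service"))
# [{'customer_id': 1, 'ticket_status': 'open'}] -- financial columns stripped
```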

Tamper-evident audit trails by default. In a world of autonomous actions, auditability cannot be an afterthought. Every access decision, data query, and API call should be immutably logged, capturing the who, what, where, and why. Chain the logs so they are verifiable and replayable for auditors and incident responders, providing a clear narrative of each agent’s activities.
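One common way to get tamper evidence is a hash chain, where each entry’s hash covers the previous entry’s hash; the sketch below shows the idea with Python’s standard library. Real deployments would add signing and write-once storage.

```python
import hashlib, json

def append_entry(log: list, who: str, what: str, where: str, why: str) -> None:
    """Append an entry whose hash covers the previous entry's hash, so
    any retroactive edit breaks the chain and becomes detectable."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {"who": who, "what": what, "where": where, "why": why, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify_chain(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

audit = []
append_entry(audit, "agent-42", "SELECT invoices", "erp-db", "invoice-reconciliation")
append_entry(audit, "agent-42", "POST /payments", "erp-api", "invoice-reconciliation")
print(verify_chain(audit))  # True; altering any field flips this to False
```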

A practical roadmap to get started

Start with an identity inventory. Catalog all non-human identities and service accounts. You will likely find sharing and over-provisioning. Begin issuing unique identities for each agent workload.
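A sketch of what that first inventory pass might flag, assuming you can export account metadata from your IdP or secrets manager; the export format and the over-provisioning threshold are both invented.

```python
# Hypothetical export from an IdP or secrets manager; fields are invented.
accounts = [
    {"name": "svc-etl-shared", "used_by": ["agent-a", "agent-b", "agent-c"], "scopes": 42},
    {"name": "agent-invoice-01", "used_by": ["agent-invoice-01"], "scopes": 3},
]

for acct in accounts:
    shared = len(acct["used_by"]) > 1        # more than one workload on one account
    over_provisioned = acct["scopes"] > 10   # threshold is a policy choice
    if shared or over_provisioned:
        print(f"{acct['name']}: shared={shared}, scopes={acct['scopes']} "
              "-> issue per-workload identities")
```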

Pilot a just-in-time access platform. Implement a tool that issues short-lived, scoped credentials for a single project. This proves the concept and demonstrates the operational benefits.

Mandate short-lived credentials. Issue tokens that expire in minutes, not months. Find and remove static API keys and secrets from code and configuration.
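To show the shape of a minutes-not-months token, here is a minimal HMAC-signed token with an embedded expiry, built from the standard library alone. This is a teaching sketch, not a substitute for your identity provider’s short-lived credential API, and the signing-key handling is deliberately simplified.

```python
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"rotate-me-often"  # illustrative; keep real keys in a KMS

def mint_token(agent_id: str, ttl_seconds: int = 300) -> str:
    """Mint a token that expires in minutes, not months."""
    claims = {"sub": agent_id, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str) -> bool:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: reject
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time()  # expired tokens fail closed

tok = mint_token("agent-42", ttl_seconds=300)
print(verify_token(tok))  # True for five minutes, then False forever
```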

Set up a synthetic data sandbox. Validate agent workflows, scopes, prompts, and policies on synthetic or masked data first. Promote agents to real data only after your controls, logging, and egress policies pass.

Conduct an agent incident tabletop drill. Practice responses to leaked credentials, prompt injection, or tool escalation. Prove that you can revoke access, rotate credentials, and isolate an agent in minutes.
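A drill needs a measurable pass bar, so a skeleton like the one below can time the three containment steps end to end. Every step body here is a placeholder; wire in your actual IdP, secrets-manager, and network-isolation calls, none of which are assumed.

```python
import time

def contain_agent(agent_id: str) -> dict:
    """Skeleton for the drill's three containment steps, timed end to end.
    Every step body is a placeholder; wire in your real IdP, secrets
    manager, and network-isolation calls."""
    start = time.monotonic()
    steps = {}
    # 1. Revoke: invalidate every live session and token for the agent.
    steps["revoke"] = f"revoked sessions for {agent_id}"
    # 2. Rotate: rotate any credential or key the agent could have seen.
    steps["rotate"] = f"rotated secrets touched by {agent_id}"
    # 3. Isolate: quarantine the workload so it cannot reach data planes.
    steps["isolate"] = f"quarantined {agent_id}"
    steps["elapsed_seconds"] = round(time.monotonic() - start, 3)
    return steps

print(contain_agent("agent-42"))  # the pass bar: minutes, not hours
```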

The bottom line

You can’t manage an agentic, AI-powered future with human-era identity tools. The organizations that win will treat identity as the central nervous system of their AI operations. Make identity the control plane, move authorization to runtime, tie data access to purpose, and prove value on synthetic data before it touches the real thing. Do this, and you can scale to a million agents without scaling your risk of breach.

Michelle Buckner is a former NASA Information Systems Security Officer (ISSO).


