
AI agents now have more access to, and more connections with, enterprise systems than any other software in the environment. That gives them a larger attack surface than security teams have ever had to defend, and the industry doesn't yet have a framework for it. "If that attack vector is used, it could result in a data breach or worse," Spiros Xanthos, founder and CEO of Resolve AI, said at the recent VentureBeat AI Impact Series event. Traditional security frameworks are built around human interactions. John Aniano, SVP of product and CRM applications at Zendesk, said at the same event that there is no consensus yet on how to handle AI agents that have agency and can work autonomously. Agentic AI is advancing faster than enterprises can build guardrails, and Model Context Protocol (MCP), while reducing integration complexity, is making the problem worse. "It's an unsolved problem right now because it's the wild, wild west," Aniano said. "We don't even have a defined technical agent-to-agent protocol that all companies agree on. How do you balance user expectations and keep your platform secure?"
MCP still "extremely flexible"
Enterprises are increasingly turning to MCP servers because they simplify integration between agents, tools, and data. But MCP servers are "extremely flexible," Xanthos said. "In fact, they're probably worse than APIs, because APIs at least have more controls to impose on agents." Today's agents act on behalf of humans under explicit permissions, which keeps a human accountable. "But in the future you may have tens, hundreds of agents with their own identities, their own access," Xanthos said. "This becomes a very complex matrix."
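One way to impose the kind of API-style controls Xanthos describes is a deny-by-default allowlist between agents and tools, so an auto-discovered tool is never callable until a human approves it for a specific agent identity. A minimal sketch (all class, agent, and tool names here are hypothetical, not part of MCP or any vendor's API):

```python
# Deny-by-default tool gating: every tool call is checked against an
# explicit per-agent allowlist before it is allowed to run.

class ToolGate:
    """Gates tool invocations behind a per-agent allowlist."""

    def __init__(self) -> None:
        # agent_id -> set of tool names a human operator has approved
        self._approved: dict[str, set[str]] = {}

    def approve(self, agent_id: str, tool_name: str) -> None:
        """A human operator explicitly approves one tool for one agent."""
        self._approved.setdefault(agent_id, set()).add(tool_name)

    def call(self, agent_id: str, tool_name: str, tool_fn, *args, **kwargs):
        """Run the tool only if this agent is approved for it."""
        if tool_name not in self._approved.get(agent_id, set()):
            raise PermissionError(
                f"agent {agent_id!r} is not approved for tool {tool_name!r}"
            )
        return tool_fn(*args, **kwargs)
```

In a real deployment the check would live in the MCP server or a gateway in front of it, but the principle is the same: discovery does not imply permission.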
Even though his startup is building autonomous AI agents for site reliability engineering (SRE) and systems management, Xanthos acknowledged that the industry "completely lacks a framework" for autonomous agents. "It's entirely up to us, and whoever makes the agent, what restrictions they want to put in place," he said, and customers need to be able to trust those decisions. Some existing security tools provide fine-grained access; Splunk, for example, has developed a way to grant access to specific indexes in the underlying data store, he said. But most controls are broad and human-oriented. "We are trying to figure it out with existing tools," he said. "But I don't think they are adequate for the age of agents."
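The index-level access Xanthos mentions can be sketched as a data store that scopes each agent's reads to explicitly granted indexes. This is an illustrative model only; the store and grant mechanism below are hypothetical and not Splunk's actual API:

```python
# Index-scoped reads: an agent can only search indexes it has been
# explicitly granted, mirroring fine-grained data-store access control.

class IndexScopedStore:
    def __init__(self, indexes: dict[str, list]) -> None:
        self._indexes = indexes                   # index name -> records
        self._grants: dict[str, set[str]] = {}    # agent_id -> readable indexes

    def grant(self, agent_id: str, index: str) -> None:
        """Grant one agent read access to one named index."""
        self._grants.setdefault(agent_id, set()).add(index)

    def search(self, agent_id: str, index: str) -> list:
        """Return records only from indexes this agent may read."""
        if index not in self._grants.get(agent_id, set()):
            raise PermissionError(f"{agent_id!r} cannot read index {index!r}")
        return list(self._indexes.get(index, []))
```

The point of scoping at the index (rather than the whole store) is that an agent debugging application errors never needs, and never gets, a path to payment records.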
Who is accountable when an AI incorrectly authenticates a user?
At Zendesk and other customer relationship management (CRM) platform providers, AI is involved in many user interactions, Aniano said; in fact, it now operates "at a volume and scale that we haven't considered as businesses and as a society."
Things get complicated when AI assists human agents; the audit trail can become a maze. "So now you have a human who is talking to a human who is talking to an AI," Aniano said. "The human tells the AI to take action. Whose fault is it if it's the wrong action?" It becomes more complex still when "there are many pieces of AI and many humans" in the mix.

To keep agents from going off the rails, Zendesk is "very strict" about access and scope, though customers can define their own guardrails based on their requirements. In most cases, AI can access knowledge sources, Aniano said, but agents aren't writing code or running commands on servers. If an AI calls an API, that call is "declaratively designed" and approved, and the functions are explicitly invoked. Customer demand is pressing against those boundaries, however, and "we're closing the gate right now," he said. The industry, he argued, needs solid standards for interacting with agents: "We're entering a world where things like MCP can auto-discover tools; we have to create new ways of security to limit what tools these bots can interact with."

On the security side, Aniano said, enterprises grow concerned when AI starts handling authentication tasks, such as sending and processing one-time passwords (OTPs), SMS codes, or other two-step verification methods. What if an AI misauthenticates or misidentifies someone? That could leak sensitive data or open the door to attackers. "There's a spectrum now, and today the end of that spectrum is a human being," Aniano said. But "tomorrow at the end of that spectrum there may be a specialized agent designed to have that same kind of gut feeling or human-level interaction." Customers themselves sit at different points of adoption and comfort: in some companies, Aniano said, especially those in financial services or other highly regulated environments, humans must still be involved in authentication.
In other cases, legacy companies still rely solely on humans to authenticate other humans. Zendesk, he said, is experimenting with new AI agents that are "a little more connected to the system" and is working with a select group of customers on guardrails.
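The "declaratively designed" API calls Aniano describes can be sketched as a registry in which every function an agent may invoke is declared up front, and higher-risk actions carry an explicit human-in-the-loop requirement. All action names and the registry itself are illustrative, not Zendesk's actual implementation:

```python
# Declarative action registry: agents can only invoke functions that
# were registered in advance; risky ones also require human approval.

ACTIONS: dict[str, dict] = {}

def declared_action(name: str, requires_human: bool = False):
    """Register a function as an explicitly approved agent action."""
    def wrap(fn):
        ACTIONS[name] = {"fn": fn, "requires_human": requires_human}
        return fn
    return wrap

@declared_action("search_knowledge_base")
def search_knowledge_base(query: str) -> str:
    # Read-only action: safe for agents to call on their own.
    return f"results for {query!r}"

@declared_action("issue_refund", requires_human=True)
def issue_refund(order_id: str) -> str:
    # State-changing action: gated behind an explicit human sign-off.
    return f"refunded {order_id}"

def invoke(name: str, *args, human_approved: bool = False, **kwargs):
    entry = ACTIONS.get(name)
    if entry is None:
        raise PermissionError(f"undeclared action {name!r}")  # deny by default
    if entry["requires_human"] and not human_approved:
        raise PermissionError(f"{name!r} requires a human in the loop")
    return entry["fn"](*args, **kwargs)
```

Anything not declared simply cannot be invoked, which is the inverse of MCP-style auto-discovery: the gate is closed unless someone opened it on purpose.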
Permanent authority is coming
Xanthos said that in the future, agents may actually be trusted more than humans to perform certain tasks, and may be given more permissions than humans have today. But we are a long way from that, and, for the most part, the fear of something going wrong is what is holding enterprises back. "Which is a good fear, right? I'm not saying it's a bad thing," he said. Many enterprises are not yet comfortable with an agent performing every step of a workflow or closing the loop entirely; they still want human review. Resolve starts by giving AI agents standing authority in certain cases that are "generally safe," such as coding, and from there will move on to more open-ended scenarios that are not as risky, Xanthos explained. But he acknowledged there will always be high-risk situations where AI mistakes could "change the state of the production system," as he put it. Ultimately, though: "Clearly, there's no turning back; it's probably moving even faster than mobile. So the question is, what do we do about it?"
What can security teams do now
Both speakers pointed to interim measures available within existing tooling. Xanthos noted that some tools — Splunk among them — already provide fine-grained index-level access controls that can be applied to agents. Aniano described Zendesk’s approach as a practical starting point: declaratively designed API calls with clearly approved functions, strict access and scope limitations, and human review before expanding agent permissions.
The underlying principle, as Aniano put it: "We're always examining those doors and seeing how we can widen the aperture." In other words, do not grant standing authority until each expansion has been validated.