The missing layer between agent connectivity and true collaboration

Today’s AI challenge centers on agent coordination, context, and collaboration: how do you enable agents to truly think together, with shared understanding, dialogue, and purpose? Solving this is an important next step toward a new kind of distributed intelligence that keeps humans firmly in the loop.

On the latest stop of VentureBeat’s AI Impact series, Vijay Pandey, SVP and GM of Outshift by Cisco, and Noah Goodman, Stanford professor and co-founder of Humans End, sat down to talk about moving beyond merely connecting agents to immersing them in collective intelligence.

The need for collective intelligence rather than coordinated action

Pandey framed the central challenge plainly: "Today, agents can connect together, but they can’t really think together."

While protocols like MCP and A2A have solved basic connectivity, and AGNTCY tackles problems ranging from discovery and identity management to inter-agent communication and observability, they have only addressed the equivalent of placing a phone call between two people who don’t speak the same language. Pandey’s team has identified something deeper than technical plumbing: agents need to achieve collective intelligence, not just coordinated action.

How shared intentions and shared knowledge enable collective innovation

To understand where multi-agent AI is headed, both speakers pointed to the history of human intelligence. While humans became individually intelligent about 300,000 years ago, true collective intelligence did not emerge until about 70,000 years ago with the advent of sophisticated language.

This breakthrough enabled three critical capabilities: shared intention, shared knowledge and collective innovation.

"Once you have a shared intention, a shared goal, you have a repository of knowledge that you can modify, develop, build upon, then you can move towards collective innovation," Pandey said.

Goodman, whose work bridges computer science and psychology, explained that language is more than just encoding and decoding information.

"Language is this type of encoding that requires understanding the context, the speaker’s intentions, the world, how it affects what people will say in order to figure out what people mean," He said.

This sophisticated understanding is what underpins human cooperation and cumulative cultural evolution, and is what is currently missing from agent-to-agent interactions.

Addressing the gap with the Internet of Cognition

"We have to mimic human evolution,” Pandey explained. “Like individual humans, in addition to making agents smarter, we need to build infrastructure that enables collective innovation, which means sharing intent, coordinating and then sharing knowledge or context and evolving that context.”

Pandey calls it the Internet of Cognition: a three-layer architecture designed to enable collective thinking among heterogeneous agents:

Protocol Layer: Beyond basic connectivity, these protocols enable understanding, intent sharing, coordination, negotiation, and discovery between agents from different vendors and organizations.

Fabric Layer: A shared memory system that allows agents to create and evolve a collective context with emergent properties arising from their interactions.

Cognition Engine Layer: Accelerators and guardrails that help agents think faster while working within necessary constraints around compliance, security, and cost.
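None of these layers is a shipped API today, but the separation of concerns can be sketched in code. Every class and method name below is a hypothetical illustration, not part of AGNTCY or any Cisco product; the sketch only shows how a protocol layer (intent sharing), a fabric layer (shared, evolving context) and a cognition engine (guardrails) might compose.

```python
# Illustrative sketch of the three-layer "Internet of Cognition" idea.
# All names are hypothetical; no real protocol or SDK is implied.

class ProtocolLayer:
    """Carries intents between agents, not just raw messages."""
    def share_intent(self, sender: str, intent: str) -> dict:
        return {"from": sender, "intent": intent}

class FabricLayer:
    """Shared memory that agents create and evolve together."""
    def __init__(self):
        self.context: list[dict] = []
    def contribute(self, message: dict) -> None:
        self.context.append(message)

class CognitionEngine:
    """Guardrails: checks a proposed intent against simple constraints."""
    def __init__(self, banned_intents: set[str]):
        self.banned = banned_intents
    def allowed(self, message: dict) -> bool:
        return message["intent"] not in self.banned

# Compose the layers: two agents from different parties share intents
# into a common fabric, gated by a guardrail engine.
protocol, fabric = ProtocolLayer(), FabricLayer()
engine = CognitionEngine(banned_intents={"exfiltrate-data"})

for agent, goal in [("vendor-a", "plan-migration"), ("vendor-b", "exfiltrate-data")]:
    msg = protocol.share_intent(agent, goal)
    if engine.allowed(msg):
        fabric.contribute(msg)

print(len(fabric.context))  # only the permitted intent lands in shared memory
```

The design choice the sketch highlights is that guardrails sit between intent and shared memory, so the collective context evolves only through vetted contributions.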

The difficulty is that organizations need to build collective intelligence across organizational boundaries.

"Think about shared memory in diverse ways," Pandey said. "We have agents from different parties coming together. So how do you develop that memory and keep it emerging?"

A new foundation-model training approach for agent collaboration

At Humans End, rather than simply layering on additional protocols, Goodman’s team is fundamentally changing how foundation models are trained: not only for interactions between a human and an agent, but between a human and multiple agents, and especially between an agent and multiple humans.

"By changing the training we provide to the Foundation Model and focusing the training on extremely long-term negotiations, they will understand how negotiations should proceed to achieve the right long-term results," He said.

He added that this is a deliberate departure from the long-autonomy path taken by many large labs.

"Our goal is not longer and longer autonomy. This is better and better cooperation," He said. "Humans is building agents with deep social understanding: entities that know who knows what, can foster collaboration, and put the right experts in touch at the right time."

Installing cognitive guardrails

Guardrails remain a central challenge in deploying agents that touch every part of an organization’s systems. The question is how to impose limits without stifling innovation. Organizations want strict, rule-like guardrails, but humans don’t really work that way. Instead, people operate on a principle of minimal harm, thinking through consequences in advance and making judgment calls accordingly.

"How do we provide guardrails as rules, but also support outcome-based cognition when models get smart enough for it?" Goodman asked.

Pandey extended this thinking to the reality of innovation teams, who need to apply rules with discretion, not just follow them mechanically. Figuring out what’s open to interpretation is a “very collaborative task,” he said. “And you don’t understand it through a set of predicates. You don’t understand it through a document. You understand it through common understanding and grounding and discovery and conversation."

Distributed intelligence: the path to superintelligence

True superintelligence will come not from increasingly powerful individual models, but from distributed systems.

"While we create better and better models, and better and better agents, ultimately we think true super intelligence will happen through distributed systems," Pandey said

Intelligence will scale along two axes: vertical, meaning better individual agents, and horizontal, meaning larger collaborative networks, in a manner similar to traditional distributed computing.

However, Goodman said, "We cannot move towards a future where AI goes off and works on its own. We have to move towards a future where there is an integrated ecosystem, a distributed ecosystem that integrates humans and AI together."


