JPMorgan’s AI adoption hit 50% of employees. The secret? A connectivity-first architecture

When Derek Waldron and his technology team at JPMorgan Chase first launched their LLM suite of personal assistants two and a half years ago, they weren’t sure what to expect. It was not long after the game-changing emergence of ChatGPT, and in the enterprise, skepticism was still high.

Surprisingly, employees adopted the internal platform organically – and quickly. Within a few months, usage grew from zero to 250,000 employees. Today, more than 60% of employees across sales, finance, technology, operations and other departments use the ever-evolving, constantly connected suite.

“We were surprised by how viral it was,” Waldron, JPMorgan’s chief analytics officer, explains in a new VB Beyond the Pilot podcast. Employees weren’t just running simple queries; they were creating and customizing assistants with specific personalities, instructions and roles, and sharing what they learned on internal platforms.

The financial giant has achieved what most enterprises are still struggling to reach: large-scale, voluntary employee adoption of AI. It was not the result of a mandate; rather, early adopters shared concrete use cases, and workers built on each other’s enthusiasm. This bottom-up adoption ultimately created an innovation flywheel.

“This is a deeply rooted innovative population,” says Waldron. “If we can continue to equip them with really easy-to-use, powerful capabilities, they can drive the next evolution of this journey.”

Ubiquitous connectivity plugged into highly sophisticated systems of record

JPMorgan has taken a rare, forward-looking approach to its technology architecture. The company treats AI as core infrastructure rather than a novelty, driven by an initially counterintuitive stance that the models themselves would become a commodity. Instead, the team identified connectivity across its systems as the real challenge, and the gap worth closing.

The financial giant invested early in retrieval-augmented generation (RAG); its pipeline is now in its fourth generation and supports multiple modalities. The AI suite sits at the center of an enterprise-wide platform equipped with connectors and tools that support analysis and data preparation.

Employees can plug into an expanding ecosystem of critical business data and interact with “very sophisticated” document, knowledge and structured data stores, as well as CRM, HR, trading, finance and risk systems. Waldron says his team continues to add more connections month after month.
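The connector pattern described here – many systems of record registered behind one assistant platform – can be sketched in a few lines. This is a hypothetical illustration; the class names, connector names and record shapes are invented, not JPMorgan’s actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Connector:
    """A named bridge to one system of record (names are illustrative)."""
    name: str
    fetch: Callable[[str], List[str]]  # query -> matching records

class AssistantPlatform:
    """Central platform that routes assistant queries to registered connectors."""

    def __init__(self) -> None:
        self.connectors: Dict[str, Connector] = {}

    def register(self, connector: Connector) -> None:
        self.connectors[connector.name] = connector

    def query(self, system: str, q: str) -> List[str]:
        return self.connectors[system].fetch(q)

# Toy data standing in for real HR and CRM back ends.
platform = AssistantPlatform()
platform.register(Connector(
    "hr",
    lambda q: [r for r in ["parental leave policy", "401k matching"] if q in r],
))
platform.register(Connector(
    "crm",
    lambda q: [r for r in ["Acme Corp account notes"] if q in r],
))

print(platform.query("hr", "leave"))  # ['parental leave policy']
```

The point of the pattern is that adding a new system of record is just another `register` call, which is what makes "more connections month after month" tractable.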

“We built the platform around this kind of ubiquitous connectivity,” he explains. Ultimately, AI is a great general-purpose technology that will only get more powerful, but if people don’t have meaningful access and significant use cases, “you’re wasting the opportunity.”

As Waldron says, AI’s capabilities are growing impressively – but they remain just shiny objects for show if they can’t prove their use in the real world.

“Even if superintelligence appears tomorrow, if that superintelligence cannot be connected to the systems, data, tools, knowledge and processes that exist within the enterprise, there is no value that can be optimally extracted,” he argues.

Listen to the full episode to hear about:

  • Waldron’s personal strategy of pausing before asking a human peer and instead assessing whether his AI assistant can answer the question and solve the problem.

  • A "One platform, many jobs" Approach: No two roles are alike, so the strategy should focus on reusable building blocks (RAGs, document intelligence, structured data queries) that employees can assemble into role-specific tools.

  • Why RAG maturity matters: JPMorgan evolved through several generations of retrieval – from basic vector search to hierarchical, authoritative, multimodal knowledge pipelines.
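The starting point of that evolution – basic vector search – can be shown in miniature. The sketch below assumes a toy bag-of-words "embedding" and invented document IDs; a production pipeline would use a learned embedding model and a vector store:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[dict], k: int = 2) -> list[dict]:
    # Rank documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return ranked[:k]

# Invented documents standing in for an enterprise knowledge store.
docs = [
    {"id": "hr-001", "text": "employee onboarding policy and benefits enrollment"},
    {"id": "fin-204", "text": "quarterly risk report for the trading desk"},
    {"id": "ops-112", "text": "settlement operations runbook for trade breaks"},
]

top = retrieve("trading desk risk report", docs, k=1)
print(top[0]["id"])  # fin-204
```

Later generations, in Waldron's telling, layer hierarchy, authority and multimodality on top of this basic retrieve-then-generate loop.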

Subscribe to Beyond the Pilot on Apple Podcasts and Spotify.



