Why “which API do I call?” is the wrong question in the LLM era

For decades, we have adapted ourselves to software. We learned shell commands, memorized HTTP method names, and pieced together SDKs. Every interface assumed we would speak its language. In the 1980s and ’90s, we typed ‘grep’, ‘ls’ and ‘ssh’ into a shell; by the mid-2000s, we were invoking REST endpoints like GET /users; by the 2010s, we imported an SDK (client.orders.list()) so we didn’t have to think about HTTP. Behind each of those steps was the same premise: expose capabilities in a structured form so that others can invoke them.

Now we are entering the next interface paradigm. Modern LLMs are challenging the notion that the user must choose a function or remember a method signature. Instead of “Which API do I call?”, the question becomes “What outcome am I trying to achieve?” In other words, the interface is moving from code to language. In this shift, the Model Context Protocol (MCP) emerges as the abstraction that allows models to interpret human intent, discover capabilities, and execute workflows, effectively exposing software functions not as programmers know them but as natural-language requests.

MCP is not hype; several independent studies identify the architectural changes needed to make tools “LLM-consumable.” A blog post from Akamai engineers describes the transition from traditional APIs to “language-driven integration” for LLMs. An academic paper on “AI Agentic Workflows and Enterprise APIs” discusses how enterprise API architectures should evolve to support goal-oriented agents rather than human-driven calls. In short: we are no longer designing APIs just for code; we are designing capabilities for intent.

Why does this matter to enterprises? Because enterprises are drowning in internal systems, integration sprawl and user training costs. Workers struggle not because they lack tools, but because they have too many tools, each with its own interface. When natural language becomes the primary interface, the question “What function do I call?” stops being an obstacle and simply disappears. A recent business blog observed that natural-language interfaces (NLIs) are enabling self-service data access for marketers who previously had to wait for analysts to write SQL. When the user just states the intent (e.g. “get last quarter’s revenue for region X”), the system works out the rest.

Natural language becomes the interface, not the feature

To understand how this evolution works, consider the interface ladder:

| Era | Interface | Who it was made for |
| --- | --- | --- |
| CLI | Shell commands | Expert users typing text |
| API | Web or RPC endpoints | Developers integrating systems |
| SDK | Library calls | Programmers working through abstractions |
| Natural language (MCP) | Intent-based requests | Humans and AI agents stating what they want |

At each stage, humans had to “learn the language of the machine.” With MCP, the machine absorbs the human’s language and does the rest. This isn’t just a UX improvement; it’s an architectural change.

Under MCP, the code-level functions still exist: data access, business logic, and orchestration. But they are discovered rather than invoked manually. For example, instead of calling billingApi.fetchInvoices(customerId=…), you say, “Show all Acme Corp.’s invoices from January and highlight any late payments.” The model resolves the entities, calls the correct systems, filters the results, and returns structured insights. The developer’s work shifts from wiring endpoints to defining capability surfaces and guardrails.
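
To make the contrast concrete, here is a minimal sketch, in Python, of what such a capability surface might look like. Everything in it, the Invoice shape, the list_invoices function and the canned data, is a hypothetical illustration rather than a real billing API; the point is that the natural-language description, not the call signature, is what the model works from.

```python
# Illustrative sketch only: a hypothetical "capability surface" for the billing
# example. The Invoice type, list_invoices() and the sample data are made up.
from dataclasses import dataclass
from datetime import date


@dataclass
class Invoice:
    customer: str
    issued: date
    due: date
    paid_on: date | None  # None means still unpaid


def _billing_lookup(customer: str) -> list[Invoice]:
    # Stand-in for the real billing system; returns canned data for the sketch.
    return [
        Invoice("Acme Corp.", date(2025, 1, 10), date(2025, 2, 9), None),
        Invoice("Acme Corp.", date(2025, 1, 22), date(2025, 2, 21), date(2025, 2, 15)),
    ]


def list_invoices(customer: str, start: date, end: date) -> list[dict]:
    """List a customer's invoices issued in a date range and flag late payments.

    This docstring is the natural-language description an agent reads when
    deciding whether the capability matches the user's intent.
    """
    results = []
    for inv in _billing_lookup(customer):
        if start <= inv.issued <= end:
            late = (inv.paid_on or date.today()) > inv.due
            results.append({"issued": inv.issued.isoformat(),
                            "due": inv.due.isoformat(),
                            "late": late})
    return results


# Given "Show all Acme Corp.'s invoices from January and highlight any late
# payments", the model resolves the entities and emits a structured call like:
print(list_invoices("Acme Corp.", date(2025, 1, 1), date(2025, 1, 31)))
```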

This change transforms the developer experience and enterprise integration. Teams often struggle to onboard new tools because they need to map schemas, write glue code, and train users. On the natural-language front, onboarding means defining business entity names, declaring capabilities, and exposing them through the protocol. The human (or AI agent) no longer needs to know the parameter names or the call order. Studies show that using an LLM as the interface to an API can reduce the time and resources required to build a chatbot or tool-invocation workflow.
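
As a sketch of “onboarding as declaring capabilities,” the same lookup could be exposed to any MCP-aware agent through a small server. The snippet below assumes the official MCP Python SDK’s FastMCP helper; the server name and the canned return value are placeholders, and the exact SDK surface should be checked against the current documentation.

```python
# Hedged sketch: exposing one capability through an MCP server, assuming the
# official MCP Python SDK's FastMCP helper is available (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")  # hypothetical server name


@mcp.tool()
def list_invoices(customer: str, start: str, end: str) -> list[dict]:
    """List a customer's invoices issued between two ISO dates and flag late ones."""
    # A real deployment would call the billing system; canned data stands in here.
    return [{"issued": "2025-01-10", "due": "2025-02-09", "late": True}]


if __name__ == "__main__":
    mcp.run()  # serve the tool so MCP-aware agents can discover and call it
```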

The change also brings productivity gains. Enterprises adopting LLM-powered interfaces can turn data-access latency of hours or days into conversational latency of seconds. For example, where an analyst once had to export a CSV, run transformations, and build slides, a language interface lets them ask to “summarize the top five risk factors for churn over the past quarter” and get narrative plus visuals in one pass. The human then reviews, adjusts, and acts; the work moves from plumbing data to making decisions. It matters: according to a McKinsey & Company survey, 63% of organizations using generative AI are already producing text output, and more than a third are generating images or code. Although many are still in the early days of capturing enterprise-wide ROI, the signal is clear: language as the interface unlocks new value.

Architecturally, this means that software design must evolve. MCP demands that systems publish capability metadata, support semantic routing, maintain episodic memory, and apply guardrails. API design no longer asks “What function will the user call?” but “What intent can the user express?” A recently published framework for adapting enterprise APIs to LLMs shows how APIs can be enriched with natural-language-friendly metadata so that agents can dynamically select tools. The implication: software becomes modular around intent surfaces rather than function surfaces.
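
A toy sketch of that selection step may help: capabilities published with natural-language descriptions, and a stated intent matched against them. A production router would use embeddings or the model itself; simple word overlap stands in here, and every capability name and description below is invented for illustration.

```python
# Illustrative "semantic routing" over capability metadata. Word overlap is a
# stand-in for embedding similarity; all names and descriptions are made up.
import re

CAPABILITIES = {
    "billing.list_invoices": "List a customer's invoices in a date range and flag late payments",
    "crm.lookup_account": "Find a customer account and its primary contacts",
    "analytics.churn_report": "Summarize churn risk factors for a segment and period",
}


def _words(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))


def route(intent: str) -> str:
    """Pick the capability whose description best overlaps the stated intent."""
    intent_words = _words(intent)
    return max(CAPABILITIES, key=lambda name: len(intent_words & _words(CAPABILITIES[name])))


print(route("show Acme's invoices from January and highlight late payments"))
# -> billing.list_invoices
```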

Language-first systems also bring risks and requirements. Natural language is ambiguous by nature, so enterprises must implement authentication, logging, provenance, and access controls, just as they did for APIs. Without these guardrails, an agent could call the wrong system, expose data, or misinterpret intent. A post on “prompt collapse” highlights the danger: as natural-language UIs become dominant, software could turn into “a capability accessible through conversation” and the company into “an API with a natural-language frontend.” This change is powerful, but only safe if the system is designed for introspection, audit, and governance.
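
One way to picture those guardrails is a thin policy-and-audit wrapper that every agent-initiated tool call passes through before the capability runs. The sketch below is illustrative only: the policy table, role names and tool registry are assumptions, not a mechanism prescribed by MCP.

```python
# Illustrative guardrail wrapper: access control plus an audit/provenance log
# around agent-initiated tool calls. Roles, policy and tools are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

POLICY = {"billing.list_invoices": {"finance", "support"}}  # tool -> allowed roles
TOOLS = {"billing.list_invoices": lambda **kw: [{"issued": "2025-01-10", "late": True}]}


def guarded_call(user: str, roles: set[str], tool: str, args: dict):
    """Check access, record provenance, then invoke the tool."""
    if not roles & POLICY.get(tool, set()):
        audit.warning("DENIED user=%s tool=%s", user, tool)
        raise PermissionError(f"{user} may not call {tool}")
    result = TOOLS[tool](**args)
    audit.info("CALL %s", json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "args": args,  # provenance for later review
    }))
    return result


print(guarded_call("dana", {"finance"}, "billing.list_invoices",
                   {"customer": "Acme Corp.", "start": "2025-01-01", "end": "2025-01-31"}))
```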

This change also has cultural and organizational implications. For decades, enterprises have hired integration engineers to design APIs and middleware. With MCP-powered models, companies will increasingly hire ontology engineers, capability architects, and agent-enablement specialists. These roles focus on defining the semantics of business operations, mapping business entities to system capabilities, and curating context memory. Because the interface is now human-centered, skills such as domain knowledge, prompt framing, observability, and evaluation become central.

What should enterprise leaders do today? First, treat natural language as an interface layer, not a fancy add-on. Map the business workflows that can be exposed securely through language. Then inventory the capabilities you already have: data services, analytics, and APIs. Ask: “Are these discoverable? Can they be invoked by intent?” Finally, pilot an MCP-style layer: pick a small domain (say, customer-support triage) where users or agents can express outcomes in language, and let the system do the orchestration. Then iterate and scale.

Natural language isn’t just the new front end. It is becoming the default interface layer for software, succeeding the CLI, then the API, then the SDK. MCP is the abstraction that makes this possible. The benefits include faster integration, more modular systems, higher productivity and new roles. For organizations still built around manually calling endpoints, this change will feel like learning a new platform all over again. The question is no longer “Which function do I call?” but “What do I want to do?”

Dhyeya Mavani works on accelerating generative AI and computational mathematics.


