Why "which API to call?" is the wrong question in the LLM era

We’ve adapted to software for decades. We learned shell commands, memorized HTTP method names, and wired up SDKs. Every interface assumed we would speak its language. In the CLI era we typed grep, ssh, and ls into the shell; through the mid-2000s we called REST endpoints like GET /users; by 2010 we were importing SDKs (client.orders.list()) so we didn’t have to think about HTTP. But each of these steps rested on the same premise: expose capabilities in a structured form so that others can call them.

But now we are entering the next interface paradigm. Modern LLMs challenge the assumption that the user must choose a function or remember a method signature. Instead of "Which API should I call?" the question becomes "What result am I trying to achieve?" In other words, the interface shifts from code to language. In this shift, the Model Context Protocol (MCP) emerges as the abstraction that lets models interpret human intent, discover capabilities, and execute workflows, effectively exposing software capabilities not as function signatures programmers know, but as natural-language requests.

MCP is not a marketing term; multiple independent analyses have identified the architectural shift required to make tools "LLM-consumable." An Akamai engineering blog describes the transition from traditional APIs to "language-driven integrations" for LLMs. An academic paper on AI agent workflows and enterprise APIs argues that enterprise API architecture must evolve to support goal-oriented agents rather than human-driven calls. In short: we no longer just design APIs for code; we design capabilities for intent.

Why does this matter to businesses? As enterprises drown in internal systems, integration and user-training costs grow. Workers struggle not because they lack tools, but because they have too many tools, each with its own interface. When natural language becomes the main interface, the "which function do I call?" barrier disappears. A recent business blog noted that natural language interfaces (NLIs) enable self-service access to data for marketers who previously had to wait for analysts to write SQL. When the user simply states an intent (such as "get last quarter’s revenue for region X and note anomalies"), the system underneath can translate it into calls, orchestrate them, maintain context, and deliver results.
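To make that layering concrete, here is a minimal, hypothetical sketch of what sits beneath such a request: the intent becomes a plan of tool calls, and an executor runs the plan against registered capabilities. Every name in it (Step, plan_calls, the registry entries) is an illustrative stand-in, and the hard-coded plan substitutes for what an LLM planner would actually produce:

```python
# Hypothetical sketch of the layer beneath a natural-language request.
# All names here are illustrative, not a real MCP implementation.
from dataclasses import dataclass

@dataclass
class Step:
    tool: str   # capability to invoke
    args: dict  # resolved parameters

def plan_calls(intent: str) -> list[Step]:
    """In production an LLM produces this plan; here it is hard-coded."""
    return [
        Step("revenue.query", {"region": "X", "quarter": "2024-Q4"}),
        Step("analytics.flag_anomalies", {"metric": "revenue"}),
    ]

def execute(plan: list[Step]) -> list[dict]:
    # Toy capability registry standing in for real backend systems.
    registry = {
        "revenue.query": lambda args: {"revenue": 1_250_000, **args},
        "analytics.flag_anomalies": lambda args: {"anomalies": ["week 7 dip"]},
    }
    return [registry[step.tool](step.args) for step in plan]

results = execute(plan_calls("get last quarter's revenue for region X and note anomalies"))
print(results)
```

The point of the sketch is the separation of concerns: the planner resolves intent, the registry owns capabilities, and neither the user nor the agent hard-codes endpoint wiring.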

Natural language is not a convenience but an interface

To understand how this evolution works, take a look at the interface ladder:

Era | Interface | Who was it built for?
CLI | Shell commands | Experienced text-input users
API | Web or RPC endpoints | Developers integrating systems
SDK | Library functions | Programmers using abstractions
Natural language (MCP) | Intent-based requests | Humans and AI agents stating what they want

At each rung of that ladder, humans had to learn the language of the machine. With MCP, the machine takes in the language of the human and does the rest. This isn’t just a UX improvement; it’s an architectural change.

With MCP, the building blocks of the code are still there: data access, business logic, and orchestration. But they are discovered, not called manually. For example, instead of calling billingApi.fetchInvoices(customerId=…), you say "Show all invoices for Acme Corp since January and highlight any late payments." The model resolves the entities, calls the correct systems, filters the results, and returns structured information. The developer’s work shifts from wiring endpoints to defining capability surfaces and guardrails.
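As a concrete sketch, this is roughly how that billing capability could be exposed as a discoverable tool using the official MCP Python SDK’s FastMCP helper (pip install mcp). The tool name, parameters, and in-memory data are illustrative assumptions, not the article’s actual system:

```python
# Sketch of an MCP server exposing a billing capability via the official
# MCP Python SDK. The tool body and its data are hypothetical; the point
# is that a model discovers this tool by its name, description, and typed
# signature rather than through a hard-coded call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")

@mcp.tool()
def list_invoices(customer: str, since: str, late_only: bool = False) -> list[dict]:
    """Return invoices for a customer since a given ISO date,
    optionally restricted to late payments."""
    # Hypothetical in-memory data; a real server would query the billing system.
    invoices = [
        {"customer": customer, "date": "2025-01-15", "amount": 4200, "late": True},
        {"customer": customer, "date": "2025-02-03", "amount": 1800, "late": False},
    ]
    return [i for i in invoices if i["date"] >= since and (i["late"] or not late_only)]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to any MCP-capable client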

This change is transforming the developer experience and enterprise integration. Teams often struggle to roll out new tools because doing so requires mapping schemas, writing glue code, and training users. With a natural-language front end, onboarding a tool means naming the business entities, declaring capabilities, and exposing them through the protocol, as the schematic below shows. The human (or AI agent) no longer needs to know the parameter names or the order of the calls. Studies suggest that using LLMs as interfaces to APIs can reduce the time and resources needed to build chatbots or tool-invoking workflows.
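Schematically, the capability metadata a server advertises to clients looks like the following. The shape (name, description, inputSchema) follows MCP’s tools/list response; this particular tool is a hypothetical example carried over from the billing sketch above:

```python
# Schematic of what an MCP server advertises when a client calls tools/list.
# The field names follow the MCP spec; the tool itself is illustrative.
tools_list_response = {
    "tools": [
        {
            "name": "list_invoices",
            "description": "Return invoices for a customer since a given "
                           "ISO date, optionally restricted to late payments.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "customer": {"type": "string"},
                    "since": {"type": "string", "format": "date"},
                    "late_only": {"type": "boolean", "default": False},
                },
                "required": ["customer", "since"],
            },
        }
    ]
}
```

This metadata, not a compiled client library, is what the agent reads to decide whether and how to invoke the capability.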

The change also brings productivity benefits. Enterprises that adopt LLM-mediated interfaces can convert data-access latency (hours or days) into conversation latency (seconds). For example, where an analyst previously had to export CSV files, run transformations, and build slides, the language interface lets them ask for "a summary of the top five churn-risk factors in the last quarter" and get a narrative plus visuals in one step. The person then reviews, adjusts, and acts, moving from data plumber to decision maker. This matters: according to a McKinsey & Company study, 63% of organizations using gen AI already create text outputs, and more than a third generate images or code. Although many are still in the early days of capturing enterprise-wide ROI, the signal is clear: language as an interface is unlocking new value.

Architecturally, this means software design must evolve. MCP requires systems that publish capability metadata, support semantic routing, maintain context memory, and enforce guardrails. API design should no longer ask "What function will the user call?" but "What intent can the user express?" A recently published framework for adapting enterprise APIs to LLMs shows how APIs can be enriched with natural-language-friendly metadata so that agents can select tools dynamically; a toy version of that routing step is sketched below. The bottom line: software becomes modular around intent surfaces, not function surfaces.
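The following toy sketch illustrates semantic routing: pick the tool whose description best matches the user’s intent. Real systems would use embedding similarity over a live tool catalog; the token-overlap score and the three tools here are runnable stand-ins, not a real router:

```python
# Toy semantic router: choose the tool whose natural-language description
# best matches the intent. The catalog and scoring are illustrative.
TOOLS = {
    "list_invoices": "return invoices for a customer, filter late payments",
    "revenue_report": "summarize revenue by region and quarter",
    "create_ticket": "open a support ticket for a customer issue",
}

def score(intent: str, description: str) -> float:
    # Jaccard word overlap as a cheap stand-in for embedding similarity.
    a, b = set(intent.lower().split()), set(description.lower().split())
    return len(a & b) / len(a | b)

def route(intent: str) -> str:
    return max(TOOLS, key=lambda name: score(intent, TOOLS[name]))

print(route("show late invoices for Acme"))        # -> list_invoices
print(route("revenue for region X last quarter"))  # -> revenue_report
```

The design point survives the toy scoring: tool selection is driven by published descriptions, so adding a capability means writing good metadata, not rewiring a dispatcher.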

Language-first systems also carry risks and requirements. Natural language is inherently ambiguous, so enterprises must implement authentication, logging, provenance, and access control just as they did for APIs. Without these guardrails, an agent could call the wrong system, expose data, or misinterpret intent. One commentary highlights the stakes: as the natural-language interface becomes dominant, software becomes a set of conversational capabilities, and the enterprise API becomes, in effect, a natural-language interface. This transformation is powerful, but it is safe only if systems are designed for introspection, auditing, and governance.
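Here is a minimal sketch of what such guardrails could look like at the tool-invocation boundary, assuming a simple scope model and a shared audit logger. All names are illustrative assumptions, not part of MCP itself:

```python
# Minimal guardrail sketch: scope-based access control plus an audit log
# recording who invoked which tool with which arguments. The scope names
# and caller shape are hypothetical.
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

def guarded(required_scope: str):
    def decorator(fn):
        @wraps(fn)
        def wrapper(caller: dict, **kwargs):
            if required_scope not in caller.get("scopes", ()):
                audit.warning("DENY %s -> %s %s", caller["id"], fn.__name__, kwargs)
                raise PermissionError(f"{caller['id']} lacks scope {required_scope}")
            audit.info("ALLOW %s -> %s %s", caller["id"], fn.__name__, kwargs)
            return fn(caller, **kwargs)
        return wrapper
    return decorator

@guarded("billing:read")
def list_invoices(caller: dict, customer: str) -> list:
    return []  # hypothetical data access

agent = {"id": "agent-42", "scopes": ["billing:read"]}
list_invoices(agent, customer="Acme Corp")  # allowed, and logged with provenance
```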

The change also has cultural and organizational implications. For decades, enterprises have hired integration engineers to design APIs and middleware. With MCP-driven models, companies will increasingly hire ontology engineers, capability architects, and agent-enablement specialists. These roles focus on defining the semantics of business operations, mapping business entities to system capabilities, and managing context memory. Since the interface is now human-centered, skills such as domain knowledge, prompt framing, supervision, and evaluation become central.

What should business leaders be doing today? First, treat natural language as the interface layer, not as a fancy add-on. Map the business flows that can be safely invoked through language. Next, catalog the core capabilities you already have: data services, analytics, and APIs. Then ask: are they discoverable? Can they be invoked by intent? Finally, pilot an MCP-style layer: pick a small domain (customer support, say) where a user or agent can express a desired outcome in language and let the systems handle the orchestration. Then iterate and scale.

Natural language isn’t just the new interface. It is becoming the default interface layer for software, succeeding the CLI, then the API, then the SDK. MCP is the abstraction that makes this possible. The benefits include faster integration, more modular systems, higher productivity, and new roles. For organizations still wired around manually calling endpoints, the change will feel like learning a new platform all over again. The question is no longer "which function do I call?" but "what do I want to do?"

Dhyey Mavani is accelerating generative AI and computational mathematics.
