From databases to APIs
In the early days of web development, applications talked to databases directly, often building SQL commands straight from user input. This gave immediate data access but opened a Pandora’s box of security issues, most famously SQL injection. As applications grew, direct database access became increasingly risky and impractical.
This led to the development of Application Programming Interfaces (APIs), providing structured, secure layers that mediated interactions with databases. APIs quickly became the lifeblood of modern web and software applications, ensuring consistent data access and enabling systems to communicate effectively without compromising security.
Introducing OpenAPI
As APIs proliferated, documenting and standardising them became essential. Enter OpenAPI (formerly Swagger)—an open standard that clearly defines RESTful APIs, detailing endpoints, operations, parameters, and responses. OpenAPI was a game-changer, making APIs understandable for both humans and machines.
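To make that concrete, here is a pared-down sketch of the kind of description OpenAPI provides, written as a Python dict rather than the usual YAML or JSON so it matches the code samples later in this post. The weather endpoint and its parameters are hypothetical, not a real API.

```python
# A pared-down OpenAPI 3 description of a hypothetical weather API,
# expressed as a Python dict purely for illustration.
openapi_spec = {
    "openapi": "3.0.3",
    "info": {"title": "Weather API (hypothetical)", "version": "1.0.0"},
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getCurrentWeather",
                "summary": "Return current conditions for a city",
                "parameters": [
                    {"name": "city", "in": "query", "required": True,
                     "schema": {"type": "string"}},
                ],
                "responses": {
                    "200": {"description": "Current temperature and conditions"}
                },
            }
        }
    },
}
```

Both a developer and a code generator can read a description like this and know exactly which endpoints exist and what they expect.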
Yet, as powerful as OpenAPI is, it’s inherently static. It describes what’s possible but doesn’t dynamically manage interactions, especially the nuanced, multi-step exchanges required by advanced AI-driven applications.
What exactly is MCP?
Model Context Protocol (MCP), introduced by Anthropic in 2024, aims to bridge that gap. Think of MCP as a dynamic conversational protocol specifically designed for Large Language Models (LLMs) like Claude or GPT. MCP facilitates real-time interactions, capability discovery, and secure integration between AI models and external data sources or tools.
Imagine MCP as a universal USB-C port for AI apps, streamlining connectivity so AI systems can seamlessly fetch live data, invoke complex actions, or update internal records through a standardised protocol.
How does MCP work?
At its core, MCP operates on a client-server model:
- MCP Servers expose tools, resources, or prompts through a standard interface.
- MCP Clients (like AI assistants) discover available tools and resources dynamically, request them as needed, and incorporate responses back into the AI’s output.
For example, if your AI assistant needs real-time weather data, it calls the relevant MCP tool from a “weather” MCP server. The MCP server fetches this data from external APIs and returns structured responses the AI can interpret effortlessly.
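As a minimal sketch of what such a weather server could look like, the snippet below assumes the official Python MCP SDK (the `mcp` package) and its FastMCP helper, with httpx as the HTTP client; the upstream weather URL is a placeholder, not a real service.

```python
# A minimal sketch of a "weather" MCP server using the Python MCP SDK's
# FastMCP helper. The upstream weather URL is a placeholder.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_current_weather(city: str) -> str:
    """Return current conditions for a city as plain text."""
    # Call an external weather API (placeholder URL) and hand the result
    # back to the client in a structured, LLM-friendly form.
    resp = httpx.get("https://api.example.com/weather", params={"city": city})
    resp.raise_for_status()
    data = resp.json()
    return f"{city}: {data.get('temperature')}°C, {data.get('conditions')}"

if __name__ == "__main__":
    # stdio is the default transport: the client launches this script and
    # exchanges JSON-RPC messages over stdin/stdout.
    mcp.run()
```

Once a script like this is registered with an MCP client such as Claude Desktop, the model can discover and call `get_current_weather` without any bespoke glue code.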
What does MCP add vs. OpenAPI alone?
A common question is why MCP is necessary when you can provide an LLM with an OpenAPI document and have it directly produce JSON for API calls. Here’s the key difference:
With OpenAPI alone, your AI interprets the static API documentation and generates JSON for the call, which you then pass straight to the API. This is straightforward but limited: errors creep in when the model misreads the spec, and every AI integration has to be wired up independently, by hand.
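A rough sketch of that flow, with the LLM call stubbed out and the endpoint a placeholder, looks something like this; note that nothing sits between the model’s output and the HTTP request.

```python
# Sketch of the OpenAPI-only pattern: the LLM reads the spec, emits JSON,
# and the application forwards that JSON to the API with no mediating layer.
import json
import httpx

def call_llm(prompt: str) -> str:
    # Placeholder: substitute your actual LLM client (OpenAI, Anthropic, ...).
    # Returns a canned response here so the flow is easy to follow.
    return '{"method": "GET", "path": "/weather", "params": {"city": "Berlin"}}'

spec_text = open("openapi.json").read()      # the static spec, as plain text
prompt = (
    "Here is an OpenAPI spec:\n" + spec_text +
    "\nReturn ONLY a JSON object with method, path and params "
    "that fetches the current weather for Berlin."
)

call = json.loads(call_llm(prompt))          # hope the model followed the spec
response = httpx.request(                    # no validation between model and API
    call["method"],
    "https://api.example.com" + call["path"],
    params=call["params"],
)
print(response.json())
```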
MCP adds a dynamic layer that goes beyond static specs (the sketch after this list shows what the exchange looks like on the wire):
- Dynamic capability discovery: MCP clients query servers to discover available tools at runtime, ensuring up-to-date interactions.
- Structured tool invocation: MCP servers validate and structure tool invocations, reducing errors and improving reliability.
- Centralised security and policy management: MCP servers handle permissions, access control, and logging uniformly, crucial in enterprise settings.
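To illustrate the first two points, MCP is built on JSON-RPC 2.0, and the messages below are a simplified sketch of its `tools/list` and `tools/call` requests (fields trimmed, values invented), reusing the weather tool from earlier.

```python
# Simplified sketch of MCP's JSON-RPC exchange (fields trimmed for brevity).

# 1. Capability discovery: the client asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "get_current_weather",
        "description": "Return current conditions for a city as plain text",
        "inputSchema": {                     # JSON Schema the server validates against
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]},
}

# 2. Structured invocation: the client calls a tool by name with typed arguments.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_current_weather", "arguments": {"city": "Berlin"}},
}
call_response = {
    "jsonrpc": "2.0", "id": 2,
    "result": {"content": [{"type": "text", "text": "Berlin: 14°C, overcast"}]},
}
```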
However, there’s a strong counterargument: an API layer designed specifically for an LLM could directly work with existing OpenAPI files. This layer could read OpenAPI specifications, dynamically create a tool usage specification for the LLM, and directly call the API using instructions from the OpenAPI document. Critics argue that this approach would avoid additional coding or complexity from MCP while still offering similar benefits, raising questions about whether MCP genuinely provides enough incremental value to justify its adoption.
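For concreteness, such a shim might look like the sketch below: a thin layer that reads an OpenAPI document, turns each operation into a tool definition the LLM can be offered, and executes the chosen call directly. The file name and the simplified spec handling are illustrative only.

```python
# Sketch of an OpenAPI-to-tools shim: derive tool definitions from a spec
# and execute calls directly, with no MCP server in between.
import json
import httpx

def openapi_to_tools(spec: dict) -> list[dict]:
    """Turn each OpenAPI operation into a tool definition an LLM can be offered."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            params = {
                p["name"]: p.get("schema", {"type": "string"})
                for p in op.get("parameters", [])
            }
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "parameters": {"type": "object", "properties": params},
                "method": method.upper(),
                "path": path,
            })
    return tools

def execute(tool: dict, arguments: dict, base_url: str) -> dict:
    """Call the underlying API using the details taken straight from the spec."""
    resp = httpx.request(tool["method"], base_url + tool["path"], params=arguments)
    resp.raise_for_status()
    return resp.json()

spec = json.load(open("openapi.json"))       # illustrative file name
tools = openapi_to_tools(spec)               # hand these to the LLM as tool specs
```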
MCP: Local vs. SaaS applications
MCP might be particularly beneficial for local tasks, such as a developer tool like Cursor interacting directly with your terminal or filesystem. In these scenarios, MCP’s dynamic, structured interactions add real value by safely abstracting local processes. Conversely, SaaS or server-based tasks, such as an AI assistant creating a new deal in HubSpot, are often already well served by the existing OpenAPI ecosystem; here MCP may simply add complexity, with existing standards and workflows covering the integration needs adequately.
Why MCP matters (and why it might not)
MCP isn’t just another technical standard—it’s potentially transformative. But, like every emerging tech, it has its advocates and detractors.
The case for MCP:
- Simplifies integration: Developers integrate once using MCP, benefiting multiple AI applications.
- Dynamic and interactive: MCP facilitates multi-turn, context-rich interactions suited for conversational AI.
- Security and governance: MCP standardises secure data access and actions.
- Future-proof and flexible: MCP is model-agnostic, reducing vendor lock-in.
The sceptical view:
- Complexity concerns: Critics argue MCP complicates straightforward interactions already achievable with OpenAPI.
- Redundant solutions: Existing robust standards like OpenAPI and GraphQL may render MCP redundant.
- Vendor motive suspicion: Some perceive MCP as Anthropic’s strategic move to establish market influence.
The future of MCP
Will MCP become the next big standard, or fizzle out like countless ambitious protocols before it? Adoption will be critical.
Encouragingly, companies like Block, Replit, and Apollo are already experimenting with MCP, demonstrating real-world utility. The developer community is cautiously optimistic, especially as tools emerge to automatically convert existing APIs to MCP-compatible servers.
If MCP gains traction beyond Anthropic’s ecosystem—particularly if OpenAI and other giants incorporate or support it—the future looks promising. Conversely, if it remains a niche protocol, developers may simply default to existing, simpler solutions.
Conclusion & call-to-action
Model Context Protocol is ambitious, innovative, and potentially transformative. But whether MCP will redefine AI integrations or simply become another forgotten standard depends on adoption, community engagement, and practical demonstration of its benefits.
Interested in exploring MCP for your AI projects? Or seeking help with AI and integrations? Get in touch to book a discovery call and see how we can streamline your AI integration challenges.