
Model Context Protocol: what it is and why it will change AI

If you made it here, you are likely curious about the Model Context Protocol (MCP)—and rightly so. It represents a fundamental shift in how we build and integrate AI systems.

Before we dive in, this is my third article on MCP. If you want to see the journey, check out the previous two:

  1. How to Build Your First MCP Server in Python
  2. MCP Servers: Best Practices and FastMCP

Think of MCP as the “USB‑C for AI applications” — a standard that finally ends the chaos of custom integrations.

The problem no one had solved

Until recently, we lived with the "N×M integration problem": every AI application needed a custom connector for each data source, so N applications talking to M data sources meant building and maintaining N×M bespoke integrations. It was a nightmare for development, maintenance, and scale.

MCP fixes that. Launched as open source by Anthropic in November 2024, it was quickly adopted by companies like OpenAI, Microsoft, AWS, and GitHub. By early 2025, the community had created over 1,000 MCP servers.

With an architecture based on JSON‑RPC 2.0, MCP lets any AI model access data, execute actions, and use prompts securely and consistently. Every message is a standard JSON‑RPC 2.0 request, response, or notification; for example, a request asking a server to run one of its tools looks like this:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "hello",
    "arguments": { "name": "world" }
  }
}

How does the MCP architecture work?

MCP follows a host–client–server model:

  • Host Process: The brain—apps like Claude Desktop or your IDE. It enforces security and orchestrates communication.
  • MCP Clients: Interpreters that connect to specific servers and mediate communication (a minimal client sketch follows this list).
  • MCP Servers: Independent processes that expose capabilities via three component types.
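
To make these roles concrete, here is a minimal client sketch based on the official Python SDK (the mcp package); server.py is a hypothetical server script, used purely for illustration:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # The host launches the server as a local subprocess (stdio transport)
    server = StdioServerParameters(command="python", args=["server.py"])

    async with stdio_client(server) as (read, write):
        # One client session per server connection
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover capabilities
            print([tool.name for tool in tools.tools])

asyncio.run(main())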

Transports

  • stdio for local servers that the host launches as a subprocess (the most common setup)
  • Streamable HTTP for remote servers that need to serve clients over the network
  • SSE appeared early but has been superseded by Streamable HTTP (the sketch below shows how a server picks its transport at startup)
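
As an illustration of how a server picks its transport, here is a minimal FastMCP sketch (FastMCP is covered in detail below); it assumes a recent version of the official Python SDK that supports Streamable HTTP:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("TransportDemo")

if __name__ == "__main__":
    # Local use: the host launches this process and talks over stdin/stdout
    mcp.run(transport="stdio")
    # Remote use: serve over Streamable HTTP instead
    # mcp.run(transport="streamable-http")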

Three components: Resources, Tools, and Prompts

  • Resources: Read‑only data sources. The app selects resources to provide context. No side effects.
  • Tools: Executable functions that can have side effects (e.g., calling an API). The spec expects clients to keep a human in the loop and ask for user approval before a tool runs.
  • Prompts: Reusable templates chosen by the user to guide the model.

In short: Resources are managed by the app, Tools by the model, and Prompts by the user.
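
Here is a sketch of how the three components look in code, using FastMCP from the official Python SDK (introduced properly later in this article); the URIs and function bodies are illustrative placeholders:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ComponentsDemo")

# Resource: read-only context addressed by a URI, no side effects
@mcp.resource("config://api-schema")
def api_schema() -> str:
    return '{"version": "1.0", "endpoints": ["/users", "/orders"]}'

# Tool: an executable function the model can ask to run, may have side effects
@mcp.tool()
def create_file(name: str, content: str) -> str:
    with open(name, "w") as f:
        f.write(content)
    return f"Created {name}"

# Prompt: a reusable template the user selects
@mcp.prompt()
def meeting_summary(notes: str) -> str:
    return f"Summarize the following meeting notes as action items:\n\n{notes}"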

Real‑world examples

  • Resources: config://api-schema, file://docs/report.docx, database://schema/users
  • Tools: query_database("SELECT * FROM customers"), send_email(...), create_file("new.txt", "content")
  • Prompts: templates for meeting summaries, troubleshooting guides, or reports

Who is using it and what are the results?

  • Block: 60+ MCP servers powering their internal agent “Goose.”
  • Bloomberg: Standardized development for 10,000+ engineers.
  • GitHub: A Go‑based MCP server with 60+ tools for repos, issues, and PRs.

Benefits include faster workflows, democratized data access, and 60–80% reductions in routine task time.

Want to build your own server? Best practices

My recommendation is to use FastMCP, the high-level server API that ships with the official Python SDK; it is the fastest path to production for Python developers.

from mcp.server.fastmcp import FastMCP

# Name the server and declare the packages it needs at runtime
mcp = FastMCP("MyServer", dependencies=["requests", "pandas"])

@mcp.tool()
def process_data(param: str) -> str:
    """Process the given input and return the result as text."""
    # FastMCP builds the tool schema from the signature and docstring
    try:
        result = do_something(param)  # replace with your own logic
        return str(result)
    except Exception as e:
        return f"Error: {str(e)}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default

Key tips:

  • Be modular; separate servers by domain
  • Handle errors clearly
  • Test with MCP Inspector, plus unit and integration tests (a minimal test sketch follows this list)
  • Prefer OAuth 2.1 with PKCE over simple Bearer tokens
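
For the testing tip, the lowest-effort option is to unit-test tool functions directly; this minimal pytest sketch assumes the server above is saved as my_server.py (a hypothetical filename) and that @mcp.tool() leaves the decorated function callable as plain Python, as it does in current SDK versions:

# test_server.py
from my_server import process_data

def test_process_data_returns_text():
    assert isinstance(process_data("hello"), str)

def test_process_data_reports_errors_instead_of_raising():
    # Bad input should come back as an "Error: ..." string, never as an exception
    assert isinstance(process_data(""), str)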

Security and performance essentials

  • Authentication: Prefer OAuth 2.1 + OIDC
  • Token efficiency: Keep tool names, descriptions, and schemas concise; every registered tool consumes context-window space
  • Location: Host servers near AI providers to reduce latency

The future is interoperable

MCP is model‑agnostic. Any system that speaks the protocol can use any server. SDKs exist in Python, TypeScript, C#, Java, and Kotlin. The ecosystem is expanding with registries, output schemas, and more.

MCP is not just another protocol. It is an architectural shift—like HTTP for the web—poised to become foundational infrastructure for the next generation of intelligent applications.