The Model Context Protocol (MCP) has sent ripples through the AI community, hailed as a universal standard for exposing tools—like GitHub APIs or cloud services—to AI agents. Instead of painstakingly coding individual tool integrations, developers can simply connect an MCP server and instantly access a curated set of capabilities. Yet, as I discovered firsthand, the reality of implementing MCP in your own code is far from plug-and-play. Despite the buzz, straightforward guides are scarce, leaving developers to wrestle with the nuances of handshakes, tool conversions, and iterative agent loops. This gap in tooling isn't just inconvenient; it stifles innovation in agentic AI, where dynamic tool use is key to complex problem-solving.

The MCP Implementation Challenge

Most developers expect MCP integration to be as simple as adding an `mcp:` field in SDKs like OpenAI's. But when I wired up GitHub's MCP server for an AI agent, I hit unexpected friction. The protocol requires a manual dance: fetching tools from the server, reformatting them for your inference provider, and handling multi-step tool calls in a loop. Crucially, no high-level libraries abstract this away yet. As Sean Goedecke notes in his exploration, "It took me way too long to figure out that I had to wire most of this up myself." This reflects a broader immaturity in the MCP ecosystem—while IDEs and servers get attention, the inference-layer tooling lags, forcing developers into custom code.

A Step-by-Step Blueprint for MCP Integration

Here’s the core workflow I implemented in TypeScript, adaptable to any MCP server or language. The goal is to connect to GitHub’s MCP, fetch tools, and enable an AI agent to use them iteratively.

  1. Handshake and Tool Fetching: Start by connecting to the MCP server and retrieving available tools. This involves initializing a client with authentication (e.g., a GitHub token) and converting MCP tool schemas to your inference provider’s format (like Azure AI’s function definitions).
// Example: Connecting to GitHub MCP and mapping tools
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

export async function connectToGitHubMCP(token: string) {
  const transport = new StreamableHTTPClientTransport(
    new URL('https://api.githubcopilot.com/mcp/'),
    {
      requestInit: {
        headers: {
          Authorization: `Bearer ${token}`,
          'X-MCP-Readonly': 'true' // Restrict to safe, read-only operations
        }
      }
    }
  );

  const client = new Client({ name: 'ai-agent', version: '1.0.0' });
  await client.connect(transport); // Performs the MCP initialize handshake
  const toolsResponse = await client.listTools();

  // Convert MCP tool schemas to the OpenAI-style function format Azure AI expects
  const tools = toolsResponse.tools.map(t => ({
    type: 'function',
    function: {
      name: t.name,
      description: t.description,
      parameters: t.inputSchema
    }
  }));

  return { client, tools };
}

Key Insight: Always scope tokens minimally—never use admin credentials, as MCP actions inherit user permissions. As Goedecke warns, "Anyone who controls the model input can trigger MCP actions," risking privilege escalation.

  2. The Agentic Inference Loop: With tools in hand, run inferences in a loop, checking for tool calls and feeding responses back into the model. This enables multi-step reasoning, like querying PRs before generating a summary.
// Simplified inference loop with tool handling
export async function mcpInference(request, githubMcpClient) {
  const messages = [
    { role: 'system', content: request.systemPrompt },
    { role: 'user', content: request.prompt }
  ];
  let iterationCount = 0;
  const maxIterations = 5; // Prevent infinite tool-call loops

  while (iterationCount < maxIterations) {
    iterationCount++;
    const response = await inferenceClient.post('/chat/completions', {
      body: { messages, tools: githubMcpClient.tools }
    });

    const assistantMessage = response.body.choices[0].message;
    // Push the full message so any tool_calls stay paired with their results
    messages.push(assistantMessage);

    if (!assistantMessage.tool_calls) {
      return assistantMessage.content; // Final response
    }

    // Execute tool calls against the MCP client and append the results
    const toolResults = await executeToolCalls(githubMcpClient.client, assistantMessage.tool_calls);
    messages.push(...toolResults);
  }
  throw new Error('Max iterations exceeded');
}

Why This Matters: This loop transforms static AI into an adaptive agent. For instance, a model could first call a "list issues" tool, analyze the output, then invoke a "get issue details" tool—all autonomously. But as the code shows, developers must manually manage state and error handling, highlighting the need for better abstractions.
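The loop above leans on an `executeToolCalls` helper to bridge the model's tool calls back to the MCP server. Here is a minimal sketch of what that helper could look like, assuming the MCP SDK's `client.callTool({ name, arguments })` shape and OpenAI-style tool-call objects (`id`, `function.name`, `function.arguments` as a JSON string); the exact types are illustrative, not from any SDK.

```typescript
interface ToolCall {
  id: string;
  function: { name: string; arguments: string };
}

interface ToolCapableClient {
  callTool(req: { name: string; arguments: Record<string, unknown> }): Promise<{ content: unknown }>;
}

export async function executeToolCalls(client: ToolCapableClient, toolCalls: ToolCall[]) {
  const results: Array<{ role: string; tool_call_id: string; content: string }> = [];
  for (const call of toolCalls) {
    // Tool arguments arrive from the model as a JSON string
    const args = JSON.parse(call.function.arguments);
    const result = await client.callTool({ name: call.function.name, arguments: args });
    // Feed the tool output back as a 'tool' role message, keyed by the call id
    results.push({
      role: 'tool',
      tool_call_id: call.id,
      content: JSON.stringify(result.content)
    });
  }
  return results;
}
```

Keeping the `tool_call_id` pairing intact is what lets the model match each result to the call it made on the next loop iteration.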

The Bigger Picture: Tooling Gaps and Future Directions

MCP's power is undeniable—imagine plugging in Salesforce or AWS tools as easily as GitHub—but the current implementation burden slows adoption. Why isn't there a simple agenticInference(prompt, mcpClient, maxLoops) helper? Caching tool definitions locally could avoid redundant server fetches, but SDKs don't yet support it. This friction underscores a critical phase in AI tooling: protocols like MCP are racing ahead, while developer experience plays catch-up. Until libraries mature, sharing practical guides like this becomes essential. As the agentic AI wave builds, simplifying these integrations will unlock new use cases, from automated DevOps to personalized coding assistants. For now, embrace the DIY spirit—but demand better tools from the ecosystem.
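Until SDKs support it natively, the tool-definition caching mentioned above is easy to sketch yourself. Here's a minimal in-memory cache with a TTL that wraps any fetcher (e.g. the `listTools`-and-convert step from earlier); the names `makeToolCache`, `fetchTools`, and `ttlMs` are illustrative, not part of any SDK.

```typescript
type ToolDef = {
  type: 'function';
  function: { name: string; description?: string; parameters: unknown };
};

export function makeToolCache(fetchTools: () => Promise<ToolDef[]>, ttlMs = 5 * 60_000) {
  let cached: ToolDef[] | null = null;
  let fetchedAt = 0;

  return async function getTools(): Promise<ToolDef[]> {
    const now = Date.now();
    if (cached && now - fetchedAt < ttlMs) return cached; // serve from cache
    cached = await fetchTools(); // refresh from the MCP server
    fetchedAt = now;
    return cached;
  };
}
```

Each inference call can then use `getTools()` instead of hitting the MCP server, trading a little staleness (bounded by the TTL) for one fewer network round-trip per request.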

Source: Adapted from Sean Goedecke's blog post, available at https://www.seangoedecke.com/how-to-actually-use-mcp/.