Why MCP Falls Short for Production AI Agents: Context and Schema Challenges
The vision of a universal protocol for AI tools—a "USB-C for agents"—holds undeniable appeal. Yet according to a recent kernl.sh analysis, the current implementation of the Model Context Protocol (MCP) remains fundamentally unprepared for production workloads despite its promising design. Developers building real-world AI agents are encountering hard limitations around context management and tool variability that demand alternative approaches.
The Context Propagation Gap
MCP lacks mechanisms to distinguish between parameters that should be controlled by the LLM and those requiring contextual injection. Consider a RAG implementation using Turbopuffer:
const search = tool({
  // The model fills in both fields -- including namespace, which it shouldn't control
  parameters: z.object({
    namespace: z.string(),
    query: z.string(),
  }),
  execute: async (ctx, { namespace, query }) => {
    const ns = tpuf.namespace(namespace); // Problem: LLM chooses namespace
    return await ns.query({ query: [{ text: query }] });
  },
});
In production, namespace typically derives from authentication context (ctx.user.orgId), not LLM discretion. Similar issues emerge with stateful tools like code interpreters where sandbox persistence requires execution context awareness. MCP's current specification forces developers into brittle workarounds for what should be foundational functionality.
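What developers actually want looks something like the following. This is a minimal sketch, not the MCP API: `makeSearchTool`, the simplified `ctx` shape, and the `TpufClient` type are assumptions for illustration. The schema exposed to the model contains only `query`; `namespace` is injected from the authenticated context inside `execute`.

```typescript
type Ctx = { user: { orgId: string } };
type Namespace = { query: (q: unknown) => Promise<unknown> };
type TpufClient = { namespace: (ns: string) => Namespace };

function makeSearchTool(tpuf: TpufClient) {
  return {
    // JSON Schema exposed to the LLM: query only, no namespace field at all
    parameters: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
    execute: async (ctx: Ctx, { query }: { query: string }) => {
      const ns = tpuf.namespace(ctx.user.orgId); // injected, never model-chosen
      return ns.query({ query: [{ text: query }] });
    },
  };
}
```

The point of the split: the model can never route a query into another tenant's namespace, because the parameter simply doesn't exist from its perspective.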
Schema Variability: The Hidden Complexity
Tool interfaces often demand context-dependent schemas that MCP can't accommodate. A Turbopuffer search tool's parameters vary based on index structure:
parameters: z.object({
  query: QuerySchema.describe("Search query by field"), // Schema varies!
  // ...
}),
Should DocumentA ({ text: string }) or DocumentB ({ title: string, content: string }) define the schema? Hybrid search exacerbates the problem: vector searches require client-side embedding while text searches don't. These aren't edge cases but daily realities for agent developers.
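One way to handle this when you own the tool definition is to derive the parameter schema from the index at registration time rather than hardcoding it. A sketch, where `schemaForIndex` and the field lists are illustrative assumptions rather than any real API:

```typescript
type FieldSpec = Record<string, "string" | "number">;

// Build the tool's JSON Schema from the fields a given index actually exposes.
function schemaForIndex(fields: FieldSpec) {
  return {
    type: "object",
    properties: Object.fromEntries(
      Object.entries(fields).map(([name, kind]) => [
        name,
        { type: kind, description: `Search query for the ${name} field` },
      ])
    ),
    required: Object.keys(fields),
  };
}

// DocumentA-style index: a single free-text field
const docA = schemaForIndex({ text: "string" });
// DocumentB-style index: separate title and content fields
const docB = schemaForIndex({ title: "string", content: "string" });
```

A remote MCP server could in principle do the same, but the protocol gives the client no say in which index shape applies to the current request.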
The Ownership Alternative
Kernl advocates for local toolkit ownership over remote protocol dependencies:
kernl add toolkit turbopuffer
This installs toolkits as editable TypeScript files within the project. Need contextual injection? Modify the source:
// In your local turbopuffer toolkit
const ns = tpuf.namespace(ctx.user.orgId); // Directly inject context
The approach applies the "shadcn model" to agent tooling—providing sensible defaults while retaining full adaptation rights. As the AI ecosystem accelerates, this flexibility proves essential for teams iterating rapidly without waiting for protocol evolution.
While MCP may mature into a valuable standard, production systems today require deeper control. Developers building agentic systems would be wise to prioritize solutions offering immediate adaptability over premature standardization.
Source: kernl.sh blog analysis