The Model Context Protocol (MCP) promised a unified way for LLMs to access tools, but its practical shortcomings show why command-line interfaces (CLIs) remain the better choice for most use cases. This analysis examines how MCP adds complexity, debugging friction, and authentication challenges to problems that CLIs already solve with battle-tested workflows.
The Model Context Protocol (MCP) emerged in late 2024 as a proposed standard for connecting LLMs to external tools. Its initial reception was enthusiastic, with companies rushing to implement MCP servers to demonstrate "AI-first" credentials. However, the protocol's adoption has stalled, and its practical value remains questionable. OpenClaw and Pi, two prominent AI agent frameworks, have already dropped MCP support, signaling a broader industry shift.
LLMs don't need a special protocol
LLMs excel at interpreting command-line interfaces. They've been trained on vast datasets of man pages, Stack Overflow posts, and GitHub repositories containing shell scripts. When I ask Claude to run gh pr view 123, it executes reliably because the CLI provides a predictable input-output model. MCP's promise of a cleaner abstraction feels redundant in this context.
Documentation overhead persists regardless of protocol choice. For MCP servers, I still need to describe tool behavior, accepted parameters, and usage patterns. The LLM doesn't magically understand tool capabilities just because they're exposed through MCP. This creates a false dichotomy between "special protocol" and "raw CLI" when the distinction often disappears in practice.
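To make the overhead concrete, here is a minimal MCP tool definition (field names follow the MCP tool schema; the tool and its parameters are hypothetical). The description and parameter docs are exactly the prose a CLI would ship as help text:

```json
{
  "name": "jira_issue_view",
  "description": "Fetch a Jira issue by key and print its summary, status, and assignee.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "key": {
        "type": "string",
        "description": "Issue key, e.g. PROJ-123"
      }
    },
    "required": ["key"]
  }
}
```

Either way, someone has to write those descriptions; the protocol doesn't remove the documentation work.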
Debugging friction
When an LLM misinterprets a Jira command, I can simply rerun jira issue view to see the exact output. With MCP, troubleshooting requires parsing JSON transport logs and reconstructing tool behavior from context. The CLI approach provides immediate visibility that MCP abstracts away.
Consider Terraform plan analysis: terraform show -json plan.out | jq '[.resource_changes[] | select(.change.actions[0] == "no-op" | not)] | length' counts the real changes in a plan using tools that already exist. MCP alternatives either dump entire plans into the context window (token-costly and often infeasible for large plans) or require reimplementing that filtering logic inside the MCP server.
Composability advantages
CLIs enable powerful composition through pipes and redirection. terraform show -json | jq -r '.resource_changes[] | select(.change.actions[0] == "no-op" | not) | .address' filters specific resources, while grep -i "security" narrows results further. MCP's composability limitations force users to either:
- Overload context windows with large outputs
- Implement custom filtering in MCP servers
Both approaches create more work for worse results compared to existing CLI ecosystems.
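The pattern holds even without Terraform installed; here is a minimal sketch using a synthetic list of resource addresses (stand-ins for the jq output above) and plain coreutils filters:

```shell
#!/bin/sh
# Synthetic stand-in for the output of the terraform | jq pipeline above:
# one changed resource address per line.
changes='aws_s3_bucket.logs
aws_security_group.web
aws_iam_role.deploy'

# Narrow to security-related resources, as in the article:
printf '%s\n' "$changes" | grep -i "security"

# Count total changes without ever loading the full plan into context:
printf '%s\n' "$changes" | wc -l
```

Each stage does one job, and the LLM only ever sees the final, already-filtered lines.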
Authentication maturity
MCP introduces unnecessary auth complexity. CLI tools already use battle-tested flows:
- AWS profiles and SSO (aws sso login)
- GitHub authentication (gh auth login)
- Kubernetes kubeconfig (kubectl config use-context)
When auth breaks, I fix it the same way regardless of whether I'm typing commands or an LLM is executing them. MCP-specific troubleshooting adds no value here.
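That symmetry can be captured in a small helper; the check/login pairs below are the tools' real commands, while ensure_auth itself is just an illustrative wrapper, not a real utility:

```shell
#!/bin/sh
# ensure_auth CHECK_CMD LOGIN_HINT
# CHECK_CMD exits 0 when credentials are valid; LOGIN_HINT is the same
# flow a human would run by hand. The wrapper is a sketch.
ensure_auth() {
    if sh -c "$1" >/dev/null 2>&1; then
        echo "auth ok"
    else
        echo "auth expired, run: $2"
    fi
}

# Real check/login pairs for the tools above:
ensure_auth "aws sts get-caller-identity" "aws sso login"
ensure_auth "gh auth status" "gh auth login"
ensure_auth "kubectl auth can-i get pods" "kubectl config use-context <context>"
```

Whether a human or an LLM hits the expired-credentials branch, the remedy is the identical, well-documented login command.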
Operational simplicity
Local MCP servers require process management. They must start correctly, maintain state, and avoid silent failures. In Claude Code, these appear as child processes that sometimes hang. CLI tools are stateless binaries that exist only when needed.
Initialization flakiness translates directly into lost productivity. I've restarted Claude Code multiple times because an MCP server failed to start. CLI tools have no such initialization dance.
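The stateless model is also easy to enforce: give every invocation a hard deadline and nothing survives the call. A sketch using coreutils timeout (the wrapped commands are placeholders):

```shell
#!/bin/sh
# run_tool CMD SECONDS: spawn a tool, wait at most SECONDS for it to
# finish, and leave no process behind. timeout is GNU coreutils.
run_tool() {
    timeout "${2:-30}" sh -c "$1"
}

run_tool "echo plan-ok" 5                      # completes normally
run_tool "sleep 60" 1 || echo "tool hung, killed after 1s"
```

There is no server to babysit: if a tool hangs, the next invocation starts from a clean slate.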
Permission granularity
MCP's allowlist model offers only binary control: tool name either works or doesn't. CLI workflows enable finer-grained permissions:
- Allow gh pr view but block gh pr merge
- Restrict kubectl to read-only operations such as get and describe, blocking kubectl apply
This granularity matters for security-sensitive environments.
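In practice this maps onto something like the following Claude Code-style settings fragment (the Bash(...) pattern syntax follows Claude Code's documented permission rules; treat the exact entries as illustrative):

```json
{
  "permissions": {
    "allow": ["Bash(gh pr view:*)", "Bash(kubectl get:*)"],
    "deny": ["Bash(gh pr merge:*)", "Bash(kubectl apply:*)"]
  }
}
```

An MCP allowlist, by contrast, can only admit or reject a tool name wholesale.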
When MCP might make sense
MCP isn't universally useless. It could benefit tools without CLI equivalents, such as proprietary web APIs or specialized hardware interfaces. Standardized interfaces might help in multi-vendor integrations where CLI uniformity isn't possible.
However, these use cases represent exceptions rather than the rule. For the majority of software interactions, CLIs provide:
- Decades of design iteration
- Human-readable error messages
- Tooling ecosystems (completion, history, aliases)
- Existing documentation practices
The practical lesson
The best tools work for both humans and machines. CLIs have evolved to satisfy these dual requirements through:
- Composability via pipes
- Debuggability through direct execution
- Auth integration with existing systems
MCP attempted to build a better abstraction but overlooked these mature patterns. Its real-world friction points - initialization complexity, auth management, permission granularity - suggest we already had a functional solution.
Recommendations for builders
Companies shipping MCP servers for products that lack a CLI should reconsider their approach: build a well-designed CLI on top of the API instead, since agents will naturally discover and use CLI patterns. When both an MCP server and a CLI exist, the CLI should be prioritized for:
- Faster adoption
- Lower maintenance overhead
- Better debugging experience
The protocol's promise of standardization feels hollow when it doesn't improve upon CLI workflows that already solve most problems effectively. The industry's move away from MCP adoption signals a return to proven fundamentals rather than chasing new abstractions.
References
- Anthropic MCP announcement
- Claude Code documentation
- OpenClaw GitHub
- Pi AI framework