MCP 'design flaw' puts 200k servers at risk: Researcher • The Register
#Vulnerabilities

Regulation Reporter
4 min read

Anthropic's Model Context Protocol (MCP) contains a fundamental design flaw that security researchers say puts 200,000 servers at risk of complete takeover, despite the company's refusal to address the root issue.

The Root of the Problem

The vulnerability stems from MCP's use of STDIO (standard input/output) as a local transport, in which AI applications spawn MCP servers as subprocesses by executing a launch command. Although the transport is designed to work across programming languages including Python, TypeScript, Java, and Rust, the launcher executes whatever command it is handed, so anyone who can influence that command can run arbitrary OS commands on the host.

"But in practice it actually lets anyone run any arbitrary OS command, if the command successfully creates an STDIO server it will return the handle, but when given a different command, it returns an error after the command is executed," the Ox research team explained in their findings.

Four Attack Vectors Discovered

The security researchers identified four distinct types of vulnerabilities that can be exploited through this design flaw:

1. Unauthenticated and Authenticated Command Injection

This vulnerability allows attackers to submit user-controlled commands that run directly on the server without authentication or sanitization. Any AI framework with a publicly facing UI is exposed to this attack, which can lead to total system compromise.
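A minimal sketch of this attack class, assuming a request handler behind a public-facing UI passes a user-supplied field straight to the shell. The handler name and request shape are hypothetical, not taken from any specific framework:

```python
import subprocess

def handle_start_server(request_params: dict) -> str:
    """Hypothetical handler behind a public-facing AI framework UI.

    The 'command' field arrives straight from the browser and is
    executed on the host with no authentication or sanitization.
    """
    cmd = request_params["command"]  # attacker-controlled string
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout

# No login, no allowlist: whatever the attacker submits is run as-is.
output = handle_start_server({"command": "echo pwned"})
```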

2. Command Injection with Hardening Bypass

Even when developers implement protections and user-input sanitization, attackers can bypass these measures. The researchers demonstrated this by injecting commands through the arguments of allowlisted commands: for example, they bypassed restrictions that permitted only commands like "python," "npm," and "npx" by using syntax such as "npx -c ."
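The weakness is that such sanitizers typically validate only the binary name while arguments pass through untouched. A sketch, assuming a hypothetical `run_allowed` sanitizer of this shape (the demonstration uses `python -c` rather than the researchers' exact `npx` payload):

```python
import shlex
import subprocess
import sys
from pathlib import Path

# The "hardened" allowlist of permitted binaries (names only).
ALLOWED_BINARIES = {"python", "python3", "npm", "npx"}

def run_allowed(command: str) -> str:
    """Sketch of a sanitizer that validates only the binary name.

    Arguments pass through untouched, so an allowlisted interpreter
    can still be told to execute arbitrary code via flags like -c.
    """
    argv = shlex.split(command)
    if Path(argv[0]).stem not in ALLOWED_BINARIES:
        raise PermissionError(f"{argv[0]} is not on the allowlist")
    return subprocess.run(argv, capture_output=True, text=True).stdout

# Passes the binary check, yet runs attacker-chosen code via -c:
leak = run_allowed(
    f'{shlex.quote(sys.executable)} -c "import os; print(os.getcwd())"'
)
```

Because interpreters and package runners are designed to execute code, allowlisting them by name alone provides no real barrier.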

3. Zero-Click Prompt Injection

This vulnerability affects AI integrated development environments (IDEs) and coding assistants including Windsurf, Claude Code, Cursor, Gemini-CLI, and GitHub Copilot. The only CVE issued for this class of vulnerability is for Windsurf (CVE-2026-30615).

4. MCP Marketplace Poisoning

The researchers successfully "poisoned" nine out of eleven MCP marketplaces by submitting proof-of-concept MCPs that could execute arbitrary commands. "A single malicious MCP entry in any of these directories could be installed by thousands of developers before detection – each installation giving an attacker arbitrary command execution on the developer's machine," they warned.

Scale of the Impact

The vulnerability affects software packages with more than 150 million downloads and potentially millions of downstream users. Vulnerable projects include all versions of LangFlow, IBM's open source low-code framework for building AI applications and agents, and GPT Researcher, an open source AI agent designed for deep research.

Anthropic's Response

Despite being notified of these vulnerabilities in November 2025 and working through more than 30 responsible disclosure processes, Anthropic has declined to modify the protocol's architecture. A week after the initial report, the company released an updated security policy advising that MCP adapters, particularly STDIO ones, be used with caution.

"This change didn't fix anything," the researchers stated in their 30-page paper detailing the findings.

Anthropic's position is that the behavior is "expected" based on the protocol's design, a stance that has frustrated security researchers who argue that a single architectural change at the protocol level could have protected every downstream project, every developer, and every end user who relies on MCP.

The Broader Implications

The controversy highlights a growing tension in the AI industry between rapid development and security. Anthropic's refusal to address what researchers consider a fundamental design flaw raises questions about responsibility when open source protocols become widely adopted.

"That's what it means to own the stack," the researchers argued, suggesting that Anthropic has both the ability and responsibility to make MCP secure by default.

The situation is particularly concerning given the increasing reliance on AI agents and automated systems in enterprise environments, where a single compromised server could lead to widespread data breaches or system compromises.

What This Means for Developers

For developers using MCP-based systems, the findings suggest several immediate actions:

  • Review all MCP implementations for potential vulnerabilities
  • Implement additional security layers around MCP servers
  • Monitor for updates from affected projects
  • Consider alternative protocols or additional authentication mechanisms
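One concrete shape such an additional security layer could take is pinning exact, fully specified launch commands rather than filtering binary names. A defensive sketch, in which the allowlisted package and module names are hypothetical placeholders:

```python
import shlex
import subprocess

# Exact-argv allowlist: pin full command lines, not just binary names.
# The entries below are illustrative placeholders, not real packages.
ALLOWED_ARGV = {
    ("npx", "-y", "@example/mcp-server-files"),
    ("python3", "-m", "example_mcp_server"),
}

def spawn_pinned_server(command: str) -> subprocess.Popen:
    """Defensive sketch: refuse anything except pre-approved, fully
    specified command lines, so attacker-supplied arguments (such as
    `python -c ...`) can never reach the subprocess."""
    argv = tuple(shlex.split(command))
    if argv not in ALLOWED_ARGV:
        raise PermissionError(f"refusing unapproved server command: {argv}")
    return subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
```

Because the entire argv tuple must match, the argument-smuggling bypasses described above are rejected outright rather than filtered.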

The case also serves as a reminder that even protocols developed by leading AI companies can contain fundamental security flaws that may not be addressed promptly, leaving organizations to implement their own mitigations.

As AI systems become more deeply integrated into critical infrastructure, the security community will likely face increasing pressure to identify and address vulnerabilities before they can be exploited at scale. The MCP case demonstrates both the challenges and the importance of this work in an era where a single protocol flaw can potentially compromise hundreds of thousands of servers worldwide.
