Major AI companies are deflecting responsibility for serious security flaws, treating vulnerabilities as 'expected behavior' rather than fixing them, leaving users to bear the risk.
When security researchers discover critical vulnerabilities in AI systems, you might expect companies to rush to patch them. Instead, a troubling pattern has emerged: AI vendors are increasingly dismissing serious security flaws as "working as intended" or "by-design risks," effectively passing the buck to users and developers.
This approach reveals a fundamental immaturity in how AI companies handle security, according to industry observers. The pattern is particularly concerning given how aggressively these same companies push AI as the solution to security problems in the first place.
The GitHub Actions Agent Vulnerability
A recent case involving three popular AI agents that integrate with GitHub Actions illustrates this problem perfectly. Researchers discovered that Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and Microsoft's GitHub Copilot could all be hijacked to steal API keys and access tokens.
While all three vendors eventually paid bug bounties—Anthropic $100, Google $1,337, and GitHub $500—none assigned CVEs or published public security advisories. Anthropic upgraded the severity rating from 9.3 to 9.4 and added a "security considerations" section to its documentation, but the underlying vulnerability remained unaddressed.
GitHub's initial response was particularly telling, claiming the issue was a "known problem" they "were unable to reproduce" before ultimately paying the researchers.
The MCP Protocol Design Flaw
The situation becomes even more concerning with Anthropic's Model Context Protocol (MCP). Security researchers identified a design flaw that potentially puts 200,000 servers at risk of complete takeover. Despite 10 high- and critical-severity CVEs being issued for individual tools using MCP, Anthropic refused to patch the root issue.
"This is an explicit part of how MCP stdio servers work and we believe this design does not represent a secure default," Anthropic told the researchers, effectively admitting the flaw exists but refusing to fix it.
The researchers estimate that a root patch could have protected software packages with over 150 million downloads and millions of downstream users. Instead, the burden falls entirely on developers and companies using the open-source code.
The Maturity Problem
This "wasn't me" behavior from AI companies demonstrates a troubling lack of maturity in the industry. As one observer noted, true maturity and earning trust requires taking responsibility for choices and actions, admitting mistakes, fixing them when possible, and making course corrections.
Instead, AI companies are treating security as someone else's problem to solve. This approach leaves IT shops and end users to deal with the messy reality of securing complex, non-deterministic AI systems.
The problem is compounded by the current regulatory environment. Despite Anthropic itself warning that its latest model is so skilled at finding security flaws that it would be "much too dangerous" to release publicly, there are virtually no US federal AI regulations restricting these companies' operations.
The Broader Implications
This pattern of deflection has serious implications for the entire tech ecosystem. Developers using Anthropic's official MCP SDK in their applications, open-source projects incorporating this code, and companies bringing AI tools into their environments all bear the risk of vulnerabilities that vendors refuse to address.
It's a stark contrast to how other industries handle product safety. Imagine a car manufacturer discovering a critical safety flaw but deciding it's "working as intended" and leaving it to drivers to figure out how to stay safe.
As AI becomes increasingly embedded in critical infrastructure and business operations, this approach to security is unsustainable. Either AI companies need to mature quickly and take responsibility for the security of their products, or regulators need to step in and force accountability.
Until then, the burden of AI security will continue to fall on the very users and developers these companies claim to serve.


Comments
Please log in or register to join the discussion