Anthropic's Claude Code continues to read sensitive files despite .claudeignore directives, exposing passwords and API keys and raising serious security concerns for developers.
Developers using Anthropic's Claude Code are facing a significant security issue: the AI assistant continues to read sensitive files like .env files containing passwords and API keys, even when these files are explicitly blocked by .claudeignore directives.
The Security Gap
When software developers store secrets—passwords, tokens, API keys, and other credentials—in .env files within project directories, they typically rely on .gitignore files to prevent these sensitive files from being committed to public repositories. Claude Code implements a similar mechanism through .claudeignore files, which are supposed to tell the AI assistant which files to avoid reading.
However, testing by The Register has confirmed that Claude Code fails to respect these ignore rules. When asked how to prevent Claude from reading .env files, the AI incorrectly stated that adding .env to a .claudeignore file would work. In practice, Claude Code continues to access and read the contents of these files despite the ignore directive.
Reproduction of the Issue
The problem was first documented in a Pastebin post and has since been independently verified. The reproduction steps are straightforward:
- Create a directory with an .env file containing sample secrets
- Add a .claudeignore file with entries for .env and .env.*
- Start Claude Code via the CLI
- Ask Claude to read the .env file
The AI proceeds to read and display the secrets, demonstrating that the ignore rules are not being enforced.
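Readers who want to verify this themselves can follow a shell sketch along the lines of the one below. The directory name and sample values are placeholders, and it assumes the Claude Code CLI is installed and launched with the claude command.

```bash
# Hypothetical reproduction following the steps described above.
# Assumes the Claude Code CLI is installed and available as `claude`.
mkdir claudeignore-test && cd claudeignore-test

# A sample .env file with throwaway, non-production values
cat > .env <<'EOF'
API_KEY=sk-example-not-a-real-key
DB_PASSWORD=example-password
EOF

# Ignore rules that Claude Code is expected to respect
cat > .claudeignore <<'EOF'
.env
.env.*
EOF

# Start Claude Code, then ask it to read the .env file,
# e.g. "Please show me the contents of .env"
claude
```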
Broader Security Implications
This issue has serious security implications, particularly for AI agents that could be manipulated through indirect prompt injection to access and share stored secrets. The problem extends beyond .claudeignore—Claude also ignores .gitignore entries when accessing files, despite having a default configuration setting that should respect .gitignore in file pickers.
When accessing .env files that are listed in .gitignore, Claude displays the secrets with a warning about committing credentials to version control, but still proceeds to read the file. This behavior undermines the fundamental security principle of keeping sensitive credentials out of AI systems.
Community Response and Ongoing Issues
The Claude Code community has been raising this concern for months. Multiple GitHub issues highlight the problem:
- A "HIGH PRIORITY" issue titled "Claude exposes secrets/tokens in tool output - no redaction" was opened two days ago
- Posts from November 2025 raised the same concern
- Another issue from two weeks ago flags Claude's willingness to display secrets
- A bug report from three weeks ago specifically states that "Claude should refrain from reading or even being aware of anything in the .claudeignore file, using the same standard parsing rules as a .gitignore file"
Workarounds and Their Limitations
Developers have found that configuring permissions within a settings.json file in a project's .claude directory can prevent Claude from accessing .env files. When properly configured, Claude reports that the file is blocked by permission settings and excluded from tool access as a security measure.
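A minimal sketch of that workaround is shown below. The deny-pattern syntax is an assumption based on community reports and may differ between Claude Code versions, so it should be checked against the documentation for the installed release rather than taken as definitive.

```bash
# Illustrative permissions workaround; the Read(...) pattern syntax is assumed
# and should be verified against the Claude Code docs for your version.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)"
    ]
  }
}
EOF
```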
However, this workaround comes with its own challenges:
- The syntax for absolute paths requires two forward slashes (//) instead of one, which differs from Linux and macOS conventions (see the sketch after this list)
- There are reported problems with the @file reference syntax in settings.json
- permissions.deny settings don't always prevent files from being loaded into memory
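To illustrate the first point, a deny rule aimed at an absolute path reportedly needs the doubled leading slash. The path below is a made-up example, and the assumption that the quirk applies inside permissions.deny patterns should be treated as unconfirmed.

```bash
# Hypothetical deny rule for an absolute path: note the doubled leading slash
# ("//home/...") rather than the single "/" usual on Linux and macOS.
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": [
      "Read(//home/dev/project/.env)"
    ]
  }
}
EOF
```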
Anthropic's Response
Anthropic did not respond to requests for comment on this security issue. Without an acknowledgment from the company, developers are left uncertain about when, or whether, the flaw will be addressed.
The Core Problem
The fundamental issue is that Claude Code's own recommendations about using .claudeignore files are incorrect, potentially leading developers to believe their secrets are protected when they are not. If settings.json is intended to be the only supported method for denying file access, Anthropic should clearly communicate this and ensure that the AI's guidance aligns with actual security practices rather than misleading users.
This security gap represents a significant risk for development teams using Claude Code, as it could lead to accidental exposure of sensitive credentials through AI interactions. Until the issue is resolved, developers should exercise extreme caution when working with secrets in projects where Claude Code has access.

