Anthropic's attempt to simplify Claude Code's interface by hiding file names has triggered strong pushback from developers who say the change reduces transparency and increases costs.
Anthropic has sparked controversy among developers by updating Claude Code, its AI coding assistant, to hide the names of files being accessed during operations. The change, implemented in version 2.1.20, collapses progress output to show only counts like "Read 3 files (ctrl+o to expand)" rather than displaying specific file names and line counts.

Developers have pushed back strongly against the change, arguing that seeing which files Claude accesses is essential for security, context verification, and auditing past activity. One developer explained that "knowing what context Claude is pulling helps me catch mistakes early and steer the conversation" when working with complex codebases.
The change has financial implications as well. When developers can see that Claude is heading down the wrong path, they can interrupt the process and avoid wasting tokens. As one user noted, they've "benefited from seeing the files Claude was reading, to understand how I could interrupt and give it a little more context... saving thousands of tokens."
Boris Cherny, creator and head of Claude Code at Anthropic, defended the change as a way to "simplify the UI so you can focus on what matters, diffs and bash/mcp outputs." He suggested developers "try it out for a few days" and claimed Anthropic's own developers "appreciated the reduced noise."
However, responses from the developer community were largely negative. Many found that condensed output like "searched for 2 patterns, read 3 files" conveys no useful information. One user called it "an idiotic removal of valuable information," while another said "Verbose mode is not a viable alternative, there's way too much noise."
Anthropic has since attempted to address the concerns by repurposing the existing verbose mode setting to show file paths for reads and searches while still hiding full thinking, hook output, and subagent output. Critics have called this inadequate: making verbose mode less verbose shortchanges exactly the users who rely on it for complete detail.
The debate highlights a fundamental tension in AI coding tools: as these systems become more autonomous and generate more output, finding the right balance between information density and usability becomes critical. Cherny acknowledged that "Claude has gotten more intelligent, it runs for longer periods of time, and it is able to more agentically use more tools... The amount of output this generates can quickly become overwhelming in a terminal."
For many developers, the opacity introduced by these changes is a deal-breaker. As one put it: "Right now Claude cannot be trusted to get things right without constant oversight and frequent correction... If I cannot follow the reasoning, read the intent, or catch logic disconnects early, the session just burns through my token quota."
The controversy underscores a broader challenge in AI tool development: changes that seem like improvements to tool creators can feel like regressions to power users who rely on detailed visibility in their workflow. Whether Anthropic will revert to the previous behavior or find a middle ground remains to be seen, but the strong developer reaction suggests that transparency in AI coding tools isn't just a nice-to-have feature; it's essential for trust and effective use.
