Anthropic detailed three specific issues that degraded Claude Code's performance over March and April 2026: an ill-timed shift to medium reasoning effort that traded capability for speed, a context-clearing bug inducing repetition, and an overzealous verbosity limit harming code quality. All three fixes were deployed by April 20th, restoring the agent's expected behavior after user feedback prompted rapid rollbacks of well-intentioned but misguided changes.
When Claude Code users reported worsening performance in March 2026—slower responses, forgotten context, and awkwardly terse code suggestions—many wondered if they were imagining the decline. Anthropic’s April 24th blog post confirms it was real, tracing the issues to three separate changes made in pursuit of efficiency that inadvertently undermined the agent’s core utility for developers.
The first problem emerged on March 4th when Anthropic switched Claude Code’s default reasoning effort from “high” to “medium”. Reasoning effort controls how much computational work the model allocates to thinking before responding—a critical factor for complex coding tasks where multi-step logic is common. While this reduced latency (addressing user reports of the app “freezing”), it simultaneously capped Claude’s ability to handle intricate refactoring or debugging scenarios. Anthropic observed that users preferred retaining high effort as the baseline, manually lowering it only when speed was paramount. By April 7th, they reverted the default, acknowledging that forcing users to constantly adjust this setting disrupted workflow more than occasional delays.
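Reasoning effort is typically implemented as a cap on the model's hidden thinking budget. A minimal sketch of that mapping, assuming illustrative token budgets (the effort names come from the post; the numbers and function are hypothetical, not Anthropic's actual values):

```python
# Hypothetical mapping from reasoning-effort level to a thinking-token budget.
# Effort names ("low"/"medium"/"high") follow the post; the budget numbers
# are illustrative only.
EFFORT_BUDGETS = {"low": 1024, "medium": 4096, "high": 16384}

def thinking_budget(effort: str = "high") -> int:
    """Return the thinking-token budget for an effort level.

    The default is "high", matching the setting Anthropic restored
    on April 7th after users preferred capability over lower latency.
    """
    if effort not in EFFORT_BUDGETS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return EFFORT_BUDGETS[effort]
```

Under this sketch, the March 4th change amounts to flipping the default from "high" to "medium": every request that did not explicitly override the setting silently got a fraction of the thinking budget it had before.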
Separately, a bug introduced around the same time caused Claude Code to discard its internal "thinking" context after every tool interaction. For coding agents, this context isn’t just helpful—it’s foundational. When generating a function, the model might need to recall variable names from earlier steps, API constraints noted three turns back, or architectural decisions made during planning. Clearing this state each time forced Claude to re-infer context from scratch, leading to contradictory suggestions, redundant code, and frequent apologies for "forgetting" previous instructions. Users described this as the agent "losing the thread" mid-task. Anthropic quietly patched the context retention mechanism on April 10th, restoring continuity between tool calls.
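The failure mode is easy to reproduce in any agent loop that rebuilds its message history between tool calls. A minimal sketch, assuming a simple list-of-dicts history format (the structure and function name are illustrative, not Claude Code's internals):

```python
# Illustrative agent history: "thinking" entries carry the model's working
# state (plans, variable names, constraints) between tool calls.
def prune_history(history: list[dict], retain_thinking: bool = True) -> list[dict]:
    """Build the message list for the next model call.

    retain_thinking=False reproduces the described bug: every thinking
    entry is discarded after a tool interaction, so the model must
    re-infer its own plan from scratch on the next turn.
    """
    if retain_thinking:
        return list(history)
    return [msg for msg in history if msg.get("type") != "thinking"]

history = [
    {"type": "thinking", "text": "Plan: extract parse() into config.py, keep API stable"},
    {"type": "tool_use", "text": "read_file(config.py)"},
    {"type": "tool_result", "text": "contents of config.py"},
]

# Patched behavior: the plan survives into the next turn.
assert any(m["type"] == "thinking" for m in prune_history(history))
# Buggy behavior: the plan is gone, inviting contradictory follow-ups.
assert not any(m["type"] == "thinking" for m in prune_history(history, retain_thinking=False))
```

With the plan stripped from the history, each turn sees only raw tool results, which is consistent with the "losing the thread" behavior users reported.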
The final issue stemmed from a well-intentioned but overly aggressive verbosity reduction. On April 16th, Anthropic added a system prompt instructing Claude to keep intermediate tool responses under 25 words and final answers under 100 words unless absolutely necessary. While intended to curb excessive explanation, this constraint clashed with coding workflows where concise technical details are often essential—think explaining why a specific edge case matters in a regex pattern or justifying a security-sensitive implementation choice. The limit caused Claude to truncate vital reasoning, producing syntactically correct but functionally incomplete code snippets. Anthropic admitted the change "hurt coding quality" and rolled it back on April 20th after direct feedback from power users highlighted degraded output in real-world projects.
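The constraint itself was a prompt instruction rather than code, but its effect is easy to demonstrate. A hypothetical word-count check (the 25- and 100-word limits come from the post; the checker and example text are illustrative):

```python
# The 25/100-word limits are from Anthropic's post; this checker is a
# hypothetical illustration of why such caps clash with coding answers.
TOOL_RESPONSE_LIMIT = 25
FINAL_ANSWER_LIMIT = 100

def within_limit(text: str, limit: int) -> bool:
    """Check a response against a naive whitespace word count."""
    return len(text.split()) <= limit

explanation = (
    "This regex anchors on word boundaries, but the input can contain "
    "Unicode hyphens, so \\b misfires; we match the explicit character "
    "class instead and add a test for the U+2010 case."
)
# Even a justification this short blows the intermediate cap, which is
# why the model ended up truncating vital reasoning.
assert not within_limit(explanation, TOOL_RESPONSE_LIMIT)
```

A single edge-case justification routinely exceeds 25 words, so the model's only compliant options were to omit the reasoning or ship code it could not explain.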
What’s notable isn’t just that these issues occurred, but how quickly Anthropic responded once user signals became clear. Each fix arrived within weeks of the problematic change, suggesting robust monitoring but also revealing a tension between optimization experiments and user expectations for stability in paid tools. For developers relying on Claude Code for daily work, the return to high reasoning effort by default, persistent context retention, and relaxed verbosity constraints means the agent should now behave as it did before March—assuming no further adjustments are made without clearer user communication. The episode serves as a reminder that in AI agent design, perceived "efficiency" gains can easily backfire if they sacrifice the very capabilities that make the tool valuable.
