- Anthropic traced Claude Code quality issues to three separate bugs introduced between March and April 2026.
- Problems included reduced reasoning effort, session memory loss, and over-corrected verbosity controls.
- The company says all three issues are fixed and is resetting subscriber usage limits as compensation.
Anthropic acknowledged something users had been complaining about for weeks: Claude Code, the coding assistant that competes with GitHub Copilot, got worse. In a postmortem published April 23, the company traced the degradation to three separate changes, now all reverted as of April 20.
The first issue appeared March 4, when Anthropic changed Claude Code’s default reasoning effort from “high” to “medium.” The change was meant to address reports that Opus 4.6 would occasionally freeze the UI by thinking so long that latency became unacceptable. Anthropic admitted this was “the wrong tradeoff” and reverted the change April 7, saying users had made clear they’d rather default to higher intelligence and opt into lower effort for simple tasks.
The second bug hit March 26, when Anthropic shipped a change to clear Claude’s older thinking from sessions that had been idle over an hour. The goal was reducing latency when users resumed those sessions. A bug caused this clearing to happen every turn instead of just once, making Claude seem forgetful and repetitive. The company fixed it April 10.
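Anthropic hasn’t published the code, but the failure mode it describes is a classic one: a one-time cleanup gated on a flag that never gets reset. A minimal sketch in Python, with entirely hypothetical names (this is an illustration of the bug class, not Anthropic’s implementation):

```python
class Session:
    """Hypothetical session state; names are illustrative, not Anthropic's code."""

    IDLE_SECONDS = 3600  # sessions idle longer than an hour get trimmed on resume

    def __init__(self):
        self.thinking = []            # accumulated reasoning context
        self.trim_on_next_turn = False

    def resume(self, idle_for):
        # Intended behavior: flag the session for a one-time trim after a long idle gap.
        if idle_for > self.IDLE_SECONDS:
            self.trim_on_next_turn = True

    def turn(self, thought, fixed=True):
        if self.trim_on_next_turn:
            self.thinking.clear()     # drop older thinking to cut resume latency
            if fixed:
                # The one-line fix: reset the flag so the trim happens exactly once.
                # Without it, the clear fires on *every* turn, wiping context
                # and making the assistant seem forgetful and repetitive.
                self.trim_on_next_turn = False
        self.thinking.append(thought)
        return len(self.thinking)


# Buggy behavior: context never grows past one entry per turn.
buggy = Session()
buggy.resume(idle_for=7200)
for t in ["plan", "refine", "implement"]:
    n = buggy.turn(t, fixed=False)
assert n == 1   # every turn wiped the session

# Fixed behavior: one trim at resume, then context accumulates normally.
ok = Session()
ok.resume(idle_for=7200)
for t in ["plan", "refine", "implement"]:
    n = ok.turn(t, fixed=True)
assert n == 3
```

The sketch also shows why the bug was hard to spot in evaluations: a single-turn test passes in both versions, and the damage only shows up across multiple turns of a resumed session.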
The API Wasn’t Affected—Just the Tools Users Actually Use
The third issue surfaced April 16, stemming from an attempt to reduce verbosity: a system prompt instruction, combined with other prompt changes, “hurt coding quality.” Anthropic reverted it April 20, but noted that the aggregate effect of all three changes, each affecting a different slice of traffic on a different schedule, made the degradation look broad and inconsistent.
The company says the Claude API was never affected. Only Claude Code, the Claude Agent SDK, and Claude Cowork—the actual products developers interact with—suffered. Anthropic also noted that while they began investigating reports in early March, the issues were challenging to distinguish from normal user feedback variation at first, and neither internal usage nor evaluations initially reproduced the problems.
As compensation, Anthropic is resetting usage limits for all subscribers. That gesture might help, though one imagines developers who spent weeks debugging code with a degraded assistant might prefer an actual apology. The company declined to characterize the situation as a regression in a core product, instead framing it as “changes that affected quality.” Whatever you call it, Claude Code users spent weeks with a tool that didn’t work as advertised.
The debacle comes at a delicate moment for Anthropic, which just raised its valuation to $60 billion and is positioning Claude as an enterprise-grade alternative to OpenAI’s offerings. If enterprise customers can’t trust that coding assistants won’t randomly get worse, $60 billion starts to look like a lot to live up to.
