Anthropic confirms Claude 'dumbing down' bug, resets quotas
The AMW Read
Novelty 2: overturns Claude's reliability narrative for coding, updating the §4 case study. Significance 2: impacts foundation model trust and coding tool market dynamics.
Anthropic published a postmortem admitting that three bugs had degraded Claude's performance over the past two months: a silent reduction of Claude Code's reasoning level from high to medium on March 4; a cache bug introduced March 26 that cleared thinking records on every turn instead of only after idle hours; and a system-prompt constraint that capped tool-call text at 25 words, hurting Opus 4.6 and 4.7 output quality. The company reset all usage quotas as compensation, though users noted this was effectively a normal cycle reset following the Opus 4.7 release.
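The second bug is the easiest to picture: an idle-expiry check that degenerates into unconditional clearing. The sketch below is purely illustrative; the class, field names, and TTL value are assumptions, not Anthropic's actual implementation.

```python
import time

IDLE_TTL_SECONDS = 3600  # hypothetical idle window before records may be cleared


class ThinkingCache:
    """Toy per-conversation cache of 'thinking' records (illustrative only)."""

    def __init__(self):
        self.records = []
        self.last_access = time.monotonic()

    def on_turn_buggy(self, record):
        # BUG: clears unconditionally, so every turn starts from scratch
        # and earlier thinking records are lost.
        self.records.clear()
        self.records.append(record)
        self.last_access = time.monotonic()

    def on_turn_fixed(self, record):
        # FIX: clear only when the conversation has been idle past the TTL.
        now = time.monotonic()
        if now - self.last_access > IDLE_TTL_SECONDS:
            self.records.clear()
        self.records.append(record)
        self.last_access = now
```

With the buggy path, the cache never holds more than the latest turn's record, which matches the reported symptom of thinking context vanishing between turns.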
This incident exemplifies the 'AI shrinkflation' pattern (users paying the same price for a degraded service) and sharpens the debate over how far to trust top labs. The bugs surfaced independently but collectively eroded Claude's coding reputation, just as GPT-5.5 launched with strong code capabilities and DeepSeek V4 gained traction. That timing forced Anthropic's hand after weeks of community pushback, highlighting how competitive pressure disciplines quality in foundation model markets.
"This is a classic case of cost anxiety masquerading as a bug," said a veteran AI analyst. "Anthropic was optimizing for latency and compute spend, but the cumulative effect broke the user trust that had made Claude the default for coding. The postmortem is transparent, but the damage to the 'just works' narrative is real." The market now watches whether Claude can recover its lead or whether GPT-5.5 and open-weight variants permanently shift coding preferences.


