Anthropic Hikes Usage-Based Prices for Power Users Amid Model Decline


He opened Claude Code to finish a late-night bug fix and hit a usage limit mid-save. The Slack thread lit up: “Did they secretly change something?” Within days, companies that had leaned on Anthropic’s tools realized the bill might be about to look very different.

I’ve watched this play out across products before, and I want you to see the levers Anthropic pulled — and how they land on teams that run heavy compute jobs.

On dev teams that ship late at night: Anthropic is switching Claude Enterprise from per-seat caps to metered compute

You used to pay a flat fee — up to $200 per user per month (€184) — and get predictable access. Now Anthropic plans to charge for compute usage on top of a smaller seat fee of $20 per user per month (€18). For heavy users of Claude Code and Claude Cowork, that turns a fixed subscription into a running meter, like a metered taxi that suddenly recalibrates its rates mid-ride.

That shift matters because Claude Code sessions are compute-heavy. They can run for long stretches while refactoring, compiling, or iterating on a model-assisted code review. Anthropic tells The Information the change is meant to match pricing to actual usage — preventing some customers from hitting artificial caps while stopping others from paying for unused capacity.

Fredrik Filipsson from Redress Compliance warns the math could be brutal: some enterprises may see bills triple. I read that as a red flag for teams that built workflows around predictable SaaS line items.

How will Anthropic’s pricing change affect enterprise costs?

If your developers run long, iterative Claude Code sessions, per-compute charges compound quickly. A team that once budgeted $200 per power user might now pay a $20 base plus a variable compute bill that spikes with intensive jobs. The result is less predictability and more work for procurement and finance teams to model usage and cap exposure.
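To make that concrete, here is a rough break-even sketch in Python. Only the $200 flat fee and $20 seat fee come from the reporting above; the per-unit rate and usage figures are made-up assumptions for illustration, not Anthropic's actual pricing:

```python
# Hypothetical break-even model for the pricing change described above.
# The $200 and $20 seat fees come from the article; everything else is assumed.

OLD_FLAT_FEE = 200.0  # old per-seat monthly price (USD)
NEW_SEAT_FEE = 20.0   # new, smaller per-seat monthly price (USD)

def new_monthly_cost(compute_units: float, rate_per_unit: float) -> float:
    """Seat fee plus metered compute under the new model."""
    return NEW_SEAT_FEE + compute_units * rate_per_unit

def break_even_units(rate_per_unit: float) -> float:
    """Monthly compute usage at which the new model matches the old flat fee."""
    return (OLD_FLAT_FEE - NEW_SEAT_FEE) / rate_per_unit

# Assume a (made-up) rate of $0.04 per compute unit:
rate = 0.04
print(break_even_units(rate))          # 4500.0 units/month before you exceed $200
print(new_monthly_cost(13_500, rate))  # 560.0 -> nearly triple the old bill
```

The point of the sketch is the shape of the curve, not the numbers: a power user running three times the break-even volume lands close to the tripled bills Filipsson warns about.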

In the offices of engineers and architects: complaints about Claude’s performance arrived at the same time

Stella Laurenzo, a senior director at AMD, posted a terse report in February: Claude Code was suddenly less reliable on complex engineering tasks. The note circulated, then exploded on social media and developer forums.

Users reported the model ignoring instructions, offering oversimplified fixes, and generally doing less of the heavy reasoning teams relied on. Screenshots on X suggested Anthropic had changed the default effort level from high to medium — a tweak some said was made without clear notice.

Boris Cherny, who leads Claude Code, pushed back publicly. He said the default was changed in response to user feedback about token consumption, was documented in a changelog, and shipped with an opt-out dialog. That defense comes from inside the product team and reads as an attempt to balance cost, token usage, and latency.

Is Claude actually getting worse for coding tasks?

The signals are mixed. You have high-profile user reports from AMD and other engineers; you have community screenshots, coverage on VentureBeat, and growing threads on X. Then you have Anthropic’s product team saying the change was explicit and reversible. I treat the situation as a classic coordination problem: product choices made to control resource use can degrade marginal utility for power users who need maximum reasoning depth.

In procurement meetings and board decks: investors want margins while customers want predictable value

Anthropic’s investor base saw huge capital inflows and now expects a clearer path to profit. That pressure translates into pricing experiments that push costs onto the heaviest consumers of compute.

For you, the choices are operational and financial. Do you accept a variable cost model and build monitoring to throttle sessions? Do you negotiate per-feature caps or move certain workloads in-house? Firms like Redress Compliance exist because procurement teams are suddenly rewriting the rules on how to negotiate AI licensing.

Can companies avoid a sudden cost spike?

Yes, but it takes work. Start by tagging and tracking Claude sessions, set alerts on compute spend, and negotiate spend caps for critical workflows. Consider hybrid approaches: keep latency-critical jobs in an on-prem cluster or reserve a guaranteed pool of enterprise compute. I’ve helped teams map usage to cost before, and the easiest money to save is the money you avoid spending on unnecessary iterations.
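The tagging-and-alerting step could be sketched like this; the team tags, session records, and `SPEND_LIMIT` threshold are all hypothetical, since in practice the data would come from whatever billing export your provider offers:

```python
# Minimal sketch of per-team spend tracking with an alert threshold.
# Team tags, limits, and session records are illustrative assumptions.
from collections import defaultdict

SPEND_LIMIT = 500.0  # hypothetical monthly alert threshold per team (USD)

def check_spend(sessions):
    """Aggregate session costs by team tag; return teams over the limit."""
    spend = defaultdict(float)
    for s in sessions:
        spend[s["team"]] += s["cost_usd"]
    return {team: total for team, total in spend.items() if total > SPEND_LIMIT}

sessions = [
    {"team": "platform", "cost_usd": 320.0},
    {"team": "platform", "cost_usd": 260.0},  # pushes platform past the limit
    {"team": "frontend", "cost_usd": 90.0},
]
print(check_spend(sessions))  # {'platform': 580.0}
```

Even a toy monitor like this gives finance a number to argue with before the invoice arrives, which is the whole game under metered pricing.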

At developer standups and CTO briefings: the trade-offs are practical, not philosophical

Everyone agrees that model improvements and cost controls are needed. The fight is over the mechanics and the communication.

Anthropic faces a classic scaling tension: investors demand discipline; teams demand reliability. The community reaction — from GitHub posts to X threads — shows how quickly technical trust erodes when product defaults shift. A dimmer switch on how “hard” Claude thinks may save tokens and compute, but it also changes what you can automate and what you must push back to humans.

I’ll keep monitoring how Claude Enterprise evolves. You should do the same: instrument usage, ask for transparent compute-pricing models, and press for contractual caps. Will your team accept a metered future, or will this be the moment you pull critical workflows off the platform and back in-house?