I watched a feed of model responses roll past and felt the story shift. The pattern was unmistakable: raw outputs repurposed into someone else's upgrade, a competitive edge that didn't arrive by accident.
On Monday, Anthropic published a blog post accusing three Chinese firms of massive “distillation” efforts.
I read Anthropic's account and you should too: the company claims DeepSeek, Moonshot, and MiniMax extracted Claude's capabilities by harvesting huge volumes of generated text. The episode is a cracked mirror reflecting borrowed minds: not a break-in, Anthropic says, but abuse at a scale that skirts its usage policy and regional access restrictions.
What is a distillation attack?
Short answer: a distillation attack turns a powerful “teacher” model’s outputs into training material for a cheaper, faster “student” model without permission. I call it a shortcut that can strip years of lab work into a few million prompt–response pairs. Legitimate labs distill internally to make products smaller and cheaper; Anthropic alleges these cases cross the line into rule-breaking at scale.
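To make the mechanics concrete, here is a minimal sketch of the black-box pipeline described above: query a "teacher" model, then save the prompt-response pairs in the JSONL shape commonly used for supervised fine-tuning of a smaller "student." The `query_teacher` function is a hypothetical stand-in for a real API call, not any actual vendor endpoint; in the alleged cases this loop would run millions of times.

```python
import json

def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in for a frontier model's API.
    # A real attack would make millions of paid API calls here.
    return f"Teacher's answer to: {prompt}"

def harvest_pairs(prompts):
    """Collect prompt-response pairs as training records for a
    cheaper "student" model -- the essence of black-box distillation."""
    records = []
    for prompt in prompts:
        records.append({
            "prompt": prompt,
            "completion": query_teacher(prompt),
        })
    return records

if __name__ == "__main__":
    prompts = ["Explain TCP handshakes", "Summarize the French Revolution"]
    dataset = harvest_pairs(prompts)
    # Each line of the resulting JSONL file becomes one fine-tuning example.
    for record in dataset:
        print(json.dumps(record))
```

The point of the sketch is that nothing here requires access to the teacher's weights or internals; the "theft," as Anthropic frames it, is entirely in the volume and purpose of ordinary API usage.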
MiniMax logged more than 13 million alleged exchanges; Moonshot logged 3.4 million and DeepSeek about 150,000.
Those numbers matter. When one app runs millions of exchanges against a frontier model, you begin to recreate behavior you didn’t build. Claude was a lantern whose flame others tried to bottle, Anthropic argues — and the scale here makes the claim hard to ignore.
Is distillation illegal or just a terms-of-service problem?
Anthropic isn’t accusing these companies of criminal theft in public statements; it frames the behavior as a violation of terms of service and regional access restrictions. I’d read that as a civil and commercial dispute with potential regulatory echoes: export controls, Pentagon supply-chain concerns, and industry contracts can turn a TOS breach into a geopolitical headache.
OpenAI has raised similar alarms while DeepSeek prepares a new flagship release any day now.
OpenAI reportedly warned U.S. lawmakers that firms are “free-riding” on frontier labs’ work, and media outlets from Reuters to CNBC have tracked the fallout. You’re watching a flashpoint where corporate IP fights, national security anxieties, and Wall Street expectations collide as DeepSeek V4 nears its public debut.
Will DeepSeek’s V4 mirror Claude or redraw competitive lines?
There’s no certainty, only timing and stakes. If DeepSeek ships a model that matches frontier capabilities, regulators and customers will ask whether that leap came from original research or mass distillation. Investors and integrators will react in real time, and Anthropic’s posture signals it plans to press policy and platform levers.
I've followed similar industry rows before: legal skirmishes that began as platform disputes and ended up defining export rules and partnerships. You can feel the momentum now. One release could redraw who gets credit for breakthroughs, who pays for them, and who controls access. Will the industry treat this as theft, or as clever competitive intelligence?