Trump Says He Fired Anthropic ‘Like Dogs’ – Sparks Outrage

The microphone is still hot when the president leans into the line he likes best: blunt, public, irreversible. He told me, and the country, that he “fired Anthropic like dogs.” What followed in the room was equal parts political theater and technical panic.

Speaking to Politico, the president said, “I fired Anthropic like dogs.”

I heard the quote the same way you did: raw and deliberate. He repeated a rhetorical move he’s used before to close a chapter quickly and loudly. His words landed like a gavel: decisive, performative.

That phrasing does three things at once: it claims authority, invites spectacle, and forces actors—lawyers, Pentagon officials, investors—to react. You should treat the boast as both political signal and possible legal blueprint: a public claim intended to make a designation feel settled, even if the paperwork isn’t.

Pete Hegseth announced on X that Anthropic was a “supply chain risk.”

The Department of Defense’s public line appeared days earlier in a social-media post from Secretary Pete Hegseth, who said Anthropic couldn’t be trusted with government partners.

Hegseth’s announcement matters because, if formalized, the label can isolate a vendor from U.S. government work. Bloomberg and the Financial Times report that Anthropic has been informed of the designation, though the company itself says it hasn’t received formal notice. So there’s a split between the Pentagon’s public posture and Anthropic’s legal posture.

What does it mean for Anthropic to be a “supply chain risk”?

Practically, it can bar federal agencies and contractors from engaging with a firm. For you, that means contracts dry up and partners re-evaluate reputational exposure. For Anthropic, it risks losing revenue and government data feeds that are strategic for model training and enterprise sales.

Dario Amodei, Anthropic’s CEO, has signaled he will sue if the Pentagon moves to choke off contracts. The company’s arguments will center on process: was there clear notice, opportunity to respond, and factual basis for the claim?

The Wall Street Journal and the Washington Post reported that Claude is already used inside Palantir’s Maven system.

Journal reporting says the military paired Claude with Palantir’s targeting tools to suggest and prioritize targets during strikes. That real-world detail changed the debate overnight.

Reports claim the pairing sped planning cycles and produced precise coordinates—outputs that, if wrong, have human consequences. There are allegations that a strike on a school in Minab that killed 168 people may have involved stale or misapplied data. The possibility of AI-assisted targeting raises a legal and ethical storm you can’t ignore.

Can the Pentagon legally bar Anthropic from government contracts?

Yes, but it’s messy. The DoD has tools to restrict suppliers on national-security grounds, yet those moves invite litigation and international scrutiny. Anthropic has threatened suit; a court fight would hinge on evidence of risk and whether the department followed its own rules.

If Anthropic wins in court, the company could get contracts back and damages; if it loses, its government business could collapse and partners may sever ties. The designation is a fuse that could detonate Anthropic’s business ties.

Emil Michael is negotiating with Dario Amodei even after public insults.

That’s the oddest part: behind the rhetoric you still find a negotiating table. Michael, the Under Secretary of Defense for Research and Engineering, called Amodei a “liar” on X, yet both sides are talking.

I’ve followed similar crises before: companies posture on ethics to court public trust, then pivot when their commercial survival is threatened. You should watch the tone of those talks as closely as the legal filings—tone often signals whether a compromise is possible or whether both sides are preparing for court and public spectacle.

The tech world is watching because this sets a precedent for AI governance and procurement.

Anthropic’s fate will send a message to other model builders about how the government treats safety rules versus military requirements. Investors, competitors, and foreign partners will read the outcome as a rulebook for future deals.

For you, the big question is practical: will the U.S. government force companies to remove safety guardrails to satisfy wartime requirements, or will it accept limits on certain applications? That choice will shape product roadmaps at companies from Anthropic to Palantir to OpenAI.

I’m watching the filings, the briefings, and the executive tweets. You should watch who keeps access to contracts and who loses it—because this isn’t only about one company, it’s about how power, technology, and law collide in real time. Who wins when the president says he’s already fired the company and the Pentagon says it’s a security risk?