You open your inbox and the subject line reads like a dare. I read Sam Altman’s memo and recognized the ritual: PR dressed up as principle. Across town, Anthropic was actually refusing a Pentagon demand that could change AI’s role in war.
On Thursday night, OpenAI sent a memo that landed like a press release and a creed
I want you to read that memo the way I did: as a public performance with a private echo. Sam Altman told employees—via a note that reached reporters almost immediately—that OpenAI opposes mass surveillance and autonomous lethal weapons, and that humans should remain in the loop for high-stakes decisions. It sounds good on a poster; it also matches the red lines Anthropic’s Dario Amodei has been defending in meetings with the Department of Defense.
Would OpenAI refuse Pentagon demands?
That question is the magnet drawing every headline. Altman’s message reads like a promise, but context matters: his company has different contracts, partners, and dependencies than Anthropic. Saying you would do something is not the same as being put in a room where a federal agency threatens to declare you a supply-chain risk or invoke the Defense Production Act.
On a separate front, Anthropic actually had to stare down explicit Pentagon pressure
You can imagine the conversation: DoD lawyers testing hypotheticals, company engineers giving guarded answers, company counsel counting its options. Reports in The New York Times and The Washington Post detail that the Pentagon asked whether Anthropic’s Claude could be used for tasks ranging from defensive missile intercepts to other battlefield roles. Amodei’s response, according to one account, was to tell them to call, an answer that read as both a polite refusal and a firewall.
Anthropic even softened one red line, allowing a carveout for “defensive weapons,” yet the agency pressed harder. Threats to cancel contracts, to label the company a supply-chain risk, and to coerce compliance via the Defense Production Act followed. Bloomberg later suggested talks hadn’t fully collapsed; Palantir’s reliance on Anthropic’s models for parts of its stack is one lever the company quietly holds.
Why did Anthropic say no to the DoD?
This is where values meet leverage. Anthropic’s official stance is narrow and moral: no mass surveillance, no autonomous killers, and humans in the loop for life-or-death choices. Those are the lines that, if crossed, would change what the company is and what Claude becomes. The Pentagon insisted it wanted AI for “all lawful purposes,” but as War on the Rocks noted, law and practice are not the same thing when it comes to autonomous weapons.
At the Pentagon, officials escalated from negotiations to public barbs
On X, Undersecretary Emil Michael went from staff briefings to insults, calling Amodei a “liar” with a “God complex” and accusing Anthropic of trying to replace the Constitution with a company rulebook. The Daily Beast published the blow-up, and that public display shifted the fight into media theater.
It’s a strange posture for a defense agency: argue you want AI for lawful, defensive use, then hurl personal attacks when a private firm refuses. That tactic looks like a pressure cooker about to whistle, and it risks turning the DoD into a bad-cop caricature while the tech firms play the principled siblings.
On the broader stage, employees and rivals pushed the narrative outward
At Google DeepMind, more than 100 staffers signed a letter urging the company to adopt Anthropic-style red lines if it continues DoD work. I watched that internal pressure become public pressure, which in turn nudged Altman to plant his flag in the same soil. The optics matter: when engineers at Google, reporters at The New York Times, and analysts at Bloomberg all frame the debate, the court of public opinion starts drafting its verdict.
Palantir’s battlefield contracts, Tom’s Hardware coverage of LLM war games, and research showing chatbots choosing nuclear options in simulated scenarios all added fuel to the fear. Those studies land like a referee’s whistle: sudden, decisive, and capable of rerouting an entire game.
On the question of leverage, law, and the next move
There are multiple levers in play: legal authority, procurement dependencies, and the simple fact that companies have customers inside the government. The Pentagon can threaten the Defense Production Act; it can name a company a supply-chain risk; it can cancel contracts. But Anthropic has counterweights—partnerships, technical niches, and a public commitment that now looks tested rather than performative.
You should watch how this shapes policy around AI in defense. If companies keep drawing red lines publicly, Congress and regulators will feel pressure to formalize rules. If the Pentagon wins without changing law, the private sector may learn that resistance is costly. Either outcome reshapes incentives for engineers, CEOs, and investors.
Can the Defense Production Act force AI companies to build military-specific models?
Short answer: it can be used, but it’s politically and legally messy. The DPA lets the government prioritize contracts and compel companies to accept defense orders, but invoking it against a high-profile AI firm risks backlash from employees, corporate boards, and the press. Bloomberg reported ongoing talks, which suggests both sides still have cards to play and are wary of escalation costs.
I’ve covered negotiations like this for years: public postures, private leverage, and a lot of theater. You should treat Altman’s memo as a signpost, not proof. Anthropic’s refusal has already been tested, and that makes it consequential in ways a memo can barely match. Which pressure will win—the Pentagon’s legal tools or the companies’ resolve—and who decides the rules of the next war?