OpenAI Defends Pentagon Deal as Employees Fear Surveillance, Strikes

I watched the feed flip from refusal to approval in a handful of hours. The Pentagon quietly dropped a contract with Anthropic, then inked one with OpenAI, and later that night strikes began over Tehran. You could feel the room tilt—employees waiting, executives speaking, a country choosing.

I’ll walk you through what happened, what was said, and what no one has shown us yet. My job isn’t to reassure you. It’s to point to the moments that demand answers.

I watched executives trade statements across X while the Pentagon shifted partners in less than a day.

Within hours the Pentagon labeled Anthropic a supply-chain risk after the company refused contract terms that would have weakened safety guardrails limiting certain military uses. OpenAI then signed a contract of its own, which company leaders defended publicly, and Claude climbed the App Store charts as users organized a boycott of OpenAI.

The announcement lit a fuse under the public debate: some saw safety-first resistance rewarded, others a hasty concession to political pressure. Sam Altman admitted the deal felt “rushed” and that the optics “don’t look good.” Katrina Mulligan, OpenAI’s head of national security partnerships, framed the contract as limited to defense and bound by “applicable law.”

You can scroll through X and find dozens of OpenAI staff threads that read like a company in a holding pattern.

Before the Pentagon announcement, nearly a hundred employees signed an open letter urging leadership to refuse any use of the company’s models for domestic mass surveillance or fully autonomous lethal systems. Afterward, many senior staff publicly asked the Pentagon to withdraw Anthropic’s supply-chain-risk designation.

Voices inside the company are split. Some executives argue that technical controls and on-site engineers provide a stronger guardrail than contract language; others, including named research staff, say the tradeoff wasn’t worth it. Reporters have been asking current and former OpenAI employees to come forward as sources, and staff are debating whether public assurances are enough.

The released posts were a live Q&A; the contract itself stayed behind closed doors.

OpenAI shared details in open forums on X and in a handful of posts from leadership, but the actual contract text remains private. That gap is the clearest source of unease: words on a social feed versus clauses in a binding agreement.

Will OpenAI’s technology be used for domestic surveillance?

OpenAI says no: Mulligan wrote that the contract applies to defense, not domestic law enforcement, and that U.S. law limits the Pentagon’s ability to conduct domestic surveillance. Sam Altman echoed faith in democratic institutions. Still, historical precedents, from mass surveillance scandals to legal wiggle room, leave space for skepticism. You should want the clauses visible and enforceable, not only promised in posts.

Can OpenAI systems power autonomous weapons?

OpenAI has stated its models won’t be used to power fully autonomous weapons systems. Anthropic had pushed for firm contractual prohibitions; OpenAI favored technical controls and embedding its engineers with the Defense Department. That leaves a practical question: what does adequate human supervision look like on the battlefield? Sarah Shoker and other defense researchers point out that the industry lacks consensus on that definition, a gap that likely explains Anthropic’s refusal of the terms.

Sam Altman and Katrina Mulligan leaned on U.S. law and internal controls in public posts.

They argued that existing law and company-built safeguards will prevent the worst outcomes. OpenAI plans to embed engineers with the Pentagon and to impose technical limits on model behavior. The company framed those measures as more reliable than contractual clauses and suggested that Anthropic had wanted a different kind of operational control.

But trust is strained. Internal dissent, public protests, and the threat of an employee exodus all signal that the social contract inside the company has frayed; what trust remains is a ragged firewall against reputational and moral risk.

There are real precedents and unresolved questions on the table.

The Pentagon’s move came hours before U.S. strikes on Tehran, a timing that will be read in many ways. Civil liberties groups such as the ACLU have previously characterized some military actions as bordering on illegal, and Congress has been slow to write laws that specifically address artificial intelligence. Regulators, employees, and the public now face a choice: demand transparent contract terms and enforceable oversight, or accept assurances delivered in social posts and private notes.

Names matter here: Anthropic and its Claude models stood their ground; OpenAI and Sam Altman accepted the Pentagon’s terms and used public forums to explain the decision. Reporters at The Verge and elsewhere have already flagged that the effectiveness of the technical safeguards remains unclear. If you follow the discourse, you’ll see a mix of legal argument, technical confidence, and moral worry.

I’m not telling you what to think—I want you to ask the exact questions the silence demands. Who will read the contract for you, and who will hold a private company accountable when the stakes are war and surveillance?