OpenAI Adds Anti-Surveillance Terms to Pentagon Deal After Katy Perry Switches to Claude

At 10 p.m. on Monday, the OpenAI-Pentagon draft quietly grew a few lines that changed the room. I read the new language and felt the air tighten—this is where policy meets public trust. You should care, because the added words aim to answer a very specific fear: will AI be used to watch Americans?

I’m going to walk you through what changed, why it mattered for Anthropic, and how this backlash turned corporate PR into a culture quarrel. You’ll get names, quotes, and the small details the headlines miss. Read this like a short briefing with a point of view—I’ll tell you what to watch next.

At a late-night signing moment, new privacy lines were inserted

OpenAI and the Pentagon added explicit language to the agreement that makes privacy promises more visible on paper.

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

“For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Those two sentences are short, legal-sounding, and designed to calm a specific kind of outrage. They also signal an important shift: the Pentagon and OpenAI are no longer assuming “it’s obvious” what’s lawful—now it’s spelled out.

Can the Pentagon legally use AI for domestic surveillance?

The Pentagon has been blunt: anything that counts as mass surveillance of Americans is already illegal. I agree that law is the guardrail, but law and practice are not the same thing. The new clauses put intent and types of data—like commercially acquired personal information—on the table. That matters because intent is what separates lawful intelligence work from unlawful monitoring.

On the other side of the room, Anthropic raised its hand and said “not like this”

Employees at Anthropic warned the Pentagon that certain unclassified commercial datasets can be used to reconstruct people’s movements and browsing habits.

According to reporting in The New York Times, Anthropic requested a legally binding promise that the Defense Department would not run its models on unclassified commercial data that could identify or track U.S. persons. The company later found itself labeled a “supply-chain risk,” frozen out of parts of the defense market, and in a public spat that now smells like politics.

What did OpenAI add to the Pentagon agreement?

Beyond the quoted lines above, OpenAI’s public posts indicate it negotiated language to make its principles explicit inside the contract. Sam Altman reposted an internal note saying the company worked to make the limits “very clear.” Whether that clarity is operational or merely declarative will be judged in time and practice.

Outside reactions turned personal and viral—Katy Perry switching to Claude is a loud signal

When a celebrity tweets she’s moved to a rival model, it’s not just a punchline—it’s a momentum marker.

There is now a QuitGPT website claiming more than 1.5 million signatories (1,513,922 at the moment I checked). The site urges a boycott and frames the choice as a moral stance against “ICE enablers.” Celebrity endorsements—Katy Perry’s switch to Claude—add social proof and give skeptical customers an easy out. Like a courtroom curtain pulled aside, the spectacle reveals sentiment that otherwise hides behind tech PR.

Does the Anthropic dispute affect OpenAI customers?

Short answer: customers feel it. Whether enterprises or consumers materially change behavior is still unfolding. Some will stay, reasoning that OpenAI’s obligations to the Pentagon now come with clearer limits. Others will defect to competitors that framed their refusal to work with the Pentagon as a principled stand. The market will act as judge and jury.

Behind the scenes, official posture is simple: “we want legal rights to act”

Pentagon spokespeople have repeatedly said they only want the freedom to do what is lawful and to protect warfighters when necessary.

Sean Parnell said the Department of Defense has “no interest in using AI to conduct mass surveillance of Americans (which is illegal).” That line is both an appeal to law and an attempt at authority signaling: the military wants the operational latitude to use AI within legal bounds, and it sees rigid contract clauses as risks to mission effectiveness.

OpenAI says it shares Anthropic’s concern about surveillance and has pressed the Pentagon to accept contract language that prohibits deliberate domestic monitoring. Those public commitments try to thread a narrow needle—balancing access to sensitive national-security workflows against public backlash and brand risk.

In the marketplace, choices are narrative as much as product

Downloads, tweets, counter sites, and celebrity posts change perception faster than contracts do.

QuitGPT frames the issue as moral and urgent; OpenAI counters with contractual language and public reassurance. The battle is as much about public imagination as about clauses: customers will vote with their usage and their payments. The company's response so far has been defensive and deliberate, which may reassure some users even as it pushes others away.

I’ve named the players and traced the pivots. You can read the clauses, follow the politics, and watch the market decide whether a few legal phrases are enough. Who wins the public trust war: the company that writes the clearest lines, or the rival that refuses to sign at all?