I walked into a briefing where a single line changed the mood: the Defense Department would stop using Anthropic's Claude. You could feel the shift, phones slipping back into pockets, people calculating risk in real time. The argument on the table was simple and terrifying: private ad data plus military AI could map your life.
A Pentagon memo landed on desks this week and changed the calculus: why Anthropic refused certain government uses
I want you to hold that moment. Anthropic publicly told the Department of Defense it will not let Claude be used for fully autonomous weapons or for mass surveillance of Americans. That simple stand is now a legal and political grenade at the center of tech policy.
Sen. Ron Wyden slammed the Pentagon's reaction as overreach and warned of a real danger: the government combining commercially sourced location data, browsing histories, and other signals to build profiles of citizens. "The Defense Department is throwing a fit over Anthropic asking for the bare minimum ethical guardrails on how DOD uses its product. That's serious cause for alarm, given AI's ability to turn disparate pieces of public or commercial data into highly revealing profiles of Americans," Wyden said.
A DOD directive circulated and Claude use will be phased out: how the White House and Pentagon escalated
The visible step was stark: President Donald Trump announced that the federal government would stop using Claude, and the Defense Department said it would phase out use over six months. Defense Secretary Pete Hegseth added pressure by suggesting that companies that want to work with the government should also stop working with Anthropic. Anthropic says it will sue.
You should know this fight is both legal and symbolic. The Pentagon is buying commercial data and, according to advocacy groups, paying vendors for the ability to analyze people's movements and interests. Greg Nojeim of the Center for Democracy and Technology put it plainly: "There is a whole industry of data brokers who purchase and sell location information about Americans." That industry is largely unregulated at the federal level.
Can the government buy your location data?
Yes. Companies sell location and behavioral data for tiny sums right now, and agencies like DHS have been documented buying it. A recent 404 Media report traces how Homeland Security purchased location data derived from ads served on phones. Everyday apps and mobile games feed that supply chain, often without users realizing it.
An open market of personal signals exists and Wyden says the law is outdated: what senators are proposing
I've followed Wyden's privacy work for years. He's introducing and backing bills that aim to stop this flow into government hands: the Fourth Amendment Is Not For Sale Act and the Banning Surveillance Advertising Act. The Fourth Amendment Is Not For Sale Act passed the House in 2024 but stalled in the Senate; with Democrats currently in the minority, those bills have slim odds of becoming law until political control shifts.
Wyden warned: “Location data, web browsing records, and information about mental health, political activities and religious affiliations are all available for pennies on the open market and could make Americans targets for doing things that are completely legal.”
What can Congress do to stop mass surveillance?
Congress can regulate the data broker industry, restrict how federal agencies buy and use commercial data, and ban certain AI-driven inferences about protected traits. Those are legislative levers, but they require majorities and votes you and I both know are hard to secure.
Data broker feeds are already a supply chain and advocates warn of chilling outcomes: how AI amplifies risk
Here’s the blunt view: data brokers are a flea market for your life. AI then turns that flea market into a searchlight trained on individuals, pulling disparate purchases, locations, and app habits into one profile.
Wyden again: “I’ve been warning for nearly a decade that data available for purchase from companies is just as sensitive as information the government collects directly. Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed, regardless of what the current, outdated laws on the books say.”
Greg Nojeim told me the purchases are legal today. That legal gap is the problem: the DOD can sign contracts for commercial feeds and use tools like Claude to analyze them, even when the data concerns Americans.
A public confrontation is unfolding and companies are choosing sides: where Anthropic fits
Anthropic's letter framed its position as ethical limits on use. The company refused to allow Claude to be used for autonomous weapons or mass surveillance, moves that put it at odds with a government intent on rapid access to data and analytic tools. The White House response and the Pentagon's pressure look designed to set an example.
What happens next will be shaped by courts, contracts, and whether Congress acts. You'll see lawsuits over contract terms, fights over whether vendors can be barred from government work based on their partnerships, and a steady public debate about privacy, security, and power.
Is Anthropic legally justified in refusing DOD uses?
Contract law and constitutional questions will decide some disputes. Anthropic argues its business choices and user-protection policies give it the right to refuse certain uses; the DOD will argue national security and procurement prerogatives. Expect courtroom tests.
I’m not asking you to take a side on the politics, but to look at your phone differently. If the government can buy the same commercial signals advertisers use, and AI stitches them into portraits of behavior, who sets the red lines—and will you get a say?