Pete Hegseth: Anthropic – Drop AI Safeguards or Face Security Threat

Pentagon Expands AI Arsenal with Grok-Derived Products Against Adversaries

I was sitting in the empty corridor outside the meeting room when the secretary walked out, his jaw set and his phone still warm from dialing. You could feel the room’s temperature drop—this wasn’t a negotiation, it felt like an audition. I remember thinking: someone just put an impossible lift on the bar and dared the other side to drop it.

I’ve covered military fights and Silicon Valley stand-offs, and this one reads like both. You know the players: Pete Hegseth, Defense Secretary and campaign-stage brawler; Dario Amodei, the CEO who built Anthropic’s safety-first reputation; and Claude, the AI model at the center of a test that is now public. The Pentagon’s demand is blunt: drop limits on how Claude can be used, or face being branded a national security threat.

Pete Hegseth met Dario Amodei at the Pentagon on a Tuesday morning.

The scene was described to reporters as a short, direct meeting. I can tell you the tenor: it had the bluntness of a command brief rather than a policy conversation. According to Axios and spokespeople who later confirmed the encounter to Gizmodo, Hegseth presented an ultimatum—remove the guardrails on Claude or face consequences that could reach contract cancellations, “supply chain risk” designations, or even forced production under the Defense Production Act.

Can the DoD force Anthropic to change its AI safeguards?

Short answer: the tools exist, but using them would be explosive. The Defense Production Act can compel production for national defense needs, and agencies can cancel contracts or place firms on restricted lists. But using those powers against a company that has publicly resisted certain use cases—especially around surveillance and autonomous weapons—would be more than a legal maneuver; it would be a reputational fight that drags in Congress, courts, and the press. You should assume the DoD can make life very difficult for Anthropic, but the political costs wouldn’t disappear.

The Department of Defense has been explicit about wanting “unfettered access.”

Axios reported the phrase, and it cuts to the heart of the conflict. Anthropic has been asking for boundaries: no mass domestic surveillance, no fully autonomous lethal systems that remove meaningful human control. Hegseth’s stance, as relayed by sources, is that those lines get in the way of operational needs—and that leaving individual use cases to be litigated later is unacceptable. To me, that sounds less like policy and more like a demand to hand over the keys.

Could Anthropic be labeled a national security threat?

That threat is not empty rhetoric. Declaring a firm a national security risk is a blunt instrument used recently against some foreign firms; applying it to a U.S. AI startup would be a first-of-its-kind escalation. You can see why Anthropic pushed back: a designation would chill investment, complicate partnerships with firms like Google or Microsoft, and make hiring from the talent pool harder. It would also put Claude at the center of a foreign-policy and industrial policy fight rather than a safety debate.

Anthropic has publicly refused to let Claude power mass surveillance or fully autonomous weapons.

That stance isn’t performative. Dario Amodei and his team have built public usage policies and safety layers into Claude—explicit guardrails that the company says preserve reliability and responsibility. According to reporting, those limits have not blocked field operations: sources say no front-line mission has been stymied by Anthropic’s safeguards, and the company did not object when the Pentagon reportedly used Claude in an operation tied to Nicolás Maduro.

What are the risks of using Claude for military operations?

You should think in three buckets: accuracy under pressure, predictability in novel environments, and moral-legal exposure. AI models can produce confident-sounding but wrong outputs; in a battlefield or surveillance context, that’s not a bug—it’s a potential catastrophe. There’s also the risk of eroding civil liberties if tools are repurposed for domestic tracking, and the legal headache of delegating lethal decisions to models that weren’t trained for that role. Anthropic’s safeguards are a direct attempt to mitigate those specific dangers.

The DoD’s pressure looks like a show of force—and it creates a reputational test.

When a government official threatens to revoke contracts or invoke emergency powers, the calculus isn’t only technical; it’s reputational theater. I’ve seen CEOs choose to bend and break their own public promises to keep money flowing; I’ve also seen firms take the opposite route and gain a new kind of legitimacy for standing firm. Anthropic’s spokesperson framed the meeting as a good-faith conversation and emphasized the company’s desire to support national security within its reliability bounds. That posture positions the company for a reputational win if it’s willing to take a hit.

Here’s what to watch: whether the department follows through on the Friday deadline that sources relayed to reporters, whether the White House or Congress intervenes, and whether other tech firms—OpenAI, Google, xAI, cloud vendors—start recalibrating their own policies to avoid being pulled into the same tug-of-war. David Sacks, the administration’s AI czar, has already been sparring publicly with Anthropic over its regulatory posture, which shows this dispute runs on multiple fronts, not just the Pentagon’s.

I’ve been tracking similar standoffs where power meets platform: sometimes the government blinks, sometimes the company folds, and sometimes the fight creates new rules. The metaphor is obvious—a weightlifter stacking too many plates onto a single bar—and the pressure is building toward a break. Which side will you bet on when policy, national security, and corporate ethics collide?