Former Military, Academics, Tech Leaders Rebuke Pentagon Over Anthropic

Two weeks ago I read a terse tweet from Defense Secretary Pete Hegseth and felt the room shrink. The Pentagon had cast Anthropic — a U.S.-based AI startup — as a supply chain threat, and suddenly a quiet legal and commercial fight turned public. You could see the phones lighting up at venture firms, law desks, and congressional offices.

At a Washington briefing, the administration framed the move as national security. That framing set off a letter from more than two dozen former defense and intelligence officials, tech policy leaders, and academics demanding limits.

I read the letter before most people did. You should too, because it does something rare: it ties everyday legal principles to the future of American AI. The signers span the political spectrum — from former CIA director Michael Hayden to tech voices like Lawrence Lessig — and they are blunt about what’s at stake.

The letter calls the Pentagon’s move an “inappropriate use of executive authority against Anthropic.” Brad Carson, president of Americans for Responsible Innovation and former Under Secretary of the Army, told Gizmodo that this was a dangerous precedent. The signers argue that supply chain risk designations were designed to stop foreign infiltration — companies tied to Beijing or Moscow — not American firms operating openly under U.S. law.

At a Defense Department podium, officials floated threats to block contractors. The signers warn that blacklisting a domestic company chills investment and competition.

I want you to feel the scale: when a government label used for foreign adversaries is applied to an American startup, the message to entrepreneurs is immediate and corrosive. The letter warns this is “not a marketplace any serious entrepreneur or investor can build around.”

They are not idle technocrats either. The roster reads like a who’s who of security and civic life: Michael Hayden; retired Vice Admiral Donald Arthur; Diana Banks Thompson; Randi Weingarten; and members of tech-focused think tanks. Their argument: rejecting military requests to remove safety guardrails — Anthropic’s stance — should not be punished by branding the company a supply chain risk.

Why did the Pentagon label Anthropic a supply chain risk?

The short version is politics and policy collided. Secretary Hegseth and President Donald Trump reacted to Anthropic’s refusal to remove safety restrictions for military use. That prompted talk of blacklisting and public pressure on contractors to sever ties. Yet the letter’s signers say the authority used is intended to counter foreign infiltration, not to discipline American innovators.

At the Capitol, members of both parties received the letter. The authors pressed Congress to write clearer rules limiting AI surveillance and autonomous weapons.

I want you to notice how they anchor their demands to existing law: prohibiting fully autonomous lethal weapons, they say, aligns with the laws of armed conflict and Geneva principles on distinction and proportionality. Banning mass domestic surveillance, they note, rests on the Fourth Amendment and treaty obligations under the International Covenant on Civil and Political Rights.

This argument is a legal thread connecting current tools to long-standing limits on war and police power. It asks you — citizen, voter, professional — whether new technologies should get an old-fashioned legal blank check.

What are the legal limits on autonomous weapons?

International law requires distinction and proportionality. The letter’s signers argue fully autonomous lethal systems risk violating those norms. That view isn’t fringe — it draws on established humanitarian law and decades of treaty practice. I read their language as a bid to keep human judgment in the loop.

At investor meetings and startup lunches, founders are asking the same question: will U.S. policy protect innovation or punish it? The letter warns that heavy-handed blacklisting weakens U.S. competitiveness.

You and I both know markets are sensitive to legal risk. Labeling a U.S. company with a supply chain designation is like driving a bulldozer through a hedgerow — it clears far more than it intends. Investors will reroute, partners will hesitate, and engineers will consider greener pastures abroad.

Anthropic’s future remains uncertain. Hegseth has not formally served notice beyond public statements, and recent reporting from CBS News suggests Anthropic is still negotiating with the Pentagon. That liminal state — part negotiation, part public spectacle — is harmful on its own.

How will this affect U.S. AI competitiveness?

If you care about America's lead in AI, the letter's warning should resonate. Blacklisting domestic firms erodes trust between government and industry, and it signals to entrepreneurs and foreign talent that the U.S. market can be politically volatile. The signers return to the same phrase: such a landscape is "not a marketplace any serious entrepreneur or investor can build around."

Blacklisting an American company is like setting a library on fire: it destroys not just a building but the knowledge and future projects inside. If the government weaponizes administrative tools for political ends, you lose more than a contract — you lose the pattern of predictable rules that entrepreneurs rely on.

At hearings and briefings ahead, members of the House and Senate Armed Services Committees will see this letter. The authors ask lawmakers to act where the executive has overreached.

The letter is addressed to Republicans Sen. Roger Wicker and Rep. Mike Rogers and Democrats Sen. Jack Reed and Rep. Adam Smith. It asks Congress to establish clear policies about domestic surveillance and autonomous lethal systems, effectively moving the debate from a tweet to statute and oversight.

As someone who follows policy and tech closely, I think lawmakers now face a choice: reassert constitutional boundaries and legal norms, or allow executive power to set ad hoc rules that could chill innovation. You have a stake in which path they choose.

So where do we go from here, and who will defend the rule of law, American innovation, and the constitutional contours of executive power — you, me, or the next tweet?