Ex-Uber Exec With Perplexity Stake Backed Pentagon's Anthropic Blacklist

Pentagon Claims Anthropic's 'Soul' Is a Supply-Chain Risk - Debunked

It was a courtroom moment that felt like a trapdoor clicking open: Judge Rita Lin bluntly called the Pentagon’s move “an attempt to cripple Anthropic.” I watched the agency’s posture wobble between public fury and private dependence, and you can feel the tension in every filing and press release. The man pushing hardest to blacklist Anthropic stands to gain if his favored rival wins the race.

A financial disclosure shows Emil Michael holds Perplexity stock

I pulled the form that Michael filed when he joined the Pentagon and found a clear, cold line item: $2–$10 million in Perplexity equity (€1.8–€9.2 million). You don't need a law degree to see the optics: a senior Pentagon official pressing to block a competitor while holding millions in that competitor's rival.

Reports from The Lever and a ProPublica data dump confirm Michael served on Perplexity’s board and carries both vested and unvested shares. Perplexity has an agreement to roll its AI search product out to federal agencies and is one of three firms being considered to host government AI systems on their own servers, according to Fast Company and Perplexity’s announcement.

Does Emil Michael have conflicts of interest?

Short answer: the disclosure raises red flags. I’m not saying there’s proof of illegal conduct in the public record, but you and I can agree this is the kind of interest that invites scrutiny. Michael’s position gave him influence over procurement decisions where his stock could benefit directly or indirectly.

A court hearing showed the Pentagon’s posture wobbling in public

At a public hearing, Judge Rita Lin described the Defense Department's labeling of Anthropic as a supply-chain risk as "an attempt to cripple Anthropic." The remark landed like a gut punch: it undercut the administration's rhetoric and forced a reassessment.

The Department of Defense has both accused Anthropic of posing a threat and quietly used Anthropic’s Claude in early planning around Iran, per reporting in The Hill. The agency’s actions read like a paper shield: loud on restriction, soft in practice.

Why did the Pentagon blacklist Anthropic?

The stated reason was national security and supply-chain risk. The backstory is messier: Anthropic refused to sign off on use cases the company deemed immoral, such as domestic surveillance and fully autonomous weapon systems, which triggered an aggressive response from some in the Pentagon. According to the Wall Street Journal, the public pressure to punish Anthropic came from Michael in his role as the department's chief technologist.

Michael’s professional past gives the move a personal edge

At Uber, Emil Michael was Travis Kalanick's right-hand man, a tenure that ended badly and left him vowing he would "never forgive" the investors who pushed them out. That history matters because it frames him as someone who holds grudges and plays for keeps.

He also advised Tools for Humanity, Sam Altman's firm behind the eye-scanning orb, and Altman's OpenAI picked up the Pentagon contract Anthropic lost. Michael's ties form an ecosystem: Perplexity, Tools for Humanity, OpenAI, and the DoD are all moving on overlapping tracks, with Michael as the gatekeeper through whom defense AI must pass.

Perplexity’s federal partnership means it’s not a distant player: the company signed to deploy an AI search engine across agencies and is being vetted to host government systems. If you track procurement, that’s a sale with enormous strategic value, not just a ledger item.

A judge’s skepticism doesn’t erase power dynamics in play

Observers in the courtroom noted how a single designation can wreck a startup’s government prospects overnight. You can imagine the ripple effect: contracts vanish, other agencies get nervous, investors pull back. That real-world consequence is why lawyers and lobbyists swarm these disputes.

Anthropic sued, alleging retaliation; the judge's initial sympathy is a win for the company, but the broader question remains: will policy be shaped by national security criteria or by private fortunes? The answer matters for the future of trustworthy AI in government and for companies that try to draw lines on ethical use.

A final act is still unwritten

You’ve seen the actors: Anthropic, Perplexity, OpenAI, Tools for Humanity, a Defense Department split between caution and operational need, and a senior official with a substantial financial stake. The facts are in the filings, the reporting, and the courtroom transcripts. The narrative that unfolds will shape who gets to host and operate the government’s AI tools.

I’ve followed murky influence campaigns before; this one smells of personal interest wrapped in policy language. If Anthropic wins in court, it’s a rebuke to the department’s heavy-handed approach. If Anthropic loses, companies that set ethical boundaries may find it harder to work with Washington.

Which outcome do you think would be better for citizens and for the future of AI stewardship?