Google and OpenAI Back Anthropic in Legal Brief

They signed the brief at 2 a.m., in the quiet overlap between product duty and moral unease. I read the names and felt the room tilt: rivals acting like allies because something bigger was breaking. You can almost hear legal strategy and engineering ethics collide.

I’ve covered courtroom battles and corporate feuds. You should know why this one matters: Google and OpenAI engineers — 37 of them — filed an amicus curiae brief backing Anthropic after the federal government labeled the company a “supply chain risk to national security” and tried to block partners from working with it.

In a late-night Slack at Google, an engineer posted a short message that became part signal and part complaint.

The list of signatories reads like a who’s-who of ML talent: Jeff Dean, Grant Birkinbine, Sanjeev Dhanda, Leo Gao, Zach Parent, Kathy Korevec, Ian McKenzie, and scores more from Google and OpenAI. I’d call that authority; you should call it a reason to pay attention. Their brief argues three central points: Anthropic’s red lines around mass surveillance and autonomous lethal systems were defensible; the government’s move was an arbitrary use of power; and the decision carries wide ramifications for the industry.

The moment feels fragile, like a fuse you don’t want to light. If regulators can brand a vendor a security risk and block others from dealing with it, you change how contracts, research partnerships, and product roadmaps form overnight.

What does the amicus brief say?

The amici say Anthropic did the right thing by codifying limits on how its models could be used. They defend the company’s public “red lines” and frame the government’s action as an overreach that threatens technical norms and safety practices across the sector. The brief names the specific harms the engineers fear, surveillance at scale and permissive paths toward fully autonomous weapons, and warns that the precedent set by the agency’s action could chill safety-focused engineering and corporate policy.

At the courthouse steps, a legal clerk told me briefs pour in from every corner — but this one reads like industry self-defense.

Amicus briefs are supposed to help judges by offering expertise; sometimes they’re political theater. This one lands somewhere between scholarship and a warning grounded in the signatories’ own careers. When engineering leaders from competing companies join forces, they’re signaling both legal reasoning and market concern. I’ve seen briefs sway outcomes, and you should take the presence of Jeff Dean, who also funds startups, alongside OpenAI engineers seriously: that combination mixes technical authority with real economic stakes.

Why did Google and OpenAI employees sign?

There are a few motives layered together. First, many signatories are genuinely worried about model misuse and about the precedent set if agencies can unilaterally designate vendors as national-security risks. Second, there’s a competitive and reputational angle: partnerships and procurement are how platforms scale, and losing access because of an administrative label could reshape how companies build and buy technology. Third, public positioning matters; Sam Altman criticized the decision publicly on X, calling it “a very bad decision from the DoW,” while also conceding that OpenAI’s own Pentagon ties made his criticism look messy.

You should also factor politics and optics: when a regulator steps into a high-stakes tech fight, product teams and legal teams react fast. That reaction is what drove these engineers to put their names on a court document.

In product demos and boardrooms, legal classifications already change who speaks with whom.

This dispute is now a test case for how far an agency can go in constraining commercial relationships without clear congressional backing. Anthropic sued to block the government from imposing what it calls an improper restriction on private contracts and collaborations. The amici call the move arbitrary and warn of “serious ramifications for our industry.” If a single administrative decision can redraw partnership maps, you’ll see supply chains shift as companies hedge against sudden exclusions.

Think of the industry as a chessboard where players move alliances to protect research and access to compute — every forced move reshapes strategy.

I won’t pretend the legal outcome is predetermined. You and I should watch three things: how the court treats technical expertise in the brief, whether judges push back on the agency’s statutory basis, and how quickly firms alter deals to avoid similar risks. Do you think an industry that just signed a rare, collective brief will change the rules of engagement for safety and procurement?