The Pentagon isn’t thrilled. It is reportedly ready to cut ties with Anthropic, the AI firm behind its latest tool, Claude, over the company’s insistence on ethical guidelines governing military applications. According to an anonymously sourced Axios report, the military’s frustration stems from Anthropic’s principled restrictions on specific uses of its technology, especially autonomous weaponry and mass surveillance.
The Axios source described Anthropic as perhaps the most “ideological” of the AI companies. Its flagship product, Claude, and in particular its coding capabilities, have fed a now-familiar pattern: every enhancement sets off a speculative frenzy on Wall Street as businesses scramble to keep pace with rapid technological change. Given the company’s ambitions to dominate the AI landscape, should we be uneasy that its own employees worry about the unintended consequences of such power?
Last year, Anthropic celebrated securing a $200 million Pentagon contract, calling it “a new chapter in Anthropic’s commitment to supporting U.S. national security.” But the recent Axios account highlights concerns an Anthropic spokesperson raised about the ethical boundaries around fully autonomous weapons and invasive surveillance, concerns echoed by co-founder and CEO Dario Amodei just days earlier on Ross Douthat’s Interesting Times podcast.
In his essay “The Adolescence of Technology,” Amodei shared his unease about autonomous drones, warning that unchecked AI could lead to serious abuses of power. “The constitutional protections in our military structures depend on the idea that there are humans who would—we hope—disobey illegal orders. With fully autonomous weapons, we don’t necessarily have those protections,” he noted.
On surveillance, Amodei said: “It is not illegal to put cameras around everywhere in public space and record every conversation. It’s a public space—you don’t have a right to privacy in a public space. But today, the government couldn’t record that all and make sense of it. With A.I., the ability to transcribe speech, to look through it, correlate it all, you could say: This person is a member of the opposition.”
Losing access to Claude would carry significant consequences for the Pentagon, as Axios noted. A Defense official said that “the other model companies are just behind” Claude in technological prowess. The heart of the disagreement remains murky: it has been suggested that Anthropic asked about Palantir’s involvement in recent U.S. military operations in Venezuela, though Anthropic disputes that its questions referred to current operations.
Yet the Pentagon’s representative implied a serious concern: that Anthropic might not approve of its technology’s involvement in controversial strikes. How does a tool built for writing software, like Claude Code, become entangled in the grim realities of warfare? As we navigate these troubling waters, the question remains: should AI companies prioritize public ethics over military contracts, and at what cost?