I was on a late-night scroll through policy changes when a tiny edit made my stomach drop. An old prohibition—a clear ban on military uses—was gone, like someone erasing a line from a rulebook. Minutes later, contract notices and public statements began to stitch a very different story.
At a Pentagon workstation, an Azure login lit up — then questions followed
You can imagine the moment: a console, an account token, a model answering prompts in an environment where it was never supposed to run. Wired reported that the Defense Department had been trying out Microsoft's Azure OpenAI service in 2023, and a Microsoft spokesperson later confirmed Azure OpenAI became available to the U.S. government that year. The company also said the service wasn't approved for "top secret" workloads until 2025.
I want you to hold two facts close: the Pentagon was experimenting with models via Microsoft, and OpenAI at the time still carried public language against military use. That gap is where the headline grows legs.
Did the Pentagon use OpenAI models?
Short answer: reportedly. Sources cited by Wired say the Defense Department tested Azure OpenAI in 2023. Microsoft confirmed the platform's availability to government customers that year but stopped short of confirming a timeline tying specific tests to Pentagon programs. The platform's clearance for the highest classification levels didn't arrive until 2025.
A public policy once read like a clear red line — then it faded
OpenAI’s usage rules once listed a ban on “activity that has high risk of physical harm,” explicitly calling out “weapons development” and “the military and warfare.” In January 2024, the company removed the blanket ban on “military and warfare,” a move The Intercept flagged at the time.
That quiet edit cleared the path for OpenAI’s own government push. The company launched OpenAI for Government and announced a pilot with the Department of Defense’s CDAO capped at $200 million (€184 million). Later, OpenAI made a customized ChatGPT available on the Defense Department’s unclassified GenAI.mil platform alongside competitors like xAI and Google’s Gemini.
How did Microsoft provide OpenAI models to the Pentagon?
Microsoft’s Azure OpenAI service packages access to models through its cloud platform. Per Microsoft, Azure OpenAI became available to U.S. government customers in 2023 and was governed by Microsoft’s own terms. That setup can create intermediated access: the Pentagon talks to Microsoft, Microsoft brokers model access — and that is where critics see a workaround to OpenAI’s earlier ban.
Think of it like a bridge with a gate: the gate belonged to one company, but the bridge still carried traffic.
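To make the "bridge with a gate" concrete, here is a minimal sketch of what brokered access looks like at the wire level: a client never talks to api.openai.com; its requests terminate at a Microsoft-operated Azure endpoint, which fronts the model. The endpoint hostname, deployment name, and API version below are hypothetical placeholders, not details from any Pentagon deployment.

```python
# Sketch of intermediated access via Azure OpenAI: the request path
# routes through Microsoft's cloud, not OpenAI's own domain.
# All names below are hypothetical placeholders.
from urllib.parse import urlparse

AZURE_ENDPOINT = "https://example-gov.openai.azure.us"  # hypothetical government-cloud endpoint
DEPLOYMENT = "gpt-4-deployment"                         # hypothetical model deployment name
API_VERSION = "2024-02-01"

def chat_completions_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build an Azure OpenAI chat-completions URL. Azure addresses models
    by customer-named deployment rather than by raw model name."""
    return (f"{endpoint}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

url = chat_completions_url(AZURE_ENDPOINT, DEPLOYMENT, API_VERSION)
host = urlparse(url).netloc
print(host)  # a Microsoft-operated domain: the gate belongs to one company
```

The point of the sketch is the hostname: every prompt and response transits infrastructure Microsoft operates under Microsoft's terms, which is exactly where critics located the workaround to OpenAI's earlier usage policy.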
A negotiation that turned public in hours — and sharpened loyalties
A conference call or a legal memo can be private; what happened here went public fast. Anthropic had been the only firm cleared to operate in the military’s classified systems until negotiations with the Defense Department broke down. Defense Secretary Pete Hegseth said the talks failed and directed the Pentagon to label Anthropic a supply-chain risk — a designation meant to sever commercial ties between Anthropic and military contractors.
Hours after that designation, OpenAI announced it had reached an agreement to operate in classified environments. Sam Altman admitted the timing looked “opportunistic and sloppy.”
Is OpenAI allowed to work with the military?
Legally, companies can sell services to federal agencies so long as they meet procurement rules and security clearances. Policy choices are different: OpenAI’s earlier public restrictions reflected internal guardrails. Those rules changed, and the company pursued a government program and a pilot with the DoD, then moved into classified discussions. That sequence matters more for trust than for legality.
Anthropic pushed back on demands from the Pentagon to allow its models to be used for “all lawful purposes,” seeking contractual guardrails that would forbid domestic surveillance and autonomous weapons use. After the breakdown, Anthropic’s CEO Dario Amodei issued an apology for a leaked memo and said the company would challenge the designation in court. Microsoft said it would continue offering Anthropic’s products to clients despite the Pentagon’s move.
A few stray threads can unravel policy fast
Small edits, private tests through a partner, a public spat over supply-chain risk — each element changes the fabric of how AI and national security fit together. OpenAI’s removal of its military ban, Microsoft’s Azure OpenAI availability, and the rapid pivot to a classified contract created a rush that looked, to some, opportunistic.
One image sticks: a curtain pulled back onstage to reveal a new actor who wasn’t on the playbill. That jolt is what keeps me watching this story.
There are real stakes here: governance, chain-of-custody for models, commercial influence on warfighting tools. You and I can argue the ethics, the procurement logic, or the political theater. But the core question remains: who gets to set the rules when cutting-edge commercial AI meets national security imperatives?