They Want to Connect AI to Nuclear Weapons: A Terrifying Proposal

I watched an email trail go dead and felt the floor drop out beneath a national lawsuit. You and I are supposed to trust the people asking to wire artificial intelligence into weapons of mass destruction, yet they can’t hit a single inbox. The mistake was small; the consequence was enormous.

On a humid Oklahoma afternoon, the Justice Department began emailing the wrong address.

I read the emails Democracy Docket obtained and felt the absurdity sharpen into real-world danger. The DOJ’s Voting Section, led in part by acting chief Eric Neff, asked for Oklahoma’s statewide voter rolls — then addressed its request to the secretary of the State Election Board, Paul Ziriax, rather than the Secretary of State. The follow-up emails went to a misspelled version of the intended address, a small typo away from the correct one.

That typo is more than a clerical hiccup; it is the hinge of a federal suit naming Oklahoma, 29 other states, and D.C. for allegedly refusing to produce voter rolls. The image of the DOJ, the country’s chief law enforcement agency, building a narrative of noncompliance while its own messages never landed feels oddly like a GPS locked to the wrong map.

How do email errors affect legal cases?

They can unravel evidence chains, delay discovery, and create the appearance of willful obstruction even when none exists. In this case, unanswered emails became part of the DOJ’s justification for litigation — and Oklahoma officials never saw the messages because of a simple misspelling. That gap will live in court filings, press cycles, and public memory.
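The failure mode here is mundane and preventable: a one-character typo in a recipient domain. As a minimal sketch of the kind of guardrail that catches it, the check below validates a recipient's domain against an allowlist before a message is sent. The domains and function name are illustrative placeholders of my own, not the actual (redacted) addresses from the case or any system the DOJ uses.

```python
# Hypothetical pre-send check: reject recipients whose domain is not on
# a known-good allowlist. Domains below are invented examples.
ALLOWED_DOMAINS = {"elections.ok.gov", "sos.ok.gov"}  # assumed placeholders

def recipient_ok(address: str) -> bool:
    """Return True only if the address's domain is on the allowlist."""
    try:
        _local, domain = address.rsplit("@", 1)
    except ValueError:
        # No "@" at all: malformed address, refuse to send.
        return False
    return domain.lower() in ALLOWED_DOMAINS

print(recipient_ok("clerk@elections.ok.gov"))  # valid domain -> True
print(recipient_ok("clerk@election.ok.gov"))   # one-letter typo -> False
```

A check this small would not fix institutional habits, but it shows how cheaply a misaddressed official request can be caught before it silently vanishes.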

At a small deposition room, a former DOGE employee described emailing documents to himself.

I sat through clips where Nathan Cavanaugh explained how he routed official files to his personal device, then passed them along via Signal to DOGE lead Steve Davis because, he said, “there was no other way.” You can hear the resignation in his voice: insecure workflows patched together outside official systems.

This is not just sloppy. It’s a pattern. When officials default to ad-hoc methods rather than secure, audited platforms — whether Signal for chat or personal Gmail for file transfer — accountability evaporates. I want you to picture agencies leaning on third-party tools like OpenAI for decision support and companies like Palantir for data aggregation, while everyday communication looks amateur-hour.

Can the government be trusted with AI weapons?

When the Pentagon asks about AI-assisted targeting and news outlets report Democrats demanding oversight of AI in lethal decisions, you should ask questions. If staffers struggle with basic digital hygiene — emailing themselves documents and misaddressing official requests — handing them systems that can select targets, automate strikes, or influence millions of votes is reckless. The misinformation risk scales with the tech: small errors can produce catastrophic outcomes, turning a misrouted email into a much larger moral and operational failure.

In court filings, the DOJ named states for failing to produce voter rolls — while its own paper trail had gaps.

I followed the Justice Department suit and the reporting from Democracy Docket and NBC News, and the narrative didn’t match the mechanics. The DOJ insists it sought compliance; Oklahoma says it never received adequate requests. That contradiction matters.

You should care because this is the same crew pushing for broad latitude to integrate artificial intelligence across government functions. They want algorithms steering war machines, informing prosecutions, and shaping elections — yet they trip over email addresses and prefer ad hoc workarounds to hardened systems. That mismatch is like a house of cards: impressive until the first breath.

Why did the DOJ sue states over voter rolls?

The DOJ argued it needed state voter registration lists to investigate alleged irregularities tied to the 2020 election. When requests went unanswered or were routed incorrectly, the department escalated to litigation, naming states and D.C. for failure to comply. Whether the suit holds up will depend on proof of proper outreach, timeliness, and authority — all of which are muddied by the newly released emails.

I’ve worked sources and court dockets long enough to know one thing: competence matters as much as authority. You don’t hand a scalpel to someone who can’t keep a patient chart straight, and you don’t hand a nuclear control loop to a team that can’t aim an email. So tell me — if they can’t manage basic digital hygiene, should they be trusted to plug AI into our most destructive systems?