AI Doomers Playing With Fire: Dangerous Rhetoric Turns Violent

A Molotov cocktail shattered the night outside Sam Altman’s home, and someone posted a manifesto online. I watched the footage and felt the room tilt: an argument about risk had become a live threat. You don’t need a PhD in AI to know this is where rhetoric becomes violence.

I’m going to be blunt with you: the people who built the most powerful models—Sam Altman at OpenAI, Dario Amodei at Anthropic, Elon Musk at xAI—have spent years telling the public that their toys could end the world. Then they sold those toys to governments and corporations while asking everyone to calm down. That contradiction is a tinderbox.

The attack on a CEO’s home was real.

A Molotov cocktail was allegedly thrown at Sam Altman’s house, and the suspect carried an anti-AI document, according to police. What happened that night opens a story we should have been writing for years: public alarm about existential risk hardening into action.

You’ve seen the headlines: charges filed, a second shooting near Altman’s home under investigation, suspects released. Chris Lehane, OpenAI’s policy chief, scolded critics for “irresponsible” rhetoric after the attack. And yes, rhetoric matters. But you can’t simply tell a worried public to calm down when the executives who stoked that fear once used the word “extinction.”

I’ve listened to Sam Altman say things on podcasts and in testimony: phrases like “we lose control” and warnings about AI designing biological pathogens. Dario Amodei has warned about “almost unimaginable power” landing in human hands. Those are not sound bites you whisper to the trade press; they are emergency signals.

Is AI a threat to humanity?

People ask this because they saw a CEO describe potential extinction on national stages. The honest answer is that some senior figures at leading labs have framed the technology as having catastrophic risk. That admission should trigger public oversight, not private heroics. You should ask: who decides when a model is too dangerous to run, and why are those decisions concentrated in a handful of companies?

Executives once preached apocalypse and then sold reassurance.

OpenAI’s chief executive testified before Congress that the stakes were existential. Then the same leaders lobbied for government contracts and defense work. That tension is not just hypocritical; it shapes incentives.

When companies warn of doom, they gain influence. They win access to policymakers, secure research funding, and position themselves as the only ones capable of managing the risk. You can think of it like a live wire sparking in a crowded room: the warning gives them cover while they rewire the market to their advantage.

Lehane’s framing, which splits the world into believers in abundance and “doomers,” is a classic market move: make the undecided buy your product because only you can avert catastrophe. But the sales pitch ignores a moral question: if you’ve created a tool you say could end life, what obligations follow? Regulatory handshakes and PR videos aren’t answers.

Will AI take jobs?

Yes, and companies are already using it as a justification for layoffs. When you see corporate memos blaming “efficiencies” on machine learning, treat that as a labor-policy problem, not a technical inevitability. Elon Musk tweets about universal high income as a cure, while his actions in government have cut aid and hollowed out public services. That contradiction tells you everything about how the ruling class expects to distribute risk and reward.

Job cuts and existential talk collide in the workplace.

Companies from startups to Fortune 500s have cited AI in recent rounds of layoffs. That’s a real, immediate harm people live with.

AI’s capacity to write, analyze, and automate white-collar tasks means displacement is coming faster than many officials acknowledge. The CEOs who promise abundance to the public are often the same people who pitch their boards on efficiency, then lobby governments for light regulation. You end up with a house of cards built on optimism and strategic ambiguity.

That dynamic creates two overlapping crises: people lose work, and public trust erodes. When trust drains away, fear fills the space—and fear is what drives someone to carry a Molotov and a manifesto to a CEO’s front door.

What should be regulated in AI?

Regulators should focus on access, misuse, and concentration of power—who can run what model, for what purpose, and under what safeguards. You don’t need to defer to any single company’s ethics board to see the need for independent audits, red-team mandates, and limits on dual-use capabilities like automated biological design.

The power to decide who wins and who loses is concentrating in the hands of an unelected few.

OpenAI, Anthropic, xAI, and the handful of other firms running the largest models hold extraordinary leverage over labor markets, national security, and cultural narratives. That’s an observable shift in how decisions are made.

I’ve covered tech for years, and I’ve watched narratives change to fit incentives: from alarm to reassurance to sales pitch. You should be suspicious when the people who warned about annihilation now tell you to accept widespread job disruption because their product is inevitable.

We need public institutions with teeth. Not marketing teams with better language. Not philanthropy dressed as regulation. Real accountability means independent oversight, transparent incident reporting, and limits on what any single private lab can deploy without wider consent.

People on all sides will line up behind comforting frames—the prosperity crowd will promise utopias, the doom crowd will promise destruction. I don’t want to be fatalistic. I want you to be fierce in asking who benefits and who pays.

For years, dangerous rhetoric has been out of control. And now it’s starting to turn violent. What will you do about it?