Anthropic CEO: Is Humanity Ready for Advanced AI?


The year is 1945. Robert Oppenheimer watches the Trinity test bloom over the New Mexico desert, a morbid understanding washing over him. Years later, haunted by his creation, he famously quoted the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” Are we at a similar precipice with AI?

Dario Amodei, the CEO of Anthropic (the company behind Claude), recently published a lengthy essay—nearly 40 pages—titled “The Adolescence of Technology.” In it, he lays out his concerns about the immense dangers lurking within the rapid development of superintelligence.

Make no mistake: Anthropic isn’t halting its AI development.

Amodei, known for his thoughtful essays, believes humanity is on the cusp of a transformative era, a “rite of passage, both turbulent and inevitable, which will test who we are as a species.” However, this era could be our last if things go awry. As he puts it, “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” He adds, chillingly, that “AI-enabled authoritarianism terrifies me.”

Side note: Anthropic once offered Claude to the Trump administration’s federal government for a symbolic $1 (€0.93) per year.

The Virologist in Your Pocket

Remember the 1995 sarin gas attack on the Tokyo subway, carried out by the Aum Shinrikyo religious movement? Fourteen people died; many more suffered. Amodei uses this as a stark example, suggesting that putting a “genius in everyone’s pocket” would drastically lower the barrier to entry for such attacks, potentially making them far deadlier.

“The disturbed loner who wants to kill people but lacks the discipline or skill to do so will now be elevated to the capability level of the PhD virologist, who is unlikely to have this motivation,” he writes. “I am worried there are potentially a large number of such people out there, and that if they have access to an easy way to kill millions of people, sooner or later one of them will do it.”

And, apropos of nothing, consider that one evaluation in Anthropic’s “System card” report for Claude Opus 4.5 involved tasking the model with assisting virologists in reconstructing a complex virus.

How close are we to true AI?

Amodei acknowledges the impressive recent gains in AI capabilities, and warns that if this pace continues, we’ll soon reach superintelligence. He notes the terminology shift away from “artificial general intelligence” (AGI), but the underlying concern remains: “If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything.”

A ‘Country of Geniuses’ Arrives

Imagine waking up to this headline: A new nation of hyper-intelligent beings has materialized. Amodei provides an analogy: “Suppose a literal ‘country of geniuses’ were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist,” he writes. “Suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this ‘country’ is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.”

Within that framework, Amodei asks what our biggest worries should be. He lists his own—“autonomy risks,” “misuse for destruction,” and “misuse for seizing power”—before concluding that any rational assessment would deem this new country “the single most serious national security threat we’ve faced in a century, possibly ever.”

Just remember: Anthropic is actively building that “country” in the analogy.

Is AI safety just ‘regulatory capture’?

Anthropic has been more vocal than most AI companies about the dangers of AI, and a proponent of increased regulation and consumer safeguards (whether you see this as genuine concern or a cynical bid for regulatory capture is your call, but they talk a good game). Yet they keep building the very technology they claim could spell disaster. This is where the paradox hits hardest: it’s like a firefighter who moonlights as an arsonist.

Will AI be accessible to everyone?

If there’s genuine worry that humanity isn’t ready for AI, should this technology be so easily accessible to the masses, especially when companies like Anthropic publicly tout their rising monthly active users?

Which raises the question: Is making powerful AI openly available creating a powder keg, just waiting for a spark?