The air in the auditorium crackled with anticipation as Jensen Huang stepped onto the stage, his signature leather jacket gleaming under the spotlight. He adjusted the microphone, a wry smile playing on his lips, and began to speak, not about teraflops or CUDA cores, but about… feelings. It was a plea, thinly veiled as a lecture, and it was directed at anyone who dared question the AI utopia he was building.
The Nvidia CEO, whose personal fortune has grown by almost $100 billion since the AI boom began, would prefer we all focus on the sunny side. As he sees it, all this talk about potential downsides? It’s a real buzzkill.
During an appearance on the No Priors podcast with Elad Gil and Sarah Guo, Huang went after the “doomers.” “[It’s] extremely hurtful, frankly, and I think we’ve done a lot of damage with very well-respected people who have painted a doomer narrative,” he said.
Huang argues that dwelling on existential risks might actually create problems. “It’s not helpful. It’s not helpful to people. It’s not helpful to the industry. It’s not helpful to society. It’s not helpful to the governments,” he declared. He seemed especially annoyed by industry peers asking the government for regulation. “You have to ask yourself, you know, what is the purpose of that narrative and what are their intentions,” he said. “Why are they talking to governments about these things to create regulations to suffocate startups?”
The Self-Serving Prophecy
Consider the lobbying numbers. According to the Wall Street Journal, Silicon Valley firms have poured over $100 million (€92.7 million) into Super PACs to promote pro-AI messaging ahead of the 2026 midterms. Huang isn’t wrong to suggest that regulatory capture is a concern: multi-billion-dollar entities might well try to entrench their position by influencing politicians.
There’s no question that some AI companies use fear to their advantage. Warning of danger positions them as guardians, implying that only they can be trusted with such powerful technology. It’s a classic sales play: create a problem, then sell the solution.
However, optimism alone doesn’t erase legitimate risks. “When 90% of the messaging is all around the end of the world and the pessimism, and I think we’re scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society,” Huang stated, without explaining how simply throwing more money at AI makes it intrinsically safer.
Can AI really solve climate change?
Some argue that AI could offer solutions to climate change, from optimizing energy grids to discovering new materials. Huang gestures at this possibility, but the claim is broad, and the reality is more complicated. AI’s environmental impact cuts both ways: training these models requires massive computational power, which in turn drives significant energy consumption. The paradox of using energy-intensive AI to solve energy problems demands careful evaluation.
The Missing Pieces
Huang doesn’t offer solutions for job displacement, which is accelerated by companies eager to adopt AI without considering the human cost. Nor does he address misinformation, or the mental health challenges amplified by AI; just last week I read a disturbing report on AI chatbots and the mental health crisis. We’re all participating in a giant, uncontrolled experiment.
Huang’s apparent answer is to accelerate development and hope that a future superintelligence will fix everything. If the doomers are accused of having a control agenda, it’s difficult not to see a similar motive in his stance: boosting his company’s profits. It’s a narrative as old as time: the hero who also benefits handsomely from saving the day.
What are the risks of unregulated AI?
Many experts express worry about bias in algorithms, the potential for misuse in surveillance, and the concentration of power in the hands of a few tech giants. Unregulated AI could lead to job losses, erode privacy, and exacerbate existing inequalities. The safeguards Huang dismisses are meant to address these very real concerns, to ensure AI benefits everyone, not just shareholders.
The Ulterior Motive
Think of it like this: Huang is the conductor of an orchestra, and the symphony is the relentless march of technological progress. Some musicians, though, are playing out of tune, and those sour notes are the ethical questions, the potential downsides, the uncomfortable truths that threaten to disrupt the harmony. His request for positivity is really a demand for unwavering loyalty to the score he’s written.
But sometimes, the most beautiful music comes from embracing dissonance. Maybe the “doomers” aren’t trying to destroy the symphony, but to enrich it.
How will AI impact the job market?
While AI may create new jobs, it will also automate many existing ones, displacing workers in the process. The open questions are whether the new opportunities will outweigh the losses, and whether displaced workers will have access to the training they need to adapt.
Huang may see AI as a rising tide that lifts all boats, but history shows that technological progress often creates winners and losers. And sometimes, the tide leaves behind a lot of wreckage. Is his plea for optimism a genuine belief in a brighter future, or a carefully constructed shield against any threat to his bottom line?