I was in the audience when Jensen Huang stopped, smiled, and said Nvidia was aiming for a number so large it reset the room’s gravity. You could feel the math rearrange itself in real time: $1 trillion (≈€920 billion) by 2027, a number that climbed from last year’s $500 billion (≈€460 billion) promise. I want you to keep two things in mind as we walk through what that pledge actually means for hardware, software and your inbox.
Huang sold a future by the decimal — and the room heard the engine start
The real-world observation: an audience of engineers and investors went quiet when a trillion-dollar number was spoken aloud.
I’ve seen keynote theater before; this was different. Huang framed that $1 trillion (≈€920 billion) as the necessary scale for what he called an agentic future where AI agents do more than assist — they act. That confidence is not idle bravado. It is a business thesis: as agentic models proliferate, inference — the cost and throughput of running models — becomes the revenue lever.
Huang argued the inflection arrived with tools such as Anthropic’s Claude Code. If engineers at Nvidia are now consistently working with AI helpers, the company reasons, the rest of the world will pay trillions of dollars for infrastructure to support those helpers.
How much revenue will Nvidia generate from agentic AI?
Nvidia predicts $1 trillion (≈€920 billion) in revenue driven by its Blackwell and Vera Rubin platforms through 2027, up from a prior target of $500 billion (≈€460 billion) through 2026. That jump is less a magical forecast and more a bet on how often companies will run inference — the repeated, ongoing cost of AI agents — versus one-time training events.
OpenClaw is viral on one side of the room and a security alarm on the other
The real-world observation: companies and governments publicly warned staff about OpenClaw even as developers celebrated its speed and utility.
You’ve seen the headlines: OpenClaw went viral as an agent platform that can read your files and act across apps. Huang called its open-source code “the operating system of agentic computers.” I call it what it is: a catalytic piece of software with the raw power to speed workflows and the permissions to wreak havoc. OpenClaw is a Trojan horse — and Nvidia’s response was to offer a locked-up variant, NemoClaw, pitched as safer for enterprises.
What is OpenClaw and is it secure?
OpenClaw grants an agent full access to a machine and its files. That permissions model is why major tech companies and regulators, even some in China, advised caution. There are real incidents: a Meta executive reportedly lost an inbox to an agent’s actions. Nvidia’s answer is NemoClaw, framed as an enterprise-friendly layer over the same ideas, and a clear signal Nvidia wants to be the company people choose when they decide agents are worth trusting.
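The gap between full access and enterprise-friendly access can be made concrete. Here is a minimal sketch of a scoped file-access layer for an agent: the class, method names, and policy are hypothetical illustrations of an allow-list sandbox, not NemoClaw’s actual API.

```python
from pathlib import Path

class ScopedFileAccess:
    """Hypothetical sandbox: an agent may only read files inside
    explicitly granted directories, instead of the whole machine."""

    def __init__(self, allowed_dirs):
        # Normalize the allow-list once, at grant time.
        self.allowed = [Path(d).resolve() for d in allowed_dirs]

    def _is_allowed(self, path):
        # Resolve the target so ../ tricks cannot escape the sandbox.
        target = Path(path).resolve()
        return any(target.is_relative_to(root) for root in self.allowed)

    def read(self, path):
        if not self._is_allowed(path):
            raise PermissionError(f"agent denied access outside sandbox: {path}")
        return Path(path).read_text()

# An agent granted only its workspace cannot touch system files.
fs = ScopedFileAccess(["/srv/agent/workspace"])
```

The design choice is the auditable one the article’s questions point at: every path an agent touches passes through a single checkpoint, so “who audits an agent with full system privileges” becomes a log of allow-list decisions rather than an open question.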
Silicon is moving where margins live: inference, not training
The real-world observation: Nvidia bought Groq, announced its own Groq-based chips, and promised new CPUs and space-ready hardware.
Huang said the company will ship the first Groq chips in the second half of the year and is pushing into CPUs and a Vera Rubin computer for space-based AI centers. This is not incremental: it’s an attempt to own the slow, persistent cost centers of AI — the inference loops that run agents thousands or millions of times. Agentic AI is a locomotive pulling vast amounts of data and compute; if you control the rails, you collect the tolls.
Why is inference becoming more important than training?
Training creates models. Inference is the repeated, operational cost of running those models in production. As agents multiply across apps and workflows, inference becomes the revenue stream — predictable, constant, and enormous. That shift explains Nvidia’s hires, acquisitions, and the pivot toward inference-optimized chips and partnerships.
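The arithmetic behind that shift is simple enough to sketch. The numbers below are illustrative assumptions, not Nvidia figures: the point is only that a one-time training bill is overtaken quickly once agents run at volume.

```python
# Back-of-envelope model: one-time training cost vs. ongoing inference spend.
# All three constants are assumptions chosen for illustration.
TRAIN_COST = 50_000_000        # USD, one-time cost to train a model (assumed)
COST_PER_CALL = 0.002          # USD per inference call (assumed)
CALLS_PER_DAY = 100_000_000    # agent invocations per day at scale (assumed)

def days_until_inference_exceeds_training(train_cost, cost_per_call, calls_per_day):
    """Days of operation before cumulative inference spend
    passes the one-time training bill."""
    daily_inference_spend = cost_per_call * calls_per_day
    return train_cost / daily_inference_spend

days = days_until_inference_exceeds_training(TRAIN_COST, COST_PER_CALL, CALLS_PER_DAY)
print(f"Inference spend overtakes training after ~{days:.0f} days")
```

Under these assumed numbers, inference spend passes the training bill in under a year, and it keeps accruing every day after that, which is the "predictable, constant, and enormous" revenue stream the article describes.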
The market smelled both promise and fatigue
The real-world observation: shares dipped after the keynote despite the grandeur of the announcements.
Wall Street has cooled on headline-scale AI spending. Investors push back when Nvidia’s earnings run hot but forward growth looks harder to accelerate. After a strong earnings report, shares fell 5.5% the next day, and slipped further following the GTC keynote. Huang’s rhetoric moves markets, but the market is also demanding proof that trillion-dollar infrastructure will see steady, measurable utilization.
That skepticism matters. Nvidia’s pitch is that hardware plus open-source software will make the world dependent on its stack. It’s a plausible play: partner with OpenClaw projects, ship Groq silicon, build space compute, and sign robotaxi OEM deals with Hyundai, Nissan, BYD and Geely that target 18 million robotaxis per year. But each step requires trust, and trust costs something that is not counted on balance sheets: time and security assurances.
What I’d watch next
The real-world observation: product launches are promises; adoption writes the receipts.
If you follow AI infrastructure, watch three things: adoption rates for agentic workflows inside engineering teams, the traction NemoClaw gets among regulated enterprises, and the margins on inference hardware as chip volume grows. Nvidia wants you to accept a future where every SaaS company becomes an agentic service provider. That’s a plain business case and a cultural wager rolled into one.
Huang painted a future where agents are everywhere, but the questions are the ones you should be asking: who audits an agent with full system privileges, who pays for a trillion dollars of infrastructure, and who bears the fallout when an agent deletes an inbox — intentionally or not?
Which side of that bet do you want to be on?