The rumor arrived in my inbox on a Tuesday: Jensen Huang and Sam Altman were barely speaking. The mega-deal, the one that was supposed to solidify their dominance, was crumbling. What happens when the titans of tech start to turn on each other?
OpenAI and Nvidia, once the AI world's seemingly inseparable darlings, now appear to be at odds.
At the heart of this tension is a proposed $100 billion Nvidia investment in OpenAI, initially announced in September 2025. The plan involved Nvidia constructing 10 gigawatts of AI data centers for OpenAI, investing $100 billion (€92 billion) in the company across 10 stages, aligning with each gigawatt’s completion. OpenAI intended to use this massive funding to lease Nvidia’s advanced chips.
At the time, the announcement sparked concerns about circular dealmaking within the AI sector, a potentially fragile network of financial interdependencies reminiscent of the dotcom bubble. The fear: if one element fails or demand falls short, it could trigger a collapse of the entire structure.
The initial announcement stated that the first gigawatt of computing power would be online by the latter half of 2026, with further details to follow. However, an Nvidia SEC filing from November described the OpenAI investment merely as “a letter of intent with an opportunity to invest.”
Fast forward, and The Wall Street Journal reported that discussions hadn’t progressed significantly, with Nvidia’s CEO, Jensen Huang, allegedly expressing private criticisms about OpenAI’s perceived lack of business discipline. Huang has since reportedly emphasized that the $100 billion (€92 billion) agreement was non-binding and unfinalized.
Following this report, Huang attempted to reassure reporters, praising OpenAI and stating Nvidia would “absolutely be involved” in their next funding round before an anticipated IPO. He described the planned investment as potentially Nvidia’s largest but clarified it wouldn’t reach $100 billion (€92 billion).
Investor anxieties persisted, fueled by another anonymously sourced report. OpenAI reportedly wasn’t satisfied with the inference speed of Nvidia’s chips for certain ChatGPT requests. The company was exploring alternative chip providers, like Cerebras and Groq, to handle approximately 10% of its inference requirements, according to Reuters.
The report also alleged that OpenAI attributed some weaknesses in its AI coding assistant, Codex, to Nvidia’s hardware.
In response, OpenAI executives publicly praised Nvidia. CEO Sam Altman tweeted that Nvidia produces “the best AI chips in the world,” and infrastructure executive Sachin Katti affirmed Nvidia as OpenAI’s “most important partner for both training and inference.”
Inference, with its substantial memory demands, has seemingly become a major focus for Nvidia. As AI models mature and are deployed at scale, inference workloads begin to outweigh training workloads. The rise of agentic AI has also increased the volume of data AI systems handle during inference, further raising the premium on memory capacity.
Addressing these concerns, Nvidia acquired Groq, the AI chip startup reportedly considered by OpenAI, in its largest acquisition to date. Nvidia then introduced its new Rubin platform, highlighting its improvements in inference performance and memory bandwidth.
The Chessboard Tilts: Google Steps Up
I was at a tech conference last year when whispers about Google’s increasing competitiveness began to circulate. That competition became impossible to ignore late last year, when Google stepped up its challenge to both OpenAI and Nvidia, and it now appears to be a significant concern for both companies.
Google’s tensor processing units (TPUs) are custom AI chips designed for inference and, in some instances, considered superior to Nvidia’s GPUs. These TPUs support Google’s AI models and are utilized by OpenAI competitor Anthropic and potentially Meta.
The Wall Street Journal report also indicated Huang’s apprehension regarding the threat Google and Anthropic pose to OpenAI’s market leadership. Huang is reportedly concerned that any decline in OpenAI’s performance could negatively impact Nvidia’s sales, given OpenAI is a major customer.
OpenAI reportedly declared a “code red” in December after Google’s Gemini 3 was considered to outperform ChatGPT. OpenAI has also been actively scaling Codex to surpass Anthropic’s Claude Code.
Why is inference becoming more important than AI training?
Think of AI training as cramming for an exam and inference as applying that knowledge in the real world. As AI models mature, their ability to *apply* what they’ve learned becomes the real test. Inference, the process of using a trained model to make predictions or decisions, is where AI truly interacts with the world. The growing importance of inference stems from the increasing demand for real-time, responsive AI applications.
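To make the distinction concrete, here is a minimal, purely illustrative sketch (a toy linear model, not any real production system): training fits the model's parameters once, offline, while inference is the cheap per-request step of applying those fitted parameters to new inputs. All function and variable names here are hypothetical.

```python
# Toy illustration of the training/inference split.
# Training: fit y = w*x + b by stochastic gradient descent (done once, offline).
# Inference: apply the fitted (w, b) to each new input (done per request).

def train(samples, epochs=200, lr=0.05):
    """Fit a one-parameter linear model to (x, y) pairs by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y   # prediction error on this sample
            w -= lr * err * x       # gradient step for the weight
            b -= lr * err           # gradient step for the bias
    return w, b

def infer(model, x):
    """Apply the trained model to one new input -- the per-request work."""
    w, b = model
    return w * x + b

# Training happens once, up front (the "cramming for the exam")...
model = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # data follows y = 2x

# ...while inference runs for every user request thereafter (the "real test").
print(infer(model, 5.0))  # close to 10.0
```

The economics in the article follow from this asymmetry: training cost is paid once per model, but inference cost scales with every request served, which is why chip vendors now compete so hard on inference throughput and memory.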
What specific concerns does Jensen Huang have about OpenAI’s business approach?
I heard from an industry contact that Huang’s primary concern centers on what he sees as a lack of financial discipline at OpenAI. It’s like building a skyscraper on shaky foundations. He’s worried about the long-term sustainability of their business model, particularly their heavy reliance on massive capital expenditures and complex, interconnected deals.
How might competition from Google impact Nvidia and OpenAI’s relationship?
Imagine Nvidia and OpenAI as dance partners, suddenly finding another couple cutting in on their routine. The emergence of Google as a strong competitor introduces new dynamics. For Nvidia, it means OpenAI might start exploring alternative hardware solutions like Google’s TPUs, reducing their reliance on Nvidia’s chips. For OpenAI, Google’s advancements in AI models threaten their market dominance, potentially impacting their revenue and ability to invest in Nvidia’s infrastructure.
If these investor concerns materialize, and the deal doesn’t proceed as planned, the consequences would extend far beyond just OpenAI and Nvidia. The two companies are at the center of a complex network of AI deals, including a $300 billion (€276 billion) OpenAI-Oracle cloud agreement that dwarfs the Nvidia commitment. These deals have significantly boosted the American economy, and a failure in one could destabilize the entire structure.
The entire situation feels like a high-stakes poker game where everyone’s bluffing. Is this just a temporary standoff, or a sign of deeper cracks in the foundation of the AI boom?