Zuckerberg’s Hiring Spree Stumbles as Meta Delays AI Rollout

I was on a call when the slide deck switched from triumph to a single, silent graph: the new model had lagged behind a competitor. You could feel the room go quiet even through the phone—sudden doubt where there had been certainty. A selfie of Mark Zuckerberg at HQ would appear online hours later, like a staged exhale.

I follow these product sprints because you notice the small things first: the hiring contracts, the stake purchases, the line items in a quarterly filing. Meta poured money into infrastructure—$600 billion (€558 billion) pledged for U.S. AI build-out by 2028—and then doubled down with headline salaries and exotic stock packages. The company expected momentum; what arrived instead was a delay.

The report from inside the test lab smelled like burnt coffee on a busy morning.

People briefed to The New York Times say the model, code-named Avocado, outperformed Meta’s prior release and even beat Google’s Gemini 2.5 in internal benchmarks. But when engineers stacked Avocado against Gemini 3.0, OpenAI’s latest, and Anthropic’s offerings, it came up short on reasoning, coding, and writing—core capabilities for any foundational model intended to power Facebook, Instagram, WhatsApp, and Threads.

That shortfall pushed Meta to delay a planned March rollout until at least May. Executives have quietly floated a contingency: temporarily routing product-level AI calls to Google’s Gemini models while they polish Avocado’s performance.

Why is Meta delaying its AI model?

You should see this as a product decision, not simply a PR stumble. Meta’s leadership tied enormous capital and public expectations to a progression of models that must show measurable leaps. When internal benchmarks fail to match rivals, the choice is stark: ship something that underperforms users’ expectations or pull back and rebuild.

The PR around the delay read like a well-rehearsed line in an earnings call.

Meta framed the pause as part of a rapid but steady release cadence. Mark Zuckerberg told investors earlier that the next model would “show the rapid trajectory we’re on,” and the company repeated that message to reporters. That script buys time, but it also raises the question of whether the money—$115 billion to $135 billion projected in capital expenditures for 2026 (€107 billion–€125 billion)—is buying speed or simply more iterations.

Money drew people. Meta chased talent with enormous packages—some reports put certain offers near $100 million (€93 million), and Andrew Tulloch’s deal is rumored to include as much as $1.5 billion (€1.4 billion) in incentives tied to long-term performance. Alexandr Wang of Scale AI took a publicized role, and Meta purchased a 49% stake in Scale, folding Wang into TBD Lab, the unit charged with building models like Avocado.

Will Meta use Google’s Gemini models?

Yes, but cautiously. Meta engineers have discussed plugging Gemini into product stacks temporarily. That’s a pragmatic move: if Gemini can carry chat, coding, or moderation tasks while internal work continues, user-facing features stay responsive. It also signals a new reality: the biggest AI companies may increasingly rely on each other’s best-in-class components instead of only their own.

The talent roster looked like a parade of hiring announcements across winter and spring.

Recruiting splashes—public hires from Thinking Machines Lab, Scale AI, and other top teams—created a perception of inevitability. Yet personnel moves don’t instantly translate to a better model. Building a foundation model is technical craft, not merely an acquisition ledger. The tension you feel now is between payroll numbers and product readiness.

How much did Meta spend on AI hiring?

Reported signings and retention packages read like venture-era headlines: offers in the tens to hundreds of millions, with at least one package reported near $1.5 billion (€1.4 billion) tied to stock and performance. Those figures are real and meant to sway talent away from OpenAI, Google, Anthropic, and startups. But salary alone can’t fix model weaknesses discovered in bench tests.

I want to be blunt: you shouldn’t read this as a single failure. You should read it as an inflection point. Meta bet big on infrastructure and people—the capital and the headlines are in place—but model releases are granular, iterative, and unforgiving when measured against rivals that launched steady upgrades like Gemini 3.0 and OpenAI’s roadmap.

Two scenes stick with me. One is a lab where the whiteboard is an argument map of hallucination cases and edge prompts; the other is a quiet compensation meeting where dollars are discussed as if they were development time. The company is a pressure cooker, and sometimes a high-stakes poker table: the next play matters more than the last reveal.

So what happens next? Meta can keep iterating on Avocado, lean on Gemini where useful, or accelerate acquisitions and partnerships. You should watch product behavior—how well moderation, search, and Threads respond—more than corporate statements.

If billions in infrastructure spending and marquee hires still leave a model behind competitors, what does that say about the real limits of money in AI development?