Google DeepMind Invests in CCP Games to Explore AI-Powered EVE Online

I watched a fleet of players pause mid-battle as rumors spread across the EVE forums: an unfamiliar intelligence had been seen probing market orders. The chat thread spun from curiosity to suspicion in a single hour. You could feel the small panic that follows whenever something new touches a fragile, player-run system.

I’m going to walk you through what actually happened and why it matters to players, researchers, and the companies that bet on both. This isn’t corporate theater — it’s a handover of experimental ground and a test of trust.

Outside the studio in Reykjavik, someone still keeps an old star map pinned to the wall.

That framed map is a reminder: EVE Online is not a typical game. Launched in 2003, it sprawls across more than 7,000 star systems and runs on player-driven economics, politics, and warfare. A single market order can ripple into alliances declaring war; a coordinated bot can reshape trade routes.

Which is exactly why Google DeepMind has taken a minority stake in Fernris Creations (the studio formerly known as CCP Games). Fernris announced it had bought itself back from Pearl Abyss in a deal worth $120 million (€110 million) and simultaneously opened a research partnership with DeepMind that gives the AI lab a seat inside EVE’s complicated sandbox. The plan: run AI models against an offline copy of the universe on a local server, and research long-horizon planning, memory, and continual learning.

This is less about flashy demos and more about controlled experiments inside a complex, evolving system — a place where strategy, deception, and scale meet.

Why did Google DeepMind invest in the maker of EVE Online?

I asked the same question when the press release landed. The short answer: games are an efficient, measurable way to stress-test intelligence. Demis Hassabis, DeepMind’s CEO, framed it plainly — games offer environments with clear objectives, rules, and high-stakes variability. DeepMind has history here: AlphaGo toppled Lee Sedol in 2016, AlphaStar hit Grandmaster level in StarCraft II in 2019, and those projects taught the lab how to engineer planning and adaptation under pressure.

Fernris’s CEO, Hilmar Veigar Pétursson, called the partnership a next chapter — one aimed at understanding intelligence in systems where human choices drive outcomes. DeepMind will pay attention to long-horizon goals (plans that unfold over days or weeks), memory (what agents retain and recall), and continual learning (models that keep updating without erasing what they already know).

In forum threads and Discord servers, players often suspect anything new of being a bot.

That suspicion is healthy. Players depend on transparent rules because the game’s economy and politics are real to them. DeepMind’s work will happen on an offline server, which means models can be tested without touching live characters or live markets — at least at first.

Fernris and DeepMind say they’ll also explore AI-driven gameplay experiences. If they deliver, expect NPCs and systems that can plan over months, remember past interactions, and adapt as player behavior changes. This could reshape how corporations are formed and wars are waged inside EVE. The virtual universe becomes a laboratory, every experiment leaving a trace for researchers to analyze.

How will this affect EVE Online players and gameplay?

Short-term: very little will change for you if you’re logged into the live universe. The tests will run offline. Longer-term: the presence of advanced agents could introduce smarter NPCs, richer emergent stories, and tools for developers to prototype systemic changes faster. It might also raise legitimate concerns about surveillance, decision-making autonomy, and fairness.

Fernris and DeepMind emphasize safety and player-driven integrity. Hassabis has historically pointed to past milestones — AlphaGo, AlphaStar — as proof that games are a good proving ground for algorithms. OpenAI’s 2019 Dota 2 victory is another proof point; that project even resurfaced in legal filings this year in the OpenAI/Elon Musk saga, where Greg Brockman’s note about an AI victory prompted public reactions about the company’s next moves.

On the tech-watching desk, analysts have already started making lists of risks and opportunities.

The industry shift is obvious: AI labs want complex, realistic environments, and game studios want research money and technical partners. Pearl Abyss sold the studio back for $120 million (€110 million); DeepMind’s minority stake brings prestige and research horsepower. Names matter here — DeepMind, Demis Hassabis, Hilmar Veigar Pétursson, OpenAI, and even the players who run the markets — because each will be watched as this work unfolds.

There are questions you should expect to hear more about: data governance, model safety, consent, and how research results might be applied to live systems. Fernris says the partnership will advance research “safely inside a player-driven universe.” That sentence sounds reassuring until you probe for specifics — and you should probe.

Games have taught AI some of its biggest lessons, and now the reverse may be true: AI could teach games new kinds of social complexity and strategy. The stakes are social as much as technical — who benefits, who loses, and who gets to steer the experiments?

Is the gaming world ready to hand pieces of its living experiment to a research lab, even with safeguards in place?