I walked into a soundstage that looked like a blank photograph: empty walls, a handful of lights, and a crew pretending a world existed beyond the plywood. You can feel the gamble: a Hollywood veteran betting on pixels instead of plane tickets. I want you to see why that bet reads as both a saving grace and a spectacle designed to be watched twice.
I’ve covered big-budget gambles and tech fads, and I’ll tell you what to watch for now that Doug Liman, Gal Gadot, Pete Davidson, and Casey Affleck are attached to Bitcoin: Killing Satoshi — a film being positioned as the first “studio-quality” feature built on near wall-to-wall AI imagery, according to The Wrap. You should know the players: Acme AI & FX, the visual-effects outfit producing the movie, built a custom stage and then shot the film in 20 days.
They shot in a bare room for 20 days — the set will be painted later, pixel by pixel
The production reportedly left the physical world almost entirely blank and intends to generate sets, lighting, and backgrounds in post. That’s a logistics pivot: instead of trucking crews to Antarctica or Vegas, you generate those places with machine learning models, neural rendering, and compositing pipelines built on tools like Unreal Engine, NVIDIA GPU farms, and proprietary Acme systems.
That workflow behaves like a stage magician swapping backdrops mid-act: the audience never sees the mechanics, only the reveal. It’s cheaper on travel and permits, but heavier on compute, model training, and careful artistic oversight.
And yes, the human element remains visible on the call sheet: the shoot used a cast of 107, roughly 100 shoot crew, and 54 non-shoot crew, while the planned 30-week post-production is slated to require 55 so-called “AI artists.” The producers lean on that headcount when they talk to the press, a clear salve for the industry’s uneasy feelings about generative tech.
Is this the first AI-generated feature film?
Short answer: it’s being billed as the first “studio-quality” feature to use pervasive AI imagery. Indie shorts and experimental films have leaned on generative methods before, but the claim here is scale and ambition: a tentpole-style title, one the producers say would have carried a theoretical $300 million (~€279 million) budget, reimagined with AI-heavy post work.
They quoted $300 million, then said AI cut it to $70 million — Hollywood budgeting is performative
Producer Ryan Kavanaugh told The Wrap the script contains “about 200 distinct locations,” which is the conventional Hollywood explanation for why a film would balloon to $300 million (~€279 million). The logic: shoot everywhere, pay for travel, build practical sets, insure stunts, hire local crews.
Using AI, Kavanaugh says, reduced the tab to $70 million (~€65 million). That’s significant but also a marketing claim: you could cut location costs just as well by faking a place with practical sets or shooting on a local backlot. What the savings here really do is shift budget into compute, specialized personnel, and long post schedules. The math is not magic; it’s reallocation.
This budget story works like a Trojan horse: the headline number drops, but the labor and hidden costs (data licensing, GPU time, render farms, model fine-tuning, legal clearance) often reappear under different line items.
How did AI cut production costs?
AI trimmed line items that usually explode: location fees, travel, physical set builds, and some practical effects. It also replaces portions of traditional VFX pipelines with generative tools—think image-to-image rendering, neural texture synthesis, and AI-driven relighting. But you don’t get a free lunch: you trade capex and crew travel for server racks, model licensing, and weeks of artisanal post work from “AI artists.”
They cast recognizable names but not the usual $300M headliners — the movie needs celebrity oxygen more than megabudget marquee billing
Gal Gadot was a headline choice after a rough showing in a live-action remake; Casey Affleck tends to anchor complex supporting work; Pete Davidson brings a certain irreverent media angle. The roster reads like a press-engine strategy: enough fame to sell coverage, not so much to swallow the AI headlines.
The publicity play is obvious: you promote the tech and let the cast supply human interest. Producers want the AI to be the star — the celebrity of novelty — while the people attached provide the familiar emotional center.
Will AI replace visual effects artists?
No one in the trenches expects an overnight replacement. Studios and vendors will adopt generative tools to automate repeatable tasks — background generation, initial matte work, or lighting passes — but VFX supervisors, compositors, and artists remain critical for oversight, creative direction, and legal vetting. Guilds and unions will shape how that labor is classified and compensated.
There are arguments about taste and integrity, about whether a film created this way will feel hollow or prove a new creative vocabulary. You have to decide whether you’re curious about the experiment or allergic to the method. Will the film make AI the tool that elevates storytelling, or the spectacle that distracts from it?