I was on a call when a product leader casually suggested we stop asking people and start asking their digital copies. You could feel the room tilt—half curiosity, half suspicion. By the end, everyone was asking if the future of polling was now a petri dish of simulated opinions.
I’ll walk you through what that looks like, what companies are claiming, and where the real risk lives. You should expect a mix of marketing muscle, academic experiments, and a few uncomfortable questions about what counts as “real” public opinion.
At CVS, an insights vice president asked simulated shoppers whether pet meds feel like a chore
Sri Narasimhan told the Wall Street Journal that Simile’s virtual people overturned a simple assumption: pet medication isn’t universally seen as a hassle. That is the kind of observation that sells a product redesign or a shelf shuffle—if you trust the signal.
Simile, the startup in question, recently raised $100 million (€92 million) from Index Ventures, according to the Journal. Its pitch is bold: train AI agents on chat-style interviews and behavioral data until each agent behaves like a digital twin of a real person. Then you can ask those twins anything, forever—no fatigue, no recruiting lag, no phone trees.
CVS plans to scale a roster to one hundred thousand simulated people and use them for things like store layouts and product placements. Simile also lists a Gallup partnership to model policy sentiment at scale, and its website promises “transparent, replicable, and empirically validated” outputs. Those are heavy promises, and they rest on two bets: that conversational models can be made faithful to human nuance, and that simulated responses map to real-world choices.
How does Simile generate “digital twins”?
According to co-founder Joon Park and a 2023 paper he co-authored, the process mixes interviews with real people, generative agents that carry persistent goals and memories, and behavioral datasets to ground the model. The paper describes an interactive sandbox inspired by The Sims, where agents roam, gossip, and make plans—then a researcher pokes them with questions and watches how they answer.
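The paper's core loop (a persona, a growing memory stream, and retrieval that grounds each answer) can be sketched in miniature. Everything below is an illustrative assumption: the class names, the keyword matcher, and the importance scores are mine, not Simile's code, and the real system layers an LLM plus recency and relevance scoring on top.

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    importance: float  # 0..1, how salient this memory is (invented scale)


@dataclass
class Agent:
    """Toy generative agent: a persona plus a memory stream.

    Illustrative sketch only. The 2023 generative-agents paper describes
    richer machinery (reflection, planning, an LLM producing the reply).
    """
    name: str
    persona: str
    memories: list[Memory] = field(default_factory=list)

    def observe(self, text: str, importance: float = 0.5) -> None:
        # New experiences accumulate in the memory stream.
        self.memories.append(Memory(text, importance))

    def retrieve(self, keyword: str, k: int = 3) -> list[str]:
        # Real systems score memories by recency, importance, and embedding
        # similarity; here we keyword-match and rank by importance alone.
        hits = [m for m in self.memories if keyword.lower() in m.text.lower()]
        hits.sort(key=lambda m: m.importance, reverse=True)
        return [m.text for m in hits[:k]]

    def answer(self, question: str, keyword: str) -> str:
        # A real agent would feed persona + retrieved memories into an LLM;
        # here we just surface the grounding context.
        context = "; ".join(self.retrieve(keyword)) or "no relevant memories"
        return f"{self.name} (grounded in: {context}) responds to: {question}"


agent = Agent("Pat", "a CVS shopper with two dogs")
agent.observe("Refilling the dog's heartworm meds was quick last month", 0.8)
agent.observe("Waited 20 minutes for a human prescription", 0.4)
print(agent.answer("Do pet meds feel like a chore?", "meds"))
```

The point of the structure is the grounding step: the agent answers from its own accumulated memories rather than from a blank prompt, which is what lets it "hold" an opinion across thousands of questions.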
On a simulated town’s grocery aisle, two virtual neighbors discussed a mayoral run
In the research paper, a short dialogue at a grocery store starts like a natural conversation: one agent tells another he’s running for mayor. That small scene is the selling point—agents that hold opinions, pursue desires, and talk without being scripted.
Those little scenes can be persuasive. If a thousand agents in a simulation agree that a product label matters, a marketer can call that a signal. But simulations can also echo their assumptions back to you—the model’s priors can become its propaganda. Like a hall of mirrors, the outputs may reflect the system’s biases as much as they reflect people’s minds.
Can AI replace human pollsters?
Short answer: not yet, and probably not fully. AI can scale interviews and surface patterns, but traditional polling still brings random sampling, response weighting, and human judgment about question framing. What Simile offers is speed and scale—imagine asking a thousand follow-ups in minutes—but speed doesn’t automatically mean accuracy.
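To make concrete what "response weighting" buys a traditional pollster, here is a toy post-stratification example. The age groups, population shares, and answers are all invented for illustration; real pollsters weight against census benchmarks across many more dimensions.

```python
# Toy post-stratification: reweight respondents so the sample's
# demographic mix matches known population shares. All numbers invented.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# This raw sample over-represents older respondents.
respondents = [
    {"age": "18-34", "answer": "yes"},
    {"age": "35-54", "answer": "yes"},
    {"age": "55+", "answer": "no"},
    {"age": "55+", "answer": "no"},
]

# Each group's weight = (share in population) / (share in sample).
sample_share = {g: sum(r["age"] == g for r in respondents) / len(respondents)
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

weighted_yes = sum(weights[r["age"]] for r in respondents if r["answer"] == "yes")
total_weight = sum(weights[r["age"]] for r in respondents)
print(f"weighted 'yes' share: {weighted_yes / total_weight:.2f}")
```

With these made-up numbers, weighting lifts the "yes" share from a raw 0.50 to 0.65, because younger respondents were under-sampled. A simulated panel that skips this correction inherits whatever demographic skew its training interviews had.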
Are AI-based polls accurate?
Accuracy depends on how the clones are built and validated. If the training interviews and behavior datasets are skewed, the simulated population will be skewed too. Simile says it cross-checks with real-world sentiment; Wall Street Journal reporting and Simile’s demos suggest promise, but independent validation from neutral bodies like Gallup or academic peers is the critical step most customers will want to see.
There’s also a practical question about incentives. A retailer might prefer a simulation that confirms product decisions; a policy shop might want a simulation that mirrors public skepticism. Those preferences shape how the model is used—and how comfortable you should be trusting the results.
The original inspiration is obvious: The Sims—a game built to mimic everyday life—appears in Simile’s academic papers as both metaphor and method. That is where trust frays: The Sims is playful, a toy; national polling is civic and commercial. Treating the two as interchangeable is risky.
Two more things to watch: transparency and auditability. Simile’s site shows a prompt window hinting at “What should I ask the group?” but the real test is whether external researchers can reproduce results or whether the company’s models remain a black box. If you can’t replicate a claim, it is as useful as a rumor.
I’m not saying simulated people should be ignored—there are moments when fast, cheap, iterative feedback is valuable. But if you start using simulated unanimity to make policy decisions or claim public mandates, you’ve moved from research into storytelling.
The promises from Simile, and the attention from Index Ventures and outlets like the Journal and Scientific American, mean you should pay attention. This is not just a technology shift; it’s a redefinition of what “public opinion” might mean when the public can be modeled and multiplied. And if those models begin to replace ballots, focus groups, or real-world trials, whose voices are being amplified—and whose are being erased?
If simulated consensus becomes easy to buy, will society trust the polls that still claim to speak for people?