It landed in my feed at 2:14 a.m.: a grainy clip of Jeffrey Epstein and Donald Trump laughing together, then a cut to Elon Musk and Bill Gates in the same frame, all labeled “not fake.” My pulse quickened. By morning the clip had millions of views and a dozen bot accounts had stitched it into the wider story.
I’ve chased disinformation before, and you should treat this story the way I do: as a live lead. The signal is messy, but the pattern is clear. I’ll show you what the Washington Post found, why these AI fakes matter even when real evidence exists, and what the platforms are getting wrong.
On social feeds: I watched a clip go viral, then rot into copies
Observation: The Washington Post identified accounts on X — HDX News and GPX News — that pushed AI-generated videos pairing Epstein and Trump.
Those clips weren’t just creative edits. They were engineered to look like documentary proof and to spread fast. One caption read: “This video is not fake. These pedophile perverts started a war so that this wouldn’t be talked about.” That line works as both accusation and inoculation: it tries to pre-empt scrutiny while stoking outrage.
Why this matters: fake visuals do more than lie — they seed doubt about the real archives already public. There are authentic photos, videos, and official documents tying Trump to Epstein; the fakes create noise that can make people question everything at once.
Are these videos real?
No. Independent researchers and reporters traced the most viral clips to AI generation and bot amplification. The Post found that several accounts pushing the videos were aligned with pro-Iran messaging; many were suspended only after the newspaper reached out to X.
Hamaad حماد (@ashrafhamaad), March 3, 2026: “This video is not fake. These pedophile perverts started a war so that this wouldn’t be talked about.” pic.twitter.com/bYLDn970Fq
At the account level: I tracked patterns that point to a messaging agenda
Observation: The two named X accounts racked up millions of views, and a wave of copycat bot accounts reposted the same material.
The pattern looked coordinated: identical captions, reuploads within minutes, and a mix of other content pushing narratives favorable to Iran’s stance. The Post’s researchers found that nine of the 15 accounts were “verified.” That badge is no mark of journalistic credibility; it is proof of how the blue check can be weaponized now that Elon Musk has turned verification into a subscription costing around $8 (€7) per month, a change that lets bad actors appear official.
Why this matters: coordinated amplification turns a single synthetic clip into perceived evidence. Networks of bots and paid accounts don’t need to be state-funded to be useful to a foreign messaging campaign; they only need scale and timing.
Why would pro-Iran accounts push these fakes?
Short answer: to magnify political chaos and erode trust. Whether these accounts are directly financed by Tehran or opportunistic proxies, the effect is the same: amplify mistrust about Western leaders, muddy competing scandals, and force newsrooms and platforms into a defensive posture.
On platforms: I logged promises, then found slow responses
Observation: Platforms publicly vowed to clamp down on AI fakes as the war escalated, but removals lagged.
X, Instagram, and TikTok all reported surges of AI-generated war footage after Feb. 28. Yet the specific accounts the Post flagged weren’t suspended until reporters asked questions. That delay matters: a lie that circulates for hours or days becomes harder to correct once it accrues engagement and shares.
Platforms have tools — content ID, provenance labels, human review teams, and API signals — but they’re always racing. The problem is less technical than psychological: users treat moving images as proof, and social systems reward the sensational. The result is a feedback loop that magnifies fakes faster than takedown systems can respond.
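To make one of those tools concrete: “content ID” systems rest on perceptual fingerprinting, where near-duplicate frames hash to nearby values even after cropping and re-encoding. Below is a minimal sketch in Python using the open-source Pillow and imagehash libraries; the stored hash value and the distance threshold are illustrative assumptions, not anything a platform has published.

```python
# A sketch of the "content ID" idea: fingerprint a frame with a
# perceptual hash and compare it against hashes of frames that were
# already flagged as synthetic. Hash value and threshold are made up.
from PIL import Image
import imagehash

# Hypothetical 64-bit pHash of a frame from a known AI-generated clip.
KNOWN_FAKE_HASHES = [imagehash.hex_to_hash("d1d1d1d1e0e0e0e0")]

def matches_known_fake(frame_path: str, max_distance: int = 8) -> bool:
    """True if the frame's perceptual hash lies within max_distance
    bits (Hamming distance) of any previously flagged frame."""
    candidate = imagehash.phash(Image.open(frame_path))
    return any(candidate - known <= max_distance
               for known in KNOWN_FAKE_HASHES)
```

The design point is that perceptual hashes survive the small edits reuploaders make, which is why copycat waves are catchable in principle; the real gap is speed, not capability.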
How can platforms stop this without breaking speech?
They need three things working in lockstep: better provenance metadata baked into uploads, rapid human review for high-velocity clips, and friction for purchased verification. Expect more partnerships between researchers, outlets like the Washington Post, and platforms — because outside scrutiny has been the most effective force in spotting coordinated campaigns so far.
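On the provenance piece: the emerging industry standard is C2PA, a signed manifest embedded in the file at capture or edit time. Verifying real signatures takes a dedicated library, so as a rough stand-in, here is a sketch (again Python, assuming Pillow) that only reports which capture metadata survives in an image at all; most platform re-encodes strip everything, which is itself a weak signal worth noting.

```python
# A rough stand-in for provenance checking. Real provenance means
# verifying a signed C2PA manifest; this only lists whatever EXIF
# tags survive in a file, since most social re-encodes strip them.
from PIL import Image
from PIL.ExifTags import TAGS

def surviving_metadata(path: str) -> dict:
    """Map human-readable EXIF tag names to values for whatever
    metadata the file still carries (often nothing after an upload)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

# An empty result does not prove a fake, but a clip presented as raw
# camera footage that carries zero metadata deserves extra scrutiny.
print(surviving_metadata("suspicious_frame.jpg"))
```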
In the wild: I tracked a clip that mixed real names with synthetic faces
Observation: One popular viral edit blended AI-generated Epstein-Trump footage with fake appearances of Elon Musk, Bill Gates, and an AI-generated Diddy.
That kind of composite is designed to pull multiple reputations into one emotional knot. It’s a form of associative contagion: if you can link powerful names to salacious acts in a short video, the clip hijacks existing suspicions and amplifies them.
The psychology is simple and ugly — outrage travels faster than nuance. The clip’s makers are counting on that speed: they want the outrage, not the explanation. The clip acts like a stage magician, misdirecting attention so the method remains hidden. And once a lie is retweeted thousands of times, it can feel, to casual viewers, like part of the record.
I want to be clear: there is real, documented material on Epstein and his ties to powerful people. The danger isn’t that the fakes invent a story out of nothing; it’s that they drown the facts under a flood of noise so people give up trying to sort truth from spectacle.
Tools you can use right now: check provenance labels on X and Instagram, reverse-image search suspicious frames, and rely on trusted outlets doing forensic work. Researchers and journalists still catch the best fakes by comparing multiple sources, inspecting metadata, and spotting the telltale artifacts generative tools leave behind.
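The first step of a reverse-image search on video is unglamorous: pull still frames out of the clip so each can be fed to a search engine. A minimal sketch, assuming OpenCV is installed; the two-second sampling interval and the file names are illustrative choices, not a forensic standard.

```python
# Step one of reverse-image searching a video: sample still frames so
# each can be run through an image search engine by hand.
import cv2

def extract_frames(video_path: str, every_n_seconds: float = 2.0) -> list:
    """Save one JPEG every few seconds; return the saved file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreadable
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"frame_{index:05d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved
```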
What this means for public trust: I’ve seen narratives flip on bot timing
Observation: When a synthetic clip arrives at the same moment as a major geopolitical event, it spreads like wildfire.
That timing isn’t accidental. Whether the goal is to deflect attention from the Epstein files or to weaponize scandal for geopolitical advantage, the strategy aims to produce a single outcome: confusion. The more the public doubts the veracity of everything, the less pressure there is on powerful actors to answer uncomfortable questions.
Think of it as a mold growing over a painting — once the surface is obscured, the original image becomes harder to restore.
I’ve followed propaganda networks, and I’ll tell you what I always tell colleagues: stop treating every viral clip as a source and start treating it as a lead. Verify, crosscheck, and be suspicious of accounts with sudden verification and synchronized reposting. The platforms must act faster, but you and I can make the first call.
If pro-Iran actors are exploiting AI and platform design to amplify false visuals about Epstein and Trump, will public outrage force platforms to change, or will the noise keep winning?