My phone buzzed with a screenshot: a man’s face in a Dodgers cap, captioned as the alleged White House Correspondents’ Dinner shooter. Every swipe revealed the same face swapped into different team jerseys, and my stomach dropped. In seconds the internet had rearranged a person’s identity into something unrecognizable.
I’ve been following how misinformation moves for years, and you should know what to watch for. I’ll walk you through what’s real, what’s fabricated, and why those flashy posts are designed to make you stop scrolling.
On April 25, the Associated Press identified Cole Tomas Allen.
The AP named a 31-year-old from Torrance, California, after authorities linked him to the White House Correspondents’ Dinner shooting. The Justice Department has charged Allen with transporting a firearm and ammunition with intent to commit a felony, discharging a firearm during a crime of violence, and attempting to assassinate the president.
I looked for reliable records and reporting; you’ll find those charges in the Department of Justice statement and coverage from mainstream outlets. What you won’t find is evidence that Allen worked as security for any sports franchise.
Are the images of Cole Tomas Allen in sports uniforms real?
No. Fact-checkers including Lead Stories traced dozens of AI-generated photos showing Allen wearing L.A. Dodgers gear, Montreal Canadiens sweaters, and college jerseys for Oregon and Michigan State. The images were carnival mirrors—distorted reflections built to catch attention, not to inform.
Many of the fake posts appear to originate from a Facebook page called West Coast Sluggers, which paired the images with claims Allen had worked as stadium security. That claim has no substantiation; reporting identifies Allen as a teacher and engineer, not a team employee.
Here’s 45 seconds of Facebook telling me the alleged WHCD shooter was a former staffer of literally almost every major collegiate and professional sports team pic.twitter.com/HrZ2rTUl3E
— Ellyn Briggs (@EllynBriggs) April 28, 2026
On April 26, Facebook feeds suddenly filled with the same theme: team-branded deepfakes.
Scroll a bit and you’d see the alleged shooter in the shirt of your favorite team; click and you landed on a shallow article bolted to ads. Some of those articles—peddled by small accounts like The Ohio Spirit—hosted advertising for browser extensions that are almost certainly malicious.
Those posts were clickbait landmines: designed to look shareable to fans while funneling clicks into ad revenue and sketchy software installs. Lead Stories documented many of these posts and traced the pattern.
Did Cole Tomas Allen ever work as security for the Dodgers or other teams?
No verifiable employment records or credible reporting support that claim. The West Coast Sluggers caption that circulated widely asserted he’d been a Dodgers security staffer; independent reporting does not back it up. Treat personnel claims tied to viral images with skepticism.
If you want the original source material, the Justice Department’s press release lists the formal charges; AP’s initial bulletin provided the identification that set the misinformation cascade in motion.
On social platforms, old videos and new AI collided into a fog of false connections.
A 2017 clip showing Allen resurfaced, but social posts claiming the same video also included Usha Vance, the vice president's wife, were false. The woman in the clip did not match Usha Vance.
Pages swapped faces into uniforms and paired them with invented past jobs. The goal was simple: piggyback on fandom and outrage to spread faster. Platforms like Facebook amplified the posts; smaller pages and ad-driven sites monetized the attention.
How did these deepfakes spread on Facebook?
Bad actors used AI tools to superimpose team logos and jerseys onto a single photo, then posted the altered images across dozens of fan-oriented pages. Algorithmic amplification pushed the most engaging variations into more feeds. Some links led to low-quality sites filled with ads for programs like Capital One Shopping; in several cases the prompts urged users to install browser extensions that could compromise security.
Reporting from journalists such as Ellyn Briggs and checks by Lead Stories mapped the path: identification by AP → rapid AI alterations → fan-page amplification → ad-driven monetization.
You can still be vocal about what you debunk and more guarded about what you trust: pause, check Lead Stories or the AP, and don't install random browser extensions. I'll keep tracking how images get weaponized—will you keep asking who benefits from the lies?