Imagine watching a video of a tragic event online, and then seeing people use AI to create fake images related to it. That’s exactly what happened after the shooting of Renee Good in Minneapolis. Instead of helping anyone find answers, the AI-generated images spread misinformation and caused more pain. It’s a reminder that the newest tech isn’t always the best way to get at the truth, and that it’s more important than ever to double-check what we see online.
I’ve been an SEO content strategist for over 10 years, and I’ve seen firsthand how quickly misinformation spreads online. Let’s take a closer look at what happened in the Renee Good case and how AI made things worse.
1. What Happened After the ICE Shooting of Renee Good?
On a Wednesday in Minneapolis, an ICE agent shot and killed 37-year-old Renee Good. The incident, caught on video, quickly spread across social media as people tried to figure out what had happened. Some even turned to AI to identify the agent involved, and that’s where things went wrong. According to the Minnesota Star Tribune, the agent was later identified as Jonathan Ross. That identification came from journalists, not from AI.
Homeland Security Secretary Kristi Noem said that Good was trying to run over the ICE agents and committed an act of “domestic terrorism.” But visual investigations from Bellingcat and the New York Times seemed to tell a different story.
Forensic analysis of objective video evidence. This is how you serve readers searching for clarity.
— Mike Hixenbaugh (@mikehixenbaugh.com) January 8, 2026 at 5:00 AM
2. Why Can’t AI Unmask People Accurately?
People started using AI chatbots like Grok to try to unmask the ICE agent, and they spread the resulting fake images on platforms like TikTok and Instagram. The problem? AI can’t accurately unmask people. It doesn’t reveal detail hidden in a blurry frame; it generates a plausible face from scratch. These images don’t show the real person’s face and are about as helpful as picking a random photo from the internet. I once tested an AI tool that claimed to enhance images, and the results were laughably inaccurate.
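To see why “enhancement” can’t work, it helps to remember that detail thrown away is gone for good. Here’s a minimal Python sketch using Pillow that simulates the problem (the filename face.jpg is just a placeholder): once an image has been downscaled, no resize can bring the original pixels back, so an AI “enhancer” can only fill the gap by inventing detail.

```python
# A minimal sketch of why "AI enhancement" can't recover a face.
# Assumes a local file named face.jpg (placeholder name).
from PIL import Image

original = Image.open("face.jpg")
w, h = original.size

# Simulate a low-resolution video still: discard ~98% of the pixels.
tiny = original.resize((w // 8, h // 8))

# "Enhance" it back up. Interpolation can only smooth what's left;
# the discarded detail cannot be recovered.
upscaled = tiny.resize((w, h), Image.LANCZOS)
upscaled.save("upscaled.jpg")

# An AI upscaler starts from the same impoverished input. Any sharp
# "recovered" face it outputs is generated, not retrieved.
```

Compare upscaled.jpg to the original and you’ll see a smeared approximation. An AI tool would give you something sharper, but the sharpness is fabricated.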

3. Can AI Detect AI-Generated Images?
AI isn’t great at spotting AI-made images, either. Google’s SynthID embeds an invisible watermark in images created with Google’s own tools, and Gemini can check for that watermark, but that’s all it can verify. If an image was created with a different tool, Gemini can’t say for sure whether it’s fake. For example, when I asked Gemini if the image above was AI-generated, it said it was likely a real photograph.
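If you’re curious what that check looks like in practice, here’s a minimal sketch using Google’s google-generativeai Python library. The model name and the filename suspect.jpg are assumptions for illustration, and the key caveat is built in: unless the image carries Google’s own SynthID watermark, the answer is an educated guess, not a verification.

```python
# A minimal sketch of asking Gemini whether an image is AI-generated.
# Assumes the google-generativeai package, an API key in GOOGLE_API_KEY,
# and a local file named suspect.jpg (placeholder name).
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

image = Image.open("suspect.jpg")
response = model.generate_content(
    ["Is this image AI-generated? Explain your reasoning.", image]
)
print(response.text)

# Caveat: unless the image carries Google's own SynthID watermark,
# this answer is the model's guess, not a verification.
```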
AI detection software also struggles with AI-written text, sometimes leading to false accusations: students have been accused of using AI on papers they wrote themselves.
4. How Did Misinformation Affect Real People?
Because AI can’t unmask people, the fake images led to real-world consequences for people who had nothing to do with the shooting. One name that kept popping up was Steve Grove, who owns a gun shop in Springfield, Missouri. The Springfield Daily Citizen reported that his Facebook account was flooded with messages. To add to the confusion, another Steve Grove is the CEO of the Star Tribune newspaper in Minneapolis.
5. What Other Fake Images Were Created?
There were also fake images of Renee Good herself. One AI-generated image showed her in her car before the shooting, flipped to make it look like she was in the driver’s seat. Even more disturbing, after her death one person had Grok put a bikini on an image of her. It’s scary how easily AI can be used to create harmful content, and how rapidly that content spreads across social networks.

6. Why Do People Keep Misusing AI?
People keep reaching for AI as an investigative tool, even though it isn’t accurate. After the Charlie Kirk shooting, people used AI to try to get a clearer picture of the suspect, but when the suspect was arrested, he looked nothing like the AI-altered images. At one point when Trump appeared ill, people used AI to “enhance” photos of him, and the “enhancement” added a strange lump to his head.

7. How Does Old-Fashioned Misinformation Fit In?
It’s not just AI causing problems. Newsmax anchor Greg Kelly tried to make the stickers on Good’s car look suspicious, implying they showed she was affiliated with “WACK JOB groups.” But those stickers appeared to be from national parks. The Associated Press reported that Good was dropping off her son at school when the incident happened. She had two children from her first marriage, ages 15 and 12, and a 6-year-old son from her second marriage, according to Minnesota Public Radio. A GoFundMe campaign for Good’s surviving wife and son had raised over $600,000 at the time of this writing.
TOTALLY JUSTIFIED SHOOTING!!!!!! NOT EVEN CLOSE!!! (Curious about these Stickers on the Back of the Car. Various WACK JOB groups and affiliations? ) pic.twitter.com/3xng119z7m
— Greg Kelly (@gregkellyusa) January 7, 2026
Why is it important to verify information before sharing it?
Verifying information helps prevent the spread of misinformation. Sharing unverified claims can lead to reputational damage, emotional distress, and even real-world harm.
What are some reliable sources for fact-checking?
Some reliable sources for fact-checking include Snopes, PolitiFact, FactCheck.org, and major news organizations with dedicated fact-checking teams.
How can I identify AI-generated images?
Identifying AI-generated images can be difficult, but look for inconsistencies, unnatural details, and artifacts. Use AI detection tools, but be aware that they are not always accurate.
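One quick, imperfect check you can run yourself is to look at an image’s metadata. Here’s a minimal Python sketch with Pillow (the filename downloaded.jpg is a placeholder): real camera photos usually carry EXIF data such as camera model and timestamp, while many AI-generated images carry none. Treat the result as a weak signal, since social platforms often strip metadata from legitimate photos too.

```python
# A minimal sketch: inspect an image's EXIF metadata with Pillow.
# Assumes a local file named downloaded.jpg (placeholder name).
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("downloaded.jpg")
exif = img.getexif()

if not exif:
    # No metadata: common for AI-generated images, but also for
    # screenshots and images re-saved by social platforms.
    print("No EXIF data found - inconclusive on its own.")
else:
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # e.g. Make, Model, DateTime
        print(f"{name}: {value}")
```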
The Renee Good case shows how important it is to be careful with AI and with what we share online. Using AI to “unmask” people or create fake images leads to misinformation and real harm. It’s a reminder that we need to think critically and double-check information before sharing it. What other examples have you seen of AI being misused in the news?