Imagine being arrested, the flashing lights blurring your vision. Then, you see your face splashed across social media, doctored to paint you as a weeping mess. The White House shared it, but the tears? Probably AI-generated. Now you’re left wondering: can you fight back when the most powerful entity in the world distorts your image and reputation?
The case of Nekima Levy Armstrong, a civil rights attorney, throws this into sharp relief. After her arrest during a protest at a St. Paul church, held over the pastor’s alleged collaboration with ICE, a manipulated image of her surfaced on the White House account on X (formerly Twitter). The image appeared to be AI-generated. Her lawyer calls it defamation.
Levy Armstrong and Chauntyll Allen, a St. Paul school board member, were arrested on January 23, accused of violating the FACE Act, a federal law that, among other things, prohibits intimidating or interfering with people exercising their right to religious worship. A video taken by Levy Armstrong’s husband captured the agents involved.
“Why are you recording?” Levy Armstrong asked in the 7-minute video. “I would ask that you not record.”
“It’s not going to be on Twitter,” the unidentified agent told her. “It’s not going to be on anything like that.”
Yet, there it was. Homeland Security Secretary Kristi Noem initially posted a photo of Levy Armstrong with a neutral expression. But the version on the White House X account showed something different: tears streaming down her face, a sign of distress or regret manufactured through artificial intelligence. Was this defamation? Jordan Kushner, Levy Armstrong’s lawyer, certainly thinks so.
“It is just so outrageous that the White House would make up stories about someone to try and discredit them,” Kushner said. “She was completely calm and composed and rational. There was no one crying. So this is just outrageous defamation.”
What options does someone have when facing such a blatant act from the highest office? Experts suggest the road to justice is paved with obstacles.
Can Defamation Claims Stand Against Government Propaganda?
Consider this: you meticulously build your reputation, only to watch it crumble under the weight of a fabricated image. Eric Goldman, a law professor at Santa Clara University School of Law, highlights the irony. The government, while attempting to regulate AI misuse, engages in the very behavior it seeks to prevent. It is, as he puts it, “role modeling the worst behavior that it’s trying to prevent its citizens from engaging in.”
“It’s so shocking to see the government put out a deliberately false image without claiming that they were manipulating the image. This is what we call government propaganda,” said Goldman.
For a defamation claim to hold water, Goldman explains, several elements must align.
“She’d have to show that there was a false statement of fact. And normally we treat photos as conclusive statements of fact, that they’re truthful for what they depicted, but it wouldn’t surprise me if the government argued that it was a parody or that it was so obviously false that everyone knew it was false and therefore it was not a statement of fact,” said Goldman.
“Now, that’s just sophistry, right? If defamation law means anything, it would apply to a fictionalized photo that is presented as truthful. Like, that’s what it’s supposed to cover. And yet, the government could very well win on the very first element,” Goldman continued.
The statement must also demonstrably harm someone’s reputation. The defense might argue that crying during an arrest isn’t reputationally damaging, a claim Goldman finds problematic. Then there’s the question of whether Levy Armstrong is a public figure.
“There’s a First Amendment defense that limits defamation claims. And they raised the bar on claims that apply to matters of public concern and public figures. And I would argue that potentially the photo subject would qualify as a public figure and her arrest was clearly a matter of public concern,” said Goldman.
Finally, “actual malice” must be proven, meaning the government knew the statement was false or acted with reckless disregard for whether it was true. Faking a photo and presenting it as real might suggest this, but even then, the path forward is unclear.
The situation can be likened to fighting a wildfire with a water pistol; the odds are stacked against the individual.
The upshot? Goldman states, “It’s not clear to me that even if she sues, she wins.”
Other legal minds concur. Defamation cases are difficult to win. The traditional remedy? Voting out politicians who engage in falsehoods.
“We’ve assumed that if politicians are gonna publish false information, the voters are gonna punish them for it,” said Goldman. “And there might’ve been a time that was true, but that model is clearly broken down.”
Which AI Image Generators Can Create Deepfakes?
The question of which AI tool created the image remains unanswered. However, our own testing revealed interesting differences. Google’s Gemini and OpenAI’s ChatGPT could be coaxed into generating such images. Microsoft Copilot and Anthropic’s Claude, on the other hand, refused, citing potential misuse.
What about xAI’s Grok? The service was down when we tried. But given everything else Elon Musk lets you do, it’s safe to assume Grok would happily generate an image of someone crying in order to ridicule them.
We’re navigating uncharted waters. Governments have always engaged in some level of deception. But the brazenness, the sheer transparency of recent falsehoods, is startling.
Kristi Noem got up in front of microphones on Sunday to call Alex Pretti, the man killed by ICE agents in Minneapolis, a domestic terrorist. She said that the 37-year-old ICU nurse at the VA showed up to “perpetuate violence.” It’d be amusing if it weren’t so horrifying. The government lies with impunity and doesn’t care that we could all see a compassionate and caring man murdered in the street by masked agents of the state.
When the government goes even further than mere words, attempting to manipulate the images we see with AI fakery, it somehow feels even worse, like we’re on the precipice of a post-truth society. Unfortunately, many Trump voters don’t seem to care.
What Recourse Do Citizens Have Against Government Misinformation?
The legal landscape surrounding AI-generated propaganda is murky. As Goldman points out, the tools to punish such abuses may simply be inadequate.
“I don’t think we’ve had enough discussion about AI deepfakes being weaponized by the government’s propaganda so they can lie against their constituents,” said Goldman. “And we may not have an adequate set of resources to punish the government for such abuses.”
“I don’t know what the remedies are,” he added. “I fear that we don’t have them strong enough, but I fear even more that voters are going to reward politicians for abusive propaganda. This might just be what it means to own the libs.”
This is more than just spin; it’s a calculated distortion of reality, and the legal deck is stacked against anyone who tries to challenge it. Are we prepared to accept a reality where truth becomes a casualty of political warfare?