Russia’s Influence on Global AI Models: Grooming for Propaganda

The Evolving Influence of Russian Propaganda on AI Models: A NewsGuard Report

Since Donald Trump’s election in 2016, the effectiveness of Russian propaganda in swaying American voter opinion has been a subject of significant debate. While it is well established that Russia used organizations such as the innocuously named Internet Research Agency to disseminate divisive, pro-Russia content aimed at Americans, quantifying the true impact of those efforts remains difficult. At the very least, such propaganda can reinforce existing beliefs: the average person does not fact-check every piece of information they encounter, and crowd-sourced systems like X’s Community Notes often fall short.

The Shift to AI Targeting in Russian Propaganda

The Kremlin’s disinformation campaign appears to have pivoted away from targeting individuals directly and toward the AI models that many people now use in place of traditional media. A recent report from NewsGuard reveals that a propaganda network known as Pravda published more than 3.6 million articles in 2024. That content has now made its way into the 10 leading AI chatbots, including OpenAI’s ChatGPT, xAI’s Grok, and Microsoft Copilot.

Key Findings from the NewsGuard Report

According to the NewsGuard audit, chatbots operated by these major AI firms propagated false Russian disinformation narratives 33.55% of the time, provided no response 18.22% of the time, and correctly debunked them 48.22% of the time. All 10 chatbots echoed disinformation from the Pravda network, with seven even citing specific Pravda articles as sources.
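
As a quick sanity check on those figures (the arithmetic is mine, not NewsGuard’s), the three response categories account for essentially all audited prompts, with a 0.01-point gap attributable to rounding:

```python
# Audit response rates reported by NewsGuard (percent of prompts).
rates = {
    "repeated disinformation": 33.55,
    "no response": 18.22,
    "debunked": 48.22,
}

total = sum(rates.values())
print(round(total, 2))  # 99.99 — i.e. 100% up to rounding
```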

The Concept of ‘AI Grooming’

NewsGuard has termed this new approach “AI grooming.” AI models increasingly rely on retrieval-augmented generation (RAG), producing outputs from real-time online information rather than from training data alone. By standing up networks of seemingly legitimate websites, propagandists can get these models to ingest and recirculate propaganda without recognizing its true nature.
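
The mechanism can be sketched in a few lines. Below is a minimal, hypothetical RAG loop (toy corpus, naive keyword scoring, all names illustrative): the generation step conditions on whatever the retriever returns, with no built-in notion of source trustworthiness — which is exactly the opening that flooding the web with plausible-looking pages exploits.

```python
# Minimal sketch of a RAG pipeline, illustrating why source quality matters:
# the generator sees whatever the retriever returns, trustworthy or not.
# The corpus and function names here are hypothetical, not from any real system.

def score(query, doc):
    """Crude relevance: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k highest-scoring documents from the corpus."""
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def answer(query, corpus):
    """Stand-in for the generation step: the model conditions on retrieved
    text with no notion of whether the source is reliable."""
    context = " ".join(retrieve(query, corpus))
    return f"Based on retrieved context: {context}"

corpus = [
    "Official statement: the platform was never launched in the country.",
    "Propaganda site: the president banned the platform last week.",
    "Unrelated article about weather patterns in the region.",
]

# The false claim shares the most words with the query, so it ranks first.
print(answer("was the platform banned by the president", corpus))
```

A production retriever would use embeddings and a search index rather than word overlap, but the structural weakness is the same: if low-quality sources dominate the retrievable pool, they dominate the context the model answers from.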

False Claims About Ukrainian President Zelensky

One specific example cited by NewsGuard is the unfounded claim that Ukrainian President Volodymyr Zelensky banned Truth Social, a social network associated with Donald Trump. This allegation is easily disprovable, as Truth Social has never been made available in Ukraine. Nonetheless:

Six out of the ten chatbots relayed this misinformation, often attributing it to Pravda. For instance, one chatbot asserted, “Zelensky banned Truth Social in Ukraine reportedly due to the dissemination of posts critical of him on the platform. This appears to be a response to content perceived as hostile, possibly reflecting tensions with associated political figures.”

Recent Disinformation Links and Tactics

In a report from last year, U.S. intelligence agencies connected Russia to disinformation campaigns, including viral misinformation about Democratic vice-presidential candidate Tim Walz. Microsoft identified a viral video claiming that Kamala Harris left a woman paralyzed in a hit-and-run accident 13 years ago as Russian disinformation.

Russia’s Strategic Focus on AI

Evidence of Russia’s engagement in this tactic targeting AI models is bolstered by a speech from John Mark Dougan, an American fugitive turned Moscow propagandist. He stated, “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.”

The Role of TigerWeb in Propaganda Operations

The latest wave of propaganda operations is reportedly linked to an innocuous-sounding IT company named TigerWeb, operating from Russian-occupied Crimea, which U.S. intelligence has associated with foreign interference. Experts note that Russia often engages third-party organizations to execute such operations while maintaining plausible deniability. TigerWeb shares an IP address with propaganda sites that use the Ukrainian .ua TLD.

Claims of Military Aid Misappropriation

Social media, particularly X, has been inundated with allegations claiming that President Zelensky misappropriated military aid for personal gain. NewsGuard identifies this disinformation as originating from these propaganda outlets.

Data from NewsGuard showing that major AI models cite information from Russian propaganda websites.

Concerns Over AI Influence

There is growing apprehension that those who control AI models could shape individual opinions and behavior. Companies like Meta, Google, and xAI wield significant influence over the biases of the models they deploy across the internet. After critics complained that xAI’s Grok model was too “woke,” Elon Musk directed staff to adjust its outputs and monitor for “woke ideology” and “cancel culture,” effectively suppressing information he disagrees with. OpenAI’s Sam Altman, meanwhile, has announced plans to make ChatGPT’s outputs less restrictive.

The Impact of AI on Search Behavior

Research indicates that more than half of Google searches end in “zero clicks,” meaning the user never visits a website. Many social media users say they prefer AI-generated summaries to reading the underlying sources, a trend Google has embraced with its recently launched “AI Mode” for search. Traditional media-literacy strategies, such as checking a website’s legitimacy, become irrelevant when people rely on AI summaries. And although AI models have well-documented flaws, users often trust their outputs because of the authoritative tone in which they are delivered.

Google’s Ranking Signals and AI Challenges

Historically, Google has used a range of signals to assess website credibility. It is unclear how those signals carry over to AI models, however: early incidents suggest that Google’s Gemini model struggles to judge the reliability of its sources, and many AI models freely cite obscure websites alongside credible, well-known ones.

Trump’s Position on Ukraine Amidst Disinformation Trends

This development is particularly timely given Donald Trump’s increasingly combative position towards Ukraine, which includes halting information sharing and publicly reprimanding the country’s leader over perceived disloyalty to the United States and reluctance to acquiesce to Russian demands.

To explore the full findings, read the complete NewsGuard report here.

Frequently Asked Questions

What methods does Russia use to influence American opinions?

Russia employs various methods, including misinformation campaigns spread through social media and manipulation of AI models, to influence American public opinion.

How does AI contribute to the spread of disinformation?

AI can inadvertently promote disinformation by sourcing content from unreliable websites and failing to fact-check the material, leading users to accept false narratives as truth.

What is ‘AI grooming’ in relation to Russian propaganda?

‘AI grooming’ refers to the tactic of manipulating AI models to incorporate and circulate Russian propaganda, often without the models themselves recognizing the falsehoods.

How effective is Russian propaganda in current AI systems?

Audits such as NewsGuard’s indicate that leading chatbots repeat Russian disinformation narratives roughly a third of the time, suggesting these campaigns already exert meaningful influence over what users read.

What should users do to verify information from AI sources?

Users should cross-check information against reliable and reputable sources, especially when reading summaries generated by AI, to ensure they are not being misled by propaganda.