The first time I saw it, I almost dismissed it as a glitch. ChatGPT, usually a fountain of (mostly) reliable information, cited Grokipedia—Elon Musk’s AI-driven Wikipedia alternative—as a source. It felt like finding a plastic toy in a gourmet meal. Is this the future of AI: chatbots regurgitating each other’s outputs until sense erodes away?
The Guardian recently reported that OpenAI’s latest flagship model, GPT-5.2, cited Grokipedia nine times in response to just over a dozen questions. These questions spanned topics from Iranian political structures to the scholarship of British historian Sir Richard Evans. Similar tests conducted by Gizmodo confirmed the presence of Grokipedia citations in ChatGPT responses.
Musk launched Grokipedia last October, positioning it as a human-free alternative to Wikipedia. In September, ahead of the launch, he claimed it would be “a massive improvement over Wikipedia.” He has frequently criticized Wikipedia, derisively calling it “Wokipedia” and lamenting the absence of a major right-leaning alternative.
The solution, in his eyes, was a new platform powered by AI-generated articles. Much of Grokipedia’s content seems adapted from Wikipedia, but reframed to reflect Musk’s political leanings. The devil, as always, is in the details.
For example, Grokipedia describes the events of January 6, 2021, as a “riot” at the U.S. Capitol, where “supporters of outgoing President Donald Trump protest[ed] the certification of the 2020 presidential election results.” Wikipedia, however, calls it an “attack” carried out by a mob of Trump supporters attempting what it labels a self-coup.
Similarly, Grokipedia labels Britain First as a “far-right British political party that advocates for national sovereignty,” while Wikipedia identifies it as a neo-fascist political party and hate group.
Grokipedia also adopts a milder tone regarding the Great Replacement theory, which promotes the idea that white people are being systematically replaced. Wikipedia clearly calls it a conspiracy theory. Musk, notably, has publicly supported this conspiracy theory, referencing “white genocide.”
The Rise of AI-Generated Echo Chambers
Imagine a library where the books write themselves, without librarians to fact-check. That’s Grokipedia: designed to rapidly produce content without human oversight, potentially sacrificing accuracy for speed and a particular slant.
Now, this unchecked information source seems to be influencing other chatbots. The Guardian observed that ChatGPT didn’t cite Grokipedia on topics where it’s known to spread misinformation, but rather in response to more obscure queries. Is this a deliberate strategy, or simply an unintended consequence of relying on AI-generated content?
The issue isn’t limited to ChatGPT. Social media users have reported that Anthropic’s Claude has also referenced Grokipedia in its answers.
Neither OpenAI nor Anthropic has commented on the matter in detail. OpenAI did, however, tell The Guardian that its model “aims to draw from a broad range of publicly available sources and viewpoints.”
An OpenAI spokesperson added that they “apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations.”
How Do AI Chatbots Choose Their Sources?
The process is complex, but at a high level, chatbots like ChatGPT comb the internet for relevant information based on the user’s query. They then synthesize this information into a coherent response, citing sources where appropriate. The problem arises when the internet becomes saturated with AI-generated content, potentially skewing the results and leading to the propagation of biased or inaccurate information. The algorithms start eating their own tail.
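At its simplest, that loop can be sketched in a few lines of Python. In the toy retrieve-and-cite pipeline below, the corpus, the scoring heuristic, and all the names are illustrative assumptions, not how ChatGPT or any other product actually works.

```python
# Toy retrieve-and-cite loop. The corpus, scoring heuristic, and names are
# illustrative assumptions, not any vendor's real pipeline.
from dataclasses import dataclass


@dataclass
class Source:
    url: str
    text: str


def relevance(query: str, source: Source) -> float:
    """Crude relevance score: fraction of query words that appear in the source."""
    words = query.lower().split()
    hits = sum(1 for w in words if w in source.text.lower())
    return hits / len(words) if words else 0.0


def answer(query: str, corpus: list[Source], k: int = 2) -> str:
    """Take the k most relevant sources, summarize them, and cite their URLs."""
    ranked = sorted(corpus, key=lambda s: relevance(query, s), reverse=True)[:k]
    # A real chatbot would hand the retrieved text to a language model here;
    # this sketch just concatenates it to stay self-contained.
    summary = " ".join(s.text for s in ranked)
    citations = ", ".join(s.url for s in ranked)
    return f"{summary}\n\nSources: {citations}"


corpus = [
    Source("https://en.wikipedia.org/wiki/Example", "A human-edited encyclopedia entry on the topic."),
    Source("https://grokipedia.example/Example", "An AI-generated entry on the same topic."),
]
print(answer("encyclopedia entry on the topic", corpus))
```

Notice that nothing in the ranking step asks who wrote a source or how it was produced; an AI-generated page that happens to match the query ranks just as well as a human-edited one, which is exactly the gap the Grokipedia citations expose.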
Researchers have warned about malicious actors flooding the internet with AI-generated content to manipulate large language models—a process called LLM grooming. But the dangers extend beyond intentional misinformation.
Model Collapse: When AI Eats Itself
It’s unclear whether human users actively visit Grokipedia. Similarweb reported that US traffic fell from roughly 460,000 visits on Oct. 28, shortly after launch, to about 30,000 visits a day, while Wikipedia averages hundreds of millions of daily pageviews. Some suspect Grokipedia isn’t intended for human consumption at all, but to “poison” future LLMs. Is it bait, laid out for unsuspecting AI models?
Over-reliance on AI-generated content can lead to model collapse. A 2024 study found that when large language models train on data from other AI systems, their quality degrades.
“In the early stage of model collapse, first models lose variance, losing performance on minority data,” researcher Ilia Shumailov told Gizmodo. “In the late stage of model collapse, [the] model breaks down fully.” As models train on less accurate text they’ve generated, outputs degrade and become nonsensical.
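The dynamic Shumailov describes can be caricatured in a few lines of Python. The simulation below repeatedly fits a normal distribution to samples drawn from the previous generation’s fit; it is a deliberately crude stand-in for an LLM retraining on its own output, not the procedure used in the 2024 study.

```python
# Caricature of the variance-loss phase of model collapse: each generation is
# refit on samples drawn from the previous generation's model.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0                  # generation 0: the "real" data
samples_per_generation = 200

for generation in range(1, 8):
    samples = [random.gauss(mu, sigma) for _ in range(samples_per_generation)]
    mu = statistics.fmean(samples)    # refit the model on its own output
    sigma = statistics.stdev(samples)
    print(f"gen {generation}: mean={mu:+.3f}  std={sigma:.3f}")

# Run long enough, the fitted standard deviation drifts toward zero: the model
# progressively forgets the tails of the original distribution.
```

The same intuition scales up: rare, “minority” data disappears first, and only the blandest center of the distribution survives.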
Can AI Be Trained to Distinguish Between Reliable and Unreliable Sources?
Theoretically, yes. One approach involves training AI models on datasets specifically designed to highlight the characteristics of reliable and unreliable sources. This might include factors such as the source’s reputation, the presence of fact-checking mechanisms, and the consistency of its reporting. However, even with these measures, the potential for bias and manipulation remains. After all, who decides what constitutes a “reliable” source?
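One way to make that concrete is a crude, rule-based reliability score over the kinds of signals mentioned above. The signal names, weights, and example profiles below are invented for illustration; they are not a real filter used by any chatbot.

```python
# Rule-based source-reliability score. Signals, weights, and example profiles
# are invented for illustration only.
RELIABILITY_SIGNALS = {
    "has_editorial_oversight": 0.4,   # human editors and fact-checkers
    "cites_primary_sources": 0.3,
    "issues_corrections": 0.2,
    "long_track_record": 0.1,
}


def reliability_score(profile: dict[str, bool]) -> float:
    """Weighted sum of boolean signals; returns a value between 0.0 and 1.0."""
    return round(sum(w for name, w in RELIABILITY_SIGNALS.items() if profile.get(name)), 2)


human_edited_wiki = {
    "has_editorial_oversight": True,
    "cites_primary_sources": True,
    "issues_corrections": True,
    "long_track_record": True,
}
ai_generated_wiki = {
    "has_editorial_oversight": False,
    "cites_primary_sources": False,
    "issues_corrections": False,
    "long_track_record": False,
}

print(reliability_score(human_edited_wiki))  # 1.0
print(reliability_score(ai_generated_wiki))  # 0.0
```

Even this toy version runs into the question above: someone has to choose the signals and the weights, and that choice is where bias and disagreement re-enter.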
What Measures Can Be Taken to Prevent the Spread of Misinformation Through AI Chatbots?
Several strategies can help: making AI models more transparent by clearly indicating the sources behind each response; building robust fact-checking mechanisms that identify and flag inaccurate information; and promoting media literacy so users can critically evaluate what they encounter online. These defenses help, but they are reactive.
The larger question is what happens when AI models start training on one another’s outputs at scale. The signal degrades, hallucinations become more common, and accuracy declines. If information is the lifeblood of these systems, what happens when that blood becomes tainted?