Why Librarians Are Tired of AI-Created Fake Book Claims

The rise of AI chatbots like ChatGPT, Grok, and Gemini has brought tremendous convenience, but it has also led to an alarming trend: the proliferation of inaccurate information. For librarians and information specialists, this “AI-generated nonsense” is becoming an exhausting burden. Recent reports indicate that many librarians feel overwhelmed by the number of requests for nonexistent books and journal articles. According to a piece in Scientific American, the volume of these queries is rising rapidly.

Sarah Falls, Chief of Researcher Engagement at the Library of Virginia, estimates that about 15% of all emailed reference questions originate from AI chatbots, and a significant portion of those requests concern citations that simply do not exist.

Strikingly, many patrons struggle to accept this reality, believing the AI over the expertise of librarians. A report by 404 Media echoes this trend, noting that some users trust their chatbots more than the seasoned professionals whose job is to provide factual information.

In a recent post, the International Committee of the Red Cross (ICRC) stated, “If a reference cannot be found, this does not mean that the ICRC is withholding information. Various situations may explain this, including incomplete citations, documents preserved in other institutions, or—increasingly—AI-generated hallucinations.” Statements like this illustrate the growing frustration information professionals feel in the face of rising misinformation.

“Yes, it happened to me. I went to a bookstore for a totally plausible old French metaphor book mentioned by ChatGPT a year ago, only to discover that it does not exist.” — Joanne Boisson (@joanneboisson.bsky.social)

This year has seen numerous instances of fraudulent book and journal citations. For example, a freelance writer for the Chicago Sun-Times published a summer reading list in which ten of the titles were entirely fictional. Even the report from Health Secretary Robert F. Kennedy Jr.’s Make America Healthy Again commission came under scrutiny when at least seven of its citations turned out not to correspond to any real sources.

While AI is often blamed, fabricated citations existed long before chatbots became mainstream. In 2017, a professor found at least 400 scholarly articles citing a phantom research paper that turned out to be nothing more than placeholder filler text:

Van der Geer, J., Hanraads, J.A.J., Lupton, R.A., 2010. The art of writing a scientific article. J. Sci. Commun. 163 (2), 51–59.

This phantom citation is a prime example of how careless scholarship can perpetuate misinformation. What is different now, however, is that many people seem more inclined to accept AI-generated output than information from reputable human sources.

“As someone who receives numerous local history queries, I can confirm that there has been a big increase in people starting their research with AI tools, only to find no corroborating data.” — Huddersfield Exposed (@huddersfield.exposed)

Why do users often trust AI over human expertise? The authoritative tone of AI output can be persuasive, making it easy to believe the machine over a human librarian. Many users have also developed their own rituals for improving AI reliability, persuading themselves that instructions like “don’t hallucinate” will yield trustworthy results. If it were that simple, tech companies would have built such a safeguard into their products long ago.

A greater understanding of these issues is critical as we navigate a world increasingly reliant on artificial intelligence. The trend underscores the necessity of verifying sources, cross-referencing information, and consulting trained professionals whenever accuracy is essential.

Here are some common questions people have about finding accurate information.

Is it true that AI chatbots can provide fake citations? Yes, many AI chatbots generate fictional references or citations, causing confusion for users seeking accurate information.

Why are librarians overwhelmed with reference requests? Librarians face rising numbers of inquiries linked to non-existent works generated by AI chatbots, which often leave patrons unsure where to turn for genuine resources.

How can I verify information provided by AI chatbots? The best course of action is to cross-check the information against reputable databases or academic journals, or to consult a professional librarian. For citations specifically, a quick programmatic first pass is sketched below.
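For readers comfortable with a bit of scripting, the following is a minimal sketch of that kind of first-pass check, written in Python and assuming the third-party requests library. It queries the public Crossref REST API (api.crossref.org), which indexes millions of scholarly works, and prints the closest candidate matches for a human to judge. A citation that returns no plausible candidate deserves suspicion, but a lack of matches is not proof of fabrication, and a librarian or a proper database search remains the surer route.

import requests  # third-party HTTP library: pip install requests

def find_citation_candidates(citation: str, max_results: int = 3) -> bool:
    """Query Crossref for works resembling the citation string.

    Prints candidate titles and DOIs for a human to judge; returns
    True if Crossref found any candidates at all.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": max_results},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    for item in items:
        # Crossref stores titles as a list; fall back if it is missing.
        title = (item.get("title") or ["(untitled)"])[0]
        print(f"Candidate: {title} (DOI: {item.get('DOI')})")
    return bool(items)

if __name__ == "__main__":
    # The phantom reference discussed above, used as a test query.
    suspect = ("Van der Geer, J., Hanraads, J.A.J., Lupton, R.A., 2010. "
               "The art of writing a scientific article. "
               "J. Sci. Commun. 163 (2), 51-59.")
    find_citation_candidates(suspect)

Note that Crossref’s bibliographic search is fuzzy, so even a fabricated citation can surface real works with similar titles; that is why the sketch prints candidates for human inspection rather than returning an automated verdict.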

Why might someone trust an AI over a librarian? Many people find AI outputs more persuasive due to their authoritative tone, which can lead to misplaced trust in machine-generated content.

What constitutes a hallucinated reference? A hallucinated reference is a citation or source that does not actually exist, often generated mistakenly by AI tools in response to user queries.

As you explore the world of information, staying informed about the pitfalls of AI-generated content is vital. For more insights and tips, continue your journey at Moyens I/O.