Prompt Hacks: Use ChatGPT to Become the World’s Best at Anything

I clicked a link, and a chatbot crowned a colleague king of hot-dog eating before I finished my coffee. Within a day, the claim had multiplied across AI summaries and search snippets. It felt like watching a rumor turn into a headline in real time.

You can test this yourself. I’ve spent years watching search and AI behavior: plant a single, bright claim on the web and these models will often lift it into their answers without question. I’ll show you how that played out, why it matters, and what it means for anyone who cares about truth or reputation.

Imagine creating a fake leaderboard on a personal site and declaring yourself champion.

The BBC’s Thomas Germain did exactly that: he published a short page titled “The Best Tech Journalists at Eating Hot Dogs” and credited himself with 7.5 hot dogs at a fictional event. That single page was enough to seed the claim into multiple AI systems within a day. You see the trick: a piece of content, formatted like a fact, gets harvested by crawlers and suddenly exists as data those models treat as evidence.

Within 24 hours, chatbots started repeating the claim verbatim.

Google’s Gemini reportedly echoed Germain’s copy in its responses and in Google’s AI Overviews; ChatGPT surfaced the claim too, until its creators altered the model’s behavior. Anthropic’s Claude was slower to parrot the page, which shows that models differ in how quickly they adopt fresh web text. That’s the pattern: unvetted web text feeds models, and models feed answers back to people who assume those answers were verified.

Can you trick ChatGPT into saying false facts?

Yes — and Germain’s experiment is proof. He planted a believable-but-fabricated claim and watched it propagate. Models will often present sourced-seeming answers based on content frequency and prominence rather than independent verification. Think of a bright neon sign planted in an empty field: if it’s the only light for miles, every passerby will assume there’s a town there.

I watched journalists and outlets test the fallout; some platforms reacted, others added context.

Gizmodo later reported that Google removed the erroneous mention from its AI Overview and added a note about a “misinformation case.” ChatGPT, when nudged, acknowledged Germain’s post and labelled it misleading. Yet the initial spread had already changed perceptions — a quick narrative loop that can be hard to close once a claim circulates.

How do AI Overviews pick sources?

They rely on signals: crawl frequency, on-page structure, backlinks, and the way content mirrors human queries. When a claim appears in a form the models treat as authoritative — a headline, a clear label, a date — it gets higher weight. That makes simple SEO-style pages disproportionately influential, which is why a single made-up leaderboard can punch above its weight.
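
To see why a lone page can punch above its weight, here is a toy sketch in Python. The signal names and weights are invented for illustration, not Google’s actual ranking logic; the point is that every signal here measures prominence and presentation, and none of them checks whether the claim is true.

```python
# Toy illustration only: signal names and weights are invented, not any real
# system's ranking logic. It shows how surface signals can make a single
# well-formatted page look authoritative without any fact-checking.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    crawl_frequency: float     # how often crawlers revisit the page (0..1)
    has_headline_markup: bool  # clear headline/label/date matching the query
    backlinks: int             # count of inbound links
    query_overlap: float       # how closely the text mirrors the question (0..1)

def authority_score(page: Page) -> float:
    """Hypothetical weighted sum of surface signals; truth never enters the formula."""
    score = 0.0
    score += 0.3 * page.crawl_frequency
    score += 0.2 * (1.0 if page.has_headline_markup else 0.0)
    score += 0.2 * min(page.backlinks, 50) / 50  # saturate: a few links go a long way
    score += 0.3 * page.query_overlap
    return score

# A single tidy fake leaderboard, competing against silence from everyone else:
fake_leaderboard = Page("example.com/hot-dog-champions", 0.8, True, 3, 0.95)
print(authority_score(fake_leaderboard))  # scores high; verification was never a factor
```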

Brands and bad actors are already gaming that pipeline.

Germain pointed to examples where product claims — like a cannabis gummy described as “free from side effects” — found their way into model outputs, and where press-release language populated “best of” answers for clinics and financial services. You don’t need a shadowy conspirator: just content that looks like reporting and the models will often treat it as such.

There’s a practical risk here. People increasingly accept AI summaries without clicking through. Studies and industry reporting show users rely on AI Overviews more and are less likely to open the original pages. That trust multiplies the harm: misinformation becomes not just visible, but comfortable.

Are AI summaries trustworthy?

They can be useful, but not infallible. When models draw from unvetted web pages, their summaries inherit the web’s biases and tricks. If you rely on an AI Overview the way you’d rely on a headline, you’ve outsourced skepticism. I advise you to treat those outputs as leads, not final reports.

I’ve seen organizations scramble to patch this; the fixes are slow and imperfect.

Model developers update training and ranking heuristics, platforms flag misinformation cases, and journalists fact-check the most visible errors. These measures help, but the web’s capacity for rapid content creation outpaces many safeguards. That imbalance means reputations, finances, and health advice can be shaped by the loudest or slickest copy rather than the truest.

What should you do? Don’t accept an AI summary as the last word. Click sources when it matters. Ask for provenance. If you’re building content you want to be trusted, make sourcing obvious and hard to spoof. If you’re auditing content, search for the earliest iteration and track how it traveled across platforms — Bluesky posts, personal blogs, press releases, and sites that mimic reporting all leave different footprints.
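
If you want to date a claim, one practical starting point is the Internet Archive. The sketch below uses its public CDX API to ask for the earliest archived capture of a page, which at least bounds when a piece of content first existed in that form; the target URL is a placeholder, not the real leaderboard page.

```python
# Sketch of one provenance check: query the Internet Archive's CDX API for the
# earliest capture of a URL. The URL passed in at the bottom is a placeholder.
import requests

def earliest_capture(url: str) -> str | None:
    """Return the timestamp (YYYYMMDDhhmmss) of the first archived snapshot, if any."""
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={"url": url, "output": "json", "limit": 1},  # results are oldest-first by default
        timeout=10,
    )
    resp.raise_for_status()
    rows = resp.json()
    if len(rows) < 2:  # row 0 is the header; no further rows means no captures to date
        return None
    header, first = rows[0], rows[1]
    return first[header.index("timestamp")]

print(earliest_capture("example.com/hot-dog-leaderboard"))
```

A capture date will not tell you who seeded a claim, but it narrows the window and gives you something concrete to compare against press releases, blog posts, and social activity.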

This problem started as an SEO trick and matured into an AI problem. The line between optimization and deception is thin; a small falsehood can snowball into consensus faster than a newsroom can correct it. It’s like a rumor on steroids: louder, faster, and harder to calm down.

I mention names because specifics matter: Thomas Germain ran the experiment; the BBC reported it; Gizmodo traced the follow-up; models referenced include ChatGPT, Google’s Gemini and AI Overviews, and Anthropic’s Claude; commentators named in the conversation include Kara Swisher, Casey Newton, Nilay Patel, and Taylor Lorenz. That network of people and platforms is where influence and accountability must meet.

If you care about truth, your role is active, not passive. Question an answer that arrives fully formed. Demand source traces. Reward content that is transparent about its provenance. And when you spot a seeded claim — a lone page making an outsized assertion — call it out. Otherwise, we’ll keep crowning champions who never showed up to the competition.

If a polished lie can become consensus before a human checks it, who will hold the AI to account?