I sat at my keyboard, timer ticking, staring at a prompt that asked me to draw a character I had never heard of. I had sixty seconds and the absurd, thrilling thought that someone on the other side expected “an AI” to answer. For a moment I realized the machine wasn’t the star—people were.

I’m saying this as someone who has watched the label “AI” balloon into a catch‑all for anything that chats, draws, or recommends. You and I both feel the creep of conversational models—OpenAI’s ChatGPT or Google Gemini—speaking in a voice that resembles a person. But most of those responses are imitation: regurgitated patterns stitched from stolen text, and sometimes a person typing answers behind an avatar.
A friend of mine runs a small note-taking startup and used to pretend the product was sentient.
He would sometimes answer customer emails himself and sign them as the “AI.” That’s not a tech failure; it’s a marketing trick: humans acting as machines to sell confidence. I’ve seen worse—people labeling hours of Meta Ray-Ban footage, contractors teleoperating Tesla robots, whole teams doing the heavy lifting for a product that promises an invisible intellect.
One random Saturday night, I tried the site, and the timer began ticking.
Your AI Slop Bores Me lets you be the fake oracle. You can either submit prompts or spend 60 seconds responding to other users. The bulk of requests are meme-level: anime trivia, quick sketches, absurd roleplay. It’s performative labor, but it’s oddly fun. You slip into the role of the algorithm and watch strangers accept the illusion.
What is Your AI Slop Bores Me?
It’s a micro-economy of pretend intelligence. The site uses a credit system that flips the usual incentives: you spend time asking to earn chances to answer. That twist asks a simple question—do people want to be the human behind an “AI” or are they just playing a game? The answers are both pragmatic and performative.

I asked for a drawing of a niche anime prop and spent half my time Googling it.
Sixty seconds is a harsh deadline. It turns thoughtful responses into heuristics. That pressure reveals the seams: people approximate, bluff, or paste. The site is a carnival funhouse of serviceable fakery—sometimes entertaining, sometimes alarming. When platforms like Fireflies, or startup founders, market an app as “AI-driven” while doing the work manually, the experience feels eerily similar.
Can people pretend to be AI on these platforms?
Yes. And they already have. Last year, reports surfaced of founders posing as their own product, and stories emerged about human moderators rewriting or filtering supposedly automated outputs. The practical result: the public confuses mimicry with intelligence, and that confusion fuels investor narratives and media headlines around “AI” without accountability.
One afternoon, I thought about pay and scale.
Microtasks like these rarely pay well. If credits translate to cash, you might earn a few dollars for many minutes of work; a hypothetical $1 (€0.93) per credit sounds decent until you factor in churn, research time, and the mental cost of pretending not to be human. Platforms that lean on human-in-the-loop labor borrow trust from names like OpenAI and Gemini while outsourcing the messy bits.
I’ll be blunt: the label “AI” sells a myth of autonomy. You, answering prompts with a timer ticking, understand how brittle that myth is. Being the stagehand behind the curtain feels oddly empowering and degrading at once; the work teaches you how impressions are manufactured.
So what does that mean for the future of work and truth online—are we building smarter tools or better masks?