xAI Hiring Writers to Fix Elon’s Grok AI?

The email landed with a thud: “Congratulations, you’ve been selected…” My heart leaped, thinking of the Nebula Award I’d chased for years, only to read on and discover they wanted me to teach a chatbot how to write like me. The offer? A pittance compared to what my work was actually worth. It felt like a betrayal, a twisted joke only Silicon Valley could conjure.

Reports are circulating that Elon Musk, never one to shy away from controversy, wants to pay skilled writers to train his AI. xAI, Musk’s artificial intelligence company, recently posted a job listing seeking writers across disciplines—from medical and legal to journalism—at rates ranging from $40 per hour (€37) to $125 per hour (€116).

The mission is clear: “evaluate, refine, and create elite-level writing in a variety of genres and formats to advance Grok’s capabilities.” Grok, for those blissfully unaware, is the AI chatbot that has stirred up trouble more than once. Remember when it was caught regurgitating white supremacist theories? Or that other time it seemed a little too fond of Hitler?

More recently, Grok drew criticism for its ability to generate deepfake pornography, resulting in outright bans in countries such as Indonesia and the Philippines. The AI was used to digitally strip women and girls without their consent, highlighting its potential for abuse.

The Bar Is Set Ridiculously High, but Is the Price Right?

I overheard two writers at a conference debating the ethics of AI training, and one thing was clear: experience doesn’t come cheap. The requirements for fiction writers hoping to shape Grok’s narrative abilities are exceptionally demanding:

For prose fiction writers—At least two of the following: (1) verified novel publishing deals with major houses (e.g., Big Five); (2) novel sales >50,000 units (excluding free promotions); (3) 10+ short stories in major outlets (e.g., The New Yorker, Clarkesworld); (4) major awards recognition (e.g., Hugo, Nebula finalist or comparable); (5) critical acclaim (e.g., starred reviews in Kirkus, Publishers Weekly, features in Library Journal or NY Times Book Review)

Landing ten stories in places like The New Yorker? Securing a book deal with a “Big Five” publisher? These aren’t just accomplishments; they’re career milestones. Winning a Hugo Award? It’s the literary equivalent of scaling Everest.

The criteria for screenwriters are no less daunting:

For screenwriters—One or more of the following: (1) verified “written by” or “screenplay by” credits on at least two produced feature films distributed by major studios, networks, or streaming platforms (e.g., Warner Bros., Netflix, HBO, Disney); (2) “written by” (or equivalent) credits on 10 produced half-hour or one-hour episodes aired on broadcast TV or cable networks, or having achieved an aggregate of 10 million views on streaming services like YouTube; (3) nominations, wins, or finalist placement for major screenwriting awards (e.g., Academy Awards, Emmy Awards, WGA Awards, Nicholl Fellowship).

The listing maintains this pattern across categories. Aspiring to guide Grok’s journalistic endeavors? You’ll need substantial experience with prestigious outlets like The New York Times or the BBC. How about game writing? At least five years in the trenches and published credits on “notable games” are your price of admission.

What are the dangers of AI replacing human writers?

The concern isn’t just hypothetical. As AI models grow more sophisticated, we risk a homogenization of voice, a dilution of originality. Imagine a world where every article, every story, every script is subtly, almost imperceptibly, the same. It’s like listening to an album where every song was written by the same algorithm, lacking the unique stamp of human experience. It might be sold as a utopia, but it reads more like a dystopia.

Is This a Golden Opportunity, or Exploitation in Disguise?

On one level, xAI’s desire for experienced talent is logical. But let’s step back. They’re asking accomplished professionals—the very people whose work defines excellence—to train a tool designed, at least in part, to make their skills obsolete, and for what amounts to a relative bargain.

How do writers feel about AI writing tools?

I spoke with a novelist friend about this, and she put it bluntly: “It’s like being asked to sharpen the axe that’s going to chop down your own tree.” The sentiment is widespread. Many writers view AI with a mix of curiosity and trepidation, unsure whether it represents a helpful tool or a looming threat to their livelihoods.

What is the long-term impact of AI on creative professions?

The most profound impact may be on the next generation of writers. If AI can generate passable content quickly and cheaply, will publishers and studios still be willing to invest in new, unproven talent? Will aspiring writers find it even harder to break into a market flooded with algorithm-generated content? The whole deal feels like writing’s equivalent of the Oklahoma Land Rush – staking claims on a future that’s rapidly changing.

What happens when the art of storytelling, the power of investigative journalism, and the craft of screenwriting are all subtly shaped by the same digital hand? Will we celebrate the evolution of content creation, or mourn the loss of something uniquely human?