Kevin Roose & Vibecoding: A Hard Fork Take


The air crackled with a nervous energy as I watched my neighbor, illuminated by the glow of his laptop, frantically adjusting a latex clown on his lawn at 2 AM. Halloween was weeks away, but he was determined to win the neighborhood decoration contest this year. He muttered something about “optimizing fear factors” with AI, and I couldn’t help but wonder if he’d gone completely mad.

It’s generous of Kevin Roose, New York Times tech columnist and co-host of the Hard Fork podcast, to pity people who are toiling away without the benefit of claudeswarms. 


In a January 25 X post, Roose said that he has “never seen such a yawning gap” between Silicon Valley insiders like him and outsiders. He says the people he lives near are “putting multi-agent claudeswarms in charge of their lives, consulting chatbots before every decision,” and “wireheading to a degree only sci-fi writers dared to imagine.”

Hard Fork involves a great deal of guffawing from Roose—mostly directed at his more comedically nimble co-host Casey Newton—so it’s not lost on me that Roose is trying to layer some irony and exaggeration on top of his condescension in this post. He takes that mask right off, however, in his next one, in which he says he wants “to believe that everyone can learn this stuff,” but frets that perhaps, “restrictive IT policies have created a generation of knowledge workers who will never fully catch up.” 

Recent Hard Fork episodes have been unusually enthusiastic about vibecoding, the practice of using AI tools to perform speedy software engineering. Once upon a time, GitHub Copilot and ChatGPT caused software engineers’ eyes to bug out because these tools could write code like a person, and you could run the code, and the code would work. Since around 2021, AI’s knack for coding has been steadily improving, and steering certain software engineers toward prophecies of various forms of Armageddon.

For instance, Dario Amodei, the CEO of Claude parent company Anthropic, published one of these earlier today in the form of a 38-page blog post. “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it,” Amodei wrote. 

Roose and Newton are not, first and foremost, software engineers, but Roose recently used Claude Code to make an app called Stash, an experience he talked about on Hard Fork. Stash is a read-later app like the discontinued Pocket, or the still-extant Instapaper. Stash, according to Roose, does “what I used to use Pocket for. Except now I own it and I can make changes to the app. And I made it, I would say in about two hours.” Well done. Sincerely. 
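
For context: the core of a read-later app is not much more than a list of URLs with a read/unread flag. Here is a minimal sketch of that data layer in TypeScript, entirely my own guess at the shape of the problem rather than anything from Roose’s actual code:

```typescript
// A guess at the minimal data layer behind a read-later app like Stash.
// None of this is Roose's actual code; it's just the shape of the problem.

interface SavedArticle {
  url: string;
  title: string;
  savedAt: Date;
  read: boolean;
}

class ReadLaterList {
  private articles = new Map<string, SavedArticle>();

  // Save a URL; ignore duplicates so re-saving the same link is harmless.
  save(url: string, title: string): void {
    if (!this.articles.has(url)) {
      this.articles.set(url, { url, title, savedAt: new Date(), read: false });
    }
  }

  markRead(url: string): void {
    const article = this.articles.get(url);
    if (article) article.read = true;
  }

  // Unread items, newest first: the default view of any read-later app.
  unread(): SavedArticle[] {
    return [...this.articles.values()]
      .filter((a) => !a.read)
      .sort((a, b) => b.savedAt.getTime() - a.savedAt.getTime());
  }
}

const stash = new ReadLaterList();
stash.save("https://example.com/long-read", "A Long Read");
console.log(stash.unread());
```

Everything else is interface and sync, which is presumably where most of those two hours went.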

In another episode of Hard Fork, listeners shared their own stories about what they’ve been vibecoding. Presumably these people weren’t coders before, and now they’re coding, which is admittedly kind of cool. One built a tool for wallpaper clients to calculate how much wallpaper they need to buy. Another built a gamification system for his kids’ housework. 
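
For the curious, the wallpaper math is genuinely simple. A sketch of the calculation, with a roll size and a ten percent waste allowance that are my assumptions rather than anything from the listener’s actual tool, might look like this:

```typescript
// Rough wallpaper estimate: wall area divided by roll coverage, plus waste.
// The roll coverage and 10% waste factor are assumptions for illustration.

interface Room {
  perimeterMeters: number; // total length of the walls to be papered
  heightMeters: number;
}

const ROLL_COVERAGE_SQ_METERS = 5; // roughly one standard 0.53m x 10m roll
const WASTE_FACTOR = 1.1; // 10% extra for pattern matching and trimming

function rollsNeeded(room: Room): number {
  const wallArea = room.perimeterMeters * room.heightMeters;
  return Math.ceil((wallArea * WASTE_FACTOR) / ROLL_COVERAGE_SQ_METERS);
}

// Example: an 18m-perimeter room with 2.4m ceilings needs about 10 rolls.
console.log(rollsNeeded({ perimeterMeters: 18, heightMeters: 2.4 }));
```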

With all due respect to these people and the neat stuff they’re pulling off with vibecoding, this is just people giving themselves busywork for fun. There’s nothing wrong with that, but that’s what it is. 

It’s true that most people don’t have the knowledge to perform software engineering tasks, and it’s intriguing to try vibecoding if, like me, you’ve never coded anything. I’ve had LLMs make some rudimentary side-scrolling games, build ray-traced 3D environments in JavaScript, and perform some other little experiments that glitched out. I learned a little about LLMs, but it didn’t change my life.

Then again, I, like many people, am bored by optimization and productivity hacks, and it’s not in my nature to have software ideas that are purely software. In rare cases where I feel a creative spark that involves coding, the coding tends to be a small part of the idea, and the rest of the idea tends to involve a lot more engaging with the world than an LLM can do. For instance, I live in one of those neighborhoods where people go nuts with their Halloween decorations, and I’ve daydreamed about setting up festive lawn animatronics, but vibecoding a control system would only get me so far in the process of configuring my monsters. Most of the actual work would be me out in my yard with a power drill, wires, and stakes, futzing with my werewolf dummy, and Claude Code isn’t on the verge of getting that thing to stay upright on my lawn. 
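
To be clear about how small the vibecoding piece of that daydream is, here is roughly what a control loop for lawn props amounts to. The Prop interface and the timing numbers are invented for illustration, and the trigger function just logs, because the relay, the wiring, and the werewolf are exactly the parts Claude can’t supply:

```typescript
// A toy scheduler for Halloween lawn props. This is the easy part of the project.
// Everything hardware-specific is hidden behind trigger(), which in real life
// would flip a relay or play a sound; here it just logs.

interface Prop {
  name: string;
  trigger(): void;
}

const werewolf: Prop = {
  name: "werewolf",
  trigger: () => console.log("werewolf lunges (in theory)"),
};

// Fire the prop at a random interval between min and max milliseconds.
function haunt(prop: Prop, minMs: number, maxMs: number): void {
  const delay = minMs + Math.random() * (maxMs - minMs);
  setTimeout(() => {
    prop.trigger();
    haunt(prop, minMs, maxMs); // reschedule so the scares keep coming
  }, delay);
}

haunt(werewolf, 15_000, 60_000); // somewhere between 15 seconds and a minute
```

The drill, the stakes, and the sagging latex are left as an exercise for the homeowner.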

Roose and other AI fanatics are talking lately as if It’s. Finally. Here. They make it sound as if AI is really about to take off, and the normies need to strap in. 


If, when Roose talks about these benighted “knowledge workers” outside of San Francisco, he exclusively means software engineers struggling to accomplish tasks that could be performed by claudeswarms (Claudeswarms, in case you’re wondering, seem to be little virtual coder hives that carry out complex coding tasks), I suspect his pity is misplaced. If AI-inclined coders aren’t allowed to use the latest AI tools while they’re on the clock, and they’re also software engineers in their spare time, it stands to reason that they’re already playing with AI toys at home if they want to.

And there can be little doubt that, half-joking or not, Roose’s experience of people in the Bay Area “wireheading” and constantly asking chatbots for life advice is real. That’s to be expected. They have a lot of other problems too, like a horrifying new habit of injecting themselves with peptide solutions they bought online. 

It’s not at all surprising that people in San Francisco think AI is about to become the closest possible thing to a god, because it feels like it’s close to being the thing a lot of people in San Francisco think is a god: a software engineer. An understandable mistake. 

But the rest of the pathetic knowledge workers who aren’t blessed to be in the AI haven of San Francisco don’t necessarily believe software engineers are all that powerful, and some of us are counting the months until next Halloween, and AI isn’t going to be much help getting our latex clowns to look scary by then. It probably never will, and that’s fine.  

The Gospel of the Claudeswarm: Are We All Doomed?

I recently overheard someone at a coffee shop proclaiming that AI would soon replace all creative jobs. The barista, without missing a beat, responded, “Then who’s gonna make your oat milk latte, bro?” The exchange highlights a growing tension: on one side, the evangelists touting the transformative power of AI tools like Anthropic’s Claude; on the other, the skeptics who see it as just another tech fad.

Roose, like many in the tech world, seems genuinely excited—perhaps a bit too excited—about the potential of “vibecoding” and AI-assisted living. He paints a picture of a San Francisco where everyone is augmenting their brains and automating their decisions, leaving the rest of us in the digital dust. Is this a legitimate vision of the future, or is it a self-serving narrative cooked up inside the Bay Area bubble?

His enthusiasm, though, feels a little…forced. As if he’s trying to convince himself (and his listeners) that this is the real deal. It feels like proclaiming that electric cars are the only cars worth owning, while ignoring the millions of reliable gasoline-powered vehicles still on the road. He also glosses over the very real limitations and potential pitfalls of relying too heavily on AI.

Will AI Replace Software Engineers?

One of the biggest questions surrounding AI is its impact on the job market. Will AI tools like Claude Code make software engineers obsolete? The answer, at least for now, is likely no. While AI can automate certain coding tasks and accelerate development, it still requires human oversight, creativity, and problem-solving skills. It will augment software engineering, not replace it entirely.

For now, these tools are a bit like a fancy calculator—helpful for complex equations, but useless without someone who understands the underlying math. They can help you build a rudimentary read-later app in two hours, but they can’t replace the strategic thinking and domain expertise of an actual software engineer.

Beyond the Hype: AI and the Mundane

Consider the wallpaper client who built a calculation tool or the parent who created a chore-tracking system for their kids. These are genuinely cool applications of vibecoding, demonstrating the accessibility and potential of AI to solve everyday problems. But let’s not mistake these for revolutionary breakthroughs.

They are, as noted above, “people giving themselves busywork for fun.” And there’s nothing wrong with that! But it doesn’t mean we’re on the cusp of a technological singularity. It feels as though Roose is selling a vision of AI as a magic wand, when it’s more like a set of Lego bricks: powerful in the right hands, but ultimately limited by the builder’s imagination and skill.

What is “Vibecoding” Anyway?

Vibecoding essentially refers to using AI-powered tools to rapidly prototype and build software applications, often with minimal traditional coding knowledge. It relies on natural language prompts and AI’s ability to generate code snippets, allowing users to “vibe” their ideas into existence. Roose, of The New York Times, has been using tools such as Claude, Anthropic’s answer to ChatGPT.
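
In practice, that means typing something like “track points when my kids finish chores” and getting back a snippet you can paste in and run. Here is an invented example of the kind of thing these tools return, not output from any particular model:

```typescript
// The kind of snippet a vibecoding session might produce from a plain-English
// prompt like "track points when my kids finish chores". Invented for
// illustration; not output from any particular tool.

type Kid = "Ada" | "Linus";

const points: Record<Kid, number> = { Ada: 0, Linus: 0 };

const choreValues: Record<string, number> = {
  dishes: 5,
  "take out trash": 3,
  "feed the dog": 2,
};

function completeChore(kid: Kid, chore: string): void {
  const value = choreValues[chore] ?? 1; // unknown chores are worth 1 point
  points[kid] += value;
  console.log(`${kid} now has ${points[kid]} points`);
}

completeChore("Ada", "dishes");
completeChore("Linus", "feed the dog");
```

The snippet itself is trivial; the novelty is that someone who has never opened a terminal can get this far in a few minutes.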

But the neighbor with the Halloween clown doesn’t need vibecoding. He needs elbow grease, a power drill, and maybe a therapist. And that’s the point: AI is a powerful tool, but it’s not a substitute for real-world skills, creativity, and human interaction.

The San Francisco Bubble: A Different Reality?

I once visited a tech conference in San Francisco and felt like I’d landed on a different planet. Everyone was buzzing about the latest gadgets, talking in jargon-filled sentences, and seemingly convinced that technology held the answer to every problem. Back in the real world, people were still struggling with traffic jams, rising grocery prices, and the simple frustration of a slow internet connection.

Roose’s portrayal of a San Francisco where people are “wireheading” and constantly consulting chatbots may be an exaggeration, but it reflects a genuine cultural difference. The Bay Area, with its concentration of tech companies and early adopters, often exists in a self-reinforcing echo chamber, where the latest trends and technologies are amplified and hyped.

Are “Restrictive IT Policies” Holding Us Back?

Roose laments that some companies’ “restrictive IT policies” might prevent employees from fully embracing AI tools. But is this necessarily a bad thing? Perhaps these policies are in place to protect sensitive data, maintain security, or simply ensure that employees are focused on their core responsibilities. Not every company needs a “claudeswarm” to thrive.

This raises a bigger question: are we blindly accepting the gospel of AI without considering the potential downsides? Are we sacrificing critical thinking, creativity, and human connection at the altar of efficiency and automation? Or will the relentless AI drumbeat lead to innovation fatigue and a much-needed reality check?