Anthropic Claude Source Code Leak Exposes Company Secrets


I opened a thread on X and saw a plaintext .map file that had shipped inside a public npm package. You could almost hear teams across the internet lean forward. The code was raw, public, and impossible to ignore.

I watched programmers pull apart a 512,000-line .map file that turned parts of Claude Code inside out.

Chaofan Shou, posting as @Fried_rice on X, shared the discovery: a .map file in a public npm registry. For engineers, a source map is a decoder ring: it maps minified production code back to the original source, and it is meant for debugging, not distribution. What Anthropic published contained unobfuscated TypeScript tied to Claude Code v2.1.88.
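To see why a published .map file is sensitive, here is a minimal sketch of the standard Source Map v3 format. The file name and embedded TypeScript below are hypothetical illustrations, not content from the actual leak:

```python
import json

# A hypothetical Source Map v3 file, the format used by .map files on npm.
# The "sourcesContent" field embeds the original, unobfuscated source verbatim.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/tokenCounter.ts"],
  "sourcesContent": ["export function countTokens(text: string): number {\\n  return Math.ceil(text.length / 4);\\n}\\n"],
  "mappings": "AAAA"
}
""")

# Anyone who downloads the package can read the original source straight back out.
for path, content in zip(source_map["sources"], source_map["sourcesContent"]):
    print(f"--- {path} ---")
    print(content)
```

This is why a .map file next to minified JavaScript can turn a compiled release into readable TypeScript: the recovery step is a few lines of JSON parsing, no reverse engineering required.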

The dump exposed code for the agent’s API engine, its token-counting logic, and a lot of behavioral plumbing. Someone collected those finds and mirrored them to a GitHub repository, where forks and comments spread fast.

The situation was, in effect, like a surgeon’s notes left on a cafeteria table. People with context began parsing spinner verbs, prompt-handling quirks, and even a rumored April Fools’ “Tamagotchi” project.

Was Anthropic’s Claude Code source leaked?

Yes. Internal source files intended for debugging were packaged into a public release. Anthropic confirmed the release contained internal source code and called it a packaging error caused by human error.

Did the leak expose customer data or credentials?

Anthropic told reporters no sensitive customer data or credentials were exposed. The company described the incident as a release-packaging problem rather than a security breach and said it would change processes to prevent recurrence.

I noticed engineers at Anthropic were increasingly letting Claude Code write Claude Code.

Boris Cherny, who runs Claude Code, tweeted that in a recent month 100% of his contributions to the project were produced by the agent itself. You read that right: the tool was writing its own scaffolding.

That dynamic raises two pressures at once. One is practical: if the codebase is written by an assistant, small packaging slips can ripple widely. The other is cultural: teams trusting a model to author critical metadata and build scripts can accelerate mistakes.

This felt like a backstage pass left onstage: internal material that everyone in the audience could see and use.

I watched competitors and investors tilt their heads as Anthropic prepared for a public step.

Anthropic appears to be positioning for a public offering later this year, and timing matters. Rivals are already jockeying for enterprise mindshare: OpenAI has pushed into business products and expanded access to its coding tools, and other players are eyeing enterprise contracts and developer mindshare.

Leaks that reveal operational patterns—even if not the core model weights—give competitors tactical clues about product direction, integration points, and where Anthropic might be vulnerable in enterprise sales.

I listened as the community turned the leak into working documents overnight.

Programmers have dissected the files on GitHub and in public threads, extracting behavior triggers, token accounting methods, and UI strings. The code didn’t expose Anthropic’s underlying model, but it showed how the company stitches features together.

That transparency changes the conversation. Your engineering team can read the scaffolding they’ll compete against. Investors can infer technical maturity. Customers can test behavior assumptions in public forks.

I kept asking whether this was a one-off human mistake or a symptom of a larger process problem.

Anthropic called it human error and said it will implement measures to prevent recurrence. That’s plausible. But when a product’s designers increasingly rely on the product itself to produce code, human oversight needs to be rethought.

If you build with an assistant writing build files, you must control packaging channels and audit outputs before release. That’s simple in concept; messy in practice.
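One concrete form that audit can take is a pre-publish release gate that scans the staged package for debug artifacts before anything ships. The blocked patterns and directory layout below are illustrative assumptions, not Anthropic's actual process:

```python
import pathlib
import tempfile

# Hypothetical patterns for files that should never reach a public registry.
BLOCKED_PATTERNS = ("*.map", "*.ts.orig", ".env")

def audit_package(root: str) -> list[pathlib.Path]:
    """Return every staged file that matches a blocked debug/secret pattern."""
    root_path = pathlib.Path(root)
    offenders = []
    for pattern in BLOCKED_PATTERNS:
        offenders.extend(root_path.rglob(pattern))
    return sorted(offenders)

# Example: stage a fake package tree, then audit it.
stage = pathlib.Path(tempfile.mkdtemp())
(stage / "dist").mkdir()
(stage / "dist" / "cli.js").write_text("console.log('ok');")
(stage / "dist" / "cli.js.map").write_text("{}")  # the kind of file that leaked

found = audit_package(stage)
print("blocked files:", [p.name for p in found])
if found:
    print("release gate: FAIL")
```

Wired into CI, or into an npm `prepublishOnly` hook that inspects the output of `npm pack --dry-run`, a check like this catches the slip regardless of whether a human or an assistant wrote the build files.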

I watched the internet turn the incident into a live case study.

Solayer Lab’s intern Chaofan Shou found the file; community members flagged odd behaviors; people forked the repo. GitHub, npm, X, and public threads became the triage room.

For engineers and product teams, this is a reminder: access controls, CI checks, and release gatekeeping are not optional. For competitors, it’s a chance to study operational design without breaking into a vault.

I’ve told you how the leak happened, who found it, and what people are doing with the files. You can watch the mirrors and forks, read the threads by Boris Cherny and others, and judge whether Anthropic’s response will calm markets or raise more questions—so which will it be?