
Anthropic's Claude Code Source Code Leaked and Here's What It Shows


On the morning of March 31, 2026, security researcher Chaofan Shou posted a brief message to X: "Claude code source code has been leaked via a map file in their npm registry." Within hours, the complete TypeScript source of Anthropic's Claude Code CLI had been archived to a public GitHub repository, where it accumulated over 1,100 stars and 1,900 forks before the day was out.


Anthropic had not been hacked. A source map file — a debugging artifact that maps minified production code back to its original source — had been accidentally bundled into the published npm package. The Bun runtime that Claude Code uses generates these files by default. Someone had forgotten to add *.map to the project's .npmignore. The result was that anyone who knew where to look could download a ZIP archive of the complete, unobfuscated codebase directly from Anthropic's own R2 cloud storage bucket.


The exposed code totals approximately 512,000 lines across roughly 1,900 files, written in strict TypeScript. Anthropic had not commented publicly at the time of writing.


How the Claude Code Source Code Leaked


Source maps exist for a practical reason: when production JavaScript crashes, the error trace points to minified code that's nearly impossible to read. Source maps let developers trace that crash back to the original line in the original file. They belong in development environments. Including them in a production npm package is the kind of oversight that occasionally happens on small projects; doing it on a widely used commercial tool is more consequential.
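The link between a minified bundle and its map is just a trailing comment in the bundle; if the referenced .map file ships alongside it, its "sourcesContent" field carries the original files verbatim. A minimal sketch of detecting that pointer (the function name is mine, not from the leaked code):

```typescript
// A minified bundle references its source map via a trailing comment:
//
//   //# sourceMappingURL=cli.js.map
//
// If cli.js.map is published alongside the bundle, standard tooling can
// rebuild the original files and names from its "sourcesContent" field.

// Returns true when the last non-empty line of a bundle points at a map.
function hasSourceMapPointer(bundle: string): boolean {
  const lines = bundle.trimEnd().split("\n");
  const last = lines[lines.length - 1] ?? "";
  return /^\/\/# sourceMappingURL=/.test(last.trim());
}
```

A check like this, run against a package's published artifacts, is enough to flag the class of exposure described here.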


This is reportedly not the first time the issue has affected Anthropic; a similar source map exposure was patched in early 2025. The incident arrived on the same day that Axios, one of npm's most downloaded packages at a reported 83 million weekly downloads, was compromised through a hijacked maintainer account to deploy a cross-platform Remote Access Trojan. The coincidence underscored how much of the software supply chain flows through npm, and how differently things can go wrong: one through sophisticated credential theft, the other through a misconfigured build file.


The irony that multiple observers noted is that Anthropic had built a specific countermeasure inside Claude Code called "Undercover Mode," a subsystem designed to prevent the tool from accidentally revealing internal codenames in commits to public repositories. The system prompt injected during Undercover Mode explicitly instructs the model not to mention animal-named internal model codenames ("Capybara," "Tengu"), unreleased version numbers, or internal Slack channels and short links. That precaution was in place; a comparable check for stray source maps was not.


What’s Actually Inside


The leaked codebase is substantially more complex than Claude Code’s terminal interface implies. A few modules are large enough to be remarkable on their own: QueryEngine.ts runs to approximately 46,000 lines and handles all LLM API calls, streaming, caching, and multi-turn orchestration. Tool.ts runs to around 29,000 lines and defines all agent tool types along with their permission schemas. commands.ts registers roughly 85 slash commands.


The tool system itself comprises approximately 40 discrete capabilities, each permission-gated: file reads, bash execution, web fetches, LSP integration, and an AgentTool that handles sub-agent spawning. Read-only operations run concurrently; mutating operations run serially to avoid conflicts. The architecture reflects deliberate choices about when parallelism is safe.
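The read-parallel, write-serial rule can be sketched as a tiny scheduler. This is my own illustration of the policy, not the leaked implementation:

```typescript
type ToolCall<T> = {
  name: string;
  mutates: boolean;            // does this call change state?
  run: () => Promise<T>;
};

// Run read-only calls concurrently, then mutating calls one at a time,
// so writes can never race each other or interleave with themselves.
async function schedule<T>(calls: ToolCall<T>[]): Promise<T[]> {
  const reads = calls.filter((c) => !c.mutates);
  const writes = calls.filter((c) => c.mutates);

  const readResults = await Promise.all(reads.map((c) => c.run()));

  const writeResults: T[] = [];
  for (const c of writes) {
    writeResults.push(await c.run()); // strictly sequential
  }
  return [...readResults, ...writeResults];
}
```

The split mirrors the trade-off in the article: parallelism where it is free (reads), serialization where conflicts are possible (writes).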


The multi-agent system is one of the more significant findings. The source reveals three distinct execution models for subagents: a fork model that creates a byte-identical copy of the parent context and hits the API’s prompt cache, a teammate model that communicates via file-based mailbox across terminal panes, and a worktree model that assigns each agent its own isolated git branch. The fork model’s relationship to prompt caching has practical cost implications: because spawned agents share the parent context, running several in parallel costs roughly the same as running one sequentially.


The hook system exposes over 25 lifecycle events — PreToolUse, PostToolUse, UserPromptSubmit, SessionStart, SessionEnd, and more — across five hook types including shell commands, LLM-injected context, full agent verification loops, HTTP webhooks, and JavaScript functions. This is an extension API that Anthropic has not prominently advertised.
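The event names below come from the article's list; the registry around them is my own sketch of how lifecycle hooks of this shape could be wired, not Anthropic's actual API:

```typescript
// Lifecycle event names reported in the leaked source (a subset of 25+).
type HookEvent =
  | "PreToolUse"
  | "PostToolUse"
  | "UserPromptSubmit"
  | "SessionStart"
  | "SessionEnd";

type HookHandler = (payload: Record<string, unknown>) => void;

const hooks = new Map<HookEvent, HookHandler[]>();

// Register a handler for one lifecycle event.
function onHook(event: HookEvent, handler: HookHandler): void {
  const list = hooks.get(event) ?? [];
  list.push(handler);
  hooks.set(event, list);
}

// Fire every handler registered for an event, in registration order.
function fireHook(event: HookEvent, payload: Record<string, unknown>): void {
  for (const handler of hooks.get(event) ?? []) handler(payload);
}
```

A PreToolUse handler, for instance, is where a shell-command or webhook hook type would get its chance to inspect an action before it runs.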


The codebase also includes CLAUDE.md handling that re-reads the file on every query iteration, not just at session start. The file supports a hierarchy: a global preferences file at ~/.claude/CLAUDE.md, a project-level file, modular rules under .claude/rules/*.md, and a gitignored local notes file. The character limit is 40,000.
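The four layers named above resolve to a fixed candidate order; whichever files exist get loaded. The resolver below is my sketch of that hierarchy, and the local-notes filename is my assumption, not a detail from the article:

```typescript
// Candidate CLAUDE.md locations in the order the article describes them.
// Filenames for the rules modules and local notes are illustrative.
function claudeMdCandidates(
  home: string,
  projectRoot: string,
  ruleFiles: string[],
): string[] {
  return [
    `${home}/.claude/CLAUDE.md`,      // global preferences
    `${projectRoot}/CLAUDE.md`,       // project-level file
    ...ruleFiles,                     // modular rules under .claude/rules/*.md
    `${projectRoot}/CLAUDE.local.md`, // gitignored local notes (name assumed)
  ];
}
```

Because the file is re-read on every query iteration rather than once per session, edits to any layer take effect mid-conversation.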


On the stranger end of the findings: the source includes a complete Tamagotchi-style companion system called “Buddy,” with species rarity, shiny variants, procedurally generated stats, and a soul description written by the model on first hatch. It lives in a buddy/ directory and is gated behind a compile-time feature flag.


The Broader Security Context


The leak is Anthropic’s second significant exposure in less than a week. On March 26, a CMS configuration error revealed details about an unreleased model called "Claude Mythos," described in draft blog posts as a compute-intensive system with advanced reasoning, intended initially for enterprise security teams. Both incidents trace to configuration errors, not attacks.


That distinction matters, but it doesn’t fully resolve the question it raises. Claude Code is a tool that requests access to filesystems, terminals, and entire codebases. It runs commands. It manages git workflows. The case for trusting it with that access rests in part on confidence in Anthropic’s operational practices. Two configuration-level exposures in five days — one of them a repeat of a previously patched error — give reasonable users reason to think more carefully about that.


The security community’s reaction has been split. Some have argued the exposure is relatively benign: the source code of the client application doesn’t reveal the underlying model weights or Anthropic’s training procedures, and competitors were presumably already aware of the general architecture. Others have pointed out that the exposed code covers internal API client logic, OAuth 2.0 authentication flows, permission enforcement, and undisclosed feature pipelines — information with more potential value than a client-side codebase might typically contain.


What It Means for Users


The most immediate practical lesson is about CLAUDE.md. According to the source, the file is parsed on every single query iteration. Users who have left theirs empty, or written only minimal notes, are leaving the tool’s most direct configuration surface largely unused. The 40,000-character limit is substantial; most users are reportedly using a small fraction of it.


The permission configuration findings point in a similar direction. Every click on an "allow this action?" dialog is a unit of friction the tool was built to eliminate through pre-configured glob patterns. The source reveals a five-level settings cascade and an "auto" mode that runs an LLM classifier on each action, racing multiple resolvers in parallel to approve or deny.
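Pre-approving actions with glob patterns reduces to matching each proposed action against an allowlist. The matcher below is a deliberate simplification, and the pattern syntax is my assumption; the real settings cascade may support a richer language:

```typescript
// Matches a proposed action like "Bash(npm run build)" against allowlist
// entries where "*" stands for any sequence, e.g. "Bash(npm run *)".
function isPreApproved(action: string, allowlist: string[]): boolean {
  return allowlist.some((pattern) => {
    // Escape regex metacharacters, then turn each "*" into ".*".
    const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
    const regex = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
    return regex.test(action);
  });
}
```

Every action the allowlist covers is one fewer "allow this action?" dialog; actions it does not cover fall through to the interactive prompt or, per the article, an LLM classifier in "auto" mode.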


The session persistence system deserves attention. Every conversation is saved as JSONL and supports resumption with --continue or --resume, with session memory that extracts task specs, file lists, errors, and workflow state across compactions. The architecture explicitly preserves context across sessions.
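Append-only JSONL makes resumption cheap: each line is one event, and replaying a session is line-by-line parsing. A sketch of the read side, where the record shape is my guess rather than the leaked schema:

```typescript
type SessionEvent = { role: string; content: string };

// Parse a JSONL transcript: one JSON object per line, blank lines skipped.
// Malformed lines are dropped so a partially corrupt file still resumes.
function parseSession(jsonl: string): SessionEvent[] {
  const events: SessionEvent[] = [];
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue;
    try {
      events.push(JSON.parse(line) as SessionEvent);
    } catch {
      // Skip the corrupt line rather than aborting the resume.
    }
  }
  return events;
}
```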


The compaction system reveals five separate strategies for managing context pressure: time-based clearing of old tool results, conversation summarization, session memory extraction, full history summarization, and oldest-message truncation. The source suggests this is a central engineering concern, not a peripheral one.
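The simplest of the five strategies, oldest-message truncation, fits in a few lines. The token estimate here is a crude characters-per-token heuristic of my own, not Anthropic's counter:

```typescript
type Message = { role: string; content: string };

// Rough token estimate: ~4 characters per token is a common heuristic.
const estimateTokens = (m: Message): number => Math.ceil(m.content.length / 4);

// Drop the oldest messages until the transcript fits the budget,
// always keeping at least the most recent message.
function truncateOldest(messages: Message[], budget: number): Message[] {
  const kept = [...messages];
  let total = kept.reduce((sum, m) => sum + estimateTokens(m), 0);
  while (kept.length > 1 && total > budget) {
    total -= estimateTokens(kept.shift()!);
  }
  return kept;
}
```

The other four strategies in the list are progressively smarter versions of the same move: spend work summarizing or extracting before resorting to dropping.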


The Build Configuration Problem


Beyond what the code reveals about Claude Code’s capabilities, the leak is a reminder about a persistent class of error in software publishing. npm packages have exposed sensitive content through source maps before — in documented cases, hardcoded API keys have appeared in production source maps. The standard safeguards are well-known: running npm pack --dry-run before publishing to audit exactly what goes into the package, maintaining an explicit whitelist in package.json’s files field, and building automated CI/CD checks that catch stray .map files before they reach the registry.
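The CI/CD check described above reduces to one rule: fail the publish if any debugging artifact would land in the tarball. A sketch of that gate; in real use the file list would come from `npm pack --dry-run --json`, and here it is simply passed in:

```typescript
// Return any files that should never ship in a published npm package.
// In CI, feed this the file list reported by `npm pack --dry-run --json`
// and fail the pipeline if the result is non-empty.
function findForbiddenFiles(packedFiles: string[]): string[] {
  const forbidden = [/\.map$/, /\.env$/, /\.pem$/]; // extend as needed
  return packedFiles.filter((f) => forbidden.some((rx) => rx.test(f)));
}
```

Paired with an explicit whitelist in package.json's files field, this turns the failure mode from "anyone forgot .npmignore" into "two independent safeguards both failed."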


For Anthropic, the question is partly reputational and partly procedural. A tool trusted with deep access to developer environments should arguably have the most rigorous publishing pipeline. The same tool had this problem before, patched it, and then shipped the same error again in a later version.


At time of writing, the archived repository had continued to accumulate forks and stars as developers worked through the codebase. Anthropic had not issued a statement.

 
 
 
© 2026 by David Borish IP, LLC, All Rights Reserved
