Claude Code Source Leak Exposed More Than a Packaging Mistake
When Anthropic accidentally shipped the full source code of Claude Code, the first reaction centered on the mistake itself. A company known for emphasizing safety had made a basic packaging error. But once security researchers and journalists began digging through the exposed TypeScript codebase, the story became much bigger.
What surfaced was a clearer view of how Claude Code operates on a developer’s machine and how much control it appears to maintain while running. The exposed code suggests that Claude Code can observe, store, and communicate far more than many developers likely assumed when giving it access to local files and terminal workflows.
How the Claude Code Leak Happened
Source map file exposed the codebase
The leak happened on March 31, when a source map file was mistakenly bundled into Claude Code version 2.1.88 on npm. Source maps can embed the original source files (via the `sourcesContent` field) so that bundled JavaScript can be traced back to the code it was compiled from, which is why a single stray map file can expose an entire codebase. Security researcher Chaofan Shou first spotted the issue, and the discovery quickly drew widespread attention.
Anthropic said the exposure resulted from human error rather than a security breach. The company also stated that no customer data or credentials were involved.
The code is still accessible through mirrors and forks
Although Anthropic patched the npm package and removed earlier versions from the registry, the exposed code did not disappear. The codebase, made up of roughly 512,000 lines across about 1,900 TypeScript files, has already been mirrored across public GitHub repositories. That means the package contents remain available for ongoing review even after the original issue was addressed.
What the Exposed Claude Code Source Reveals
Local logging captures files, commands, and edits
One of the clearest takeaways from the exposed code is the amount of activity Claude Code logs while it is running. Reports based on the code indicate that every file Claude reads, every bash command it executes, and every edit it makes is recorded locally in plaintext JSONL files.
Session transcripts are also referenced across the tool’s memory and “dream” consolidation systems. That detail matters because it shows the software is not just performing tasks in the moment. It is also organizing and retaining traces of those interactions in structured ways.
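To make the logging claim concrete: a plaintext JSONL log is simply one JSON object per line, which is what makes it trivially readable by anyone (or any process) with filesystem access. The sketch below shows how such a log could be parsed; the event fields and values are invented for illustration and are not the schema from the leaked code.

```typescript
// Illustrative only: field names and values are invented, not taken
// from the leaked Claude Code source.
interface SessionEvent {
  type: "file_read" | "bash_command" | "file_edit";
  timestamp: string;
  detail: string;
}

// JSONL is one JSON object per line, so auditing a log is a
// split-and-parse over the raw text.
function parseSessionLog(jsonl: string): SessionEvent[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SessionEvent);
}

const sample = [
  '{"type":"file_read","timestamp":"2025-01-01T10:00:00Z","detail":"src/index.ts"}',
  '{"type":"bash_command","timestamp":"2025-01-01T10:00:05Z","detail":"npm test"}',
].join("\n");

const events = parseSessionLog(sample);
console.log(events.length); // 2
```

Because nothing in this format is encrypted or obfuscated, retention is effectively limited only by how long the files sit on disk.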
Data retention extends beyond the local machine
Reporting on the leak noted that Anthropic retains this data for up to five years for consumer users who have the “Help improve Claude” setting enabled. For developers, that changes the privacy conversation in a very practical way. If a tool can inspect files, run commands, and edit code, retention policies become more than a settings-page footnote.
Hidden Telemetry and Remote Control Features
Claude Code polls Anthropic’s servers every hour
The leaked code reportedly shows persistent telemetry that checks in with Anthropic’s servers every hour. This is not a one-time setup event or a passive background detail. It suggests an ongoing communication loop between running instances of Claude Code and Anthropic’s infrastructure.
That alone is significant for developers working in environments where local tooling is expected to stay local unless an update or sync is explicitly initiated.
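The structure of such a loop is worth seeing, because it clarifies what "persistent telemetry" means architecturally. The sketch below is an assumption-laden illustration, not the leaked implementation: the endpoint, payload, and control flow are all invented.

```typescript
// Hypothetical sketch of an hourly check-in loop. The endpoint and
// control flow are assumptions for illustration, not details recovered
// from the leaked code.
const CHECK_IN_INTERVAL_MS = 60 * 60 * 1000; // one hour

// Pure helper: has at least an hour passed since the last check-in?
function checkInDue(lastCheckInMs: number, nowMs: number): boolean {
  return nowMs - lastCheckInMs >= CHECK_IN_INTERVAL_MS;
}

// A persistent loop like this is what turns a "local" tool into a
// client with an ongoing server relationship.
function startTelemetryLoop(endpoint: string): ReturnType<typeof setInterval> {
  return setInterval(() => {
    fetch(endpoint, { method: "POST" }).catch(() => {
      // Network failures are swallowed; the loop keeps running.
    });
  }, CHECK_IN_INTERVAL_MS);
}
```

The key property is that the loop runs for the lifetime of the process, independent of anything the user does.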
Feature flags can change behavior without a user update
The exposed code also indicates that configuration changes can be pushed to running instances through feature flags, without requiring a user-initiated update. In plain terms, behavior may be altered remotely even if the user has not chosen to install a new version.
For developers who assume local software behavior is mostly fixed between updates, that is a meaningful architectural detail.
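The mechanics of remote flag overrides are simple, which is part of why they matter. A minimal sketch, with flag names invented for illustration (the leaked flags were not published under these identifiers):

```typescript
// Invented flag names for illustration; not identifiers from the
// leaked Claude Code source.
interface Flags {
  analyticsEnabled: boolean;
  bypassPermissionPrompts: boolean;
}

const localDefaults: Flags = {
  analyticsEnabled: false,
  bypassPermissionPrompts: false,
};

// Remote values override local defaults wholesale. No reinstall,
// restart, or user action is required for behavior to change.
function applyRemoteFlags(current: Flags, remote: Partial<Flags>): Flags {
  return { ...current, ...remote };
}

const active = applyRemoteFlags(localDefaults, { analyticsEnabled: true });
console.log(active.analyticsEnabled); // true
```

In a design like this, the version number on disk no longer fully describes how the software behaves.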
Remote killswitches can force actions or shut the app down
Researchers reportedly found six or more remote killswitches capable of forcing specific behaviors. These include bypassing permission prompts, toggling analytics, and in some cases making the application exit entirely.
An auto-update mechanism also allows Anthropic to remotely enable or disable specific versions. Taken together, these capabilities point to a model in which control does not fully stop at the edge of the user’s machine.
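Conceptually, a killswitch layer is a dispatch table mapping server-issued signals to local actions, up to and including process exit. The sketch below uses invented signal names and an injected exit callback purely to illustrate the shape of such a mechanism; none of it is quoted from the leaked code.

```typescript
// Hypothetical killswitch dispatch. Signal names are invented for
// illustration; the exit callback is injected so the behavior can be
// demonstrated without terminating the process.
type Killswitch = "disable_analytics" | "bypass_permissions" | "force_exit";

function handleKillswitch(
  signal: Killswitch,
  exit: (code: number) => void
): string {
  switch (signal) {
    case "disable_analytics":
      return "analytics off";
    case "bypass_permissions":
      return "permission prompts skipped";
    case "force_exit":
      // The server, not the user, decides the process ends.
      exit(0);
      return "exiting";
  }
}
```

The notable design property is the last case: the decision to terminate originates remotely and the local user has no veto.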
Hidden Features and Internal-Only Tools Found in the Leak
Unreleased feature flags point to internal capabilities
Among the exposed elements were 44 unreleased feature flags. These flags offered a glimpse into tools and functions that were not publicly available.
One of them, called DiscoverSkills, was described as an AI-powered skill search tool available only to Anthropic employees. That detail suggests the leaked code did not just expose standard product behavior. It also exposed parts of Anthropic’s internal working environment.
KAIROS suggests a more autonomous agent mode
The tool list also revealed KAIROS, described as a persistent autonomous agent mode. According to the reporting, it can proactively initiate actions and consolidate its own memory while a user is idle.
That description stands out because it points to behavior that goes beyond reactive assistance. It suggests a system designed to continue operating, organizing, and acting even when the user is not actively directing it.
Undercover Mode raised obvious concerns
Another revealed feature, “Undercover Mode,” reportedly instructs Claude to hide all evidence of AI involvement when Anthropic employees use the tool to contribute to open-source projects.
That is the kind of detail that immediately shifts the conversation from product design to trust. For developers, open-source maintainers, and teams evaluating AI coding tools, a feature framed in those terms is hard to ignore.
Why This Leak Matters for Developers Who Use Claude Code
Filesystem and terminal access now look different
Claude Code has become one of the most widely adopted AI coding tools in the industry. That adoption makes the leak especially important. Developers often grant these tools broad permissions because that is what makes them useful: the ability to read and edit project files, and permission to execute terminal commands.
But the exposed architecture suggests those permissions exist alongside logging systems, telemetry, remote configuration controls, and shutdown mechanisms. One analysis described the setup as a “persistent remote control channel to Anthropic’s servers.” For anyone using the tool on sensitive codebases, that framing changes the risk calculation.
Trust depends on understanding the actual operating model
What this leak really did was remove ambiguity. It gave developers a more detailed picture of how Claude Code functions behind the interface. And when a tool can inspect files, store transcripts, phone home regularly, receive remote configuration updates, and trigger killswitch behavior, teams need to understand that before treating it like a simple local assistant.
That does not automatically answer whether the tool should or should not be used. But it does make one thing clear: the permission model and the control model are more expansive than many users likely realized.
Anthropic’s Broader Leak Problems Add More Pressure
This source code exposure came only days after another reported data exposure tied to a misconfigured content management system. That earlier incident reportedly exposed details about an unreleased model codenamed “Mythos” along with roughly 3,000 unpublished internal files.
Seen together, the incidents intensify scrutiny around Anthropic’s internal handling of sensitive systems and product information. The Claude Code leak would already be serious on its own. Arriving so soon after another exposure makes it harder to dismiss as an isolated mistake.

