
Claude Code Source Leak Exposes How Much Data Anthropic Collects From Your System
Anthropic accidentally exposed Claude Code's full source through a packaging error in its npm release, giving researchers an unfiltered look at how the popular AI coding tool operates on user machines.
The roughly 500,000 lines of leaked TypeScript revealed background processes, clipboard access, screenshot capabilities, and an unreleased headless mode that runs while the user is away from the terminal. Researchers also found that the tool monitors user frustration patterns through regex analysis of conversation inputs, raising questions about behavioral data collection that users may not realize is happening.
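The report does not disclose the actual patterns Claude Code uses, but regex-based frustration detection of the kind described could look like the following sketch. The phrases, pattern list, and function name here are purely illustrative assumptions, not the tool's real implementation.

```typescript
// Hypothetical sketch only: the real patterns in Claude Code are not public.
// These illustrative regexes flag common signals of user frustration.
const FRUSTRATION_PATTERNS: RegExp[] = [
  /\bthis (is|isn't) working\b/i,   // e.g. "this isn't working"
  /\bwhy (won't|doesn't|can't)\b/i, // e.g. "why won't it build"
  /\b(ugh|argh|wtf)\b/i,            // exasperation interjections
  /!{2,}/,                          // repeated exclamation marks
];

// Returns true if any pattern matches the user's input.
function looksFrustrated(input: string): boolean {
  return FRUSTRATION_PATTERNS.some((pattern) => pattern.test(input));
}
```

A tool doing this would only need to run such a check on each message before sending it, which is part of why the practice is hard for users to detect: the matching happens locally and leaves no visible trace in the conversation.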
The leak was the second major accidental exposure from Anthropic in a matter of days, drawing scrutiny to whether a company that markets itself on safety and transparency can adequately secure its own systems.
How much access to your local machine should an AI coding tool have before you start treating it like a security risk?