Claude Code leak: Anthropic cites human error, works to limit damage
Thousands of copies of the code were removed from GitHub in response to copyright takedown requests from Anthropic, according to a notice on the popular developer platform
Bloomberg — Anthropic PBC is rushing to address the inadvertent release of internal source code behind Claude Code, an AI-powered assistant that has become a key moneymaker for the company.
Anthropic later said the takedown impacted more GitHub repositories than intended and has since been significantly scaled back.
The artificial intelligence startup is also taking steps to tweak its internal systems to prevent a similar leak from happening again, including by improving its automation process.
In a series of posts overnight on X, Claude Code creator Boris Cherny said Anthropic’s “deploy process has a few manual steps, and we didn’t do one of the steps correctly.” He said the company has already “made a few improvements to the automation for next time,” with plans for “a couple more on the way.”
The accidental release marked Anthropic’s second security slip-up in a matter of days, compromising approximately 1,900 files and 512,000 lines of code related to Claude Code. Last week, Fortune separately reported that Anthropic had been storing thousands of internal files on a publicly accessible system, including a draft blog post that detailed an upcoming model known internally as both “Mythos” and “Capybara.”
The exposures hit at a delicate moment for the company. Anthropic is currently in a legal battle with the US government over the Pentagon’s decision to declare it a supply-chain risk following a standoff over AI safety guardrails. The company has warned that the labeling could cost it billions in lost revenue.
At the same time, Anthropic has seen significant user and revenue growth in recent months, in part thanks to traction from Claude Code – a tool that’s meant to help streamline the process of writing and debugging software. Claude Code’s run-rate revenue topped $2.5 billion as of February, the company said, a year after its release. Those gains are key to the company’s ambitions to go public as soon as this year.
In a statement Tuesday, Anthropic confirmed the leak and said “no sensitive customer data or credentials were involved or exposed.” The company added: “This was a release packaging issue caused by human error, not a security breach.”
The issue first came to light in a post on the social media platform X that purported to share a link to the code and garnered more than 30 million views. The leak has touched off thousands of posts online by people saying they’ve scoured the code.
Some have claimed they’ve unearthed yet-to-be-released features, including an always-on AI agent named Kairos that fields tasks proactively, as well as a system for tracking instances in which users express frustration and use profanities.
Cherny said the company is “always experimenting with new ideas,” most of which don’t end up getting released. He said Anthropic remains “on the fence” about the Kairos feature, in particular. As for the tracking system, he said it’s “one of the signals we use to figure out if people are having a good experience.”
Beyond offering hints of future releases, the leak also risks giving bad actors “useful insight into internals, workflows and likely abuse paths,” cybersecurity firm Tanium said in a blog post.
Malicious actors will study the code to determine such things as how the tool handles local files, what data it may access during normal operation and how guardrails are implemented, the firm said.