Anthropic Exposes 16M Query Theft Campaign by Chinese AI Labs
Anthropic dropped a bombshell Tuesday, publicly naming three Chinese AI laboratories—DeepSeek, Moonshot, and MiniMax—as perpetrators of coordinated campaigns to steal Claude's capabilities through over 16 million fraudulent API exchanges.
The attacks used approximately 24,000 fake accounts to circumvent Anthropic's regional access restrictions and terms of service. One proxy network alone managed more than 20,000 simultaneous fraudulent accounts, mixing distillation traffic with legitimate requests to evade detection.
The Numbers Tell the Story
MiniMax led the assault with 13 million exchanges targeting agentic coding and tool orchestration. Moonshot followed with 3.4 million exchanges focused on computer-use agent development and reasoning capabilities. DeepSeek's campaign, while smaller at 150,000 exchanges, employed particularly sophisticated techniques—including prompts designed to make Claude articulate its internal reasoning step-by-step, essentially generating chain-of-thought training data on demand.
Anthropic traced several DeepSeek accounts directly to specific researchers at the lab through request metadata analysis.
Why This Matters Beyond Corporate Espionage
The timing here isn't coincidental. OpenAI publicly accused DeepSeek of distilling ChatGPT just three days earlier on February 21. Google's Threat Intelligence Group flagged increased distillation activity on February 16, including a campaign using over 100,000 prompts to replicate Gemini's reasoning abilities.
What makes this particularly concerning? Anthropic argues these attacks undermine U.S. export controls on advanced chips. Foreign labs can effectively bypass the need for independent innovation by extracting capabilities from American models, and running distillation at scale still depends on the very chips those controls restrict.
"Illicitly distilled models lack necessary safeguards," Anthropic warned, noting stripped-out protections could enable "offensive cyber operations, disinformation campaigns, and mass surveillance" by authoritarian governments.
The Hydra Problem
Anthropic described the infrastructure enabling these attacks as "hydra cluster" architectures—sprawling networks with no single point of failure. Ban one account, another spawns immediately. The proxy services reselling Claude access made detection exponentially harder by distributing traffic across Anthropic's API and third-party cloud platforms simultaneously.
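Anthropic hasn't published how it fights hydra clusters, but the obvious defensive move against ban-one-spawn-another infrastructure is to ban behaviorally identical accounts as a group rather than one at a time. A purely illustrative sketch (the feature choices and thresholds here are assumptions, not Anthropic's actual system) might reduce each account to a coarse behavioral fingerprint and flag fingerprint collisions:

```python
from collections import defaultdict

def fingerprint(requests):
    """Reduce an account's one-day request stream to a coarse behavioral
    fingerprint: (requests/hour, mean prompt length to the nearest 100,
    dominant endpoint). Features chosen for illustration only."""
    rate = round(len(requests) / 24)
    mean_len = round(sum(len(r["prompt"]) for r in requests) / len(requests), -2)
    endpoints = [r["endpoint"] for r in requests]
    dominant = max(set(endpoints), key=endpoints.count)
    return (rate, mean_len, dominant)

def cluster_accounts(traffic):
    """Group accounts whose fingerprints collide. Large groups of
    behaviorally identical accounts are candidate hydra clusters that
    can be actioned together instead of one ban at a time."""
    clusters = defaultdict(list)
    for account, requests in traffic.items():
        clusters[fingerprint(requests)].append(account)
    return [accounts for accounts in clusters.values() if len(accounts) > 1]
```

A real system would use far richer signals (timing, prompt embeddings, billing metadata) and fuzzy matching rather than exact fingerprint collisions, but the cluster-level response is the point: it removes the incentive to spawn near-identical replacement accounts.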
When Anthropic released a new Claude model during MiniMax's active campaign, the lab pivoted within 24 hours, redirecting nearly half its traffic to capture the latest capabilities. That kind of operational agility suggests these aren't opportunistic attacks but sustained, well-resourced operations.
Anthropic's Countermeasures
The company outlined several defensive measures: behavioral fingerprinting systems to detect distillation patterns, strengthened verification for educational and startup accounts (the most commonly exploited pathways), and model-level safeguards designed to degrade output quality when extraction is detected, without affecting legitimate users.
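Anthropic gives no detail on what its distillation detectors look for, but the report's description of the attacks suggests two plausible signals: prompts engineered to elicit step-by-step reasoning, and heavy reuse of a few prompt templates. A minimal, hypothetical scoring sketch (the marker phrases, weights, and 40-character "skeleton" heuristic are all assumptions for illustration):

```python
# Hypothetical phrases that often elicit chain-of-thought output; a real
# system would use learned classifiers, not a keyword list.
COT_MARKERS = ("step by step", "show your reasoning", "think aloud")

def distillation_score(prompts):
    """Score one account's prompt batch in [0, 1]; higher means the
    traffic looks more like systematic capability extraction."""
    if not prompts:
        return 0.0
    # Fraction of prompts that explicitly ask for visible reasoning.
    cot_frac = sum(any(m in p.lower() for m in COT_MARKERS)
                   for p in prompts) / len(prompts)
    # Template reuse: distillation pipelines tend to stamp out thousands
    # of prompts from a few skeletons; legitimate users vary more.
    skeletons = {p[:40] for p in prompts}
    reuse = 1.0 - len(skeletons) / len(prompts)
    return round(0.6 * cot_frac + 0.4 * reuse, 3)
```

A batch of 100 near-identical "explain step by step" prompts scores near 1.0 under this sketch, while a handful of varied everyday requests scores near 0. Production fingerprinting would fold in many more features, but the intuition carries over.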
Anthropic is sharing technical indicators with other AI labs, cloud providers, and government authorities. The message is clear: this requires industry-wide coordination.
For investors tracking AI infrastructure plays, this escalation adds another variable to the competitive landscape. Labs that can't defend their models risk watching their R&D investments walk out the door—16 million queries at a time.