Anthropic has accused Chinese AI companies DeepSeek, Moonshot AI, and MiniMax of using fraudulent accounts to extract data from its Claude models through "distillation" techniques, raising national security concerns about AI model theft.
Anthropic, the US-based AI company behind the Claude models, has accused Chinese AI labs of conducting "industrial-scale campaigns" to siphon knowledge from its systems through a technique known as "distillation." The company claims that DeepSeek, Moonshot AI, and MiniMax probed Claude models through networks of roughly 24,000 fraudulent accounts, generating more than 16 million exchanges.
Model distillation is a deep-learning technique in which a large "teacher" model transfers its learned patterns to a smaller "student" model. This form of model compression ideally yields a more efficient model without significant performance loss. While useful for explainable AI and for making black-box algorithms more transparent, it is also a method for copying models.
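For a rough idea of the mechanics, here is a minimal PyTorch sketch of the classic soft-label distillation recipe, in which the student is trained to match the teacher's temperature-softened output distribution. The toy model sizes, temperature, and mixing weight are illustrative assumptions; the API-based extraction Anthropic alleges is cruder, since a rival lab without access to a model's internal logits can only fine-tune a student on the teacher's generated text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a larger "teacher" network and a smaller "student".
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions, scaled by T^2 so its gradient
    # magnitude stays comparable to the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(64, 32)               # dummy input batch
labels = torch.randint(0, 10, (64,))  # dummy ground-truth labels

with torch.no_grad():                 # the teacher is frozen
    teacher_logits = teacher(x)

optimizer.zero_grad()
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
optimizer.step()
```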
This accusation comes from a company that has itself faced multiple copyright infringement lawsuits. Anthropic has been sued by music publishers, authors, and Reddit, among others, for alleged copyright violations and unauthorized web scraping during its own model training. The irony of a company that built its business by remixing others' content now complaining when the same is done to it has not gone unnoticed.
The company refers to the distributed infrastructure used for the alleged distillation as "hydra clusters." Critics note that the menacing multi-headed mythological reference may be overstated, since the underlying technology appears similar to commercial proxy services already on the market.
Anthropic's concerns extend beyond intellectual property theft. The company argues that distilled models from foreign AI labs could enable authoritarian regimes to mount cyberattacks, disinformation campaigns, and mass surveillance. It warns that if these distilled models are open-sourced, the risks multiply, as dangerous capabilities spread freely beyond any single government's control.
This isn't an isolated concern. Just two weeks ago, Anthropic's arch-competitor OpenAI sent a memo to the US House Select Committee on China warning about similar activities. OpenAI specifically cited DeepSeek, noting that its models "lack meaningful guardrails against dangerous outputs in high-risk domains like chemistry and biology, or offer limited protections for copyrighted material."
Both companies are framing this as a national security issue. Anthropic argues that US companies build systems that prevent state and non-state actors from using AI to develop bioweapons or carry out malicious cyber activities, while models built through illicit distillation are unlikely to retain those safeguards.
The timing is notable: the fifth-wave report from the Forecasting Research Institute's Longitudinal Expert AI Panel (LEAP) predicts that the performance gap between US and Chinese AI models will narrow by 2031, with parity anticipated by 2041. This suggests the competitive pressure driving these accusations may only intensify in the coming years.
DeepSeek, Moonshot AI, and MiniMax did not immediately respond to requests for comment on these allegations. The accusations highlight the growing tensions in the global AI race, where companies that have faced their own legal challenges over data usage are now positioning themselves as defenders of ethical AI development against foreign competitors.
The situation raises complex questions about the nature of AI development, intellectual property in the age of machine learning, and the geopolitical implications of technological advancement. As courts continue to grapple with whether training AI models on copyrighted material without consent violates the law, the industry appears to be moving toward a more adversarial stance on international competition and model security.
