Anthropic accuses Chinese AI labs of orchestrating a large-scale effort to replicate its Claude AI model through a process known as distillation.
The San Francisco-based startup claims that three firms, DeepSeek, Moonshot AI, and MiniMax, created more than 24,000 fake accounts to extract advanced outputs from Claude, carrying out millions of interactions with the model.
At the center of Anthropic's allegations is a technique called “distillation.” In AI development, distillation involves collecting outputs from a powerful system and using them as training data for another model, producing comparable capabilities at a far lower cost.
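To make the concept concrete, the idea can be sketched with a toy example: a hypothetical "teacher" model is queried to build a synthetic dataset, and a cheaper "student" is then fitted to imitate it. All names and the linear model below are illustrative assumptions, not code from Anthropic or any of the labs named.

```python
# Toy sketch of distillation: harvest a teacher's outputs, then train
# a cheaper student on those input/output pairs. Purely illustrative.

def teacher(x: float) -> float:
    # Stand-in for an expensive, capable model (here just a fixed function).
    return 3.0 * x + 1.0

# Step 1: query the teacher at scale to build a synthetic training set.
inputs = [i / 10 for i in range(100)]
targets = [teacher(x) for x in inputs]

# Step 2: fit a simple student (a linear model, via least squares)
# to reproduce the teacher's behavior from the harvested pairs.
n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(targets) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, targets)) \
        / sum((x - mean_x) ** 2 for x in inputs)
intercept = mean_y - slope * mean_x

def student(x: float) -> float:
    # The student now mimics the teacher without access to its internals.
    return slope * x + intercept
```

In practice the teacher is a frontier language model and the student is a neural network trained on its text outputs, but the pattern is the same: the student never sees the teacher's weights, only its answers.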
Anthropic claims MiniMax focused heavily on agentic coding and tool management. Moonshot AI allegedly targeted reasoning and computer vision tasks, while DeepSeek worked on improving logical structures and bypassing safeguards.
The company argues that large-scale distillation requires significant computing power, linking the issue to ongoing debates over U.S. export restrictions on advanced AI chips.
Anthropic also raised national security concerns, stating that U.S.-developed models often include safeguards to prevent harmful uses. Distilled systems, it warned, may not carry equivalent protections.
The startup has called for industry-wide coordination, urging policymakers, cloud providers, and AI developers to strengthen safeguards against misuse.