Anthropic is at the center of a new dispute in the global race for artificial intelligence. The US company accuses three Chinese AI labs of having used its Claude model on a large scale to selectively improve their own systems. The allegations involve more than 24,000 fake accounts and over 16 million interactions. These accusations come amid intense debate about US export controls on advanced AI chips. The case therefore touches not only on economic interests, but also on geopolitical strategies and national security issues.
The international race for high-performance AI models has accelerated significantly in recent years. Companies are investing billions in computing power, training data, and security mechanisms. Models like Claude are considered so-called frontier systems, meaning technologically leading platforms with complex capabilities in logical reasoning, coding, and tool utilization.
Distillation is a well-known process in this field. It is used to make high-performance models more efficient and compact. However, the method becomes problematic when it is used not to optimize one's own systems, but to replicate competing models. Anthropic is now leveling precisely this accusation against several Chinese companies.
The allegations against DeepSeek, Moonshot AI and MiniMax
According to Anthropic, DeepSeek, Moonshot AI, and MiniMax allegedly set up more than 24,000 fake accounts to systematically access Claude. These accounts generated a total of more than 16 million interactions. The goal was to extract the model's core capabilities using distillation.
Anthropic explained that the labs had specifically targeted agent-based reasoning, tool use, and coding. The aim was therefore not isolated testing, but the targeted analysis and replication of complex core functions.
The extent of the activities varied by company. According to Anthropic, DeepSeek conducted more than 150,000 interactions, primarily aimed at improving the fundamental logic and alignment of its own model. A particular focus was apparently on uncensored responses to politically sensitive queries.
DeepSeek had already garnered attention a year earlier when the company released its open-source reasoning model R1. That model achieved performance nearly on par with leading American labs, but at a fraction of the cost. DeepSeek is now expected to introduce another model, V4, reportedly capable of outperforming Claude and ChatGPT at coding.
Moonshot AI reportedly conducted more than 3.4 million interactions with Claude. The focus was on agent-based reasoning, tool use, coding, and data analysis. The development of computer-use agents and computer vision capabilities also played a role. Just last month, Moonshot AI released the open-source model Kimi K2.5 as well as its own coding agent.
MiniMax is associated with approximately 13 million interactions. According to Anthropic, these activities focused on agent-based coding as well as the deployment and orchestration of tools. Notably, MiniMax redirected almost half of its traffic to a newly introduced Claude version as soon as it became available, in order to adopt its features.
Distillation as a point of contention in the AI competition
Distillation is a common method in AI development. A high-performing model serves as a reference, and its output is used to train a smaller or more efficient model. Within a company, this is a legitimate process.
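The core idea can be illustrated with a minimal sketch. This is not Anthropic's, DeepSeek's, or anyone's actual pipeline; it simply shows the standard textbook loss behind distillation, where a student model is trained to match the teacher's softened output distribution. All names and values here are illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by the temperature, then normalize to probabilities.
    # Higher temperatures "soften" the distribution, exposing more of
    # the teacher's relative preferences between classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution (p)
    # and the student's (q). Training the student means adjusting its
    # parameters to drive this value toward zero.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs zero loss; a mismatched
# student incurs a positive loss that gradient descent would reduce.
teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])
mismatched = distillation_loss(teacher, [0.2, 1.0, 3.0])
```

In practice the teacher's outputs are collected at scale, which is why the alleged mass querying of Claude, 16 million interactions across thousands of accounts, maps onto this technique: the queries supply the teacher signal that a student model can then be trained against.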
In competition between different companies, however, distillation can be used to replicate other companies' technologies. In this context, OpenAI also sent a memo to members of the House of Representatives earlier this month. In it, DeepSeek was accused of using distillation to make its own products resemble leading US models.
Anthropic's recent allegations are thus intensifying a debate that had already gained momentum.
Connection to US export controls for AI chips
The allegations come at a time of political debate over the export of advanced AI chips to China. The aim of previous controls was to slow China's AI development by restricting access to high-performance hardware.
Last month, the Trump administration officially allowed US companies like Nvidia to export advanced AI chips such as the H200 to China. Critics see this as a relaxation of restrictions that could strengthen China's computing power at a crucial stage in global competition.
Anthropic argues that the extent of observed distillation activity presupposes access to advanced chips. The company blog states that distillation attacks would actually strengthen the justification for export controls. Restricted access to chips not only limits direct model training but also the potential scale of illegal distillation.
Security concerns and geopolitical consequences
In addition to economic issues, Anthropic also emphasizes security risks. The company and other US AI firms are developing safeguards to prevent their models from being misused for the development of biological weapons or for malicious cyber activities.
Models produced through illegal distillation likely lack comparable safety features. This could lead to the spread of dangerous capabilities without appropriate safeguards.
Anthropic also points out that authoritarian governments could use AI for offensive cyber operations, disinformation campaigns, and mass surveillance. This risk increases particularly when such models are released as open source.
Anthropic and the geopolitical dimension of AI competition
The conflict surrounding Anthropic and the alleged distillation of Claude illustrates how closely technological innovation, economic competition, and geopolitical interests are now intertwined. The accusations against DeepSeek, Moonshot AI, and MiniMax not only concern potential violations of terms of service but also raise fundamental questions about intellectual property, security standards, and the global balance of power.
At the same time, the case intensifies the debate about export controls for AI chips and the strategic handling of advanced computing power. Anthropic calls for a coordinated response from the entire AI industry, cloud providers, and policymakers. The outcome of this discussion is likely to have far-reaching consequences for the future development and regulation of AI systems.