
Anthropic says its Chinese competitors are stealing from the AI startup to gain an edge in the global AI race.

On Monday, Anthropic said three of China’s biggest AI labs, DeepSeek, MiniMax, and Moonshot AI, were “illicitly” using Claude “to improve their own models,” through a process known as distillation.

“These campaigns are growing in intensity and sophistication,” Anthropic said as part of its lengthy statement on Monday. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community.”

Anthropic said the distillation efforts were “industrial-scale campaigns” that included roughly 24,000 fraudulent Claude accounts that generated over 16 million exchanges “in violation of our terms of service and regional access restrictions.”

Distillation is the process of training a less powerful model on the outputs of a more powerful model. It is a legitimate technique that many US companies use to train their own models for public release. Increasingly, though, major US companies are accusing their Chinese competitors of improperly using the practice to steal their work.
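The core idea can be sketched in a few lines. The toy example below is purely illustrative (it has nothing to do with Anthropic's or any lab's actual systems): a cheap "student" model is fit to the outputs of an expensive "teacher" model, so the student acquires the teacher's behavior without ever seeing its weights or original training data.

```python
# Toy sketch of distillation: fit a cheap student to a teacher's outputs.
# All names here (teacher, student, prompts) are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for a large, expensive model: some nonlinear function.
    return np.sin(x) + 0.5 * x

# Step 1: query the teacher on many inputs to build a synthetic dataset.
prompts = rng.uniform(-3, 3, size=1000)
teacher_outputs = teacher(prompts)

# Step 2: train a cheap student (here, a degree-5 polynomial) on those
# input/output pairs. The student only ever sees the teacher's responses.
coeffs = np.polyfit(prompts, teacher_outputs, deg=5)
student = np.poly1d(coeffs)

# Step 3: the student now approximates the teacher at far lower cost.
test_x = np.linspace(-3, 3, 100)
error = np.max(np.abs(student(test_x) - teacher(test_x)))
print(f"max student-teacher gap: {error:.3f}")
```

In real LLM distillation the "outputs" are generated text or token probabilities rather than numbers, and the student is trained with gradient descent rather than a polynomial fit, but the structure is the same: the teacher's responses become the student's training data.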

In January 2025, OpenAI said DeepSeek may have “inappropriately” used OpenAI’s outputs to train its models. Earlier this month, Google disclosed it had “identified an increase in model extraction attempts or ‘distillation attacks.'”

“Competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently,” Anthropic said on Monday.

Anthropic disclosed remarkable detail about the extent to which DeepSeek, MiniMax, and Moonshot AI “illicitly” used its systems. Claude is not available for commercial access in China, though Anthropic said the rival labs found workarounds.

Among the notable findings, Anthropic said DeepSeek sought to create “censorship-safe alternatives to policy-sensitive queries.” The company also said it detected MiniMax’s campaign “while it was still active,” giving it an in-depth look at what its competitor was doing.

“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system,” Anthropic said.

Representatives for DeepSeek, MiniMax, and Moonshot AI did not immediately respond to Business Insider’s request for comment.

Beyond cheating in the AI race, Anthropic said improper distillation poses security risks because less-trained models may lack the proper safeguards, such as those meant to prevent the development of bioweapons.

In response to such distillation campaigns, Anthropic said it has built “behavioral fingerprinting systems,” shares data with other AI companies on what to look out for, and continues to develop additional countermeasures.

Anthropic CEO Dario Amodei recently wrote that leading models are approaching the point where, without proper safeguards, they could help direct someone in building a bioweapon.

Amodei is also an outspoken advocate of US export controls, a topic that divides some leading tech CEOs. Nvidia CEO Jensen Huang has repeatedly said that restricting US companies, including his own, from selling advanced chips to China won’t curb China’s AI advancements.

“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” Anthropic said.

Anthropic has also faced allegations of using copyrighted material to train its models. In January, the Washington Post reported new details about an endeavor at the company called Project Panama, which the company reportedly described as “our effort to destructively scan all the books in the world.” Last year, Anthropic settled a class-action lawsuit brought by the authors and publishers of some of the books for $1.5 billion. As part of the settlement, the company didn’t admit any wrongdoing.


