Anthropic is calling for action against “distillation attacks” after accusing three AI companies of misusing its Claude chatbots. On its website, Anthropic claimed that DeepSeek, Moonshot, and MiniMax are “running an industrial-scale campaign to illegally extract Claude’s capabilities to improve their own models.”
Distillation in the AI world refers to training a less capable model on the outputs of a more powerful one. Although distillation isn’t inherently a bad thing, Anthropic said these types of attacks can be used in more nefarious ways. According to Anthropic, the three Chinese AI companies were responsible for “more than 16 million exchanges with Claude through approximately 24,000 fraudulent accounts.” From Anthropic’s perspective, these competitors were using Claude as a shortcut to developing more advanced AI models of their own, which also allowed them to bypass some security measures.
Anthropic said in its post that it was able to tie each of these distillation campaigns to specific companies with “high confidence” based on IP address correlation, request metadata, and infrastructure indicators, as well as corroboration from others in the AI industry who have observed similar behavior.
Early last year, OpenAI faced similar claims of rival companies spoofing its models and banned suspicious accounts in response. As for Anthropic, the company behind Claude said it will upgrade its systems to make distillation attacks harder to carry out and easier to detect. While Anthropic is pointing the finger at these other companies, it is also facing a lawsuit from music publishers who accused the AI company of using illegal copies of songs to train its Claude chatbot.