Compass Investments

📌 Observers are sneering at Anthropic’s claims that Chinese AI research centers are stealing its information

– Anthropic claims that three Chinese AI labs have been siphoning off Claude’s results in large volume using fake accounts.

– The company believes such actions undermine export controls and jeopardize security safeguards.

– Some critics on X have accused Anthropic of double standards regarding how AI models are trained.

Anthropic has accused three Chinese AI labs of misappropriating millions of responses from its Claude chatbot to train their own systems, which the company says violates its terms of service and weakens U.S. export restrictions.

In a blog post published Monday, Anthropic said it discovered “purposeful actions” on the part of AI developers DeepSeek, Moonshot and MiniMax to extract Claude’s capabilities through distillation of models. The company claims that these labs generated over 16 million dialogs using approximately 24,000 fake accounts.

Anthropic’s claim sparked skepticism and ridicule on X. Critics questioned the company’s position given how the underlying AI models, including Claude, are themselves trained, reflecting broader ongoing disputes over intellectual property rights, copyright and fair use.

“You learned from the open Internet, and now you call it ‘distillation attacks’ when others are learning from you,” wrote Tory Green, co-founder of IO.Net, an AI infrastructure company. “Labs that like to talk about ‘open research’ are suddenly complaining about open access.”

“Oh no, not my private AI, how dare they use it to train their model; only Anthropic has the right to use other people’s intellectual property. This is unacceptable!” wrote another X user.

Distillation is a way of training an AI where a smaller model is trained on the output of a larger model.

In cybersecurity, the practice can also take the form of a model extraction attack, where an attacker uses legitimate access to systematically query a system and then uses the answers to train a competing model.
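The mechanics described above can be sketched in a toy example. This is a hypothetical illustration, not Anthropic's or any lab's actual method: a tiny "student" model is fitted to the soft outputs of a "teacher" model obtained purely by querying it.

```python
import math
import random

def teacher(x):
    # Stand-in for a large model: returns a soft probability for input x.
    return 1.0 / (1.0 + math.exp(-4.0 * (x - 0.5)))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def distill(num_queries=2000, lr=0.5, seed=0):
    """Query the teacher, then fit a tiny student (sigmoid of w*x + b)
    to the teacher's soft outputs by gradient descent on squared error."""
    rng = random.Random(seed)
    # Step 1: collect (input, teacher output) pairs -- the queried "dialogs".
    data = [(x, teacher(x)) for x in (rng.random() for _ in range(num_queries))]
    # Step 2: train the student only on the teacher's outputs,
    # never on the original training data.
    w, b = 0.0, 0.0
    for _ in range(200):
        for x, y in data:
            p = sigmoid(w * x + b)
            grad = (p - y) * p * (1.0 - p)  # d(squared error)/d(logit)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

w, b = distill()

def student(x):
    # The student now roughly mimics the teacher's behavior.
    return sigmoid(w * x + b)
```

The point of the sketch is that the student never sees the teacher's weights or training data; access to enough query-response pairs is sufficient to approximate its behavior, which is why high-volume automated querying is treated as an extraction risk.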

“These campaigns are becoming increasingly active and sophisticated,” Anthropic noted Monday. “Response time is short, and the threat extends beyond a single organization or region. It will require swift, coordinated action from all industry players, lawmakers and the global AI community.”

Distillation itself can be legal: AI labs use it to create smaller, more affordable models for their customers, Anthropic added in a separate post on X. However, foreign labs that illicitly copy U.S. models could bypass those protections and apply the models’ capabilities to their military, intelligence and surveillance systems.

In June, Reddit sued Anthropic, claiming the company copied more than 100,000 posts and comments, using that data to improve Claude.
