Three Chinese artificial intelligence companies illicitly extracted the capabilities of its Claude platform to boost their own models, American AI company Anthropic said in a Monday blog post.
DeepSeek, Moonshot, and MiniMax were found to have used a technique called "distillation", which involves training a less capable model on the outputs of a stronger one. The companies gained access to Claude through approximately 24,000 fraudulent accounts, which generated more than 16 million exchanges with the platform.
"Illicitly distilled models lack necessary safeguards, creating significant national security risks," Anthropic said.
DeepSeek, Moonshot, and MiniMax did not immediately respond to MT Newswires' requests for comment.