Google and OpenAI warn that competitors are using 'distillation attacks' to steal reasoning capabilities from their models, with OpenAI naming China's DeepSeek as the leader of sophisticated campaigns that could reshape the competitive landscape of AI development.
The AI industry is facing a paradoxical threat: the very technology that powers cutting-edge AI systems is being weaponized to steal and replicate those same capabilities. In a striking development that highlights the cutthroat nature of the AI arms race, both Google and OpenAI have issued warnings about competitors using sophisticated techniques to probe their models and extract the underlying reasoning that makes them valuable.
The Distillation Threat: AI Eating Its Own Tail
At the heart of this emerging crisis is a technique called "distillation," in which attackers feed a trained model carefully crafted prompts and use its responses to teach a cheaper model the same behavior, effectively extracting the original's reasoning patterns. Google's Threat Intelligence Group has identified what it calls "distillation attacks," with competitors using more than 100,000 prompts to try to replicate Gemini's reasoning abilities across multiple languages and tasks.
"Your model is really valuable IP, and if you can distill the logic behind it, there's very real potential that you can replicate that technology – which is not inexpensive," explains John Hultquist, chief analyst at Google Threat Intelligence Group. The implications are profound: companies that have spent billions developing their models could see their competitive advantages evaporate as rivals clone their capabilities at a fraction of the cost.
DeepSeek: The Distillation Pioneer
While Google declined to name specific perpetrators, OpenAI has been more direct in its accusations. In a memo to the House Select Committee on China, OpenAI specifically named DeepSeek and other Chinese LLM providers as the primary actors behind these distillation campaigns. The Chinese company has allegedly moved beyond simple chain-of-thought extraction to multi-stage operations involving synthetic data generation, large-scale data cleaning, and other stealthy methods.
The sophistication of these attacks is alarming. OpenAI reports that DeepSeek employees have developed methods to circumvent access restrictions, using obfuscated third-party routers and other techniques to mask their source. They have even written custom code to access US AI models and harvest outputs for distillation programmatically.
The Economics of AI Theft
This isn't just a technical challenge – it's an economic one. American tech giants have invested billions in training and developing their large language models. When competitors can abuse legitimate access to mature models like Gemini and use the harvested outputs to train newer models, it dramatically reduces the barriers to entry in the AI market.
The math is compelling for would-be competitors. Rather than spending years and billions of dollars on training data, infrastructure, and research, a company can potentially achieve similar capabilities by reverse-engineering an existing model. This creates a perverse incentive structure where the most successful AI companies become the most attractive targets for intellectual property theft.
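To see why the math is so compelling, consider a purely illustrative back-of-envelope comparison. Every number below is an assumption chosen for round arithmetic, not a reported figure, but the rough shape of the asymmetry holds across a wide range of plausible values.

```python
# All figures below are illustrative assumptions, not reported costs.
frontier_training_cost = 1_000_000_000   # hypothetical from-scratch budget, USD

# Hypothetical harvesting campaign: prompts * tokens * API output price.
prompts = 100_000                 # the scale Google attributed to one campaign
tokens_per_response = 2_000       # assumed average response length
price_per_million_tokens = 10.0   # assumed API output price, USD

harvest_cost = prompts * tokens_per_response / 1_000_000 * price_per_million_tokens
student_training_cost = 5_000_000  # assumed budget to fine-tune a student model
copycat_total = harvest_cost + student_training_cost

print(f"API harvesting cost:  ${harvest_cost:,.0f}")        # $2,000
print(f"Total copycat cost:   ${copycat_total:,.0f}")       # ~$5M
print(f"vs. from-scratch:     {frontier_training_cost / copycat_total:,.0f}x cheaper")
```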
The Enforcement Challenge
The nature of LLMs makes them particularly vulnerable to these attacks. Public-facing AI models are widely accessible, and enforcement against abusive accounts becomes a game of whack-a-mole. Google can block accounts that violate its terms of service or even take users to court, but new accounts can be created just as quickly.
"As more organizations have models that they provide access to, it's inevitable," Hultquist warns. "As this technology is adopted and developed by businesses like financial institutions, their intellectual property could also be targeted in this way."
Beyond Corporate Espionage
The threat extends far beyond corporate competition. As AI models become more integrated into critical infrastructure and sensitive operations, the potential for misuse grows exponentially. Financial institutions, healthcare providers, and government agencies that develop their own models could find their intellectual property targeted by the same techniques currently being used against tech giants.
The Ecosystem Security Approach
Recognizing that this problem cannot be solved by any single company, OpenAI is calling for an "ecosystem security" approach. This would require cooperation between industry players and government intervention to develop best practices for distillation defenses.
OpenAI's recommendations include closing API router loopholes that allow competitors like DeepSeek to access US models, restricting "adversary" access to US compute and cloud infrastructure, and developing shared intelligence about distillation attempts. The company acknowledges that "it is not enough for any one lab to harden its protection because adversaries will simply default to the least protected provider."
The Democratic AI Dilemma
Perhaps most concerning is OpenAI's framing of this as a matter of "American-led, democratic AI" versus authoritarian alternatives. The company warns that illicit model distillation poses a risk to democratic values in AI development, suggesting that the techniques being used by Chinese companies could enable the spread of AI systems that don't align with Western ethical standards and governance models.
The Future of AI Development
This distillation threat could fundamentally reshape how AI companies approach model development and deployment. We may see a shift toward more closed, proprietary systems with limited access, or the development of new architectures that are inherently more resistant to distillation attacks.
The irony is palpable: AI, a technology often touted for its ability to learn and adapt, is now being used to learn from, and ultimately replicate, the very systems that created it. As the industry grapples with this challenge, one thing is clear – the future of AI development may depend not just on who can build the best models, but on who can best protect them from being copied.

The distillation attacks represent a critical inflection point for the AI industry. As companies race to develop increasingly sophisticated models, they must also invest in protecting their intellectual property from the very technology they're creating. The question isn't whether AI will continue to advance – it's whether that advancement will be driven by innovation or imitation.
