Defense Secretary Pete Hegseth is "close" to designating Anthropic as a supply chain risk and severing business ties, according to a senior Pentagon official.
The Pentagon is moving toward designating AI company Anthropic as a "supply chain risk" and cutting business ties, according to a senior Defense Department official cited by Axios. Defense Secretary Pete Hegseth is reportedly "close" to making the decision, which would mark a significant escalation in the government's scrutiny of AI partnerships.
This development comes amid growing concerns about the security implications of AI technology partnerships between government agencies and private companies. The designation of Anthropic as a supply chain risk would effectively bar the Pentagon from doing business with the company, potentially disrupting existing contracts and future procurement plans.
Anthropic, founded by former OpenAI researchers, has positioned itself as a safety-focused AI company and has actively pursued government contracts. The company recently opened an office in Bengaluru, India, reporting that its India run-rate revenue has doubled since October. India is Anthropic's second-largest market for Claude.ai, with a developer community engaged in "some of the most technically intense AI work" the company has seen.
The potential move against Anthropic follows a pattern of increased government oversight of AI companies and their relationships with defense and intelligence agencies. This scrutiny has intensified as AI capabilities have advanced rapidly, raising questions about data security, model integrity, and the potential for foreign influence or compromise.
While the report did not detail the specific concerns driving the Pentagon's deliberations, such designations typically stem from:
- Potential foreign government influence or control
- Data security and privacy risks
- Supply chain vulnerabilities
- Compliance with federal security standards
- Geopolitical considerations
The timing is notable given that Anthropic recently announced it was curating training data for 10 Indic languages, suggesting an expansion of its presence in the Indian market. This geographic expansion may be contributing to the Pentagon's risk assessment, particularly given the complex US-India relationship and concerns about technology transfer.
The designation would represent a significant setback for Anthropic, which has been competing aggressively for government contracts against other AI companies like OpenAI and Google. It would also signal a more cautious approach by the Pentagon toward AI partnerships, potentially affecting other companies with international operations or connections.
This development highlights the growing tension between rapid AI advancement and national security concerns. As AI companies expand globally and compete for government contracts, they face increasing scrutiny of their operations, partnerships, and potential vulnerabilities, and both the companies and the agencies they serve must weigh technological innovation against security requirements and international business realities.
The outcome of this situation could have broader implications for the AI industry, potentially affecting how companies structure their international operations and approach government contracts. It may also influence the Pentagon's AI procurement strategy and its relationships with other AI companies.
As of now, neither the Pentagon nor Anthropic has issued an official statement regarding the potential designation. The situation remains fluid, and the outcome could change pending further review or negotiations between the parties.
The case exemplifies the challenges facing AI companies as they try to balance commercial growth, international expansion, and government partnerships within an increasingly complex regulatory and security landscape.
