AI company Anthropic is taking legal action against the US government after being designated a national security risk, marking the first time an American company has received such a classification.
Anthropic has filed a lawsuit against the US government after being designated a supply chain risk to national security, a move the company calls "legally unsound" and unprecedented for an American firm. The AI company announced the legal challenge on March 6, 2026, following its formal notification from the Department of Defense on March 4.
CEO Dario Amodei confirmed that the department notified Anthropic of the designation by letter, effectively barring the company from securing military contracts. The classification has typically been applied to foreign adversaries, making Anthropic the first US company to receive it.

The dispute stems from Anthropic's refusal to remove safety guardrails from its AI technology, a change that would have allowed it to be used for fully autonomous weapons and domestic mass surveillance. When Anthropic publicly stated it would not permit these applications, President Trump responded on his social media platform, calling the company "A RADICAL LEFT, WOKE COMPANY" that made a "DISASTROUS MISTAKE" in attempting to dictate terms to the government.
Trump accused Anthropic of ignoring the US Constitution and trying to control military operations, ordering all federal departments to cease using its products. Amodei defended the company's position, stating: "we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making – that is the role of the military."
Amodei clarified that Anthropic's concerns were limited to two specific areas: fully autonomous weapons and mass domestic surveillance. He emphasized these relate to high-level usage areas rather than operational decision-making.
The timing of the government's decision coincided with OpenAI announcing a deal with the Department of Defense to use its AI technology for military applications. OpenAI claimed its agreement included more guardrails than any previous AI deployment contract, with explicit prohibitions on autonomous weapons, high-stakes automated decisions, mass domestic surveillance, and use by intelligence agencies.
In a statement, OpenAI expressed disagreement with the government's designation of Anthropic as a supply chain risk. The company suggested that enforceability factors such as cloud-only deployment, a functioning safety stack, and cleared personnel may explain why its own negotiations succeeded where Anthropic's did not.
Amodei apologized for the tone of an internal memo that was leaked on March 4, shortly after Trump's social media criticism. He stated that Anthropic did not leak the memo and that it did not reflect his considered views, describing it as an out-of-date assessment written six days before the leak.
The Department of Defense did not respond to requests for comment on Anthropic's lawsuit or the circumstances surrounding the designation.
This legal battle represents a significant escalation in tensions between AI companies and the US government over the ethical boundaries of military AI applications. Anthropic's lawsuit challenges what it views as an unprecedented use of national security designations against a domestic technology company based on safety and ethical concerns rather than traditional security threats.
