President Trump orders U.S. federal agencies to cease using Anthropic's AI technology following a public dispute over military and surveillance applications, threatening legal consequences if the company doesn't comply within six months.
President Donald Trump has ordered every U.S. federal agency to immediately cease using technology from AI company Anthropic, escalating a contract dispute that began with the Pentagon's demands for unrestricted access to the company's Claude models.

In a Truth Social post Friday afternoon, Trump accused Anthropic of attempting to "dictate how our great military fights and wins wars," calling the company "radical left, woke" and claiming its position "puts AMERICAN LIVES at risk."
The dispute centers on a contract worth up to $200 million that Anthropic signed with the Pentagon last summer. Anthropic had sought written guarantees that its Claude models would not be used for mass domestic surveillance of U.S. citizens or to control weapons systems capable of firing without human involvement.
Pentagon Demands vs. Anthropic Safeguards
The Pentagon countered that it needed the right to deploy Claude for "all lawful purposes," arguing it was unworkable to negotiate individual use-case exemptions with a private company. After months of private talks collapsed into a public standoff this week, Anthropic CEO Dario Amodei said Thursday his company "cannot in good conscience accede" to the DoD's terms.
The Pentagon responded by threatening to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance and warned it would designate the company a "supply chain risk" — a label typically reserved for companies from adversarial nations such as Huawei.
National Security Implications
Claude was the only AI model approved for use in classified military systems, creating immediate operational challenges for defense contractors. Palantir, which uses Claude to power its most sensitive government contracts, will need to find a replacement quickly.
OpenAI CEO Sam Altman said Friday he shares Anthropic's position on autonomous weapons' ethical "red lines," complicating OpenAI's candidacy as a direct replacement. Meanwhile, Elon Musk has already agreed in principle to the Pentagon's "all lawful purposes" terms, potentially positioning his company's Grok model as a successor.
Six-Month Phase-Out Period
Trump gave agencies a six-month phase-out window and warned that if Anthropic failed to cooperate during that period, he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
The directive represents a significant escalation in tensions between the federal government and AI companies over the ethical boundaries of military AI deployment. Anthropic's stance reflects growing concerns within the tech industry about the risks of autonomous weapons and mass surveillance, while the Pentagon argues that such restrictions could hamper national security operations.
As federal agencies begin removing Anthropic's technology from their systems, the incident highlights the difficult balance between technological innovation, ethical considerations, and national security imperatives in the rapidly evolving AI landscape.
