Judge Blocks Pentagon's Attempt to 'Punish' Anthropic Over AI Safety Stance
#Regulation

Startups Reporter

A federal judge has indefinitely blocked the Pentagon's effort to label Anthropic a supply chain risk, ruling that the move violated the AI company's First Amendment rights and was retaliatory for its stance on AI safety guardrails.

US District Judge Rita Lin, an appointee of former President Joe Biden, issued a stinging 43-page ruling that criticized the Defense Department's February decision to designate Anthropic as a supply chain risk. The designation would have required any company working with the military to prove it wasn't using Anthropic's products.

In her ruling, Lin wrote: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

Pages from the Anthropic website and the company's logo on February 26, 2026. Patrick Sison/AP

The judge said she would delay implementation of her ruling for one week to allow the government to appeal, but made clear she disapproved of the Pentagon's actions, which she said violated Anthropic's First Amendment and due process rights.

This marks the latest judicial rebuke of Defense Secretary Pete Hegseth, who has faced multiple court challenges over his use of government authority. Earlier this month, a federal judge in DC ruled that Hegseth violated reporters' First Amendment rights when he implemented a restrictive new press policy. In February, another judge found that Hegseth infringed on a Democratic senator's free speech rights.

Anthropic welcomed the ruling, with a company spokesperson saying: "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

Anthropic cofounder and CEO Dario Amodei. Chance Yeh/Getty Images

The supply chain risk designation was unprecedented in its application to an American company. Previously, such labels had only been used for companies connected to foreign adversaries.

The dispute centers on Anthropic's refusal to remove contractual guardrails around the use of its Claude AI model. The company had two specific red lines: it did not want its AI systems used in autonomous weapons or domestic mass surveillance.

Judge Lin's ruling suggests the Pentagon's actions were retaliatory. She wrote that the Department of Defense designated Anthropic as a supply chain risk because of its "hostile manner through the press" and for "bringing public scrutiny to the government's contracting position."

"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote.

The Pentagon's position was that it needed unfettered access to Claude for "all lawful purposes," particularly in wartime scenarios. The Defense Department's chief technology officer, Emil Michael, argued that companies shouldn't be able to "pollute the supply chain" with policies that could result in "ineffective weapons, ineffective body armor, ineffective protection."

The Anthropic House pavilion on the promenade ahead of the World Economic Forum (WEF) in Davos, Switzerland, on Monday, Jan. 19, 2026. Krisztian Bocsi/Bloomberg/Getty Images

Anthropic argued in its lawsuit that the Pentagon was aware of its position on Claude's limitations and that its stance constitutes protected speech. The company said the designation violated its First Amendment rights, tarnished its reputation, and jeopardized hundreds of millions of dollars' worth of contracts.

The Pentagon's actions went beyond the supply chain designation. President Donald Trump and Defense Secretary Hegseth ordered federal agencies to cease using Anthropic's products and sever ties with companies that do business with Anthropic.

A separate challenge by Anthropic to other authorities Hegseth invoked to make the supply chain risk designation is still pending before a federal court in Washington, DC. CNN has reached out to the Pentagon for comment.

The case highlights the growing tension between AI companies' ethical stances and government demands for unrestricted access to advanced technologies. As AI systems become more powerful and widespread, the debate over appropriate guardrails and limitations is likely to intensify, with significant implications for both national security and civil liberties.
