Pentagon Pressures Anthropic to Drop AI Guardrails or Face Blacklist
#AI

Startups Reporter

Defense Secretary Pete Hegseth threatens to terminate Anthropic's $200M Pentagon contract and blacklist the company if it refuses to remove safeguards on AI model Claude for military use.

The US Defense Department has escalated its standoff with Anthropic, giving the AI company's CEO until Friday to remove safety restrictions on its Claude model or face severe consequences, including potential blacklisting that could devastate its government business.

(L-R): Anthropic Co-founder and CEO Dario Amodei, US Defense Secretary Pete Hegseth.

During a tense Tuesday meeting at the Pentagon, Defense Secretary Pete Hegseth presented Anthropic CEO Dario Amodei with an ultimatum: comply with demands to lift restrictions on the company's AI model for military applications or lose a $200 million Pentagon contract. According to sources familiar with the discussions, the Pentagon wants Anthropic to allow unrestricted use of Claude for "all lawful use," but the company has drawn firm lines around two specific applications.

Anthropic's refusal centers on concerns about AI-controlled weapons and mass domestic surveillance of American citizens. The company argues that current AI technology lacks the reliability needed for weapons systems and that no comprehensive regulations exist governing AI's role in mass surveillance. These safety guardrails have become the focal point of negotiations that have been ongoing for months.

The Pentagon's threats extend beyond contract termination. Officials warned they could invoke the Defense Production Act, a law granting the government broad powers to influence businesses during national defense emergencies. More significantly, they threatened to designate Anthropic as a "supply chain risk" – a label typically reserved for companies linked to foreign adversaries like Russia or China. Such a designation would force enterprise customers with government contracts to ensure their work doesn't interact with Anthropic's tools, potentially crippling the company's business.

Despite the high stakes, sources describe the meeting's tone as "cordial and respectful," with no raised voices. Hegseth reportedly praised Anthropic's products and expressed a desire to continue working together. Amodei, however, stood firm on the company's red lines around autonomous weapons and surveillance.

Anthropic has positioned itself as the AI industry's safety-conscious outlier since its founding by former OpenAI employees who left over disagreements about development pace and safety protocols. The company recently reinforced that stance by pledging $20 million to a political group advocating for stronger AI regulation.

A Pentagon official confirmed the meeting occurred but declined further comment. Anthropic characterized the discussion as a "good-faith" conversation about technology usage, emphasizing its commitment to supporting national security missions "in line with what our models can reliably and responsibly do."

The standoff highlights the growing tension between AI safety advocates and government agencies seeking to leverage advanced AI capabilities for military applications. With the Friday deadline approaching, Anthropic faces a stark choice: compromise its safety principles or risk becoming a pariah in the government contracting world.

The outcome could set a precedent for how AI companies navigate the balance between ethical guardrails and lucrative government contracts, particularly as military applications of artificial intelligence become increasingly central to national security strategy.
