Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards

Business Reporter

Defense Secretary Pete Hegseth has issued an ultimatum to Anthropic, demanding the AI company remove safety restrictions from its Claude model by Friday or face potential government action. The deadline comes amid growing tensions between the Pentagon and AI safety advocates over the military's use of artificial intelligence in operations.

The ultimatum and its context

The confrontation centers on Anthropic's refusal to disable certain safety protocols in Claude that limit its use for military applications. According to sources familiar with the discussions, Hegseth views these safeguards as an impediment to national security operations, particularly following the successful deployment of Claude during the recent Maduro raid in Venezuela.

During that operation, Claude was reportedly used to analyze intelligence data and provide strategic recommendations, marking one of the first high-profile military applications of commercial AI technology. The success of the mission has apparently emboldened Pentagon officials to push for fewer restrictions on AI deployment.

Anthropic's position

Anthropic, founded by former OpenAI researchers, has built its reputation on developing AI systems with robust safety measures. The company's Constitutional AI framework embeds ethical guidelines directly into its models, preventing them from being used for certain military applications without explicit authorization.

Company representatives have reportedly argued that removing these safeguards would violate their founding principles and could lead to unintended consequences. They've also expressed concern about setting a precedent that could compromise AI safety standards across the industry.

Industry implications

This standoff represents a significant moment in the ongoing debate over AI governance. Tech companies have increasingly found themselves caught between government demands for access to powerful AI systems and their own commitments to responsible development.

The situation echoes the tensions that arose when Google faced internal protests over its Project Maven contract with the Pentagon. The Anthropic case is notable, however, for the direct ultimatum issued by a cabinet-level official, suggesting a more confrontational approach from the current administration.

What's at stake

Beyond the immediate conflict, this dispute raises fundamental questions about who controls AI development and deployment. Anthropic's safety measures were designed to prevent exactly the kind of military escalation that Hegseth appears to be demanding.

Industry analysts note that if Anthropic complies with the ultimatum, it could trigger a broader relaxation of AI safety standards across the sector. Conversely, if the company refuses and faces government retaliation, it could chill innovation and investment in AI safety research.

The Friday deadline

With the clock ticking toward Friday's deadline, both sides appear to be digging in. Pentagon officials have hinted at potential regulatory actions if Anthropic doesn't comply, while the company's leadership has reportedly begun exploring legal options to resist what they view as government overreach.

The outcome of this confrontation could reshape the relationship between the tech industry and the military for years to come, determining whether AI safety considerations or national security priorities take precedence in the development of artificial intelligence.
