Anthropic CEO Dario Amodei rejects Pentagon demands to disable safety features on Claude AI, arguing autonomous weapons and mass surveillance pose unacceptable risks to US troops and civilians.
Anthropic has taken a firm stance against the US Department of Defense's demands to remove safety guardrails from its Claude AI systems, warning that autonomous weapons and mass surveillance technologies could endanger both American military personnel and civilians.
In a detailed statement released Thursday, Anthropic CEO Dario Amodei outlined why the company cannot comply with the Pentagon's request to allow unrestricted military use of its AI technology. The dispute centers on two specific use cases that Anthropic believes are "simply outside the bounds of what today's technology can safely and reliably do."

The first concern involves mass domestic surveillance capabilities. Amodei explained that current AI systems can create "a comprehensive picture of any person's life—automatically and at massive scale." He argued this level of surveillance is legal only "because the law has not yet caught up with the rapidly growing capabilities of AI."
However, the more pressing issue for Anthropic involves the deployment of fully autonomous weapons. Amodei stated unequivocally that "today, frontier AI systems are simply not reliable enough to power fully autonomous weapons." The CEO emphasized that Anthropic "will not knowingly provide a product that puts America's warfighters and civilians at risk."
The reliability concerns extend beyond technical failures. Amodei pointed out that autonomous weapons "cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day." He argued that such systems require proper guardrails, which do not yet exist, before they can be safely deployed.
Anthropic has attempted to find common ground with the Pentagon by offering to collaborate on research and development to improve the reliability of these systems. However, Amodei noted that the Department of Defense has not accepted this offer.
The standoff has escalated to the point where Secretary of Defense Pete Hegseth has given Anthropic a deadline to comply with the Pentagon's terms and conditions. Hegseth has advocated for a more aggressive military posture, arguing that the US military must focus on warfighting and become more lethal.
Anthropic's position highlights what Amodei sees as inconsistencies in the Pentagon's approach. He pointed out that one threatened sanction labels Anthropic a threat to national security for refusing to remove guardrails, while another seeks to compel the company to do so in the name of national security.
"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," Amodei wrote. He added that Anthropic wants to continue supplying the Pentagon while maintaining its safety standards.
The dispute raises fundamental questions about the role of private companies in military AI development and the balance between national security interests and technological safety. As AI capabilities continue to advance rapidly, the tension between military applications and ethical considerations is likely to intensify.
This confrontation comes at a time when the US military is increasingly interested in autonomous systems to maintain technological superiority. However, Anthropic's stance suggests that some AI companies are drawing firm lines around what they consider acceptable use of their technology, even when faced with potential government penalties.
The outcome of this dispute could have significant implications for the future of military AI development and the extent to which private companies can maintain control over how their technologies are deployed in sensitive national security contexts.