OpenAI Signs Pentagon Deal, Urges Fair Treatment for Rivals Amid Anthropic Controversy
#AI

Hardware Reporter

OpenAI has signed a controversial AI deal with the US Department of Defense while criticizing the Pentagon's "scary precedent" of blacklisting Anthropic, calling for equal terms for all AI companies.

OpenAI has signed a deal with the United States Department of Defense (DoD) that allows use of its advanced AI systems in classified environments, while simultaneously urging the Pentagon to extend the same terms to its competitors. The agreement comes amid growing tensions between the US military and Anthropic, which has been designated a Supply-Chain Risk to National Security by Secretary of War Pete Hegseth.

In a Saturday post, OpenAI outlined three "red lines" that will govern its Pentagon partnership:

  • No use of OpenAI technology for mass domestic surveillance
  • No use of OpenAI technology to direct autonomous weapons systems
  • No use of OpenAI technology for high-stakes automated decisions (e.g., systems such as "social credit")

The company emphasized that it retains "full discretion over our safety stack," deploys via cloud only, requires cleared OpenAI personnel to be "in the loop," and has strong contractual protections. According to OpenAI, these measures provide "multi-layered" safeguards beyond existing US legal protections.

Regarding autonomous weapons, the agreement specifies that OpenAI's AI systems "will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities." The deal also requires compliance with DoD Directive 3000.09, which mandates rigorous verification, validation, and testing before deployment.

OpenAI CEO Sam Altman acknowledged concerns about the precedent being set. When asked by New York Times columnist Ross Douthat whether the DoD's treatment of Anthropic worried him about his own company's independence, Altman replied: "Yes; I think it is an extremely scary precedent and I wish they handled it a different way."

Altman elaborated that while he believes Anthropic "didn't handle it well either," he holds the government more responsible as "the more powerful party." He expressed hope for "a much better resolution" to the standoff.

OpenAI said it signed the Pentagon deal "in the hopes of de-escalation," warning that enforcing the Supply-Chain Risk designation on Anthropic "would be very bad for our industry and our country." The company has made its position clear to the government and is calling for equal contractual terms to be offered to all AI companies.

Anthropic has said little publicly over the weekend beyond vowing to appeal its designation in court. The controversy highlights growing tensions between AI companies and government agencies over the deployment of artificial intelligence in military applications, with OpenAI positioning itself as willing to work within Pentagon constraints while advocating for fair treatment of its competitors.

The Trump administration's action against Anthropic is an unprecedented move against a domestic tech firm, raising questions about the future of government-AI company relationships and whether other industry players could face similar treatment in future disputes.
