A former OpenAI geopolitics team lead reveals how frontier AI companies maintain deliberately ambiguous policies on military use, allowing leadership to preserve flexibility while avoiding clear ethical commitments.
A former OpenAI geopolitics team lead has exposed how leading AI companies maintain deliberately vague and inconsistent policies regarding military use of their technology, creating a system designed to preserve executive "optionality" rather than establish clear ethical boundaries.
Sarah Shoker, who led OpenAI's Geopolitics Team for approximately three years before departing in June 2025, published a detailed analysis on fishbowlification laying out the incoherence of frontier AI labs' military policies. Her insider perspective arrives amid heightened scrutiny of AI companies' relationships with defense agencies, particularly following the failed Pentagon-Anthropic talks that dominated tech news this week.
According to Shoker, these policies are intentionally structured to be "incoherent, vague, and often prone to change," giving leadership room to maneuver on decisions about military partnerships. This approach stands in stark contrast to AI companies' public messaging about responsible development and ethical AI use.
The timing of Shoker's revelations is particularly notable given the current tensions in the AI industry. The Pentagon's recent decision to blacklist Anthropic has sparked controversy, with OpenAI CEO Sam Altman describing it as setting an "extremely scary precedent." Meanwhile, OpenAI rushed to secure a deal with the Department of Defense, allegedly to "de-escalate things" in the wake of Anthropic's blacklisting.
Shoker's analysis suggests this isn't merely about competition between AI companies, but rather reflects a broader industry pattern where military policy ambiguity serves strategic business interests. By keeping policies vague, companies can:
- Rapidly pivot between civilian and military applications as opportunities arise
- Avoid committing to principles that might limit future revenue streams
- Maintain plausible deniability about their role in military applications
- Respond to shifting geopolitical pressures without public backlash
Shoker's roughly three-year tenure, which ended in June 2025, lends weight to her assessment: she witnessed these dynamics firsthand during a critical period when AI companies were scaling their operations and navigating complex relationships with government agencies.
This revelation raises serious questions about the tech industry's commitment to ethical AI development. While companies publicly champion responsible AI use, their internal policies appear designed to maximize flexibility rather than establish clear moral boundaries. Public statements about AI safety sit uneasily beside the reality of military partnerships, a troubling disconnect between rhetoric and practice.
The broader implications extend beyond individual companies. As AI becomes increasingly integrated into defense systems and national security infrastructure, the lack of clear policies creates risks for both the technology's development and its societal impact. Without transparent guidelines, the public cannot meaningfully evaluate whether AI companies are honoring their stated commitments to beneficial AI development.
Shoker's insider account suggests that the current system prioritizes business interests and strategic flexibility over ethical clarity, leaving the industry vulnerable to criticism about its true priorities in AI development and deployment.
