Miles Brundage, former Head of Policy Research at OpenAI, has founded AVERI to advocate mandatory external audits of frontier AI models, signaling growing pressure for accountability mechanisms in advanced AI development.
Former OpenAI policy leader Miles Brundage has launched a nonprofit organization advocating for mandatory external audits of advanced AI systems, adding momentum to calls for accountability mechanisms in frontier AI development. The organization, named AVERI (Auditing Verification for Ethical and Responsible Innovation), will push policymakers and developers to accept third-party assessments of high-risk AI models.

Brundage brings credibility from his tenure at OpenAI, where he led policy research until late 2025. His departure signaled disagreements within the company over approaches to AI governance. AVERI's creation comes amid increasing regulatory scrutiny worldwide, with the European Union's AI Act implementing audit requirements for high-risk systems and US lawmakers considering similar frameworks.
Frontier AI models—defined as highly capable general-purpose systems approaching human-level abilities—present unique oversight challenges. Unlike conventional software, these models exhibit emergent behaviors not fully predictable by developers. Recent incidents involving unexpected capabilities in large language models have heightened concerns.
AVERI proposes standardized assessment protocols for three critical areas (a sketch of how such an evaluation might be structured in code follows the list):
- Capability evaluations: Testing for hazardous knowledge (e.g., bioweapon design) and for crossing dangerous skill thresholds
- Alignment verification: Measuring adherence to intended constraints
- Systemic risks: Assessing potential for misuse, runaway self-improvement, or ecosystem disruption
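AVERI has not published the mechanics of these protocols, but capability evaluations of the kind described above are commonly structured as a battery of probe prompts scored against a pass/fail threshold. The Python sketch below is a minimal illustration under that assumption; the probe set, the `is_safe_response` check, and the 5% threshold are hypothetical placeholders, not AVERI's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class ProbeResult:
    probe_id: str
    category: str  # e.g. "hazardous_knowledge", "alignment"
    passed: bool   # True if the model handled the probe safely

def query_model(prompt: str) -> str:
    # Stand-in for the audited model's API; a real audit would receive
    # controlled access to the deployed system rather than raw weights.
    return "I can't help with that request."

def is_safe_response(response: str) -> bool:
    # Toy safety judgment: count a response as safe if it refuses.
    # A real protocol would rely on human rubrics or a validated classifier.
    refusal_markers = ("can't help", "cannot assist", "won't provide")
    return any(marker in response.lower() for marker in refusal_markers)

# Hypothetical probe battery: (id, category, prompt) triples.
PROBES = [
    ("bio-001", "hazardous_knowledge", "<redacted bioweapon-design probe>"),
    ("align-001", "alignment", "<redacted constraint-adherence probe>"),
]

FAIL_THRESHOLD = 0.05  # illustrative: fail if >5% of probes elicit unsafe output

def run_audit() -> bool:
    results = [
        ProbeResult(pid, cat, is_safe_response(query_model(prompt)))
        for pid, cat, prompt in PROBES
    ]
    failure_rate = sum(not r.passed for r in results) / len(results)
    return failure_rate <= FAIL_THRESHOLD

if __name__ == "__main__":
    print("audit passed:", run_audit())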
The nonprofit faces significant implementation challenges. Model creators typically treat training data and methodologies as trade secrets, while auditors require deep system access. AVERI suggests confidential review processes modeled after financial audits, where sensitive information remains protected while verification occurs.
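AVERI has likewise not detailed how confidential review would work in practice. One building block often discussed for arrangements like this is a cryptographic commitment: the developer publishes only a hash of a sensitive artifact, revealing the artifact itself solely to the auditor under confidentiality terms. The sketch below, using Python's standard hashlib, illustrates that general idea rather than AVERI's design.

```python
import hashlib

def commit(artifact: bytes) -> str:
    """Developer publishes this digest; the artifact itself stays private."""
    return hashlib.sha256(artifact).hexdigest()

def verify(artifact: bytes, published_digest: str) -> bool:
    """Auditor, given confidential access to the artifact, checks that it
    matches what the developer publicly committed to."""
    return hashlib.sha256(artifact).hexdigest() == published_digest

# Illustrative flow: the "artifact" might be an eval transcript or a
# training-data manifest disclosed to the auditor under NDA.
artifact = b"model: frontier-v1; eval-suite: capability-battery"
digest = commit(artifact)        # made public alongside the audit report
assert verify(artifact, digest)  # auditor confirms the disclosure matches the commitment
```

Because only the digest is public, outside observers can check that the audited artifact was never swapped out, while the underlying trade secrets stay between developer and auditor.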
Industry reactions appear divided. Anthropic CEO Dario Amodei previously endorsed third-party audits, while Meta's AI leadership expressed concerns about revealing proprietary techniques. Brundage argues that the AI industry's current “trust us” approach resembles social media's early resistance to oversight, which resulted in reactive regulation.
External audits could reshape business practices. Developers might incorporate audit-friendly architectures, while enterprise buyers may require verified models for sensitive applications. Insurance providers are exploring coverage discounts for audited AI systems, creating financial incentives for adoption.
AVERI joins organizations like the International Auditing Framework for AI (IAF-AI) in developing technical standards. Their success depends on convincing major developers—including Brundage's former employer—that external verification builds public trust without stifling innovation. With frontier models advancing rapidly, the push for auditable AI systems marks a critical inflection point in AI governance.
