After 18 months of allowing unrestricted artificial intelligence development, the Trump administration is drafting rules to require pre-deployment government reviews for high-risk frontier AI models, a shift that aligns with earlier Biden-era policy goals despite administration denials. The Commerce Department has also finalized evaluation agreements with three major AI vendors, excluding a fourth locked in litigation with the administration.

The move reverses the administration’s 18-month deregulatory stance on artificial intelligence, replacing a policy of unrestricted AI development with a framework of mandatory pre-deployment government reviews for high-risk frontier AI models. The change comes as the Department of Commerce’s Center for AI Standards and Innovation (CAISI) has finalized pre-deployment evaluation agreements with three major AI vendors; a fourth, locked in ongoing litigation with the administration, was excluded.
Regulatory Actions
The policy shift follows a series of executive orders and agency actions dating to 2023.
First, former President Joe Biden signed Executive Order 14110 on October 30, 2023, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order required federal agencies to establish guidelines for AI transparency, bias testing, and security reporting, and extended compliance requirements to federal contractors using AI systems.
On January 20, 2025, President Donald Trump signed an executive order titled "Removing Barriers to American Leadership in Artificial Intelligence," rescinding Biden’s EO 14110. The order directed all federal agencies to identify and rescind any rules derived from the Biden order within 60 days, effectively eliminating federal AI compliance requirements for vendors and permitting unrestricted development of all AI models, including high-risk frontier systems.
In May 2026, the administration confirmed it is forming an AI working group composed of tech executives and government officials to draft new rules for high-risk AI. National Economic Council Director Kevin Hassett stated the administration is studying a new executive order that would create a formal roadmap for releasing frontier models, mirroring the U.S. Food and Drug Administration’s (FDA) drug approval process. The proposed order would require vendors to prove high-risk models are safe and do not create unmitigated vulnerabilities before public release.
Also in May 2026, CAISI finalized pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI. The agreements require the three vendors to submit frontier models for CAISI-led pre-deployment evaluations and participate in targeted research to advance AI security measurement standards. Anthropic was excluded from the agreements, as the administration has blocked federal agencies from using Anthropic tools, and Anthropic is challenging that policy in federal court.
Compliance Requirements
Each regulatory action carries distinct requirements for federal agencies, AI vendors, and federal contractors.
Before its rescission, Biden’s EO 14110 required federal agencies to publish annual AI risk reports and required federal contractors to conduct bias and security audits of AI systems used in government work. These requirements were eliminated in March 2025, 60 days after Trump’s rescission order took effect.
Trump’s January 2025 order required all federal agencies to review existing AI rules and rescind any that derived from EO 14110 by March 21, 2025. No compliance requirements were imposed on AI vendors during the 18-month deregulatory period from January 2025 to May 2026.
The pending May 2026 executive order, if enacted, would impose two tiers of requirements. High-risk frontier models, defined as systems with capabilities that could enable cyberwarfare, bio-threats, or other national security risks, would require formal government review before public release. Vendors would need to submit evidence of safety testing, capability assessments, and vulnerability mitigation to CAISI or a designated review body. Everyday AI applications, including non-frontier consumer and enterprise tools, would not be subject to these review requirements. Administration officials have framed the rules as a response to specific national security risks, not a broad adoption of EU-style AI regulation.
The May 2026 CAISI agreements require Google DeepMind, Microsoft, and xAI to provide CAISI with full access to frontier model architectures, training data, and testing results for pre-deployment evaluations. Vendors must also collaborate with CAISI on research to develop standardized metrics for measuring AI security and national security risks. CAISI director Chris Fall stated independent measurement science is essential to understanding frontier AI implications. Anthropic, excluded from the agreements, remains subject to the administration’s federal agency use ban pending the outcome of its litigation.
Compliance Timeline
Key dates for past and future compliance obligations are listed below.
- October 30, 2023: Biden’s EO 14110 takes effect, establishing initial AI compliance requirements for federal agencies and contractors.
- January 20, 2025: Trump signs the order rescinding EO 14110, eliminating all federal AI compliance rules.
- March 21, 2025: Federal agencies complete rescission of all rules derived from Biden’s EO 14110; the 18-month deregulatory period begins.
- May 8, 2026: Administration confirms AI working group formation and pending executive order for frontier model reviews. CAISI finalizes agreements with Google DeepMind, Microsoft, and xAI.
- Pending: New executive order for high-risk AI reviews to be signed, effective date to be announced. Vendors with existing frontier models would have 90 days after enactment to submit models for review. New frontier models would require review and approval before public release.
Implementation Context
Reporting by Steven J. Vaughan-Nichols first outlined the policy shift in May 2026, drawing on interviews with administration officials and industry experts. The policy shift follows escalating concerns about frontier AI risks, including the potential for models to enable cyberattacks or bio-threats. Administration officials have cited Anthropic’s Mythos model as a key driver of the rule change, noting its capabilities could be misused by malicious actors. Despite the alignment with Biden’s original EO 14110 goals, administration officials deny the new rules mirror the prior framework. Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation, noted the new policy returns to Biden-era objectives, a characterization the administration has rejected.
Gregory Falco, assistant professor of mechanical and aerospace engineering at Cornell University, highlighted gaps in federal capacity to implement the new rules. Falco stated the federal government lacks in-house technical expertise, infrastructure, and day-to-day insight to evaluate frontier models independently. He added that purely voluntary self-governance is insufficient, suggesting the administration may rely on vendor self-reporting or third-party auditors to conduct reviews.
The exclusion of Anthropic from CAISI agreements stems from an ongoing feud between the administration and the vendor. The administration moved to block federal agency use of Anthropic tools in early 2025, and Anthropic filed suit challenging the ban. Trump recently softened his tone toward the company, telling CNBC Anthropic was "shaping up," but the administration is also considering rules forbidding vendors from interfering with government use of AI models.
Action Items for Compliance Teams
AI vendors and organizations using AI should take the following steps to prepare for the new regulatory framework.
Vendors developing frontier models should begin documenting model capabilities, security testing results, and potential national security risks now, even before the new executive order is enacted. Vendors party to CAISI agreements must allocate resources for pre-deployment evaluations and collaborative research with CAISI, including designating point-of-contact staff for agency coordination.
Organizations using Anthropic tools should monitor the ongoing litigation, as federal agency use restrictions remain in place pending court resolution. All organizations using AI should inventory their models to determine if they meet the administration’s definition of high-risk frontier systems, and prepare for potential review requirements if the new order is enacted.
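The model-inventory step described above can be sketched as a simple classification pass. Everything in this sketch is hypothetical: the capability tags, the `ModelRecord` fields, and the `needs_review` rule are illustrative placeholders, since the administration has not yet published a formal definition of high-risk frontier systems beyond its stated concerns about cyberwarfare and bio-threat capabilities.

```python
from dataclasses import dataclass, field

# Hypothetical capability tags drawn from the administration's stated
# concerns; the real criteria would come from the final executive order.
HIGH_RISK_CAPABILITIES = {"cyber_offense", "bio_design", "autonomous_weapons"}

@dataclass
class ModelRecord:
    name: str
    vendor: str
    is_frontier: bool                       # frontier-scale per vendor documentation
    capabilities: set = field(default_factory=set)

def needs_review(model: ModelRecord) -> bool:
    """Flag models that might fall under the proposed pre-deployment review:
    frontier-scale systems with at least one high-risk capability."""
    return model.is_frontier and bool(model.capabilities & HIGH_RISK_CAPABILITIES)

# Example inventory pass over two illustrative records.
inventory = [
    ModelRecord("support-chatbot", "internal", is_frontier=False,
                capabilities={"text_generation"}),
    ModelRecord("frontier-lab-model", "external", is_frontier=True,
                capabilities={"text_generation", "cyber_offense"}),
]

flagged = [m.name for m in inventory if needs_review(m)]
print(flagged)  # ['frontier-lab-model']
```

However the final criteria are worded, keeping a structured record per model makes it straightforward to re-run the classification once the order's definitions are published.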
Federal contractors that previously complied with Biden’s EO 14110 should maintain audit records, as the new administration may adopt similar reporting requirements for high-risk models. Compliance teams should track the AI working group’s progress and monitor for the release of the new executive order, which will include formal compliance deadlines and submission requirements.
