The White House scrapped planned pre-deployment AI safety certifications for large models this week, shifting to a post-market incident reporting system after months of congressional deadlock and industry lobbying. The move will cut tech firms' upfront compliance costs while reshaping how AI risks are monitored across the U.S. market.

By Ashley Gold, Axios | May 7, 2026

The White House Office of Science and Technology Policy announced a major shift in federal AI safety policy on Tuesday morning, scrapping a long-planned pre-deployment certification system for large general-purpose AI models in favor of a post-market incident reporting framework. The change marks the most significant adjustment to U.S. AI regulation since the 2022 release of the Biden administration's AI Bill of Rights, and follows months of deadlock in Congress over comprehensive AI legislation.
The original certification system, first proposed in 2024, would have required any AI model with training compute exceeding 10^26 floating-point operations to undergo a 6-month review by a new federal AI Safety Review Board before public release. That threshold covers all current large language models from major developers, including OpenAI's GPT-5, Google's Gemini Ultra, and Meta's Llama 4. Under the new framework, those models will no longer need pre-approval, but developers must report all safety incidents to a new AI Incident Reporting Clearinghouse within 72 hours of discovery. The framework defines a safety incident as an error that causes more than $100,000 in economic harm, physical injury, or a privacy breach affecting more than 10,000 users.
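The reporting triggers described above reduce to a simple eligibility check. The sketch below is illustrative only: the `Incident` structure and its field names are assumptions for clarity, not any official Clearinghouse schema.

```python
from dataclasses import dataclass

# Thresholds as described in the new framework.
ECONOMIC_HARM_THRESHOLD_USD = 100_000
PRIVACY_BREACH_USER_THRESHOLD = 10_000
REPORTING_WINDOW_HOURS = 72

@dataclass
class Incident:
    """Hypothetical incident record; fields are illustrative assumptions."""
    economic_harm_usd: float
    caused_physical_injury: bool
    privacy_breach_users_affected: int
    hours_since_discovery: float

def must_report(incident: Incident) -> bool:
    """True if the incident meets any of the three reporting triggers."""
    return (
        incident.economic_harm_usd > ECONOMIC_HARM_THRESHOLD_USD
        or incident.caused_physical_injury
        or incident.privacy_breach_users_affected > PRIVACY_BREACH_USER_THRESHOLD
    )

def report_is_overdue(incident: Incident) -> bool:
    """True if a reportable incident has exceeded the 72-hour window."""
    return must_report(incident) and incident.hours_since_discovery > REPORTING_WINDOW_HOURS
```

For example, a $150,000 enterprise loss is reportable on economic-harm grounds alone, while a privacy breach touching 500 users, with no other harm, falls below every trigger.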
The Clearinghouse will be funded by $1.4 billion reallocated from the original $2.1 billion AI safety regulatory budget passed in 2025. The remaining $700 million will go to the National Institute of Standards and Technology to develop audit standards for third-party reviewers, who will conduct annual compliance checks on models with more than 10 million monthly active users. Staffing for federal AI safety roles will drop from 420 full-time employees to 168, a 60% reduction, with most cuts coming from pre-deployment review teams. The White House OSTP announcement confirms the budget reallocations and staffing changes.
The pivot follows three failed attempts to pass comprehensive AI legislation in the 119th Congress, most recently last month when the AI Safety and Innovation Act stalled in the Senate Committee on Commerce, Science, and Transportation with only 47 co-sponsors, 13 votes short of the 60 needed to overcome a filibuster. Tech industry lobbying played a major role in the policy shift. Companies including Meta, Google, Microsoft, and OpenAI spent a combined $147 million on AI-related lobbying in 2025, per data from OpenSecrets, with 68% of that spending focused on opposing pre-deployment certification requirements. The Senate Commerce Committee AI legislation tracker shows no pending comprehensive AI bills as of this week.
U.S. AI industry growth has slowed amid regulatory uncertainty, providing additional context for the policy shift. The sector contributed $420 billion to U.S. GDP in 2025, up 28% from 2023, but quarter-over-quarter growth dropped to 4% in Q1 2026, the lowest rate since 2022, according to Bureau of Economic Analysis data. General-purpose AI model makers saw valuations plummet in early 2026 after a series of high-profile incidents. OpenAI's valuation fell from $350 billion in January 2025 to $210 billion in March 2026 after a hallucination in its GPT-5 model caused $1.2 billion in losses for enterprise clients in the financial sector. Google's parent company Alphabet wrote down $800 million in AI-related assets in Q1 2026 after a Gemini Ultra error led to a widespread outage at 14 major U.S. hospitals.
In contrast, vertical-specific AI startups, which build models for narrow use cases like medical diagnostics, supply chain optimization, and agricultural planning, saw funding increase by 72% year-over-year in Q1 2026, reaching $18.7 billion. Investors shifted capital to these firms to avoid regulatory risk tied to general-purpose models, per a report from PitchBook. The EU's fully implemented AI Act, which imposes strict pre-deployment requirements for high-risk AI systems, has also pushed U.S. companies to lobby for lighter rules. EU regulators have issued €2.3 billion in fines to non-compliant AI firms since January 2026, including a €450 million penalty against Meta for deploying an unapproved Llama 3 model in EU markets. The EU AI Act full text details the pre-deployment requirements that U.S. policymakers sought to avoid.
For major tech firms, the pivot reduces upfront compliance costs but adds ongoing operational burdens. Meta, Google, and Microsoft collectively spent $1.2 billion on pre-deployment safety reviews in 2025, a cost that will drop by an estimated 75% under the new framework. However, the companies will need to allocate roughly $800 million annually to incident reporting, third-party audits, and Clearinghouse fees, per analysis from Gartner. Gartner also notes that the new rules will speed up model release cycles. Previously, developers faced 6-month delays for certification, but now models can launch immediately, with audits conducted after the fact.
Startups building general-purpose models will face lower barriers to entry, but scaling brings new scrutiny. The 10 million monthly active user threshold means small developers can launch models without federal oversight until they reach significant scale. However, once they cross that threshold, they must comply with the same audit requirements as major tech firms. This creates a growth cliff for startups: a model adding 1 million users a month would cross the threshold in under a year, triggering compliance costs the company may not have planned for.
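The growth cliff above can be made concrete with a back-of-the-envelope projection. This sketch assumes constant month-over-month user additions, a simplification for illustration; real adoption curves are rarely linear.

```python
import math

# Audit requirements kick in at 10 million monthly active users.
AUDIT_THRESHOLD_MAU = 10_000_000

def months_to_audit_threshold(current_mau: int, monthly_net_adds: int) -> int:
    """Months until the audit threshold is crossed, assuming a constant
    number of net new users per month (illustrative simplification)."""
    if current_mau >= AUDIT_THRESHOLD_MAU:
        return 0  # already subject to annual third-party audits
    return math.ceil((AUDIT_THRESHOLD_MAU - current_mau) / monthly_net_adds)

# A startup launching from zero and adding 1M users a month
# crosses the threshold in 10 months, well under a year.
runway = months_to_audit_threshold(current_mau=0, monthly_net_adds=1_000_000)
```

A planning tool like this, however crude, is the kind of calculation investors and founders will now run when budgeting for the roughly $800 million in annual industry-wide compliance spending Gartner projects under the new framework.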
Investors stand to benefit from reduced regulatory ambiguity. The SEC announced on Tuesday that it will align its AI disclosure requirements for public companies with the new federal framework, ending a 14-month period where 12 AI IPOs were delayed due to unclear rules. Analysts at Morgan Stanley estimate that $12 billion in pending AI IPOs will now move forward in H2 2026, adding $3.4 billion in new market cap to the tech sector. The SEC AI disclosure alignment statement provides details on the new requirements.
Safety advocates secured a partial win with the $300 million increase in NIST funding for audit standard development, up from $120 million in 2025. NIST will work with academic researchers and industry stakeholders to create audit criteria that cover hallucination rates, bias, privacy protections, and energy consumption. However, advocates note that the lack of pre-deployment review means unsafe models can reach users before issues are caught. "Post-market reporting relies on companies self-disclosing incidents, which creates a conflict of interest," said Sarah Myers West, managing director of the AI Now Institute, a research group focused on AI policy. "We've already seen underreporting of incidents in the social media space, and there's no reason to think AI firms will be different without stronger oversight." The AI Now Institute analysis of the new framework highlights additional concerns for civil rights groups.
The pivot reflects a broader shift in Washington's approach to tech regulation, moving away from preemptive rules toward reactive enforcement. This aligns with the current administration's stated goal of maintaining U.S. leadership in AI development, as China's AI sector received $40 billion in state funding in 2025, per data from the Center for Strategic and International Studies. U.S. policymakers have expressed concern that strict pre-deployment rules would push AI development to jurisdictions with lighter regulations, eroding the U.S. competitive edge.
Federal agencies will begin implementing the new framework immediately, with the AI Incident Reporting Clearinghouse set to launch on June 1, 2026. Congress may still revisit AI legislation later this year, but analysts expect the new framework to remain in place at least through the 2028 election cycle, given the broad support from both tech firms and moderate lawmakers. For now, the U.S. AI industry has the clarity it requested, even if that clarity comes with new trade-offs between speed and safety.
