
Executive Order 2025‑12: A Preemption Playbook with a Safety Exception

On December 11, 2025, President Trump signed an executive order (EO) that seeks to strip states of the authority to regulate artificial intelligence. The order’s language, however, contains a narrow carve‑out that explicitly protects state laws targeting child safety, including those aimed at AI‑generated child sexual abuse material (AI‑CSAM). The result: while the federal government pursues a national AI framework, state‑driven child‑protection measures are likely to remain untouched.

“The EO’s child‑safety carve‑out may operate as a limit on EO execution strategy—even though it’s found only in Section 8, not other sections.” – Stanford Cyberlaw

How the Order Works

The EO is structured around several key sections:

  • Section 3 – Establishes an AI Litigation Task Force charged with challenging state AI laws that conflict with the administration’s policy.
  • Section 4 – Creates a list of litigation targets, identifying state laws deemed inconsistent with the EO.
  • Section 5 – Ties federal broadband funding to compliance; states must align their AI regulation with federal expectations to receive funds.
  • Section 8(a) – Directs senior officials to draft a preemption proposal: a blueprint for how state AI laws might be overridden.
  • Section 8(b) – Carves out exemptions, explicitly exempting laws related to child‑safety protections, AI compute infrastructure, state AI procurement, and unspecified “other topics.”

The carve‑outs in Section 8(b) are the only provision that formally limits the reach of the preemption proposal, and they appear nowhere else in the order. While Sections 3, 4, and 5 lack similar restrictions, the author of the source article argues that the carve‑outs will constrain those sections in practice as well, especially given the limited resources available to federal agencies.

Why Child‑Safety Laws Are Safe

  1. Policy Alignment – The EO’s stated goal is to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework.” Child‑safety laws, which focus on the output of AI models rather than the models themselves, do not threaten that objective.
  2. Federal Precedent – The TAKE IT DOWN Act already criminalizes non‑consensual deepfake pornography, and federal law has long prohibited computer‑generated CSAM. State laws that add their own layers of protection are a natural extension.
  3. Political Reality – Challenging child‑safety legislation would be politically toxic: the optics of the DOJ suing states over CSAM laws would be disastrous, especially in a climate still sensitive to the Epstein scandal.

Operational Constraints

The EO’s implementation faces a classic “resource squeeze.” With many federal employees reassigned or dismissed, the DOJ, FTC, and other agencies have limited bandwidth. The carve‑outs therefore serve a practical triage function: focus limited staff on high‑profile state AI governance laws (e.g., California’s AI Act) while ignoring the comparatively straightforward child‑safety statutes.

The Broader Implications for State AI Governance

  • State Autonomy Preserved – The carve‑outs signal that states can continue to legislate on AI‑chatbot safety for children without federal retaliation.
  • Preemption Likely on Other Areas – States that push for broad AI governance—such as data‑privacy standards or algorithmic accountability—may still face federal preemption or litigation.
  • Potential for New Child‑Safety Legislation – With the EO’s “hands‑off” stance on child safety, states are poised to introduce bills regulating AI chatbots, especially after high‑profile incidents involving teen suicides.

“Child safety has been a predominant issue in the federal and state legislatures alike for several years running, and even if state laws addressing that topic might be ‘burdensome’ on AI companies, it is a hard sell politically to let only AI companies get a hall pass from compliance.” – Stanford Cyberlaw

Bottom Line

The Trump administration’s executive order represents a calculated compromise: a sweeping preemption agenda tempered by a narrow exemption for child‑safety laws. While the EO may curb state innovation in AI governance, it leaves the most urgent child‑protection statutes untouched. For developers and policymakers, the key takeaway is that state‑level AI regulation—particularly around child safety—remains a viable path forward, even in the face of federal preemption.

Source: Stanford Cyberlaw, “Well, at Least the Anti‑States‑Rights AI EO Spares AI‑CSAM Laws” (https://cyberlaw.stanford.edu/blog/2025/12/well-at-least-the-anti-states-rights-ai-eo-spares-ai-csam-laws/)
