Anthropic Calls for New U.S. Export Controls to Limit China’s AI Development Before 2028

Anthropic urges the United States and allied democracies to tighten export controls on AI chips and restrict access to advanced foundation models, arguing that decisive action before 2028 is needed to keep AI governance in democratic hands.

Regulatory action proposed by Anthropic
Anthropic’s public brief, posted on 15 May 2026, asks the U.S. government to adopt two concrete measures:
- Tighten export controls on high‑performance AI hardware – specifically Nvidia’s A100 and H200 GPUs, along with future accelerators capable of training large foundation models.
- Restrict the re‑export of U.S.‑origin AI models – meaning that any model trained on U.S. cloud infrastructure or using U.S.‑derived weights could not be shared with entities in China without a special license.
Anthropic frames these steps as “the only realistic way to preserve a democratic AI ecosystem before the emergence of transformative systems in 2028.”
What the measures require
1. Export‑control tightening (EAR/ITAR updates)
- Scope expansion – The Bureau of Industry and Security (BIS) would need to move AI‑accelerator GPUs (currently controlled under ECCN 3A090 in Category 3 of the Commerce Control List) into a higher‑sensitivity tier, building on the 2023 restrictions on advanced computing chips and semiconductor equipment.
- License‑exception narrowing – License exceptions that currently allow bulk sales of GPUs to research institutions would be narrowed, requiring case‑by‑case review for any shipment destined for China or for entities with a Chinese‑affiliated supply chain.
- Compliance reporting – Exporters must file quarterly declarations of end‑use and end‑user certifications, with penalties for misrepresentation of up to $1 million per violation.
2. Model‑access restrictions
- Model‑origin definition – Any foundation model that incorporates U.S.‑origin training data, weights, or compute resources would be classified as a “controlled AI asset.”
- Re‑export license – Transfer of such models to Chinese nationals, companies, or government‑affiliated labs would require a specific license from the Office of Export Enforcement (OEE).
- Audit trail – Companies must maintain immutable logs (e.g., using blockchain‑based provenance tools) that record every request to export a model, the approving authority, and the final recipient.
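As a rough sketch of the audit-trail requirement, a hash-chained append-only log can make tampering detectable without a full blockchain deployment. The class and field names below are illustrative assumptions, not part of Anthropic’s proposal or any BIS-mandated format:

```python
import hashlib
import json
from datetime import datetime, timezone

class ExportAuditLog:
    """Append-only log of model-export requests.

    Each entry embeds the SHA-256 hash of the previous entry, so any
    retroactive edit or deletion breaks the chain and is detectable
    during an audit.
    """

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def record(self, model_id, requester, approving_authority, recipient):
        """Append one export request and return its entry hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "requester": requester,
            "approving_authority": approving_authority,
            "recipient": recipient,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Return True only if no entry has been altered or removed."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

In practice such a log would be anchored to external storage (or a timestamping service) so the chain itself cannot be silently regenerated, but the chaining idea is the core of the requirement.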
Both actions align with existing frameworks such as the Export Control Reform Act (ECRA) of 2018 and the International Traffic in Arms Regulations (ITAR), but they extend the definition of “technology” to include software artifacts that were previously treated as “public domain.”
Compliance timeline suggested by Anthropic
| Milestone | Deadline | Required actions |
|---|---|---|
| Policy announcement | 30 June 2026 | BIS publishes a Notice of Proposed Rulemaking (NPRM) to reclassify AI GPUs and define “controlled AI assets.” |
| Public comment period | 31 July 2026 – 31 August 2026 | Stakeholders submit comments; Anthropic pledges to provide technical data on model‑distillation risks. |
| Final rule issuance | 30 September 2026 | BIS issues final rule; OEE updates licensing guidance for AI models. |
| Implementation start | 1 January 2027 | Exporters must obtain new licenses for any GPU shipments to China; AI firms must integrate model‑audit tooling. |
| First compliance audit | 1 July 2027 | BIS conducts on‑site audits of major GPU distributors and cloud providers. |
| Review & adjustment | 1 January 2028 | Government evaluates effectiveness; may tighten or relax controls based on measured impact on Chinese AI capability gaps. |
Anthropic argues that this schedule gives democratic nations a 12‑month window to lock in their “compute advantage” before the projected 2028 rollout of transformative AI systems.
Why the timeline matters
- Rapid model scaling – Recent scaling studies suggest that a well‑resourced lab can complete a training run producing a model comparable to Claude 2 within roughly six months. Delaying controls beyond early 2027 would therefore allow Chinese labs to complete several such runs before the rules take effect.
- Supply‑chain inertia – GPU manufacturers typically ship in quarterly batches; a January 2027 start date forces the first batch of restricted chips to be held back, reducing the immediate influx of high‑end compute to China.
- Norm‑setting window – If democratic actors retain the lead on AI development through 2027, they can embed human‑rights safeguards into model architectures and licensing agreements before authoritarian regimes can replicate them at scale.
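For context on what a “compute advantage” means in concrete terms, training cost is commonly approximated with the rule of thumb C ≈ 6·N·D floating-point operations for a model with N parameters trained on D tokens. The sketch below converts that estimate to petaflop‑days; the example figures are illustrative assumptions, not numbers from Anthropic’s brief:

```python
def training_petaflop_days(params: float, tokens: float) -> float:
    """Estimate training compute via the common C ≈ 6·N·D rule of thumb,
    expressed in petaflop-days (1 petaflop-day = 1e15 FLOP/s * 86400 s)."""
    flops = 6.0 * params * tokens
    return flops / (1e15 * 86_400)

# Illustrative only: a 1B-parameter model on 20B tokens needs
# roughly 1.4 petaflop-days under this approximation.
example = training_petaflop_days(1e9, 2e10)
```

Because the estimate is linear in both parameters and tokens, restricting access to high-end accelerators raises the wall-clock time of every such run, which is the mechanism the timeline argument relies on.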
Practical steps for companies
- Inventory hardware – Identify all AI‑accelerator GPUs in inventory destined for export and classify them under the new BIS schedule.
- Implement model provenance – Deploy tools such as Weights & Biases Model Registry or open‑source provenance frameworks to tag every model with its origin metadata.
- Update internal licensing processes – Train export‑control teams on the new “controlled AI asset” definition and integrate license‑request workflows into ERP systems.
- Engage with regulators – Submit technical comments during the NPRM period; provide anonymized data on distillation attacks to help shape realistic thresholds.
- Prepare audit evidence – Keep immutable logs (e.g., append‑only records with cryptographic hash chaining) that can be exported to BIS auditors on demand.
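A minimal sketch of the model‑provenance step, assuming a simple JSON record keyed by a SHA‑256 hash of the weights file. The schema and the controlled‑asset rule encoded here paraphrase the proposal’s “controlled AI asset” definition and are illustrative, not a BIS‑mandated format:

```python
import hashlib
import json
from pathlib import Path

def build_provenance_record(weights_path: str, *,
                            us_origin_data: bool,
                            us_origin_compute: bool,
                            training_region: str) -> dict:
    """Tag a model artifact with origin metadata for export review."""
    digest = hashlib.sha256(Path(weights_path).read_bytes()).hexdigest()
    record = {
        "weights_sha256": digest,
        "us_origin_data": us_origin_data,
        "us_origin_compute": us_origin_compute,
        "training_region": training_region,
    }
    # Under the proposed definition, any U.S.-origin input (data, weights,
    # or compute) makes the model a controlled AI asset that needs a
    # re-export license before transfer.
    record["controlled_ai_asset"] = us_origin_data or us_origin_compute
    return record

def export_for_audit(record: dict, out_path: str) -> None:
    """Serialize the record so it can be handed to auditors on demand."""
    Path(out_path).write_text(json.dumps(record, indent=2, sort_keys=True))
```

Tying the record to a content hash rather than a model name means auditors can confirm that the artifact they inspect is the one the license covered.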
Outlook
Anthropic’s proposal is not a new law, but a policy blueprint that could be adopted by the United States and its allies within the next year. If enacted, the measures would create a legal barrier that limits China’s ability to acquire the most powerful AI chips and to repurpose U.S.‑origin models for domestic use. The success of the approach will depend on the speed of implementation, the rigor of compliance audits, and the willingness of allied jurisdictions to harmonize their export‑control regimes.
*For further reading on U.S. export controls and AI, see the BIS AI Export Guidance (2023) and the EU Digital Sovereignty Strategy.*