The European Union has agreed to delay enforcement of key high-risk AI Act provisions by 16 months and simplify overlapping compliance rules, bowing to pressure from major tech and industrial firms that warned strict regulations would push the bloc out of the global AI race. The provisional Digital Omnibus package also adds new bans on non-consensual sexual deepfakes and AI-generated child abuse material, balancing industry demands with new user protections.

The European Union's flagship artificial intelligence regulation, the AI Act, will see key enforcement deadlines pushed back by up to 16 months, after EU lawmakers and member states reached a provisional agreement on a simplification package Thursday, following months of industry backlash.
The deal, branded the "Digital Omnibus on AI," trims and delays parts of the 2024 regulation that tech and industrial firms had repeatedly warned were unworkable. Under the original timeline, rules covering high-risk AI systems used in biometrics, critical infrastructure, education, employment, migration and border control were set to take effect on August 2, 2026. The new agreement moves that deadline to December 2, 2027, a 16-month delay. For AI systems embedded in physical products such as lifts, toys and industrial machinery, compliance deadlines now stretch to August 2, 2028, a year beyond the August 2027 date originally set for product-embedded systems.
The rollback marks a significant shift for the EU, which spent years positioning itself as the world's leading tech regulator, adopting strict rules for data privacy under the General Data Protection Regulation (GDPR), for digital markets under the Digital Markets Act (DMA) and, with the AI Act, for artificial intelligence itself. That stance has faced mounting pressure from both the U.S. government and European industry groups, which argue the bloc has focused on regulating technologies it struggles to produce at scale, risking a competitive disadvantage in the global AI race.
Executives from major European firms including ASML, Airbus, Ericsson, Nokia, SAP, Siemens and Mistral AI publicly warned earlier this week that overregulation would push the EU out of the AI market entirely. Smaller businesses also raised concerns about overlapping compliance requirements, noting that AI systems embedded in existing products were subject to both the AI Act and older product safety laws, creating duplicate paperwork and unclear standards.
European Commission officials maintain the delay is not a retreat from the AI Act's core goals, but a practical adjustment to align rule enforcement with the development of technical standards and compliance tools. "Our businesses and citizens want two things from AI rules," said Henna Virkkunen, the Commission's Executive Vice-President for Tech Sovereignty, Security and Democracy. "They want to be able to innovate and feel safe." Commission President Ursula von der Leyen welcomed the agreement in a post on X, framing it as a way to provide "a simple, innovation-friendly environment" while maintaining protections for EU citizens.
The package includes several changes that go beyond deadline extensions. Industrial sectors subject to overlapping AI Act and product safety requirements will see rules untangled to reduce duplicate compliance work, a major win for manufacturers of physical goods with embedded AI. Smaller companies will also get additional breathing room to meet requirements, with the Commission committing to publish clearer guidance and technical tools to support compliance ahead of the new deadlines.
Not all changes loosen existing rules. The agreement adds new prohibited uses of AI to the Act's list of banned practices, following global backlash over abusive generative AI tools including xAI's Grok chatbot. AI systems used to create non-consensual sexual deepfakes and child sexual abuse material (CSAM) are now explicitly banned under EU law, with violations carrying potential fines of up to 7% of a company's global annual revenue, in line with existing AI Act penalty tiers.
Providers that claim exemptions from high-risk AI classification will still be required to register those systems in the EU's public AI database, a measure designed to maintain transparency even as some compliance deadlines are extended. The Commission also confirmed that core provisions of the AI Act, including rules for general-purpose AI models, will remain on their original timeline, with some requirements taking effect as early as 2025.
For users, the changes bring a mix of short-term delays and long-term protections. The delayed high-risk rules mean systems used in employment screening, border checks and education will not face full EU oversight until late 2027, a point of concern for digital rights advocates who argue that vulnerable groups are most at risk from unregulated high-risk AI. However, the new bans on abusive deepfakes address a growing gap in existing laws, which often struggled to classify non-consensual synthetic media as a distinct harm.
The provisional agreement still needs formal approval from the European Parliament and EU member states, a process that typically takes several weeks. Once adopted, the changes will be binding across all 27 EU member states, replacing the original deadline provisions in the AI Act. Critics have already labeled the move a retreat, arguing that the EU is prioritizing industry profits over user safety, while supporters say the adjustments are necessary to keep the bloc's AI industry viable amid fierce competition from the U.S. and China.
The EU's adjustment comes as other major jurisdictions review their own AI rules. The California Consumer Privacy Act (CCPA) already includes provisions governing automated decision-making, and state lawmakers have proposed additional AI transparency requirements in 2026. Unlike the EU's unified approach, the U.S. has so far relied on sector-specific rules and state-level legislation, creating a patchwork of requirements for tech firms operating across multiple markets. The GDPR, which set a global benchmark for data privacy in 2018, remains the foundation for many of the EU's tech regulations, including the AI Act's requirements for transparency and user consent.
