European Parliament building. Photo by Christian Lue on Unsplash.

The European Commission has proposed what it rather euphemistically calls “simplification” of the EU AI Act through its Digital Omnibus package. According to Euronews reporting, the practical effect is straightforward: high-risk AI provisions originally due August 2026 may now not take effect until December 2027, a delay of sixteen months.

One struggles to maintain enthusiasm for EU technology regulation when the pattern repeats so reliably. The AI Act was heralded as the world’s first comprehensive AI framework when it passed in 2024. Now, before it has fully taken effect, it is being substantially weakened under pressure from the very companies it was meant to regulate.

The pressure is not subtle. According to Fortune’s analysis, the Trump administration has explicitly criticised Europe’s “fear-driven” regulatory approach. Meta and Alphabet have warned that the Act’s definitions of “high-risk” AI could discourage experimentation. The tech lobby achieved what it always achieves: delays that allow continued operation under favourable conditions whilst the regulatory framework remains perpetually imminent rather than actual.

The specific changes in the Digital Omnibus merit examination. According to Cooley’s legal analysis, the Commission proposes simplified requirements for small and medium-sized companies, extended deadlines for high-risk system compliance, and centralised enforcement through the AI Office rather than member state authorities.

The centralisation is particularly notable. Rather than 27 national authorities developing potentially inconsistent interpretations, the Commission’s AI Office would oversee high-risk systems directly. This could improve coherence, but it could also reduce the likelihood of aggressive enforcement: Commission bodies tend toward a diplomatic caution that national regulators sometimes lack.

Consumer advocates are unimpressed. According to the official EU digital strategy page, the Act’s stated purpose is ensuring AI systems are “human-centric, trustworthy and safe.” The Digital Omnibus, critics argue, prioritises business interests over these protections. Agustín Reyna of the European Consumer Organisation stated bluntly that the proposal benefits Big Tech almost exclusively.

The irony is that many member states failed to meet the August 2025 deadline for designating competent authorities to enforce the Act. The delays may reflect not merely industry lobbying but genuine implementation challenges. Building regulatory capacity for AI oversight requires expertise that government agencies often lack.

For American technology companies, the practical implication is continued ambiguity. The AI Act’s extraterritorial reach means US firms serving European customers must comply—eventually. But “eventually” keeps receding. Companies that delayed compliance investments awaiting clarity may find their patience rewarded, whilst those that invested early may have overspent on premature preparation.

The broader lesson concerns the gap between regulatory ambition and execution. The EU positions itself as a global leader in technology governance. Yet its landmark AI legislation faces the same implementation challenges that have plagued GDPR enforcement: grand frameworks, modest practical effect. Whether the Digital Omnibus represents pragmatic adjustment or regulatory capture depends rather on one’s priors about European technology policy.