May 9, 2026 · 8 min read

EU AI Act Omnibus: More Time, Same Architecture

This is exactly the kind of regulatory recalibration that implementation teams should have been planning for. Human x AI Europe on May 19 in Vienna is where the people actually building AI governance frameworks will be comparing notes on what this means in practice.

What Actually Changed

At 4:30 a.m. on 7 May 2026, after marathon negotiations that nearly collapsed a week earlier, the Council and Parliament reached provisional agreement on the Digital Omnibus on AI. The headline: compliance deadlines for high-risk AI systems shift significantly. The substance: the AI Act's core requirements remain largely unchanged.

Here's the new timeline:

  • Standalone high-risk AI systems (Annex III categories: employment, biometrics, critical infrastructure, credit scoring, life and health insurance): compliance by 2 December 2027, pushed back from August 2026.
  • High-risk AI embedded in regulated products (Annex I categories: medical devices, toys, vehicles, lifts): compliance by 2 August 2028, a 12-month extension.
  • Transparency obligations for AI-generated content (watermarking, machine-readable labeling): apply from 2 December 2026 for providers whose systems were on the market before August 2026. That's a three-month grace period, not the six months originally proposed.
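The transparency obligation requires machine-readable marking, but the provisional text does not prescribe a single format (industry provenance work such as C2PA is one candidate). As a minimal illustration of the idea only, assuming a hypothetical JSON sidecar schema, a marker can bind an AI-generated disclosure flag to a hash of the content it labels:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, generator: str) -> str:
    """Build a machine-readable sidecar manifest declaring AI-generated
    content. All field names here are illustrative, not a mandated schema."""
    manifest = {
        "ai_generated": True,                            # disclosure flag
        "generator": generator,                          # producing system
        "sha256": hashlib.sha256(content).hexdigest(),   # binds label to content
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)

print(build_provenance_manifest(b"synthetic image bytes", "example-model-v1"))
```

A real deployment would follow whatever format the Commission's guidance or harmonized standards eventually endorse; the point of the sketch is that the marker travels with, and is verifiable against, the content itself.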

As Debevoise's analysis notes, "the omnibus proposal is not a major reset of the EU AI Act. It is better understood as a targeted simplification package that gives businesses longer to prepare."

The Machinery Exemption: What Germany Won

The most contentious issue in negotiations was industrial AI. German Chancellor Friedrich Merz personally lobbied for exemptions, telling German CEOs he would "push to ease the regulatory burden in the EU on AI and, where possible, to exempt industrial AI from the current regulatory straightjacket."

The result: machinery gets carved out. AI systems covered by the EU Machinery Regulation (EU) 2023/1230 will not be directly subject to AI Act high-risk requirements. Instead, the Commission will adopt delegated acts under the Machinery Regulation to add AI-specific health and safety requirements.

TechPolicy.Press reports that Parliament initially sought exemptions for all twelve sectors covered by Annex I product safety legislation. In the end, only machinery received the carve-out. Medical devices, toys, connected vehicles, and other sectors remain under the AI Act's direct application, though the Commission can issue implementing acts to resolve specific overlaps.

This matters for implementation planning. Organizations in the machinery sector need to track Machinery Regulation developments. Everyone else should continue building AI Act compliance frameworks.

New Prohibition: AI-Generated Non-Consensual Intimate Content

The Omnibus adds a new prohibited AI practice: systems that generate non-consensual sexually explicit or intimate content (NCII/NCIM), including so-called "nudifier" applications, and child sexual abuse material (CSAM).

According to Orrick's analysis, the prohibition covers "systems designed for those purposes, or systems where such outputs are reasonably foreseeable and reproducible in the absence of reasonable, proportionate and effective safeguards."

This prohibition applies from 2 December 2026. Organizations deploying generative AI systems should update risk assessment procedures now to capture this new category.

Bias Detection: Expanded but Constrained

The agreement extends the legal basis for processing special category personal data (sensitive data under GDPR) for bias detection and correction. Previously, this applied only to providers of high-risk systems under Article 10(5). Now it covers all AI systems, both providers and deployers.

The catch: processing must meet a "strict necessity" standard with mandatory safeguards including pseudonymization, access controls, no onward sharing, and timely deletion. Hogan Lovells notes this should "help reduce tension between the EU AI Act's bias-monitoring expectations and the GDPR."

This is a practical improvement. Teams building bias monitoring systems have been stuck between two regulatory frameworks with conflicting requirements. The Omnibus provides a clearer path, though the strict necessity standard means documentation requirements remain substantial.
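As a sketch of one listed safeguard, pseudonymization, the pattern below replaces subject identifiers with keyed hashes before any bias analysis and releases only aggregate per-group rates. Function names, the key-management choice, and the metric (simple selection rates) are all illustrative assumptions, not anything the Omnibus text specifies:

```python
import hmac
import hashlib
from collections import defaultdict

# Key held under access control; deleting or rotating it supports the
# "timely deletion" safeguard. Name and storage here are illustrative only.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(subject_id: str) -> str:
    """Keyed hash so records stay linkable for analysis without raw
    identifiers sitting next to special-category attributes."""
    return hmac.new(PSEUDONYM_KEY, subject_id.encode(), hashlib.sha256).hexdigest()[:16]

def pseudonymize_records(records):
    """Replace subject identifiers with keyed hashes on ingestion."""
    return [(pseudonymize(sid), group, decision) for sid, group, decision in records]

def selection_rates(pseudo_records):
    """Aggregate per-group positive-decision rates; only these aggregates
    leave the analysis boundary (no onward sharing of row-level data)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for _, group, decision in pseudo_records:
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

log = pseudonymize_records([("u1", "A", 1), ("u2", "A", 0), ("u3", "B", 1)])
print(selection_rates(log))   # {'A': 0.5, 'B': 1.0}
```

The "strict necessity" standard still has to be documented separately; code like this only demonstrates the technical safeguards, not the legal basis.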

Small Mid-Caps Get Relief

The AI Act's SME (Small and Medium-sized Enterprise) privileges now extend to a new category: small mid-cap companies. Orrick defines this category as enterprises employing fewer than 750 people with annual turnover not exceeding €150 million or annual balance sheet total not exceeding €129 million.

Benefits include simplified technical documentation templates, proportionate quality management expectations, priority access to regulatory sandboxes, and tailored penalty caps.
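The category boundaries translate directly into a screening check. The thresholds below are as reported in Orrick's summary (fewer than 750 employees, and either turnover or balance-sheet total under the cap); the definitive figures should be confirmed against the final legal text:

```python
def is_small_mid_cap(headcount: int, turnover_eur: float, balance_sheet_eur: float) -> bool:
    """Screen for the new small mid-cap category as reported: fewer than
    750 employees, and annual turnover <= EUR 150m OR annual balance
    sheet total <= EUR 129m. Verify against the adopted text."""
    financial_ok = turnover_eur <= 150_000_000 or balance_sheet_eur <= 129_000_000
    return headcount < 750 and financial_ok

print(is_small_mid_cap(600, 120_000_000, 200_000_000))  # True: headcount + turnover
print(is_small_mid_cap(700, 200_000_000, 100_000_000))  # True: headcount + balance sheet
print(is_small_mid_cap(800, 120_000_000, 100_000_000))  # False: headcount too high
```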

Governance Changes

The AI Office gains supervisory competence over AI systems based on general-purpose AI (GPAI) models where the model and system are developed by the same provider. It also supervises AI systems integrated into very large online platforms (VLOPs) and very large online search engines (VLOSEs) designated under the Digital Services Act.

National authorities retain competence for law enforcement, border management, judicial authorities, and financial institutions.

The deadline for Member States to establish at least one national AI regulatory sandbox extends from August 2026 to August 2027. A new EU-level sandbox operated by the AI Office will provide priority access for SMEs, startups, and small mid-caps.

What Didn't Change

The AI Act's core architecture remains intact:

  • Risk-based classification system
  • High-risk requirements (risk management, data governance, documentation, transparency, human oversight, accuracy)
  • GPAI model obligations
  • Prohibited practices (with the new NCII/CSAM addition)
  • Fundamental rights impact assessment requirements for deployers

IAPP reports that the obligation for providers to register AI systems in the EU database for high-risk systems has been reinstated, even where providers consider their systems exempt from high-risk classification.

What Implementation Teams Should Do Now

The Omnibus buys time. It doesn't eliminate work. Here's the practical checklist:

Immediate (by December 2026):

  • Update AI policies to capture the new NCII/CSAM prohibition
  • Ensure generative AI systems have transparency solutions (watermarking, machine-readable labeling) ready for deployment

Near-term (2027):

  • Refresh implementation trackers with new deadlines
  • Continue building high-risk AI governance frameworks; the requirements haven't changed, only the timeline
  • Monitor Commission guidance and technical standards as they're published

For machinery sector organizations:

  • Track delegated acts under the Machinery Regulation
  • Maintain documentation that demonstrates compliance with AI-specific health and safety requirements

For all other Annex I sectors:

  • Plan for August 2028 compliance
  • Watch for Commission implementing acts addressing sector-specific overlaps
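The deadlines in the checklist above can be dropped into a simple tracker. The dates are those reported for the provisional agreement and may still shift during formal adoption; the labels are shorthand, not official names:

```python
from datetime import date

# Deadlines as reported for the provisional agreement (subject to adoption).
DEADLINES = {
    "NCII/CSAM prohibition + transparency marking": date(2026, 12, 2),
    "Standalone high-risk systems (Annex III)": date(2027, 12, 2),
    "Embedded high-risk systems (Annex I)": date(2028, 8, 2),
}

def days_remaining(today: date) -> dict:
    """Days left to each Omnibus deadline, ordered soonest-first."""
    return {name: (d - today).days
            for name, d in sorted(DEADLINES.items(), key=lambda kv: kv[1])}

for name, days in days_remaining(date(2026, 5, 9)).items():
    print(f"{days:>4} days  {name}")
```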

The Bigger Picture

As TechPolicy.Press observes, the AI Omnibus is "only a precursor to the more consequential digital simplification package: the Data Omnibus." That package proposes changes to GDPR, including narrowing the definition of personal data and recognizing AI training as a legitimate interest for data processing.

The AI Omnibus demonstrates that the EU can move quickly when deadlines force action. It also shows that industry pressure, particularly from Germany, can reshape regulatory timelines. The core framework survived, but the precedent is set: implementation challenges can trigger recalibration.

For teams building AI governance, the message is clear: the destination hasn't changed, but the journey just got longer. Use the time to build systems that will actually work when the deadlines arrive.

Frequently Asked Questions

Q: When do high-risk AI system obligations now apply under the EU AI Act?

A: Standalone high-risk AI systems (Annex III) must comply by 2 December 2027. High-risk AI systems embedded in regulated products (Annex I) must comply by 2 August 2028.

Q: What is the new prohibition on AI-generated intimate content?

A: The Omnibus bans AI systems that generate non-consensual sexually explicit or intimate content (NCII/NCIM), including "nudifier" applications, and child sexual abuse material (CSAM). This prohibition applies from 2 December 2026.

Q: How does the machinery exemption work?

A: AI systems covered by the EU Machinery Regulation are exempted from direct AI Act applicability. The Commission will adopt delegated acts under the Machinery Regulation to add AI-specific health and safety requirements instead.

Q: What is a "small mid-cap" company under the AI Act?

A: Small mid-cap enterprises employ fewer than 750 people and have annual turnover not exceeding €150 million or annual balance sheet total not exceeding €129 million. They now receive SME-style regulatory relief including simplified documentation and proportionate penalties.

Q: When must providers implement transparency solutions for AI-generated content?

A: Providers of AI systems that generate synthetic content must ensure outputs are marked in machine-readable format by 2 December 2026. This applies to systems placed on the market before August 2026.

Q: Is the AI Omnibus now law?

A: No. The provisional agreement requires formal adoption by both the European Parliament and Council, followed by legal-linguistic revision and publication in the Official Journal. Co-legislators intend to complete this process before 2 August 2026.
