May 8, 2026 · 8 min read

EU AI Act Delay: What the Omnibus Deal Actually Means for Your Compliance Timeline

In Brief

New deadlines confirmed: Stand-alone high-risk AI systems now face compliance by 2 December 2027; AI embedded in regulated products by 2 August 2028

Watermarking deadline tightened: Providers must implement transparency measures for AI-generated content by 2 December 2026, not February 2027

New prohibition added: AI systems generating non-consensual intimate imagery or child sexual abuse material are now banned

Machinery exemption secured: The Machinery Regulation is carved out from direct AI Act applicability, a win for German industrial interests

Registration obligation survives: Providers must register systems in the EU database even when claiming exemption from high-risk classification

The provisional agreement reached on 7 May 2026 gives implementation teams breathing room, but the architecture of the AI Act remains intact. This is likely the last delay.

The question now is whether 16 extra months changes anything fundamental about European AI competitiveness, or whether it just postpones the same compliance crunch. That conversation is happening at Human x AI Europe in Vienna on May 19, where founders, investors, policymakers, and builders are working through exactly this tension.

What Actually Happened on 7 May

The Council of the EU and European Parliament reached provisional agreement on targeted amendments to the AI Act after negotiations that ran until approximately 4:30 a.m. The deal forms part of the Digital Omnibus on AI, itself one of ten simplification packages the Commission has tabled since February 2025.

The original August 2026 deadline for high-risk AI obligations was never realistic. Harmonised standards weren't ready. Conformity assessment infrastructure wasn't in place. The Commission knew this when it proposed the delay in November 2025. The co-legislators agreed.

Here's the new calendar:

  • Already in force: Article 5 prohibitions, Article 4 AI literacy, Articles 50-55 GPAI
  • 2 December 2026: Watermarking and synthetic content disclosure
  • 2 August 2027: National regulatory sandboxes operational
  • 2 December 2027: High-risk obligations for Annex III stand-alone systems
  • 2 August 2028: High-risk obligations for AI embedded in Annex I products
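The gaps between the 7 May agreement and each of these dates are easy to quantify for planning purposes. A minimal sketch using Python's standard library, with the dates taken from the calendar above:

```python
from datetime import date

# New applicability dates from the provisional agreement (calendar above)
DEADLINES = {
    "Watermarking / synthetic content disclosure": date(2026, 12, 2),
    "National regulatory sandboxes": date(2027, 8, 2),
    "High-risk: Annex III stand-alone systems": date(2027, 12, 2),
    "High-risk: AI embedded in Annex I products": date(2028, 8, 2),
}

AGREEMENT = date(2026, 5, 7)  # date of the provisional deal

for label, deadline in DEADLINES.items():
    days = (deadline - AGREEMENT).days
    # 30.44 = average days per month, good enough for a planning view
    print(f"{label}: {days} days (~{days / 30.44:.0f} months)")
```

Run against the agreement date, the watermarking obligation lands at 209 days out, which is where the "seven months" framing below comes from.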

The watermarking deadline is the nearest live obligation. Seven months from now. That's engineering work, not paperwork.
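What "machine-readable" means in practice will ultimately be pinned down by standards and implementing guidance, but the basic shape is a structured provenance record attached to each generated output. A minimal sketch of such a record, assuming an illustrative JSON structure (none of these field names are mandated by the Act):

```python
import hashlib
import json
from datetime import datetime, timezone

def disclosure_manifest(content: bytes, generator: str) -> dict:
    """Build a machine-readable disclosure record for AI-generated content.

    Field names here are illustrative only, not an official or mandated
    schema; real implementations should track the applicable standards.
    """
    return {
        "ai_generated": True,
        "generator": generator,
        # Hash binds the disclosure to the exact output bytes
        "sha256": hashlib.sha256(content).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    }

sample = b"synthetic image bytes"
record = disclosure_manifest(sample, "example-model-v1")
print(json.dumps(record, indent=2))
```

Binding the disclosure to a content hash is one design choice among several; embedding metadata directly in the output file format is another. Either way, this is the class of work that has to ship by December.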

The Machinery Carve-Out: Germany Got What It Wanted

According to Politico, Chancellor Friedrich Merz pushed hard for changes that would keep Siemens and Bosch competitive. The result: the Machinery Regulation is exempted from direct AI Act applicability. Health and safety requirements for high-risk AI in machinery products will be added through delegated acts under the Machinery Regulation itself.

For machinery manufacturers, this means one conformity assessment regime under sectoral law, with AI-specific requirements layered into that regime rather than running on a parallel track.

For everyone else in Annex I sectors (medical devices, toys, lifts, watercraft), the picture is murkier. As Hogan Lovells notes, the Commission can limit AI Act application through implementing acts where sectoral law has similar requirements. Those implementing acts don't exist yet. If you produce regulated AI-enabled products outside machinery, your conformity assessment path will be defined by a Commission decision that hasn't been written.

Engaging with the relevant Directorate-General now is the rational move.

The Registration Obligation That Survived

Most coverage will lead with the new dates and the nudification ban. The operationally consequential provision is buried lower: the registration obligation under Article 6(3) survived intact.

The Commission's original proposal would have deleted the requirement to register AI systems in the EU database when providers self-assess them as not meeting the high-risk threshold. Both Council and Parliament rejected the deletion.

This converts self-assessment from a private internal memo into a public artefact. Every provider claiming their HR, education, credit, or law-enforcement-adjacent AI doesn't meet the high-risk threshold will have to file that position in an EU database and stand behind it. National competent authorities will have a queryable list of borderline classification calls perfectly arranged for thematic enforcement sweeps.

The "classify out of scope, hope nobody notices" strategy is now operationally dead.

New Prohibition: AI-Generated Intimate Imagery

The deal adds a new banned practice under Article 5: AI systems that generate non-consensual sexual or intimate content, or child sexual abuse material (CSAM). According to IAPP, this prohibition takes effect 2 December 2026.

This wasn't in the Commission's text. The European Parliament pushed it through, reportedly in response to controversy surrounding explicit content generated by xAI's Grok chatbot. When legislators spot a vehicle, they use it. Expect more of the same.

Why This Is Probably the Last Delay

Five reasons to plan against the 2027 and 2028 dates holding:

The political argument for delay has been used. The case for postponement was that standards and infrastructure weren't ready. Using the same argument twice destroys its credibility.

The Cypriot Presidency framed this as a flagship deliverable. The Council statement explicitly calls it the first deliverable under the One Europe, One Market roadmap. Reopening the file would mean conceding the simplification flagship didn't work.

The Annex I compromise was face-saving, not structural. Parliament's push to move Section A products into the sectoral track wholesale was rejected. The institutions held the line on architecture. Holding the line on dates follows naturally.

Two consecutive postponements would terminate Brussels-effect leverage. A regulation that gets delayed every time enforcement approaches stops setting standards. The institutions know this.

Staged applicability has already worked. AI literacy and GPAI obligations entered into force on schedule. The model works.

The "Brussels will postpone again" thesis was always wishful thinking. It now has nowhere to go.

What This Means for Compliance Programs

As David Smith at The DPO Centre puts it: "The proposed delay should not be seen as a pause button. It is breathing space and organisations should use it wisely."

The practical checklist:

  • Build an AI inventory now. Know what systems exist, where they're deployed, what data they process, who owns them.
  • Classify before December 2027. Every Annex III-adjacent system needs a documented classification decision. That decision will be public.
  • Watermarking by December 2026. If you ship generative AI features into the EU market, UI labelling, machine-readable metadata embedding, and detection capability must be operational in seven months.
  • Engage on Annex I implementing acts. If you produce regulated AI-enabled products outside machinery, the rules that will govern your conformity assessment are being written now.
  • Don't wait for the final text. The provisional agreement is, in practice, what you should plan against. Formal adoption is expected within weeks.

As Nils Rauer of Pinsent Masons observes: "The political agreement on the AI Omnibus package is a pragmatic step towards greater legal certainty. By pushing back key obligations for high-risk systems, lawmakers have recognised that the AI Act cannot work effectively without clear standards, guidance and compliance tools in place."

The extra time is real. The question is whether it gets used for building governance that works, or for hoping the goalposts move again.

They won't.

Frequently Asked Questions

Q: When do high-risk AI obligations now apply under the EU AI Act?

A: Stand-alone high-risk AI systems under Annex III must comply by 2 December 2027. High-risk AI systems embedded in regulated products under Annex I must comply by 2 August 2028.

Q: What is the new deadline for AI watermarking and transparency requirements?

A: Providers must implement transparency measures for AI-generated content by 2 December 2026. The grace period was shortened from six months to three months.

Q: Is the machinery sector exempt from the EU AI Act?

A: Yes. The Machinery Regulation is exempted from direct AI Act applicability. AI-specific health and safety requirements will be added through delegated acts under the Machinery Regulation itself.

Q: Do providers still need to register AI systems in the EU database if they claim exemption from high-risk classification?

A: Yes. The registration obligation under Article 6(3) survived. Providers must register systems in the EU database even when self-assessing that their system does not meet the high-risk threshold.

Q: What new AI practices are prohibited under the Omnibus agreement?

A: AI systems that generate non-consensual sexual or intimate content, or child sexual abuse material (CSAM), are now banned. This prohibition takes effect 2 December 2026.

Q: Will the EU AI Act be delayed again after this Omnibus agreement?

A: A second delay is very unlikely. The political argument for postponement has been used, the institutions framed this deal as a flagship deliverable, and consecutive delays would undermine the AI Act's credibility as a global standard-setter.
