Reading checklists is the easy part. The hard conversations happen when policymakers, technologists, and implementers sit in the same room. That room is Human x AI Europe on May 19 in Vienna, where Europe's AI governance future gets built.
The Deadline That Keeps Moving (But Shouldn't Change Your Plan)
The European Commission's Digital Omnibus package proposes pushing certain high-risk obligations to December 2027. The European Parliament has voted in favour. The Council of the EU has not yet agreed. Political agreement must be reached before June for the delay to take legal effect before the original August 2026 deadline.
Do not plan around it.
The compliance work required for August 2026 is not wasted effort even if the extension materialises. Technical documentation, quality management systems, data governance frameworks, and human oversight mechanisms are engineering investments that improve AI systems independently of their regulatory function. Build for August. Welcome any extension as a margin of safety, not a reason to delay.
Step 1: Build the AI System Inventory
According to CSA Labs research, over 50% of organisations lack systematic inventories of AI systems currently in production or development. An inventory is the minimum prerequisite for any compliance programme: classification is impossible without visibility.
What to document for each system:
- Purpose and intended use case
- Input data types and sources
- Output decisions and affected user groups
- Role in the AI value chain (provider, deployer, importer, distributor)
- Risk tier classification (unacceptable, high, limited, minimal)
The inventory must include internally developed systems, third-party AI solutions, embedded AI components in products, and general-purpose AI models (GPAI models, meaning AI trained on broad data that can perform a wide range of tasks) used in business operations. Update it every time a system is added, modified, or retired.
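The fields listed above can be captured in a minimal inventory record. The sketch below is one possible shape under assumed conventions: the class and field names (`AISystemRecord`, `RiskTier`, `ValueChainRole`) are hypothetical, chosen for illustration rather than drawn from any official template.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    # The four tiers named in Step 1's classification bullet
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class ValueChainRole(Enum):
    # Roles in the AI value chain listed above
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    """One inventory entry covering the fields listed above."""
    name: str
    purpose: str                    # purpose and intended use case
    input_data: list[str]           # input data types and sources
    outputs: str                    # output decisions
    affected_groups: list[str]      # affected user groups
    role: ValueChainRole            # role in the AI value chain
    risk_tier: RiskTier             # risk tier classification
    last_reviewed: date = field(default_factory=date.today)
```

Keeping the record as structured data rather than free text makes the "update on every addition, modification, or retirement" rule enforceable: a review date and a risk tier can be queried, while a paragraph in a wiki cannot.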
Step 2: Eliminate Prohibited Systems
The ban on unacceptable-risk AI took effect on 2 February 2025. If the inventory includes any prohibited system, decommission it immediately.
Prohibited practices include:
- Social scoring mechanisms
- Untargeted biometric scraping to build facial recognition databases
- Subliminal manipulation techniques
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Real-time remote biometric identification in public spaces (except under narrow law-enforcement exceptions)
- AI exploiting vulnerabilities of specific groups
- Predicting an individual's risk of committing a criminal offence based solely on profiling or personality traits
Violations of prohibited AI practices carry fines up to EUR 35 million or 7% of global annual turnover, whichever is higher.
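The penalty formula is the higher of a fixed floor and a turnover percentage, which a one-line calculation makes concrete (the function name is illustrative):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Prohibited-practice penalty: the higher of EUR 35 million
    or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% is EUR 70 million,
# which exceeds the EUR 35 million floor; for EUR 100 million
# turnover, the floor applies.
```

The turnover-linked cap means the exposure scales with company size: for any group with global turnover above EUR 500 million, the percentage, not the fixed amount, sets the ceiling.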
Step 3: Confirm High-Risk Classification
Annex III of the EU AI Act defines specific functional domains, not broad conceptual categories. A system qualifies as high-risk if it falls into one of the following:
- Biometric identification and categorisation: remote biometric identification, AI categorising individuals by protected characteristics
- Critical infrastructure: management or operation of road, rail, aviation, water, gas, heating, and electricity supply
- Education and vocational training: AI determining access to education or assessing learning outcomes
- Employment: AI used in recruitment, screening, or performance evaluation
- Access to essential services: credit scoring, insurance risk assessment, emergency services dispatch
- Law enforcement: AI used for risk assessment, polygraph alternatives, or evidence evaluation
- Migration and border control: AI assessing visa applications or asylum claims
- Administration of justice: AI assisting judicial decisions
Context determines classification. A chatbot drafting emails is limited-risk. The same technology embedded into a hiring workflow that influences shortlisting becomes high-risk, triggering a dramatically different compliance burden.
Step 4: Implement High-Risk System Requirements
Each requirement below maps to a specific article in Regulation (EU) 2024/1689.
Risk Management System (Article 9):
- Establish a risk management process that runs throughout the AI system's lifecycle
- Identify and analyse known and foreseeable risks
- Implement mitigation measures and test their effectiveness
- Document residual risks and communicate them to deployers
- Review and update the risk assessment when the system or its context changes
Data Governance (Article 10):
- Define criteria for training, validation, and testing datasets
- Ensure datasets are relevant, representative, and as free of errors as reasonably achievable
- Address potential biases, especially when processing special categories of personal data
- Document data provenance, collection methods, and preprocessing steps
Technical Documentation (Article 11):
- Prepare documentation before the system enters the market
- Include: general system description, design specifications, development process, risk management measures, data governance practices, performance metrics, and known limitations
- Keep documentation updated for the system's entire lifecycle
Human Oversight (Article 14):
- Design systems to be effectively overseen by natural persons
- Provide deployers with clear instructions on oversight implementation
- Enable human intervention, including the ability to stop the system
Accuracy, Robustness, and Cybersecurity (Article 15):
- Achieve appropriate levels of accuracy for the intended purpose
- Build resilience against errors, faults, and inconsistencies
- Protect against attempts to alter system behaviour through data manipulation
Step 5: Deployer Obligations
Deployers (organisations using AI systems under their authority) have distinct obligations under Article 26:
- Implement human oversight mechanisms as specified by the provider
- Retain automatically generated logs for at least six months
- Conduct Fundamental Rights Impact Assessments (FRIAs) where required
- Inform affected individuals that they are subject to high-risk AI decisions
- Ensure input data is relevant to the intended purpose
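The six-month log retention duty above is the kind of rule worth encoding in the log pipeline itself. A minimal sketch, assuming six months is approximated as 183 days (the Act states the period in months, not days, so that constant is an assumption):

```python
from datetime import date, timedelta

# Article 26: deployers must retain automatically generated logs
# for at least six months. Approximated here as 183 days.
MIN_RETENTION = timedelta(days=183)

def eligible_for_deletion(log_date: date, today: date) -> bool:
    """A log entry may be purged only once the minimum
    retention period has elapsed."""
    return today - log_date >= MIN_RETENTION

eligible_for_deletion(date(2026, 8, 2), date(2026, 10, 1))  # False: ~2 months
eligible_for_deletion(date(2026, 8, 2), date(2027, 3, 1))   # True: ~7 months
```

Note that six months is a floor, not a ceiling: sector rules or the provider's instructions may require longer retention, so the check gates deletion rather than scheduling it.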
Step 6: Conformity Assessment and Registration
Before placing a high-risk AI system on the market:
- Complete the appropriate conformity assessment procedure
- Prepare the EU declaration of conformity
- Affix the CE marking
- Register the system in the EU AI database
For most high-risk systems, providers can conduct internal conformity assessments. Biometric identification systems and certain safety components require third-party assessment by a notified body.
The Readiness Gap
Vision Compliance's 2026 analysis found that 78% of enterprises have not taken meaningful steps toward AI Act compliance. The gaps are consistent across industries:
- 83% have no formal inventory of AI systems
- 74% lack a designated internal owner or governance body for AI compliance
- 61% have no process for generating required technical documentation
Matproof's readiness report draws a direct parallel to GDPR: in 2018, 71% of companies were unprepared. GDPR fines have now exceeded EUR 7.1 billion across more than 2,800 enforcement actions, with over 60% of those fines issued since January 2023. The AI Act is structured to follow the same enforcement pattern.
The 90-Day Checklist
For organisations starting now with less than 90 days to the August deadline:
Weeks 1-2: Complete the AI system inventory. Document every system, including embedded SaaS tools and pilot projects.
Weeks 3-4: Classify each system by risk tier. Identify any prohibited practices and decommission immediately.
Weeks 5-6: Assign internal ownership. Designate a governance body or compliance lead for AI Act obligations.
Weeks 7-8: Begin technical documentation for high-risk systems. Prioritise systems closest to market deployment.
Weeks 9-10: Implement human oversight mechanisms. Train relevant personnel on intervention procedures.
Weeks 11-12: Conduct gap analysis. Identify remaining deficiencies and establish a remediation timeline.
The regulation is not retroactive. High-risk AI systems already on the market before 2 August 2026 benefit from transitional arrangements and are generally caught by the new obligations only once they undergo significant modification. Standing still is not a neutral act: any significant design change brings a legacy system into scope.
Frequently Asked Questions
Q: What is the EU AI Act compliance deadline for high-risk AI systems?
A: The primary enforcement date is 2 August 2026 for high-risk AI system obligations under Articles 9-17 (provider requirements) and Article 26 (deployer requirements). The Digital Omnibus package proposes extending certain deadlines to December 2027, but this has not been enacted into law.
Q: How do I determine if my AI system is high-risk under the EU AI Act?
A: Check whether the system falls into one of the eight functional domains listed in Annex III: biometric identification, critical infrastructure, education, employment, access to essential services, law enforcement, migration, or administration of justice. Context matters: the same technology can be limited-risk or high-risk depending on its use case.
Q: What are the maximum fines for EU AI Act non-compliance?
A: Violations of prohibited AI practices carry fines up to EUR 35 million or 7% of global annual turnover, whichever is higher. Breaches of high-risk AI system requirements can incur fines up to EUR 15 million or 3% of global annual turnover.
Q: Does the EU AI Act apply to companies outside the European Union?
A: Yes. The Act has extraterritorial scope. It applies to any organisation that places AI systems on the EU market, puts them into service within the Union, or whose AI system outputs are used within the Union, regardless of where the company is headquartered.
Q: What documentation is required for high-risk AI systems under the EU AI Act?
A: Article 11 requires technical documentation including: general system description, design specifications, development process, risk management measures, data governance practices, performance metrics, and known limitations. This documentation must be prepared before market placement and maintained throughout the system's lifecycle.
Q: What happens if the Digital Omnibus delay is not enacted before August 2026?
A: The original August 2, 2026 deadline remains in effect. Organisations that paused compliance preparations pending political certainty face enforcement exposure. The compliance work required for August 2026 improves AI systems regardless of regulatory timing.