Apr 30, 2026 · 8 min read

AI Act Compliance: The Official Resource Map That Actually Matters

In Brief

The EU AI Act entered into force in August 2024; prohibited practices have been enforceable since February 2025, and high-risk obligations take effect in August 2026. The European Commission and AI Office have published guidelines on AI system definitions, prohibited practices, and general-purpose AI (GPAI) models.

Key templates now exist for serious incident reporting and training data transparency summaries. Critical guidance on high-risk systems, data governance, and conformity assessments remains in development, expected May-June 2026.

Organizations must begin compliance work now despite incomplete guidance, using available official resources as the foundation. The regulatory framework is taking shape in real time, and the people building it will gather at Human x AI Europe in Vienna on May 19 to discuss what comes next.

The AI Act text is 144 pages. The recitals alone could fill a small book. But here's what nobody tells teams scrambling toward compliance: the regulation itself is only the starting point. The real implementation guidance lives in a growing ecosystem of Commission guidelines, AI Office templates, codes of practice, and soft law documents that most organizations haven't mapped.

This matters because compliance teams are making decisions right now based on incomplete information. They're building governance frameworks, classifying systems, and allocating budgets without knowing which official resources exist and which are still coming. That's a recipe for rework.

Dastra's recent mapping of official AI Act resources provides a useful starting point for tracking what's published and what's pending. Here's how to use it.

What's Already Published and Enforceable

The foundation documents are live. The AI Act text itself (Regulation (EU) 2024/1689) has been in force since August 1, 2024. Prohibited practices under Article 5 became enforceable on February 2, 2025. That means social scoring, manipulative AI systems, certain biometric categorization, and emotion recognition in workplaces and schools are already banned.

The Commission has published two critical guidelines to support enforcement of these early obligations:

Guidelines on the definition of an AI system clarify what actually falls under the regulation's scope. This matters more than it sounds. The definition determines whether a system is regulated at all. If a team can't answer "is this an AI system under the Act?" with confidence, they can't classify risk or apply the right controls.

Guidelines on prohibited AI practices provide legal explanations and practical examples for each Article 5 prohibition. These aren't optional reading. They're the interpretive framework regulators will use when assessing compliance.

For general-purpose AI models (the foundation models and large language models powering much of today's AI deployment), the GPAI Code of Practice was published on July 10, 2025. It's voluntary, but it's also the clearest signal of what the AI Office expects from model providers on transparency, copyright compliance, and safety measures.

Supporting the Code, the Commission released guidelines on the scope and obligations of GPAI models in November 2025, clarifying who qualifies as a provider and what the lifecycle obligations look like.

The Templates That Save Time

Two official templates are now available that compliance teams should download immediately:

The serious incident reporting template for systemic-risk GPAI models establishes the format and information requirements for mandatory incident notifications. If an organization operates or deploys a GPAI model with systemic risk classification, this template defines what "reporting" actually means in practice.

The public summary template for training data content requires GPAI providers to document data sources, including large datasets and top domain names used in training. This isn't just a transparency exercise. It's the mechanism that enables parties with legitimate interests (including copyright holders) to exercise their rights under EU law.

Both templates are available through the European Commission's AI policy portal.

What's Still Missing (And When to Expect It)

Here's where compliance planning gets uncomfortable. Several critical guidance documents remain in development, and organizations can't wait for them to start building governance structures.

Guidelines on high-risk AI systems are expected in May or June 2026, according to Dastra's regulatory tracking. These will clarify classification criteria under Article 6 and Annex III. High-risk obligations become enforceable in August 2026. That's a narrow window between guidance publication and compliance deadline.

The Code of Practice on marking and labeling AI-generated content (covering deepfakes, chatbot disclosures, and synthetic media) is still in draft form. A second draft was published in March 2026, with final publication expected by June 2026. Transparency obligations under Article 50 take effect in August 2026.

Guidelines on data governance, technical documentation, logging, and human oversight are all listed as "expected" with no firm dates. These cover Articles 9-15, the operational core of high-risk system requirements.

Harmonized standards for risk management, robustness, security, and accuracy are in progress through CEN/CENELEC but not yet finalized.

How to Use This Information

Stop waiting for perfect guidance. Start with what exists.

Step one: Build an AI system inventory using the published definition guidelines. Every system needs a one-paragraph intended purpose statement and a preliminary risk classification.

Step two: Screen all systems against Article 5 prohibitions using the published guidelines. Document why each system is not prohibited. This documentation should exist before any regulator asks for it.
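The "document why each system is not prohibited" requirement can be enforced mechanically: refuse to record a screening result unless every prohibition category has a written rationale. The category list below is a partial, illustrative one drawn from the practices this article names; the authoritative list is Article 5 and the Commission's guidelines.

```python
# Partial, illustrative list of Article 5 categories named in this
# article; the full set is defined in the Act and the Commission's
# prohibited-practices guidelines.
PROHIBITION_CATEGORIES = [
    "social_scoring",
    "manipulative_or_deceptive_techniques",
    "biometric_categorisation",
    "emotion_recognition_workplace_or_school",
]

def screening_record(system_name: str, rationales: dict[str, str]) -> dict:
    """Return a screening record, failing fast if any category
    lacks a documented rationale."""
    missing = [c for c in PROHIBITION_CATEGORIES if not rationales.get(c)]
    if missing:
        raise ValueError(f"{system_name}: no rationale for {missing}")
    return {"system": system_name, "rationales": rationales}
```

The point of the hard failure is that the documentation exists before a regulator asks, not after: an empty or missing rationale blocks the record instead of silently passing.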

Step three: For any GPAI models in use (whether developed internally or procured from vendors), map obligations against the GPAI Code of Practice and published guidelines. Collect evidence of vendor compliance where applicable.

Step four: For systems likely to be classified as high-risk, begin building the governance infrastructure now: risk management processes, documentation templates, human oversight protocols, logging requirements. The specific standards may shift, but the categories of obligation are clear from the Act text.

Step five: Set up a monitoring process for new guidance. The Future of Life Institute's AI Act portal and the Commission's official AI policy page are the primary sources. Check monthly at minimum.

The Governance Reality

The AI Act creates a multi-layered enforcement structure. The European AI Office handles GPAI model oversight directly. National supervisory authorities handle everything else, coordinated through the European AI Board.

Penalties scale with violation severity: up to €35 million or 7% of global annual turnover for prohibited practice violations, €15 million or 3% for other violations, €7.5 million or 1.5% for certain misstatements.
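The tiered ceilings above follow a "greater of fixed amount or percentage of turnover" rule for most organizations (the Act caps SMEs at the lower figure instead). A small helper makes the arithmetic concrete; this is an illustration of the published ceilings, not legal advice.

```python
def max_penalty_eur(annual_turnover_eur: float, tier: str) -> float:
    """Upper fine ceiling: the greater of a fixed amount or a
    percentage of global annual turnover, per tier."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),   # Article 5 violations
        "other_violation":     (15_000_000, 0.03),
        "misstatement":        (7_500_000, 0.015),   # to authorities
    }
    fixed, pct = tiers[tier]
    return max(fixed, pct * annual_turnover_eur)

# A firm with €2 billion global turnover: 7% dominates the €35M floor.
max_penalty_eur(2_000_000_000, "prohibited_practice")  # 140_000_000.0
```

Note how quickly the percentage overtakes the fixed amount: for the prohibited-practice tier, any turnover above €500 million makes the 7% figure the binding ceiling.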

But the real risk isn't fines. It's building compliance programs on assumptions that turn out to be wrong, then having to rebuild when official guidance contradicts internal interpretations. The organizations that track official resources systematically will spend less time on rework.

The regulatory framework is incomplete. That's not an excuse to delay. It's a reason to build adaptable governance structures that can incorporate new guidance as it arrives. The teams that treat compliance as an ongoing process rather than a one-time project will be ready when August 2026 arrives.

Frequently Asked Questions

Q: When do high-risk AI system obligations become enforceable under the AI Act?

A: High-risk obligations under Annex III become enforceable on August 2, 2026. Some additional obligations for certain Annex I systems extend to August 2, 2027.

Q: What official guidelines has the European Commission already published for AI Act compliance?

A: The Commission has published guidelines on the definition of an AI system, guidelines on prohibited AI practices, and guidelines on the scope and obligations of GPAI models. The GPAI Code of Practice was published July 10, 2025.

Q: Where can organizations find official templates for AI Act compliance?

A: The European Commission has published templates for serious incident reporting (for systemic-risk GPAI models) and public summaries of training data content, available through the Commission's AI policy portal.

Q: What guidance is still missing for AI Act compliance?

A: Guidelines on high-risk AI systems, data governance, technical documentation, logging, human oversight, and the final Code of Practice on AI-generated content labeling are all still in development, with most expected by mid-2026.

Q: What are the maximum penalties for AI Act violations?

A: Penalties reach up to €35 million or 7% of global annual turnover for prohibited practice violations, €15 million or 3% for other violations, and €7.5 million or 1.5% for certain misstatements to authorities.

Q: How should organizations track new AI Act guidance as it's published?

A: Monitor the European Commission's official AI policy page and the Future of Life Institute's AI Act portal monthly. Set up a systematic process to incorporate new guidance into existing compliance frameworks.
