May 12, 2026 · 10 min read

The EU AI Act and Board Obligations: What Directors Actually Need to Do

In Brief

  • The EU AI Act places direct compliance obligations on boards and senior leadership, not just IT departments
  • AI literacy requirements under Article 4 became enforceable in February 2025; high-risk AI system obligations apply from August 2026
  • Deployers of high-risk AI systems must implement human oversight, maintain logs for at least six months, and conduct Fundamental Rights Impact Assessments (FRIAs) where applicable
  • Boards must ensure staff have sufficient AI literacy, assign competent individuals for human oversight, and establish governance structures before deployment
  • Fines for non-compliance can reach up to €35 million or 7% of global annual turnover

This article maps the compliance terrain. The real conversation about what it means for European governance happens May 19 in Vienna at Human x AI Europe, where policymakers, technologists, and implementers will be in the same room.

The EU AI Act entered into force in August 2024. The grace periods are ending. And too many boards still think AI compliance is something the IT department handles.

That assumption will prove expensive.

The European Commission's regulatory framework makes clear that the AI Act is not a technical regulation. It is a governance regulation. The obligations land on organizations, which means they land on the people who run those organizations. Directors who cannot explain how their company uses AI, who owns the decisions it makes, and what happens when it fails are not meeting their fiduciary duties.

The Timeline That Matters

The AI Act's obligations roll out in stages. According to GUBERNA's analysis, the key dates are:

February 2, 2025: Prohibitions on unacceptable-risk AI practices and AI literacy obligations became enforceable. This deadline has already passed.

August 2, 2025: Governance rules and obligations for General-Purpose AI (GPAI) models apply.

August 2, 2026: Full obligations for high-risk AI systems take effect.

August 2, 2027: Extended transition period for high-risk AI systems embedded in regulated products.

The August 2026 deadline is the one that will catch most organizations unprepared. That is when deployer obligations for high-risk AI systems become fully enforceable, with penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
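To make the exposure concrete, here is a minimal sketch of the headline cap as arithmetic, in Python. The function name and the example turnover figure are illustrative, not from the Act; lower penalty tiers apply to less severe violations (see the FAQ below).

```python
def max_aia_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Top-tier EU AI Act cap: EUR 35 million or 7% of global
    annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Illustrative: a company with EUR 2 billion in global annual turnover
print(f"EUR {max_aia_penalty_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For any organization with more than €500 million in global turnover, the percentage cap, not the €35 million floor, sets the ceiling.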

Who Counts as a Deployer

The distinction between provider and deployer is fundamental. Article 3 of the AI Act defines a deployer as any natural or legal person using an AI system under its authority, except in the course of a personal, non-professional activity; Article 26 sets out the deployer's obligations.

The practical implication: if an organization buys an AI-powered recruitment tool from a vendor, the vendor is the provider. The organization using it to screen candidates is the deployer. Both have obligations. The deployer cannot outsource compliance to the vendor.

As one compliance guide puts it: "Your contract with the provider does not shield you. Your organization's name is on the line."

The Nine Deployer Obligations Under Article 26

For high-risk AI systems, deployers face nine substantive obligations. Each requires concrete action before deployment:

1. Follow instructions for use. Deployers must use high-risk AI systems exactly as specified by the provider. Using a system outside its intended scope constitutes misuse and triggers legal consequences.

2. Assign human oversight to competent persons. According to VDE's analysis, human oversight must be assigned to natural persons who have the necessary competence, training, and authority. A human rubber-stamp is not oversight.

3. Ensure input data quality. Where the deployer controls input data, it must be relevant and sufficiently representative for the system's intended purpose.

4. Monitor system operation. Continuous monitoring based on instructions for use. If risks to health, safety, or fundamental rights emerge, the deployer must inform the provider, distributor, and market surveillance authority immediately and suspend use.

5. Report serious incidents. Notification to the provider, importer, distributor, and market surveillance authority within 15 days of detection.

6. Maintain logs. Automatically generated logs must be stored for at least six months, in compliance with data protection requirements (see the retention sketch after this list).

7. Ensure transparency. Workers and their representatives must be informed before high-risk AI systems are deployed in the workplace.

8. Conduct Data Protection Impact Assessments. Where relevant under GDPR Article 35.

9. Cooperate with competent authorities. Full cooperation with national market surveillance authorities.
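Several of these obligations translate directly into engineering controls. Below is a minimal sketch of the retention floor in obligation 6, assuming UTC timestamps and reading "at least six months" conservatively as 183 days; the function and constant names are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Conservative reading of "at least six months"; other applicable law
# may require longer retention.
MIN_LOG_RETENTION = timedelta(days=183)

def may_delete_log(created_at: datetime, now: datetime | None = None) -> bool:
    """True only once a log record has aged past the minimum retention window."""
    now = now or datetime.now(timezone.utc)
    return now - created_at >= MIN_LOG_RETENTION
```

The same pattern applies to the 15-day incident window in obligation 5: encode the deadline in the system rather than leaving it to institutional memory.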

The AI Literacy Requirement

Article 4 of the AI Act requires providers and deployers to ensure a sufficient level of AI literacy among their staff and other persons dealing with AI systems on their behalf.

This obligation became enforceable in February 2025. It applies to all AI systems, not just high-risk ones.

Research from the Boards Impact Forum makes the governance implication explicit: "If employees are required to be trained to a basic level, the expectation is clear: boards cannot remain behind." MIT CISR research shows that companies with at least three directors who understand key AI-related concepts significantly outperform their peers.

AI literacy under the Act means the skills, knowledge, and understanding necessary to make informed deployment decisions, gain awareness of opportunities and risks, and recognize possible harm. It is not confined to technical staff. Managers, project managers, sales teams, and end-users must receive training appropriate to their role.

Fundamental Rights Impact Assessments

Article 27 introduces a requirement that goes beyond data protection: the Fundamental Rights Impact Assessment (FRIA).

FRIAs are mandatory for:

  • Public bodies deploying high-risk AI systems
  • Private organizations delivering public services
  • Companies deploying AI for creditworthiness evaluation or life and health insurance risk assessment and pricing

As Archer IRM explains, a FRIA differs fundamentally from a Data Protection Impact Assessment (DPIA). Where a DPIA asks "How does this affect personal data?", a FRIA asks "How does this affect human beings as rights holders?"

A FRIA must address:

  • Which fundamental rights the AI system affects
  • How it might compromise dignity, equality, privacy, or access to legal remedy
  • What happens to specific individuals when the system is wrong
  • Who is accountable for those outcomes
  • How an affected person can challenge a decision made about them

The FRIA deadline is August 2, 2026. Most organizations have not started.
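For those starting now, the Article 27 questions above translate naturally into a structured record, one per high-risk system. A minimal sketch with hypothetical field names follows; the Act prescribes what a FRIA must cover, not a schema.

```python
from dataclasses import dataclass, field

@dataclass
class FriaRecord:
    """One record per high-risk system, mirroring the questions above."""
    system_name: str
    affected_rights: list[str] = field(default_factory=list)  # e.g. dignity, equality, privacy
    harm_when_wrong: str = ""      # what happens to individuals on error
    accountable_owner: str = ""    # who answers for those outcomes
    challenge_procedure: str = ""  # how an affected person contests a decision

    def ready_for_deployment(self) -> bool:
        # A FRIA must be complete before first use of the system.
        return all([self.affected_rights, self.harm_when_wrong,
                    self.accountable_owner, self.challenge_procedure])
```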

What Boards Must Actually Do

The Corporate Governance Institute identifies five ways the AI Act reshapes boardroom responsibilities:

Explainability. Boards must be able to explain the mode of operation for AI systems in use. Not endless technical details, but basic understanding: where the system gets its training data, how it is used, whether flaws have surfaced.

Risk classification. Directors must understand which AI systems fall into which risk categories and what obligations attach to each.

Governance structures. According to heyData's compliance guide, companies need dedicated AI governance structures: an AI Compliance Officer, an internal AI governance committee, regular risk reports and audits, and ethical guidelines for AI use.

Documentation. Maintain internal documentation on AI system compliance, including risk assessments, technical specifications, and conformity records.

Incident response. Clear procedures for reporting serious incidents within the 15-day window.

Columbia Law School's CLS Blue Sky Blog frames the challenge precisely: "The question for directors is no longer whether artificial intelligence and data systems matter. The challenge is how to exercise real oversight without turning the board into a technology department."

The Practical Starting Point

Before the August 2026 deadline, every organization deploying AI systems should answer three questions:

1. What AI systems are in use? Conduct a complete inventory. Include systems embedded in purchased software and third-party platforms.

2. What is the organization's role for each system? Provider, deployer, or both? The obligations differ.

3. Which systems are high-risk? Map each system against Annex III categories: biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, justice.

For each high-risk system, the deployer checklist applies in full. No exceptions. No delegation to vendors.
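As a minimal sketch of how answers to those three questions might be recorded, the snippet below uses hypothetical system entries and paraphrased labels for the Annex III areas listed above.

```python
# Annex III areas as listed above (paraphrased labels, not statutory text)
ANNEX_III_AREAS = {
    "biometric identification", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "justice",
}

# (system, role, annex_iii_area or None): illustrative entries only
inventory = [
    ("vendor CV-screening tool", "deployer", "employment"),
    ("internal document-search assistant", "deployer", None),
]

for system, role, area in inventory:
    high_risk = area in ANNEX_III_AREAS
    print(f"{system}: role={role}, high-risk={high_risk}")
```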

The AI Act is not a technical regulation that can be handled by the IT department. It is a governance regulation that requires board-level attention, documented processes, and named individuals with authority to intervene when systems fail.

Organizations that treat this as a compliance checkbox will discover, expensively, that the Act was designed to prevent exactly that approach.

Frequently Asked Questions

Q: When do EU AI Act obligations for high-risk AI systems become enforceable?

A: Full obligations for high-risk AI systems under Article 26 become enforceable on August 2, 2026. AI literacy requirements under Article 4 became enforceable on February 2, 2025. Product-integrated high-risk AI rules have an extended deadline of August 2, 2027.

Q: What is the difference between an AI provider and an AI deployer under the EU AI Act?

A: A provider develops an AI system and places it on the market under their own name or trademark. A deployer uses an AI system under their authority in a professional capacity. If a company buys an AI recruitment tool from a vendor, the vendor is the provider and the company using it is the deployer. Both have separate, non-transferable compliance obligations.

Q: What are the penalties for non-compliance with the EU AI Act?

A: Fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. Penalties vary by violation type: prohibited AI practices carry the highest fines, while violations of other provisions may result in fines up to €15 million or 3% of turnover.

Q: Who must conduct a Fundamental Rights Impact Assessment (FRIA)?

A: FRIAs are mandatory for public bodies deploying high-risk AI systems, private organizations delivering public services, and companies deploying AI for creditworthiness evaluation or life and health insurance risk assessment and pricing. The FRIA must be completed before first use of the system.

Q: How long must deployers retain AI system logs under the EU AI Act?

A: Deployers must store automatically generated logs for a minimum of six months, or longer if specified by applicable law. Logs must be maintained in compliance with data protection requirements under GDPR.

Q: Does the EU AI Act apply to companies outside the European Union?

A: Yes. The AI Act has extraterritorial reach. It applies to any provider or deployer whose AI system is placed on the EU market or whose AI system's output is used within the EU, regardless of where the company is established. Non-EU providers must appoint an authorized representative in the EU.
