The EU AI Act isn't coming. It's here. And the gap between "we're aware of it" and "we're compliant" is where most implementation projects are currently stuck.
As of February 2026, the regulatory clock is ticking loudly. The AI Act requires every EU Member State to establish at least one AI regulatory sandbox by 2 August 2026. That's six months away. The prohibited AI practices? Banned since 2 February 2025. High-risk system requirements? Enforcement begins 2 August 2026.
Here's the uncomfortable truth: most organizations still don't know which risk tier they fall into.
The Classification Problem Nobody Wants to Talk About
The AI Act assigns applications to risk tiers: unacceptable (banned), high-risk (heavily regulated), limited-risk (transparency obligations, such as disclosing that users are talking to a chatbot), and everything else (largely unregulated). Simple enough on paper. In practice, classification is where teams get stuck for months.
Consider a CV-scanning tool that ranks job applicants. High-risk, clearly. But what about the recommendation engine that surfaces candidates to recruiters without explicit ranking? What about the chatbot that pre-screens applications before human review? The boundaries blur fast.
The EU AI Act Compliance Checker offers a 10-minute self-assessment that helps organizations identify their likely obligations. It's a starting point, not a legal opinion—but it's the kind of practical tool that should be the first step for any team still operating on assumptions.
The tool asks straightforward questions and maps answers to potential obligations. For SMEs and startups especially, this matters: the difference between "high-risk" and "not explicitly regulated" is the difference between a compliance program and a competitive advantage.
What the Compliance Checker Actually Reveals
Running through the Compliance Checker surfaces questions most teams haven't asked themselves:
- Does the system make or influence decisions about natural persons?
- Is it used in employment, education, law enforcement, or critical infrastructure?
- Does it process biometric data for identification purposes?
- Is it a general-purpose AI model that could be integrated into high-risk applications?
A mid-size logistics company might assume their route optimization AI is low-risk—until they realize it influences driver scheduling decisions that affect employment conditions. A healthcare startup might think their symptom-checker is clearly high-risk—until they discover it falls under a different classification because it doesn't diagnose, only informs.
The Checker won't give legal certainty. But it will reveal the questions that need answering before August.
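For teams that want to make that triage explicit internally, here's a minimal sketch in Python. The questions and tier labels are illustrative, not the Checker's actual logic, and the output is a prompt for legal review, never a legal opinion:

```python
from dataclasses import dataclass

@dataclass
class TriageAnswers:
    """Answers to the screening questions above (illustrative schema, not the Checker's)."""
    affects_natural_persons: bool    # makes or influences decisions about people
    sensitive_domain: bool           # employment, education, law enforcement, critical infrastructure
    biometric_identification: bool   # processes biometric data to identify individuals
    gpai_model: bool                 # general-purpose model that downstream high-risk systems may embed

def provisional_tier(a: TriageAnswers) -> str:
    """Map answers to a provisional tier. A starting point for counsel, not a legal opinion."""
    if a.biometric_identification or (a.affects_natural_persons and a.sensitive_domain):
        return "likely high-risk: map against Articles 9-15, document everything"
    if a.gpai_model:
        return "GPAI obligations: follow the Code of Practice or document an alternative"
    if a.affects_natural_persons:
        return "uncertain: record the reasoning and escalate to legal review"
    return "likely outside explicit regulation: re-review as the system evolves"

# Example: the route-optimization system that quietly influences driver scheduling.
print(provisional_tier(TriageAnswers(
    affects_natural_persons=True,
    sensitive_domain=True,        # scheduling touches employment conditions
    biometric_identification=False,
    gpai_model=False,
)))
```

The point isn't the code. It's that classification reasoning becomes explicit, recorded, and repeatable instead of living in one lawyer's inbox.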
The GPAI Model Complication
General-purpose AI models add another layer of complexity. On 18 July 2025, the European Commission published Guidelines clarifying the obligations of GPAI model providers. The Code of Practice offers a framework for demonstrating compliance, but providers can also choose alternative methods.
This flexibility sounds helpful until implementation teams realize they need to document their chosen approach, justify it, and maintain evidence of compliance. The Code of Practice isn't mandatory, but "we chose not to follow it" requires a defensible alternative.
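What "defensible" means in practice is a documented decision with evidence attached. A rough sketch of the record to keep, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class GpaiComplianceApproach:
    """Documents the compliance route chosen for a GPAI model. Fields are illustrative."""
    model_id: str
    follows_code_of_practice: bool
    alternative_method: str | None   # justification is required if not following the Code
    evidence: list[str]              # where the proof lives: reports, eval logs, policies

def is_defensible(a: GpaiComplianceApproach) -> bool:
    """'We chose not to follow it' only holds up with a documented alternative and evidence."""
    if a.follows_code_of_practice:
        return bool(a.evidence)
    return a.alternative_method is not None and bool(a.evidence)

print(is_defensible(GpaiComplianceApproach(
    model_id="vendor-llm-v3",   # hypothetical model name
    follows_code_of_practice=False,
    alternative_method="Internal framework mapped clause-by-clause to GPAI obligations",
    evidence=["mapping-doc-v2.pdf", "quarterly-eval-report"],
)))
```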
For teams integrating third-party GPAI models into their products, the questions multiply: What obligations transfer to the integrator? How does modification affect classification? Recent analysis from legal compliance professionals draws on practical experience to address these questions—and the answers aren't always intuitive.
The Sandbox Deadline Nobody's Tracking
Every Member State must establish at least one AI regulatory sandbox by August 2026. Current progress varies significantly across the EU. Some countries are well advanced; others are still in planning phases.
For organizations developing AI systems, sandboxes offer a controlled environment to test compliance approaches before full deployment. But accessing a sandbox requires preparation: documentation, risk assessments, and a clear understanding of what's being tested.
Teams waiting for sandboxes to open before starting compliance work are already behind. The sandbox is for testing approaches, not discovering obligations.
The AI Literacy Requirement Everyone Forgets
Article 4 of the AI Act addresses AI literacy—and it's not optional. Organizations deploying AI systems need to ensure that staff interacting with those systems have appropriate understanding of their capabilities and limitations.
AI literacy programs across Europe are emerging to support this requirement, but most organizations haven't mapped their training needs to their AI deployments. The question isn't "do we have AI training?" but "do the people operating our high-risk systems understand what they're operating?"
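Mapping is the easy part to start with. A deliberately crude sketch, with made-up system and operator names, of the gap report every compliance lead should be able to produce on demand:

```python
# Map each deployed system to the operators who touch it, then flag anyone
# with no training recorded for that system. All names are illustrative.
deployments = {
    "cv-screening-tool": {"anna", "ben", "chiara"},
    "route-optimizer": {"ben", "dmitri"},
}
trained = {
    "cv-screening-tool": {"anna"},
    "route-optimizer": {"ben", "dmitri"},
}

for system, operators in deployments.items():
    gap = operators - trained.get(system, set())
    if gap:
        print(f"{system}: AI literacy gap for {sorted(gap)}")
```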
This is a change management problem disguised as a compliance requirement. And change management problems don't solve themselves in six months.
The Whistleblowing Dimension
The AI Act intersects with the EU Whistleblowing Directive in ways that create both obligations and risks. Organizations need channels for reporting AI-related concerns, and those channels need to function before problems emerge.
For implementation teams, this means building feedback loops into AI systems from the start—not as an afterthought. If operators can't report concerns about system behavior, compliance gaps will surface through enforcement rather than internal correction.
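What a feedback loop looks like at its most minimal: a structured, timestamped record that operators can file in seconds. The path and schema here are illustrative assumptions, not a reference design; in production this would feed the organization's whistleblowing channel:

```python
import json
import time
from pathlib import Path

CONCERN_LOG = Path("ai_concern_log.jsonl")  # illustrative path

def report_concern(system_id: str, operator: str, description: str, severity: str = "unknown") -> None:
    """Append a structured concern record. Structured records can be triaged, counted, and audited."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "operator": operator,
        "severity": severity,
        "description": description,
    }
    with CONCERN_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: an operator flags unexpected behavior the day it appears, not at audit time.
report_concern("cv-screening-tool", "anna", "Rankings shifted sharply after the last model update.", "high")
```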
What Actually Needs to Happen Before August
Here's the implementation checklist that matters:
Week 1-2: Classification
- Run every AI system through the Compliance Checker
- Document the reasoning for each classification (a register sketch follows this checklist)
- Identify systems where classification is uncertain
Week 3-4: Gap Analysis
- For high-risk systems: map current state against the requirements of Articles 9-15
- For GPAI integrations: document provider obligations and your responsibilities
- For all systems: assess AI literacy gaps among operators
Month 2-3: Remediation Planning
- Prioritize gaps by enforcement timeline and business impact
- Assign ownership for each remediation workstream
- Establish monitoring for systems approaching high-risk thresholds
Month 4-6: Implementation and Documentation
- Execute remediation plans
- Build evidence packages for compliance demonstration
- Test whistleblowing and feedback channels
Ongoing: Monitoring
- Track regulatory guidance updates (the Commission is still publishing clarifications)
- Monitor sandbox availability in relevant Member States
- Review classification decisions as systems evolve
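To make the checklist concrete, here's a sketch of the register entry that ties classification, reasoning, ownership, and review dates together. Field names and the example's contents are illustrative, not a prescribed documentation standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    """One row in an AI system register: decision, reasoning, owner, and a review trigger."""
    system_id: str
    tier: str                   # e.g. "high-risk", "uncertain", "not explicitly regulated"
    reasoning: str              # why this tier; the evidence package starts here
    owner: str                  # the accountable human, not a team alias
    decided_on: date
    review_by: date             # classification decays as systems evolve
    open_gaps: list[str] = field(default_factory=list)

register = [
    ClassificationRecord(
        system_id="cv-screening-tool",
        tier="high-risk",
        reasoning="Ranks job applicants; employment use case under Annex III.",
        owner="head-of-talent-systems",
        decided_on=date(2026, 2, 10),
        review_by=date(2026, 5, 10),
        open_gaps=["risk management system (Art. 9)", "record-keeping (Art. 12)"],
    ),
]

# Monitoring pass: surface anything with open gaps due for review before August.
for rec in register:
    if rec.review_by <= date(2026, 8, 2) and rec.open_gaps:
        print(f"{rec.system_id}: {len(rec.open_gaps)} open gaps, review by {rec.review_by}")
```

A spreadsheet does the same job. What matters is that every system has a row, every row has an owner, and every owner has a date.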
The Real Constraint
The AI Act isn't technically complex. The requirements are clear, the timelines are published, and the tools exist to assess obligations.
The constraint is organizational: getting classification decisions made, getting remediation funded, getting training delivered, getting documentation maintained. These are human problems, not technical ones.
Teams that treat AI Act compliance as a legal project will struggle. Teams that treat it as an implementation project—with owners, timelines, and accountability—will ship compliant systems.
The Compliance Checker takes 10 minutes. The implementation takes months. The deadline is August.
Start with the 10 minutes.