Today, 05.02.2026
GOOD MORNING
There's a particular kind of irony in watching the European Commission miss its own deadline for guidance on high-risk AI systems—the very guidance that companies need to comply with the AI Act. The deadline was 2 February. It's now 5 February. The guidance isn't here. And yet, the Commission is simultaneously pushing a Digital Omnibus package that would delay the high-risk compliance requirements by up to 16 months because, as Deputy Director-General Renate Nikolay put it at a Parliament hearing last month, these standards are not ready.
This isn't bureaucratic trivia. It's the central tension defining Europe's AI moment: the gap between regulatory ambition and implementation reality. And it's playing out against a backdrop of record funding, infrastructure buildouts, and a global safety report that landed this week with over 100 experts warning about capabilities outpacing governance.
The Regulatory Calendar
Let's start with what actually happened—or didn't. The Commission was required under Article 6(5) of the AI Act to publish guidelines by 2 February 2026 specifying how operators of high-risk AI systems can meet their obligations; the same deadline also covered a template for providers' post-market monitoring plans. According to IAPP reporting, the Commission indicated it's still integrating feedback and plans to publish a draft for consultation by the end of February, with final adoption potentially in March or April.
Here's the mechanism hiding under the headline: Article 6 is the classification engine of the entire AI Act. It determines whether your AI system counts as high-risk—and therefore whether you face the full weight of documentation, conformity assessment, and compliance requirements. Without clear guidance on how to make that determination, companies are operating in a fog. The high-risk compliance requirements are currently due to take effect in August 2026.
The delay was foreshadowed. At a 26 January hearing of the European Parliament's Committee on Civil Liberties, Justice and Home Affairs, Nikolay acknowledged the problem directly:
These standards are not ready, and that's why we allowed ourselves in the AI omnibus to give us a bit more time to work on either guidelines or specification or standards, so that we can provide this legal certainty for the sector.
Renate Nikolay
This sounds bureaucratic. It's not. What's happening is a fundamental recalibration of the AI Act's implementation timeline, driven by three converging pressures: member states struggling to appoint enforcers, standardization bodies missing their own deadlines (CEN and CENELEC are now aiming for end of 2026), and industry lobbying for breathing room. The EDPB and EDPS joint opinion from January expressed concerns about the proposed postponement, noting that given the rapid evolution of the AI landscape, the co-legislators should consider whether the original timeline can be maintained for certain obligations.
Laura Caroli, who negotiated the AI Act as a policy advisor to Parliament co-rapporteur Brando Benifei, put it bluntly:
There was one thing that was fixed from the very beginning, from the very letter of the law. It's just not there, and it is supposed to give clarity. You're not giving clarity.
Laura Caroli
Meanwhile, Ireland published the General Scheme of its Regulation of Artificial Intelligence Bill 2026 yesterday—the national legislation needed to implement the AI Act's supervision and enforcement provisions. The bill proposes establishing a new statutory independent body, Oifig Intleachta Shaorga na hÉireann (the AI Office of Ireland), under the Department of Enterprise, Tourism and Employment. The Irish Council for Civil Liberties welcomed the publication, noting that most of its recommendations had been incorporated, particularly the plan for the AI Office to be a statutory, independent authority with adequate resources.
Watch the calendar: the Digital Omnibus is under ordinary legislative procedure and formal adoption is expected later in 2026. But timing depends on negotiations, and the uncertainty is itself creating compliance paralysis.
The Safety Report
While Europe debates implementation timelines, the second International AI Safety Report landed on 3 February—and it deserves more attention than it's getting. Led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, backed by over 30 countries and international organizations, it represents the largest global collaboration on AI safety to date.
The report is structured around three questions: what can general-purpose AI do today, what emerging risks does it pose, and how can those risks be mitigated? The Extended Summary for Policymakers provides a detailed 20-page overview including key findings, concrete examples, and notable developments since the 2025 edition.
What makes this significant isn't just the content—it's the institutional architecture. The Expert Advisory Panel includes representatives nominated by Australia, Brazil, Canada, Chile, China, the EU, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the OECD, the Philippines, the Republic of Korea, Rwanda, Saudi Arabia, Singapore, Spain, Switzerland, Türkiye, the UAE, Ukraine, the UK, and the UN. That's a remarkable coalition for a document that doesn't endorse any particular policy approach but does establish a shared scientific baseline.
The timing matters. As the EU AI Act Newsletter noted this week, EU tech chief Henna Virkkunen defended Europe's approach at Davos by pointing out that the US has over 200 state-level AI regulations—arguing that one EU law is better than a hundred American ones. The safety report provides the scientific foundation that both regulatory approaches will need to draw on.
The Funding Picture
If you only remember one thing about European AI funding right now, remember this: January 2026 produced five new unicorns in a single month. According to TechCrunch, the new billion-dollar club includes Belgium's Aikido Security (cybersecurity, $60M Series B at $1B valuation), Lithuania's Cast AI (cloud optimization), France's Harmattan AI (defense tech, $200M Series B at $1.4B valuation—just two years after founding), Germany's Osapiens (ESG software, $100M Series C at $1.1B), and Ukraine's Preply (edtech, $150M Series D at $1.2B).
The Harmattan AI round is particularly striking. Led by Dassault Aviation, maker of the Rafale fighter jet, it reflects surging demand for autonomous defense aircraft. The company had already signed agreements with the French and British ministries of defense and Ukrainian drone maker Skyeton before securing this marquee backer.
And then there's ElevenLabs, which announced yesterday it raised $500 million at an $11 billion valuation—more than tripling its January 2025 valuation of $3.3 billion. The London-based AI voice startup, backed by Nvidia, Sequoia, and Andreessen Horowitz, closed 2025 with over $330 million in annual recurring revenue. Cofounder Mati Staniszewski said the company is building toward an IPO.
The broader picture: European AI startups raised a record $21.6 billion in 2025, according to Dealroom data cited by CNBC. Two of the biggest rounds were for French AI model builder Mistral (€1.7 billion) and UK AI infrastructure company Nscale ($1.1 billion).
DIGITALEUROPE unveiled its Future Unicorn Award 2026 finalists yesterday, with winners to be announced at Masters of Digital on 25-26 February. The Future Unicorn finalists include Finland's Canatu (carbon nanotube products), France's Quandela (quantum computing), and Denmark's Sparrow Quantum (photonic quantum chips). The Dual-Use Technology finalists include Ukraine's Farsight Vision and Ratel, Ireland's Mbryonics, and Spain's Sateliot.
As DIGITALEUROPE Director-General Cecilia Bonefeld-Dahl noted:
Europe accounts for only 20% of global unicorns, while the US holds nearly half. That gap is a direct consequence of how we fund growth. The US invested $339 billion in venture capital in 2025, while Europe invested just a fraction of that.
Cecilia Bonefeld-Dahl
The Infrastructure Play
Power is becoming Europe's competitive battleground. According to Verne Global CEO Dominic Ward, 2026 will see demand for data center capacity shift up another gear due to the expected breakout of AI inference, the emergence of agentic AI, and ongoing demand for general cloud computing.
The numbers are staggering. The global data center industry's operational footprint is currently estimated at 50 GW, growing to potentially 100-200 GW by 2030. OpenAI alone has committed $1.4 trillion to deliver 30 GW of compute over the next eight years, with a stated goal of deploying a gigawatt every week in the future.
Data Center Knowledge reports that Europe's data center market is undergoing structural realignment. Traditional FLAP-D hubs (Frankfurt, London, Amsterdam, Paris, Dublin) face tightening power, land, and policy constraints, while Southern Europe and select frontier markets are accelerating. Spain and Italy are Europe's strongest growth zones, supported by favorable infrastructure, government incentives, and new subsea cable routes.
The EU's response is the AI Factories initiative. Through 2025-2026, at least 15 AI Factories and several Antennas are expected to be operational, enabling the pan-EU AI ecosystem and prioritizing access for AI startups and SMEs. At least 9 new AI-optimized supercomputers will be procured and deployed across the EU, more than tripling current EuroHPC AI computing capacity.
The InvestAI Facility will comprise a new European fund of €20 billion to create up to 5 AI Gigafactories—large-scale facilities dedicated to developing and training next-generation AI models containing trillions of parameters. These will bring together computing power (over 100,000 advanced AI processors) with emphasis on power capacity, reliable supply chains, advanced networking, energy efficiency, and AI-driven automation.
2026 will also mark the beginning of widespread adoption of liquid cooling as a new generation of NVIDIA GPUs rolls out. GB300-based racks peak at 120 kW, well beyond the limits of air-cooled infrastructure. The shift to liquid will mean a change in infrastructure design and in the skills needed to service and operate these mission-critical environments.
Think Tank Watch
The Institute for Public Policy Research (IPPR) released a report this week raising concerns about AI's role in the news ecosystem. The British think tank found that major AI companies are emerging as new gatekeepers on the internet, controlling how citizens access information. According to the report, official news outlets like BBC News were insufficiently cited by leading AI tools including ChatGPT and Google Gemini.
The IPPR outlined three policy recommendations: require AI companies to pay for news they use through collective licensing deals, introduce standardized nutrition labels for AI news so the public can see where AI answers come from, and use public funding to protect independent news in the AI era.
This connects to a broader pattern. As Euractiv reported, several major AI companies that have recently released new models appear non-compliant with AI Act transparency rules requiring foundation model developers to publicly disclose information about training data. The Commission will only formally supervise and enforce these rules from August 2026, effectively extending the compliance grace period until summer 2026.
The UC Berkeley Center for Long-Term Cybersecurity hosted a panel on Establishing AI Risk Thresholds and Red Lines as a pre-summit event for the India AI Impact Summit 2026 (19-20 February in New Delhi). Sarah Myers West of the AI Now Institute argued that the goal isn't to make AI systems safe but to make people safe, emphasizing that certain systems, especially those designed for surveillance or military targeting, have demonstrable failure modes and are inherently unsafe.
City-Level Signals
The European Commission's smart cities and communities initiative is accelerating. The CitiVERSE EDIC consortium, launched in December with €80 million in funding through the Digital Europe Programme, aims to connect 100 European cities over the next two years to promote data sharing, best practices, and collaborative solutions.
Key tools under development include the EU Local Digital Twins Toolbox—a collection of advanced reusable tools, reference architectures, open standards, and technical specifications designed to help local communities create AI-based local digital twins. These digital twins use artificial intelligence to predict how changes in a city might affect traffic, pollution, or public health, enabling better real-time decisions.
Euro Security reports that 2026 will be pivotal for video AI and smart city technologies in Germany, with the EU AI Act shaping development. Local authorities are moving away from surveillance-oriented solutions aimed at identification toward systems that recognize patterns, movements, people flows, and anomalies. Biometric identification like real-time facial recognition remains limited to narrowly defined exceptional cases.
The Mayors of Europe funding radar lists several Horizon Europe calls relevant to cities, including climate-neutral cities through pre-commercial procurement (deadline 20 January 2026), real-time monitoring of emissions in waterfront cities, and AI-powered traffic analytics for road safety.
The Numbers That Matter
- €21.6 billion: European AI startup funding in 2025, a record (Dealroom)
- 5: New European unicorns in January 2026 alone
- $11 billion: ElevenLabs valuation after $500M raise, up from $3.3B in January 2025
- 16 months: Maximum delay proposed in Digital Omnibus for high-risk AI compliance
- 100+: Expert authors on the International AI Safety Report 2026
- 30+: Countries backing the International AI Safety Report
- 15: AI Factories expected to be operational by end of 2026
- €20 billion: InvestAI Facility fund for AI Gigafactories
The Week Ahead
9-10 February: Joint ECON/FISC exchange of views with Commissioner Wopke Hoekstra on 2026 Commission Work Programme taxation files
Mid-February: Expected publication of Commission draft guidelines on high-risk AI classification (for stakeholder consultation)
19-20 February: India AI Impact Summit 2026, New Delhi—a global gathering on AI governance
25-26 February: DIGITALEUROPE Masters of Digital 2026, Brussels—Future Unicorn Award winners announced; confirmed speakers include EVPs Henna Virkkunen and Roxana Mînzatu, Commissioners Maria Luís Albuquerque and Dan Jørgensen, EIB President Nadia Calviño
8 February: Deadline for applications to the ENGAGE.EU Think Tank on Responsible AI and Society
August 2026: High-risk AI compliance requirements currently due to take effect (subject to Digital Omnibus negotiations)
The Thought That Lingers
There's a Bulgarian word—преди—that means before or previously. It's the kind of word that only matters in retrospect, when you're looking back at what came before a change.
We may be living in the преди of European AI governance right now. Before the high-risk rules took effect. Before the AI Factories came online. Before the Digital Omnibus either passed or didn't. Before we knew whether the gap between regulatory ambition and implementation reality would close or widen.
The Commission missing its own deadline isn't a scandal—it's a symptom. The question isn't whether Europe can write ambitious AI rules. It clearly can. The question is whether it can build the institutional capacity to make those rules mean something in practice. The Irish bill published yesterday, with its statutory independent AI Office, is one answer. The 15 AI Factories coming online are another. The €20 billion InvestAI Facility is a third.
But the clock is ticking. The safety report landed this week with 100+ experts warning about capabilities outpacing governance. Five new unicorns emerged in January. ElevenLabs tripled its valuation in a year. The infrastructure buildout is accelerating. And the guidance that companies need to comply with the AI Act still isn't here.
What comes after преди depends on what happens in the next six months.
Human×AI Daily Brief is compiled from IAPP, Euractiv, European Commission, EDPB/EDPS, TechCrunch, CNBC, DIGITALEUROPE, Verne Global, Data Center Knowledge, International AI Safety Report, IPPR, UC Berkeley CLTC, Euro Security, and Mayors of Europe. This is meant to be useful, not comprehensive.