Today, 13.02.2026
GOOD MORNING
There's a peculiar rhythm to Brussels in February—the Commission buildings humming with activity while the rest of the city shivers through another grey morning. But this week, the chill isn't just meteorological. The European Commission has missed a deadline it set for itself, and the implications are rippling through boardrooms and compliance teams across the continent.
The Lead: Europe's AI Act Implementation Hits a Wall
The European Commission was supposed to deliver guidance on Article 6 of the AI Act by 2 February. It didn't. This sounds bureaucratic. It's not.
Article 6 is the mechanism that determines whether your AI system counts as high-risk—and therefore whether you face the full weight of the Act's documentation, conformity assessment, and post-market monitoring requirements. Without clear guidance, companies are flying blind toward an August deadline that may or may not hold.
As IAPP reported this week, the Commission indicated it's still integrating months of feedback and plans to publish a final draft by month's end, with adoption potentially slipping to March or April. But here's the mechanism hiding under the headline: the Commission is simultaneously proposing to delay the very requirements it's struggling to explain.
The Digital Omnibus package, introduced late last year, would push back high-risk AI obligations by up to 16 months, potentially to December 2027. European Commission Deputy Director-General Renate Nikolay acknowledged the tension during a January European Parliament hearing: "These standards are not ready, and that's why we allowed ourselves in the AI omnibus to give us a bit more time."
Laura Caroli, a former AI Act negotiator and policy advisor to Parliament co-rapporteur Brando Benifei, put it more bluntly to IAPP:
There was one thing that was fixed from the very beginning, from the very letter of the law. It's just not there, and it is supposed to give clarity. You're not giving clarity.
Laura Caroli
The contrast is almost too neat. Last summer, Commission representatives promised they would hold firm on the Act's timeline even as industry pressure for delays intensified. Now, the same institution is missing its own deadlines while proposing to extend everyone else's.
For operators, this creates what CX Today calls a "dual-speed planning nightmare." Do you prepare for August 2026 compliance, or bet on the Omnibus passing and plan for late 2027? The prudent answer (prepare for August) is also the expensive one. And the uncertainty itself is becoming a competitive disadvantage.
The European standardization bodies tasked with drafting the technical standards that underpin the Act missed their fall 2025 deadline and are now aiming for the end of 2026. Some member states still haven't appointed their national competent authorities. The entire implementation apparatus is running behind schedule, and the question is whether the Omnibus represents pragmatic adjustment or regulatory retreat under pressure.
EU lawmakers have questioned how much Big Tech has influenced the Commission's omnibus proposal, noting it has coincided with months of pressure from the U.S. government to loosen Europe's regulatory regime. The timing is uncomfortable: just as the Trump administration signals intent to challenge state AI laws and export the U.S. tech stack globally, Europe appears to be softening its own approach.
Watch the calendar—reality lives there. The August 2026 deadline for high-risk AI systems remains the legal baseline until the Omnibus passes (if it passes). Companies that assume delay and get caught short will have no one to blame but themselves.
The Infrastructure Play
While Brussels debates timelines, the infrastructure race accelerates. Mistral AI announced this week it will invest €1.2 billion ($1.43 billion) in AI data centers in Sweden—its first infrastructure investment outside France.
This investment is a concrete step toward building independent capabilities in Europe, dedicated to AI. By delivering a fully vertical offer with locally processed and stored data, we are reinforcing Europe's strategic autonomy and competitiveness.
Arthur Mensch, Mistral CEO
The choice of Sweden is strategic. Nordic countries offer cooler temperatures (reducing cooling costs), some of Europe's lowest energy prices, and abundant renewable power. The facility, developed in partnership with Swedish company EcoDataCenter, is scheduled to open in 2027.
This follows OpenAI's announcement last year of an AI data center in Norway as part of its Stargate initiative. The pattern is clear: AI infrastructure is flowing to where power is cheap, green, and available.
But availability is the constraint. According to the European Data Centre Association's 2026 report, 67% of European data center operators now cite access to power as their single greatest operational challenge. The report forecasts €176 billion in cumulative investment between 2026 and 2031, but warns that future capacity growth will be constrained primarily by grid readiness rather than access to capital.
BCS research puts it starkly: only one in five European data centers is currently ready for AI workloads. The problem isn't just power supply; it's power density. Traditional data centers were built for 8-12 kW per rack. AI clusters are pushing beyond 100 kW per rack, requiring liquid cooling systems that most existing facilities lack.
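To make the density gap concrete, here's a back-of-the-envelope sketch using the rack figures above; the 10 MW cluster size and the legacy-rack midpoint are illustrative assumptions, not figures from the EUDCA or BCS reports:

```python
# Back-of-the-envelope rack math for the density gap described above.
# The 10 MW cluster size is an illustrative assumption; the per-rack
# figures come from the ranges cited in the text.

CLUSTER_POWER_KW = 10_000   # assumed 10 MW of IT load
LEGACY_RACK_KW = 10         # midpoint of the 8-12 kW/rack legacy range
AI_RACK_KW = 100            # the ~100 kW/rack AI-cluster figure

legacy_racks = CLUSTER_POWER_KW / LEGACY_RACK_KW   # 1,000 racks
ai_racks = CLUSTER_POWER_KW / AI_RACK_KW           # 100 racks

print(f"Same 10 MW load: {legacy_racks:.0f} legacy racks "
      f"vs {ai_racks:.0f} AI racks")
# Roughly 10x the heat concentrated in each rack is what pushes
# facilities past air cooling and toward direct-to-chip liquid cooling.
```

The point of the arithmetic: retrofitting is less about total megawatts than about how much heat a single rack and its cooling loop can shed.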
Meanwhile, Gartner predicts European spending on sovereign cloud infrastructure will more than triple, from $6.9 billion in 2025 to $23.1 billion by 2027. The geopolitical driver is clear: reducing reliance on U.S. hyperscalers in an era of uncertain transatlantic relations.
The Funding Picture
The capital keeps flowing, though the distribution tells a story.
Anthropic closed a $30 billion funding round yesterday at a $380 billion post-money valuation—more than double its September valuation. The round, led by Coatue and Singapore sovereign wealth fund GIC, includes portions of previously announced investments from Microsoft and Nvidia. Anthropic reports $14 billion in annualized revenue, with roughly 80% from enterprise customers.
This is the second-largest private tech financing round on record, behind only OpenAI's $40 billion raise last year. The message: frontier AI development remains extraordinarily capital-intensive, and the winners are consolidating their positions.
In Europe, the picture is more modest but still active. Several European startups crossed the unicorn threshold in recent weeks: Belgium's Aikido Security (cybersecurity), Harmattan AI, Germany's Osapiens (sustainability compliance software), and Cast AI (cloud optimization).
Station F launched F/ai, a new AI program backed by OpenAI, Anthropic, Google, Microsoft, Meta, and others. The Paris-based startup campus selected 20 AI-native startups through a recommendation-based process, with one key selection criterion: the potential to reach €1 million in revenue within six months.
Stanhope AI, a London-based deep-tech startup spun out of UCL and King's College London, raised $8 million in seed funding led by Frontline Ventures. The company is building AI models based on the Free Energy Principle—a framework for explaining how intelligent systems minimize uncertainty through continuous perception and action.
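For readers new to the framework, the standard textbook formulation (a general statement of the principle, not Stanhope AI's proprietary models) defines the variational free energy of a system with generative model $p$ and approximate belief $q$ over hidden states $s$, given an observation $o$, as

$$
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o).
$$

Minimizing $F$ makes the system's beliefs track the world (perception) and, when it can act to change $o$, makes the world match its predictions (action); that is the sense in which such systems "minimize uncertainty through continuous perception and action."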
And Simile secured $100 million from Index Ventures and others for AI that predicts human behavior, with backing from AI luminaries Fei-Fei Li and Andrej Karpathy.
The pattern: European AI funding is healthy but concentrated in specific niches—enterprise tools, compliance software, infrastructure optimization. The frontier model race remains an American (and increasingly Chinese) affair.
Think Tank Watch
The International AI Safety Report 2026 dropped on 3 February, and it deserves more attention than it's getting.
Led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, the report is backed by more than 30 countries and international organizations. It represents the largest global collaboration on AI safety to date.
The findings are sobering. AI capabilities continue to improve rapidly, driven by inference-time scaling—allowing models to use more computing power to generate intermediate steps before giving final answers. This has led to particularly large performance gains on complex reasoning tasks in mathematics, software engineering, and science.
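To make "inference-time scaling" concrete, here's a minimal best-of-N sampling loop, one common pattern behind the idea; this is a generic sketch with toy stand-ins for the model and verifier, not any lab's actual recipe:

```python
import random

# Toy best-of-N sketch of inference-time scaling: spend more compute per
# question by sampling several candidate answers and keeping the one a
# verifier scores highest. The "model" and "verifier" below are random
# stand-ins that only demonstrate the control flow.

def generate(prompt: str) -> tuple[str, float]:
    quality = random.random()                  # stand-in answer quality
    return f"candidate (quality {quality:.2f})", quality

def verify(candidate: tuple[str, float]) -> float:
    return candidate[1]                        # stand-in verifier score

def best_of_n(prompt: str, n: int) -> str:
    # n is the compute knob: larger n means more inference-time compute,
    # which on reasoning benchmarks typically buys higher accuracy.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=verify)[0]

if __name__ == "__main__":
    random.seed(0)
    for n in (1, 4, 16):
        print(f"n={n:>2}: {best_of_n('prove the lemma', n)}")
```

Real systems replace the stand-ins with a language model and a reward model or automatic checker; the report's broader point is that this compute knob now drives much of the capability gain.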
But the report emphasizes that capabilities remain jagged: leading systems may excel at some difficult tasks while failing at simpler ones. And the trajectory through 2030 is deeply uncertain—progress could plateau, continue at current rates, or accelerate dramatically.
On risks, the report identifies three categories: malicious use (from criminal activity to cyberattacks to biological weapons development), malfunctions (reliability challenges and possible loss of control), and systemic risks (labor market impacts and risks to human autonomy).
The biological risk section is particularly striking. The report notes that AI systems now match or exceed expert-level performance on benchmarks measuring knowledge relevant to biological weapons development. OpenAI's o3 model outperforms 94% of domain experts at troubleshooting virology lab protocols. For the first time, all three major AI companies released models with heightened safeguards after pre-deployment testing couldn't rule out that systems could meaningfully help novices develop biological weapons.
On labor markets, the report notes disagreement on magnitude but suggests early evidence points to junior positions in fields like writing and translation being most at risk. One study found that clinicians' ability to detect tumors without AI assistance dropped by 6% just three months after AI support was introduced, a concerning signal about cognitive offloading.
The report will inform discussions at India's AI Impact Summit later this month. It's worth reading in full.
The Policy Situation
The geopolitical context for European AI policy is shifting rapidly.
The Atlantic Council's analysis of AI and geopolitics in 2026 highlights several trends worth watching. The U.S. is pushing AI tech exports to counter China, with the Trump administration's National Security Strategy explicitly stating: "We want to ensure that US technology and US standards—particularly in AI, biotech, and quantum computing—drive the world forward."
This creates pressure on Europe from both directions. The U.S. wants Europe to adopt American AI infrastructure and standards. China is advancing in open-source AI models and applied AI deployment. Europe's regulatory approach—once seen as a potential global standard—is now being questioned as a competitive liability.
ECB Executive Board member Isabel Schnabel's speech this week, "Made in Europe," offered a counterpoint. She argued that Europe's quality of life, strong institutions, and social protection represent genuine competitive advantages, and that the key to European competitiveness is unlocking the full potential of the Single Market through a "28th regime" that would give firms seamless access to the entire European market.
Europe can build on these fundamentals to become even stronger. The key is to unlock the full potential of the Single Market—one of Europe's most powerful assets—to deliver what Europe lacks today: not talent, not ideas, not research—but scale.
Isabel Schnabel
The question is whether Europe can achieve scale while maintaining its regulatory distinctiveness, or whether the two are fundamentally in tension.
City-Level Signals
While the high-level debates continue, European cities are quietly building practical AI capabilities.
Eurocities reports that Gothenburg, Espoo, Munich, and Riga are all running AI pilots that demonstrate what responsible municipal AI looks like in practice.
In Gothenburg, early pilots in home care, schools, and libraries are showing how generative AI can remove barriers between people and information. "Now I get the answers in seconds, even while standing by the patient's door," says a home-care worker involved in the pilot.
Espoo's AI-powered translation automatically renders city services into 13 languages. Munich is developing a conversational assistant that lets users talk to the city's geodata portal as if it were a person, using open-source models like Mistral, Gemma 3, and Qwen 2.5, which the team found performed close to, and sometimes better than, commercial LLMs.
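The architecture Munich describes is a now-familiar pattern: a locally hosted model translates a question into a structured query against city data. A hedged sketch of that pattern follows; the endpoint URL and the ask_local_llm function are hypothetical placeholders, not Munich's actual portal or stack:

```python
import json
import urllib.request

# Hypothetical sketch: a local open-source LLM turns a citizen's question
# into a structured geodata query. Neither the URL nor ask_local_llm
# reflects Munich's real system; both are placeholders.

GEODATA_URL = "https://example.city/geodata/search"  # hypothetical

def ask_local_llm(prompt: str) -> str:
    """Placeholder for a locally hosted model (e.g. Mistral or Gemma)."""
    raise NotImplementedError

def answer(question: str) -> dict:
    # Step 1: have the model emit a machine-readable query, not prose.
    query = json.loads(ask_local_llm(
        "Translate this question into a JSON geodata query with keys "
        f"'layer' and 'filters': {question}"
    ))
    # Step 2: run the structured query against the (hypothetical) portal.
    req = urllib.request.Request(
        GEODATA_URL,
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keeping the model local is what lets a city process and store data on its own infrastructure, the same sovereignty logic driving the Mistral and Gartner numbers above.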
Riga has moved from "decentralised chaos" to a "centralised research approach," establishing an internal working group and operating on three levels: using AI assistants like Copilot, developing tailored AI functions, and integrating AI capabilities directly into municipal systems.
The common challenges: trust, data governance, skills, and language support. Many commercial AI models handle smaller European languages poorly, creating opportunities for local solutions.
The Smart Communities Network held the final meeting of its current project phase last June, but the work continues through new initiatives like the LDT CitiVERSE EDIC (European Digital Infrastructure Consortium) and the LDT4SSC project, which began offering open calls for cities to experiment with local digital twins in November 2025.
The Numbers That Matter
- €176 billion: Projected cumulative European data center investment, 2026-2031 (EUDCA)
- 67%: European data center operators citing power access as their top operational challenge
- 20%: Share of European data centers currently ready for AI workloads (BCS)
- $30 billion: Anthropic's latest funding round, at $380 billion valuation
- €1.2 billion: Mistral AI's infrastructure investment in Sweden
- 16 months: Maximum proposed delay for high-risk AI Act obligations under Digital Omnibus
- 90%: Share of European data center electricity from renewable sources (EUDCA)
- $23.1 billion: Projected European sovereign cloud spending by 2027 (Gartner)
The Week Ahead
February 14: Valentine's Day, but also the informal deadline for the Commission to circulate revised high-risk AI guidelines for feedback
February 20-21: India AI Impact Summit, where the International AI Safety Report 2026 will inform discussions
Late February: Expected publication of Commission's final draft guidelines on Article 6 high-risk classification
March 12: Smart City Exchange Forum 2026 in Tallinn, focused on urban resilience
Ongoing: European Parliament scrutiny of Digital Omnibus proposal continues
The Thought That Lingers
There's a tension at the heart of European AI policy that the missed deadline makes visible. The AI Act was designed to be the world's first comprehensive AI regulation—a statement that Europe could lead on governance even if it couldn't lead on development. But comprehensive regulation requires comprehensive guidance, and comprehensive guidance requires time that the technology isn't giving anyone.
The Commission is caught between two imperatives: maintaining the credibility of the regulatory framework it fought to create, and acknowledging that the framework may be moving faster than the institutions can support. The Omnibus proposal is an admission that something has to give.
What's less clear is whether the delay represents a temporary adjustment or a permanent retreat. If the standards aren't ready by late 2027 either, what then? At some point, the question becomes whether Europe is regulating AI or merely announcing intentions to regulate it.
The companies that have been preparing for August 2026 compliance are understandably frustrated. The companies that have been lobbying for delay are understandably pleased. But both groups face the same underlying uncertainty: no one knows what the rules will actually be, or when they'll actually apply.
In the meantime, the infrastructure race continues, the funding flows, and the capabilities advance. The AI systems being deployed today will be subject to whatever rules eventually emerge. The question is whether those rules will arrive in time to matter.
Human×AI Daily Brief is compiled from IAPP, CNBC, European Commission, European Data Centre Association, Atlantic Council, Eurocities, International AI Safety Report, and other sources. This is meant to be useful, not comprehensive.