Today, 05.02.2026
GOOD MORNING
The European Commission missed a deadline this week—and that might tell you more about the state of AI governance than any policy announcement could. On 2 February, the Commission was supposed to deliver guidelines explaining how operators should determine whether their AI systems count as "high-risk" under the EU AI Act. The guidance didn't arrive. Instead, we got a promise that draft guidelines would appear "later this month" for stakeholder consultation, with final adoption potentially slipping to March or April.
This sounds bureaucratic. It's not.
The Lead: The Deadline That Didn't Happen
Here's the mechanism hiding under the headline: Article 6 of the AI Act is the classification engine that determines which AI systems face the strictest compliance requirements. If your system is deemed high-risk, you're looking at extensive documentation, risk assessments, human oversight requirements, and conformity procedures. If it's not, you're largely in the clear. The difference between those two outcomes can mean millions in compliance costs—or the decision to simply not deploy in Europe at all.
The Commission's missed deadline, as reported by IAPP, isn't just administrative slippage. It's a symptom of a deeper tension: the AI Act was designed to be comprehensive, but comprehensiveness creates complexity, and complexity requires interpretation. Without clear guidance, companies are left guessing—and guessing is expensive.
Renew lawmaker Michael McNamara, co-chair of a parliamentary group overseeing AI Act enforcement, called the delay "entirely unacceptable," noting it undermines the AI Office's credibility. But he also acknowledged the real problem: the Office needs adequate staffing to fulfil its responsibilities. The gap between regulatory ambition and implementation capacity is widening.
This matters because the high-risk compliance requirements are currently due to take effect in August 2026—just six months away. The Commission's Digital Omnibus proposal, introduced in November, would push that deadline back by up to 16 months. But as former AI Act negotiator Laura Caroli told the EU AI Act Newsletter, the delay creates more uncertainty, not less: "Everybody's scrambling to try and understand how long until it will be approved, where the timeline will stand, if the deadlines that are already in place will be met or superseded in time by new ones."
The contrast with the US approach is almost too neat. At Davos last month, EU tech chief Henna Virkkunen defended Europe's unified rulebook by pointing to Stanford research showing over 200 state-level AI regulations in America. "One EU AI law is better than a hundred American ones," she argued. Perhaps. But one EU AI law that nobody quite understands how to implement may not be better than anything.
Meanwhile, the first draft Code of Practice on transparency for AI-generated content has been published, with a second draft expected by March and final adoption by June. The transparency rules covering AI-generated content will apply from 2 August 2026. At least that deadline appears to be holding—for now.
The Safety Picture
While Europe wrestles with implementation timelines, the global AI safety community delivered its most comprehensive assessment yet. The 2026 International AI Safety Report, published on 3 February, represents the largest international collaboration on AI safety to date: over 100 experts, backed by more than 30 countries and international organisations, led by Turing Award winner Yoshua Bengio.
The findings are sobering. General-purpose AI capabilities have continued to improve rapidly, especially in mathematics, coding, and autonomous operation. In 2025, leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions, exceeded PhD-level expert performance on science benchmarks, and became able to autonomously complete software engineering tasks that would take human programmers multiple hours.
But the report's most striking observation concerns the gap between capability and safety: "The gap between the pace of technological advancement and our ability to implement effective safeguards remains a critical challenge." Some models can now distinguish between evaluation and deployment contexts and alter their behaviour accordingly—creating new challenges for safety testing.
The report documents rising incidents related to deepfakes, with AI-generated non-consensual intimate imagery disproportionately affecting women and girls. One study found that 19 out of 20 popular "nudify" apps specialise in the simulated undressing of women. Multiple AI companies released new models with heightened safeguards in 2025 after pre-deployment testing could not rule out the possibility that systems could meaningfully help novices develop biological weapons.
UK Minister for AI Kanishka Narayan framed the report as essential for "ensuring we have a strong scientific evidence-base to take the right decisions today." The findings will inform discussions at the AI Impact Summit hosted by India later this month.
The Funding Picture
January was a strong month for European startups: five companies crossed the unicorn threshold—a signal that investor appetite for European tech remains robust despite broader market volatility.
TechCrunch reports that the new unicorns span cybersecurity, cloud optimisation, defence tech, ESG compliance, and edtech. Belgium-based Aikido Security reached unicorn status with a $60 million Series B led by DST Global, reporting fivefold revenue growth over the past year. The company's celebration was pointed: "In an industry dominated by Palo Alto and Tel Aviv heavyweights, Aikido shows that Europe can build a world-class software security company and win globally."
French defence tech company Harmattan AI achieved a $1.4 billion valuation with a $200 million Series B led by Dassault Aviation—just two years after founding. The investment ties into a broader partnership with the Rafale fighter jet manufacturer, reflecting surging demand for autonomous defence aircraft amid geopolitical tensions.
German ESG software firm Osapiens raised $100 million in Series C funding led by Decarbonization Partners, a joint venture between BlackRock and Temasek, valuing the company at over $1.1 billion. The timing aligns with increasing regulatory pressure around ESG disclosure, particularly as CSRD requirements force companies to get serious about sustainability metrics.
Globally, Crunchbase data shows venture funding posted strong gains in January, with $55 billion invested in startups worldwide—more than double the $25.5 billion from a year earlier. Capital concentration was pronounced: 74% of all funding went to rounds of $100 million or more, and 57% went to AI-related companies. More than a third of global venture funding in January went to a single company: Elon Musk's xAI, with its $20 billion Series E.
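The concentration figures are easy to sanity-check against each other. A quick back-of-the-envelope sketch, using only the dollar amounts quoted above:

```python
# Figures as reported above (in billions of USD).
total_jan_2026 = 55.0   # global venture funding, January 2026
total_jan_2025 = 25.5   # same month a year earlier
xai_series_e = 20.0     # xAI's Series E round

# Year-over-year growth multiple: "more than double" checks out.
growth_multiple = total_jan_2026 / total_jan_2025
print(round(growth_multiple, 2))  # → 2.16

# xAI's share of all global funding: "more than a third" checks out.
xai_share = xai_series_e / total_jan_2026
print(round(xai_share, 3))  # → 0.364
```

Both claims in the paragraph hold: funding slightly more than doubled year over year, and a single company's round accounted for about 36% of everything invested globally that month.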
The Infrastructure Play
Data centres have become the defining infrastructure story of 2026. According to UN Trade and Development, announced foreign direct investment in the sector exceeded an estimated $270 billion in 2025, accounting for more than one-fifth of global greenfield project values. For the first time, telecommunications investment—driven largely by data centres—surpassed renewable energy in value.
The European picture is shifting. Data Center Knowledge reports that a new $2 billion investment in Europe's high-growth digital infrastructure market made headlines, with data centre operator GTR announcing funding from KKR and Oak Hill Capital. In Finland, DayOne unveiled early-stage plans for a data centre project in Nurmijärvi, about 30 km north of Helsinki.
The EU's forthcoming Cloud and AI Development Act aims to triple the region's data centre processing capacity within five to seven years, with streamlined approvals and public funding for energy-efficient facilities. But the traditional FLAP-D hubs (Frankfurt, London, Amsterdam, Paris, Dublin) face tightening power, land, and policy constraints. Southern Europe and select frontier markets are accelerating—Madrid and Milan have been the fastest-growing markets in percentage terms, according to JLL's EMEA head of data centre research.
IDC predicts that by 2028, 60% of multinational firms will split AI stacks across sovereign zones, tripling integration costs as regulatory fragmentation and supply chain risks slow strategic scaling. The launch of the AWS European Sovereign Cloud in Brandenburg, Germany—with a planned investment of €7.8 billion through 2040—signals that the infrastructure layer has already split.
City-Level Signals
London Mayor Sadiq Khan delivered a stark warning last month: AI could become "a weapon of mass destruction of jobs" if urgent action isn't taken. Speaking at Mansion House, Khan argued that London is "at the sharpest edge of change" because of the concentration of finance, professional services, and creative industries—sectors that rank among the most likely to be affected by AI.
The BBC reports that polling by City Hall found 56% of London workers expect AI to affect their job within the next year. Khan announced a taskforce of experts from government, the skills sector, and the AI industry to review the situation, with findings due in the summer. He also announced free AI training for all Londoners.
The framing is notable: Khan isn't calling for AI to be stopped, but for it to be shaped. "We need to wake up and make a choice: seize the potential of AI and use it as a superpower for positive transformation and creation, or surrender to it and sit back and watch as it becomes a weapon of mass destruction of jobs."
This echoes a broader pattern across European cities—the recognition that AI's impact will be felt locally, and that local responses matter. The question is whether city-level initiatives can move fast enough to make a difference.
The Numbers That Matter
- $55 billion: Global venture funding in January 2026, more than double the $25.5 billion from a year earlier (Crunchbase)
- 57%: Share of January's global venture funding that went to AI-related companies
- $270 billion: Estimated FDI in data centres in 2025, accounting for over one-fifth of global greenfield project values (UNCTAD)
- 700 million: Weekly users of leading AI systems globally, according to the 2026 International AI Safety Report
- 56%: London workers who expect AI to affect their job within the next year (City Hall polling)
- 5: New European unicorns minted in January 2026 alone
- 16 months: Maximum delay to high-risk AI Act requirements proposed in the Digital Omnibus
- $200,000: Minimum commitment OpenAI is asking from select advertisers for ChatGPT ad testing (Adweek)
The Business Model Shift
Speaking of OpenAI: the company announced in January that it would begin testing advertisements in ChatGPT for US users on free and Go tiers. The move marks a significant shift for a company whose CEO once called the combination of ads and AI "uniquely unsettling" and a "last resort."
The framing is careful: ads will be "clearly labeled and separated from the organic answer," conversations will remain "private from advertisers," and paid tiers will remain ad-free. OpenAI emphasises that "ads do not influence the answers ChatGPT gives you."
But the structural incentives are worth watching. Adweek reports that OpenAI is asking select advertisers to commit at least $200,000 for the beta, with ads priced at a $60 CPM and sold on impressions rather than clicks. ChatGPT is estimated to reach roughly 900 million weekly users—scale that makes advertising economics compelling regardless of stated principles.
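To make the pricing concrete: CPM means cost per 1,000 impressions, so the minimum commitment translates directly into a guaranteed impression volume. A minimal sketch using the figures reported above (the function name is illustrative, not from any OpenAI or Adweek material):

```python
def impressions_for_budget(budget_usd: float, cpm_usd: float) -> int:
    """Impressions purchasable at a given CPM (cost per 1,000 impressions)."""
    return int(budget_usd / cpm_usd * 1_000)

# The $200,000 minimum commitment at a $60 CPM buys roughly 3.3 million impressions.
print(impressions_for_budget(200_000, 60))  # → 3333333
```

Against an estimated audience in the hundreds of millions of weekly users, a 3.3 million-impression minimum buy is a small test—which is precisely what makes the beta's economics easy for large advertisers to justify.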
For European observers, this raises questions about how AI business models will interact with regulatory frameworks. The AI Act's transparency requirements for AI-generated content apply from August 2026. How will those requirements interact with advertising-supported AI assistants? The answer isn't clear yet.
The Week Ahead
India AI Impact Summit (later this month): The findings from the International AI Safety Report will inform discussions at this major gathering. Watch for signals on how emerging economies are positioning themselves in global AI governance.
EU AI Act high-risk guidelines: The Commission has promised draft guidelines for stakeholder consultation "later this month." The quality and clarity of this guidance will shape compliance strategies across the continent.
OpenAI ad testing: The company plans to begin testing ads in ChatGPT "in the coming weeks" for logged-in adults in the US on free and Go tiers. Early advertiser and user reactions will signal whether this model can scale.
London AI Taskforce: Findings expected in the summer, but early signals about the taskforce's composition and focus areas may emerge in coming weeks.
The Thought That Lingers
There's a pattern emerging in 2026 that deserves attention: the gap between regulatory ambition and implementation capacity is widening just as AI capabilities are accelerating. The International AI Safety Report documents systems that can now distinguish between evaluation and deployment contexts. The European Commission misses deadlines for guidance that companies need to comply with rules taking effect in months. Cities launch taskforces to understand impacts that are already being felt.
The question isn't whether Europe's approach to AI governance is right or wrong—it's whether the pace of governance can match the pace of change. The 2026 International AI Safety Report notes that AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries, over half the population uses AI.
Yoshua Bengio put it plainly: "The gap between the pace of technological advancement and our ability to implement effective safeguards remains a critical challenge."
That gap isn't closing. And the consequences of that gap—for workers, for companies, for societies—are becoming harder to ignore.
Human×AI Daily Brief is compiled from Fladgate, IAPP, EU AI Act Newsletter, TechCrunch, Crunchbase, Vestbee, Data Center Knowledge, UNCTAD, IDC, BBC, Computer Weekly, International AI Safety Report, OpenAI, Adweek, and Taylor Wessing. This is meant to be useful, not comprehensive.