Today, 30.04.2026
Good morning, Human. The last day of April brings a question that sounds almost philosophical until you realize it's becoming a regulatory necessity: should AI agents have their own digital identity? CEPS (the Centre for European Policy Studies) thinks so, and their argument cuts to the heart of how Europe plans to govern autonomous systems that are already negotiating services, exchanging data, and interacting with physical infrastructure on our behalf. Meanwhile, a Paris startup just raised €1.5 million to turn regulatory compliance from a cost center into a strategic weapon, and Earlybird closed its largest fund ever at €360 million with a governance model designed to outlast its founders. Three stories, one thread: the infrastructure of trust is being built right now, and the companies paying attention are the ones positioning themselves to benefit.
In Brief
What: CEPS argues that AI agents cannot be governed without their own digital identity infrastructure, a foundational shift that would extend Europe's eIDAS framework to autonomous systems. Why it matters: As AI agents increasingly act on behalf of humans, the absence of identity mechanisms creates accountability gaps that current regulations cannot address. What it means for Europe: The Apply AI Strategy's push for rapid AI deployment makes this more urgent, and Europe's existing digital identity infrastructure (eIDAS 2.0, the Digital Services Act) provides a foundation that other regions lack.
These developments are exactly the kind of signals worth unpacking in person. The real conversation happens May 19 in Vienna at Human x AI Europe, where Europe's AI future gets built in the room, not just on the page.
The Lead: AI Agents and the Identity Problem
CEPS has published what may be the most consequential policy brief of the month, and it arrived with almost no fanfare. The argument is deceptively simple: Europe's digital governance frameworks were built around a premise that no longer holds. For decades, digital identity architecture assumed only humans and legally constituted organizations could participate in the digital environment. AI agents have quietly invalidated that assumption.
The brief makes a structural argument rather than a speculative one. AI agents are already acting on humans' behalf, exchanging data with each other, negotiating services, producing information, and increasingly interacting with physical infrastructure. When an autonomous AI agent performs an action in the digital or physical world, how can it be reliably attributed or verified? And who is ultimately accountable?
The answer, according to CEPS, is that without reliable mechanisms to record and verify agent activity, societies lack the necessary evidence to understand and govern the behavior of autonomous systems. Policymakers cannot meaningfully assess risks, assign responsibility, or design safeguards if autonomous agents' actions remain opaque.
This is not a theoretical concern. The European Commission's Apply AI Strategy aims to accelerate AI deployment across Europe's economy and public sector. As various initiatives encourage the development of increasingly autonomous systems, the number of interactions between software agents, services, and physical infrastructure will only grow. Ensuring these interactions remain attributable and verifiable becomes essential for maintaining trust in increasingly automated environments.
The CEPS brief calls for cryptographically anchored identities for AI agents, secure protocols governing interactions between humans and agents, and mechanisms capable of certifying real-world context. Without these mechanisms, the brief warns, the agentic internet could evolve into an environment where impersonation, unverifiable automation, and synthetic evidence become structurally indistinguishable from legitimate actions.
What makes this particularly relevant for European policymakers is that the continent already has the building blocks. The updated eIDAS framework (2.0) has established what constitutes a "trusted" digital identity, and it requires very large platforms regulated under the Digital Services Act to accept the European Digital Identity Wallet for user authentication. The AI Act introduces transparency obligations requiring certain AI-generated outputs to be disclosed or clearly identifiable. Taken together, these frameworks regulate identities, platforms, and AI tools, but they remain centered on human participation and human interactions with systems and content.
The gap CEPS identifies is not a failure of existing regulation but a recognition that the regulatory architecture needs a new layer: infrastructure. While regulation defines permitted conduct, infrastructure determines whether actions can be attributed, interactions authenticated, and decisions contested.
The Funding Picture
Two funding stories today illustrate different approaches to the same underlying challenge: building durable competitive advantages in a market where AI is reshaping every category.
Cleo Labs, a Paris-based RegTech company, has closed a €1.5 million round led by Larry Berger, with participation from Kima Ventures, Financière Saint-James, and a scout ticket from Accel. The company has built an AI platform called MARIA (Multi-Agent Regulatory Intelligence Architecture) that continuously tracks more than 25,000 regulatory bodies across 106 countries. The platform offers pre-launch regulatory mapping and ongoing monitoring with real-time alerts when regulations change.
The problem Cleo Labs addresses is genuinely painful. Bringing a single connected product, such as a bike helmet, to market can require navigating more than 100 regulations spanning material standards, country-specific certifications, labeling rules, and customs requirements. Non-compliance costs run into hundreds of billions of dollars annually. Decathlon is among Cleo Labs' existing clients, using the platform to accelerate and de-risk international product launches.
The company also won The Pitch by Deel, a global startup competition that drew more than 35,000 applications, taking first place in the regional final hosted at Station F. Proceeds from the raise will go toward developing the platform's technology, supporting commercial growth across Europe, and laying groundwork for a future push into the US market.
Earlybird VC closed Fund VIII at €360 million, the largest in the Berlin firm's nearly 30-year history. The fund continues a track record of raising new capital every three to four years without exception, through bull markets and corrections alike. Across all its investment strategies, including Earlybird Health, the firm now manages €2.5 billion in assets.
The more interesting story is structural. With Fund VIII, Earlybird has implemented what it calls a "perpetual active ownership" model. Only active partners will own the firm, and ownership will always be transferred to active partners when someone leaves. There will be no external sale, no outside investors, and no dilution of the principle that the people building the firm are the ones who own and shape it.
Partner Dr. Andre Retterath, who leads Earlybird's AI and infrastructure practice, articulated the firm's thesis on where value accrues in the AI stack: "At the application layer, it has never been easier to build a product. You can spin something up over a weekend. The constraint has shifted from building to distribution. So while applications are noisy and highly competitive, infrastructure offers stronger moats."
Fund VIII has already deployed capital into Black Forest Labs, SpAItial AI, Sintra AI, Arago, Porters, and Rivia. The thesis is consistent: back deeply technical companies before the category is obvious, hold conviction through cycles, stay independent.
Think Tank Watch
The ELLIS network (European Laboratory for Learning and Intelligent Systems) has spotlighted Neslihan Bayramoglu, a Senior Research Fellow at the University of Oulu in Finland and ELLIS Member. Her research focuses on applied artificial intelligence in health, specifically machine learning and computer vision for medical image analysis and large-scale health data.
Bayramoglu's work represents the kind of applied AI research that often gets overlooked in favor of foundation model headlines. She contributed to establishing one of the earliest connections between deep learning and histopathology image analysis, particularly in breast cancer imaging. Her recent research has focused on machine learning analysis of routinely collected health data, with a specific emphasis on musculoskeletal diseases, particularly osteoarthritis.
Her stated goal is worth noting: "to contribute to machine learning tools that can genuinely ease the lives of people struggling with health issues, whether pain, chronic conditions, or limitations that make everyday life difficult." She participated in the ELLIS #WomenInELLIS campaign to highlight the human side of research and share perspectives she finds important for young female researchers.
The ELLIS network now counts 41 units and one associate unit at world-class institutions in 17 countries, 16 research programs, and a pan-European PhD program. It represents the kind of distributed research infrastructure that Europe has built over the past decade, connecting top researchers across borders to strengthen AI leadership made in Europe.
The Regulatory Calendar
The EU AI Act enforcement timeline continues to create planning challenges for organizations. The current legal deadline for Annex III (high-risk) systems remains August 2, 2026. A preliminary political agreement would push this to December 2, 2027, but that agreement has not been published in the Official Journal, which means the August 2026 date is still binding law.
Organizations face a real planning decision. Two independent law firm analyses, from A&O Shearman and Plesner, reach the same conclusion: organizations should continue planning against the August 2, 2026 deadline. Not because an extension is unlikely to happen, but because it has not happened yet.
The technical file required under Article 11 is not a checklist that can be completed at the last minute. It requires documented system design decisions, training data governance, and fundamental rights impact assessments. Organizations that begin this process now will accumulate credible evidence over time, while those who wait until mid-2027 will find themselves scrambling under significant pressure.
The compliance limbo created by a preliminary agreement that is not yet law is the real story here, not the extension dates themselves. Organizations that treat December 2, 2027 as the current deadline are operating on a legal assumption that has not been validated. Those that continue planning against August 2, 2026 are accepting a harder near-term lift in exchange for certainty.
The Numbers That Matter
€360 million: Earlybird Fund VIII, the largest in the firm's 29-year history, with a perpetual ownership model designed to outlast its founders.
25,000+: Regulatory bodies tracked by Cleo Labs' MARIA platform across 106 countries, addressing the fragmented compliance landscape for physical products.
41: ELLIS units across 17 countries, representing Europe's distributed AI research infrastructure.
August 2, 2026: The legally binding deadline for EU AI Act Annex III compliance, despite preliminary political agreement on an extension.
€2.5 billion: Earlybird's total assets under management across fund streams, with 9 IPOs and 41 trade sales in its portfolio history.
The Week Ahead
The ELSA General Assembly runs May 5-7 at CISPA Helmholtz Center for Information Security in Saarbrücken, marking the final gathering of the European Lighthouse on Secure and Safe AI project. The AI to Accelerate Scientific Understanding Workshop follows May 26-29 at Ciniq in Berlin, bringing together researchers working on AI applications in scientific discovery.
For organizations tracking EU AI Act compliance, the gap between preliminary political agreement and Official Journal publication remains the key variable. Until the extension is formally published, August 2, 2026 remains the operative deadline.
The Thought That Lingers
The CEPS brief on AI agent identity raises a question that will define the next decade of digital governance: what happens when the entities acting in our digital systems are no longer exclusively human? Europe has spent years building identity infrastructure for people. The next challenge is extending that infrastructure to the autonomous systems that increasingly act on their behalf. The companies and policymakers who recognize this shift early will shape how trust works in an agentic world. Those who wait for the regulations to arrive will find themselves governed by rules they had no hand in writing.
Frequently Asked Questions
What is the CEPS proposal for AI agent identity?
CEPS proposes extending Europe's eIDAS digital identity framework to include AI agents, giving them cryptographically anchored identities that would make their actions attributable and verifiable. This would address accountability gaps as AI agents increasingly act autonomously on behalf of humans.
Why does Cleo Labs' regulatory compliance platform matter?
Cleo Labs' MARIA platform tracks over 25,000 regulatory bodies across 106 countries, turning compliance from a cost center into a competitive advantage. For companies like Decathlon, this means faster, less risky international product launches in a world where a single product can face over 100 different regulations.
What makes Earlybird's Fund VIII different from other VC funds?
Earlybird implemented a "perpetual active ownership" model where only active partners own the firm, with ownership transferring to active partners when someone leaves. This ensures no external sale or dilution of the principle that the people building the firm are the ones who own it.
Is the EU AI Act deadline really August 2026 or December 2027?
The legally binding deadline remains August 2, 2026. While there's a preliminary political agreement to extend it to December 2, 2027, this hasn't been published in the Official Journal yet, so organizations should continue planning against the August 2026 date.
Human×AI Daily Brief is compiled from CEPS, Tech.eu, FinTech Global, ELLIS, EU AI Act Service Desk, and Tech Jacks Solutions. This is meant to be useful, not comprehensive.