Debate · Apr 21, 2026 · 10 min read

CEPS Ideas Lab 2026: What Europe's Biggest Policy Gathering Reveals About the AI Governance Debate

The questions raised at Ideas Lab don't stay in Brussels. If the intersection of AI governance, European competitiveness, and institutional design matters to your work, Human x AI Europe in Vienna on May 19 is where these conversations continue with practitioners who are building the answers.

The Surface Debate and What Lies Beneath

When CEPS convened its Ideas Lab 2026 at The Square in Brussels on 2-3 March, the official agenda covered familiar territory: European security, economic competitiveness, and institutional reform. The framing document speaks of "enhanced cooperation in R&I, digitalisation and higher mobility" and of Europe's need to "demonstrate its resolve to protect and build upon its many achievements".

This language is instructive. Not for what it says, but for what it assumes.

The assumption embedded in CEPS's framing is that Europe's challenges in AI and digitalisation are primarily coordination problems, solvable through better cooperation and clearer resolve. This may be true. It may also be a category error that prevents the real disagreements from surfacing.

Three Debates Masquerading as One

When European policymakers discuss AI strategy, they often conflate at least three distinct disagreements:

The facts disagreement: Does Europe actually lag in AI capability, or does it lead in specific domains (industrial AI, privacy-preserving systems, regulatory frameworks) that matter more than foundation model scale? The answer depends entirely on which metrics one privileges. Those who cite compute capacity and venture funding see a crisis. Those who cite regulatory influence and industrial application see a different picture.

The values disagreement: Should AI development prioritise speed-to-market or rights-by-design? This is not a technical question. It is a question about what kind of society Europe wants to be, and whether the costs of moving slower are acceptable given the benefits of moving more carefully.

The incentives disagreement: Even if European stakeholders agree on facts and values, they face misaligned incentives. Member states compete for AI investment. Startups need capital that often comes with strings attached to non-European platforms. Researchers face pressure to publish in venues that reward scale over applicability. These structural incentives may matter more than any policy declaration.

Until these three disagreements are separated, European AI debates will continue to generate heat without light. Participants will talk past each other, each addressing a different underlying question while appearing to discuss the same topic.

The Security-AI Nexus: A Case Study in Conflation

CEPS's agenda places a European security pillar at the top of its priorities, citing increasing hybrid warfare from Russia and a loosening transatlantic alliance. This framing connects directly to AI through the concept of technological sovereignty.

But what does technological sovereignty actually mean in this context? The term carries at least four distinct meanings:

  • Infrastructure sovereignty: European-owned compute infrastructure and data centres
  • Model sovereignty: Foundation models trained on European data with European values embedded
  • Regulatory sovereignty: The ability to set rules that apply to AI systems operating in Europe regardless of origin
  • Supply chain sovereignty: Reduced dependency on specific foreign providers for critical components

A policymaker might support all four, three of four, or only one. A startup founder might see infrastructure sovereignty as essential but model sovereignty as counterproductive. A researcher might prioritise regulatory sovereignty while viewing supply chain sovereignty as economically unrealistic.

The strongest version of the sovereignty argument holds that Europe cannot protect its citizens' rights or its strategic interests if critical AI infrastructure is controlled by entities whose incentives diverge from European values. This argument deserves serious engagement.

The strongest version of the openness argument holds that AI development benefits from global collaboration, that European isolation would slow progress without meaningfully increasing security, and that regulatory sovereignty is sufficient to protect European interests without requiring infrastructure or model sovereignty. This argument also deserves serious engagement.

What does not deserve engagement is the pretence that these positions are simply pro-European versus anti-European, or cautious versus reckless. The disagreement is more interesting than that.

The Competitiveness Question: What Would Have to Be True?

CEPS's framing calls for Europe to "reflect more confidence and preparedness for the future". This raises a question worth asking directly: what would have to be true for Europe to be genuinely competitive in AI?

One answer: Europe would need to match US and Chinese investment in compute infrastructure, attract and retain top AI talent, and create regulatory conditions that favour European champions. This is the catch-up theory of competitiveness.

Another answer: Europe would need to define competitiveness differently, focusing on domains where European strengths (industrial expertise, regulatory credibility, privacy-conscious design) create genuine advantages. This is the differentiation theory of competitiveness.

A third answer: Europe would need to accept that AI competitiveness is not a meaningful goal for a political union, that the relevant unit of competition is the firm or the research institution, and that European policy should focus on enabling conditions rather than picking winners. This is the framework theory of competitiveness.

Each theory implies different policies. The catch-up theory suggests massive public investment in compute and aggressive talent acquisition. The differentiation theory suggests doubling down on regulatory leadership and industrial AI applications. The framework theory suggests removing barriers and letting market dynamics determine outcomes.

The debate at Ideas Lab and similar forums often proceeds as if everyone agrees on which theory is correct. They do not. Making the disagreement explicit would be more productive than assuming consensus that does not exist.

The Institutional Question: Who Decides?

CEPS's call for Europe to "demonstrate its resolve to protect and build upon its many achievements" raises a governance question that AI policy makes acute: who has legitimate authority to make decisions about AI development and deployment?

The EU institutions have claimed significant authority through the AI Act and related regulations. Member states retain authority over security, education, and healthcare, domains where AI applications are increasingly consequential. Cities and regions are making their own AI procurement and deployment decisions. Civil society organisations claim standing to represent affected communities. And the AI developers themselves, whether European or not, make daily decisions that shape what AI systems can do.

This is not a coordination problem. It is a legitimacy problem. Different actors have different claims to authority, and those claims sometimes conflict.

The most productive debates acknowledge this complexity rather than assuming it away. When someone says "Europe should do something about AI", the question "which Europe, through which institutions, with what mandate?" is not pedantic. It is essential.

What Ideas Lab Reveals About the State of European AI Discourse

The CEPS Ideas Lab format, combining high-level plenaries and a wide array of lab sessions, reflects a particular theory of how policy progress happens: through convening the right people, exposing them to the right research, and facilitating the right conversations.

This theory has merit. Ideas Lab has attracted significant attention, with over 4,600 views on its event page alone. The invitation-only format suggests a curated audience of decision-makers.

But convening power is not the same as clarifying power. The question is whether events like Ideas Lab help participants understand what they actually disagree about, or whether they reinforce the comfortable assumption that everyone shares the same goals and merely differs on tactics.

The upcoming CEPS events listed alongside Ideas Lab offer a window into the specific debates that will shape European AI policy in the coming months: website-blocking costs, the Apply AI Strategy Task Force, cybersecurity governance for EU institutions, and international participation in FP10. Each of these topics contains its own nested disagreements about facts, values, and incentives.

The Question Worth Asking

If European AI policy debates are stuck, the solution is not more consensus-building. It is better disagreement.

Better disagreement means naming the type of conflict at play. Is this a dispute about what is true, what is valuable, or what incentives are operating? Better disagreement means steel-manning opposing positions before critiquing them. Better disagreement means asking what would have to be true for the other side to be right.

CEPS Ideas Lab 2026 brought together people who care about Europe's future. The question is whether they left with a clearer understanding of what they actually disagree about, or whether they left with the comfortable illusion of shared purpose.

The former would be more useful. The latter is more common.

Frequently Asked Questions

Q: What is CEPS Ideas Lab 2026?

A: CEPS Ideas Lab 2026 was a two-day invitation-only policy conference held 2-3 March 2026 at The Square in Brussels, organised by the Centre for European Policy Studies. It focused on European security, economic competitiveness, and institutional reform, with particular attention to AI, digitalisation, and R&I cooperation.

Q: What are the main themes of European AI policy debate in 2026?

A: European AI policy debates in 2026 centre on three intersecting questions: whether Europe lags in AI capability (a facts disagreement), whether speed or rights-protection should be prioritised (a values disagreement), and how to align incentives across member states, startups, and researchers (an incentives disagreement).

Q: What does "technological sovereignty" mean in European AI policy?

A: Technological sovereignty carries at least four distinct meanings: infrastructure sovereignty (European-owned compute), model sovereignty (European foundation models), regulatory sovereignty (ability to set rules for all AI systems operating in Europe), and supply chain sovereignty (reduced dependency on foreign providers). Policymakers often conflate these different goals.

Q: How does the AI Act relate to European AI competitiveness debates?

A: The AI Act represents Europe's claim to regulatory sovereignty, the ability to set rules that apply to AI systems regardless of origin. Whether this regulatory approach enhances or hinders European competitiveness depends on which theory of competitiveness one accepts: catch-up, differentiation, or framework-based approaches.

Q: When is the next major European AI policy event after Ideas Lab 2026?

A: CEPS has scheduled several follow-up events in April 2026, including sessions on the Apply AI Strategy Task Force (15 April), cybersecurity governance for EU institutions (17 April), and international participation in FP10 (23 April). Human x AI Europe takes place in Vienna on 19 May 2026.

Q: What is the difference between facts, values, and incentives disagreements in AI policy?

A: Facts disagreements concern what is empirically true (e.g., does Europe lag in AI?). Values disagreements concern what should be prioritised (e.g., speed versus rights). Incentives disagreements concern structural pressures that shape behaviour regardless of stated goals. Productive policy debate requires identifying which type of disagreement is actually at play.
