May 6, 2026 · 9 min read

The Regulatory Imagination: What Early Career Researchers Reveal About AI Governance


In Brief

The AI Regulation Early Career Researchers Conference 2026, hosted by CREATe at the University of Glasgow, brought together emerging scholars to examine how artificial intelligence intersects with labor law, data protection, consumer rights, and intellectual property. Keynote speakers questioned the principle of technological neutrality and analyzed AI litigation through economic frameworks. Panels explored algorithmic management in workplaces, dark patterns in consumer AI, and the challenge of translating legal principles into system design. The conference surfaced a generation of researchers asking not just what rules should govern AI, but what values those rules encode.

The research presented here offers a starting point. The deeper conversation continues May 19 in Vienna, where Human x AI Europe convenes the practitioners, policymakers, and scholars shaping what comes next.

Stand in a conference room at the University of Glasgow's Advanced Research Centre and notice what fills the space. Not the usual suspects of AI discourse: no executives announcing products, no policymakers reading prepared remarks about innovation ecosystems. Instead, early career researchers presenting work-in-progress on algorithmic management, neurodata in workplaces, and the performative nature of transparency. The atmosphere is different. There's a quality of genuine uncertainty here, the productive kind that comes from people still forming their questions rather than defending established positions.

The AI Regulation Early Career Researchers Conference 2026, held on 31 March and 1 April, was conceived in response to a recognition that AI regulation raises questions across multiple areas of law and policy simultaneously. CREATe, the Centre for Regulation of the Creative Economy at the University of Glasgow, hosted the event with funding from the Society of Legal Scholars. The result was something rare: a space where the next generation of regulatory thinkers could test ideas before they hardened into positions.

The Question Behind the Questions

Silvia de Conca of Vrije Universiteit Amsterdam opened with a keynote that reframed the entire conversation. Her question was deceptively simple: "What could (should?) we regulate AI for?" The answer, she argued, is far from obvious. De Conca examined the normative choices embedded in existing frameworks at both Council of Europe and European Union levels, including the ECHR, Convention 108+, the EU Charter, the AI Act, the DSA, the GDPR, and the DMA.

Her critique of technological neutrality deserves attention. The principle sounds reasonable: regulate outcomes, not technologies. But de Conca questioned whether this neutrality masks a deeper problem. When regulators defer to technological expertise, who actually holds normative power? In her view, the regulator should be the one with normative authority, grounded primarily in legislative and regulatory experience rather than technological fluency. The alternative, she suggested, is a kind of "human rights neutrality" driven by fear of losing innovation.

This is the cultural diagnostic the conference offered: not just debates about specific rules, but an examination of what values get encoded when rules are written.

The Licensing Economy Takes Shape

Martin Kretschmer, CREATe Director, introduced ongoing research with Amy Thomas on what they call the AI licensing economy. The project collects and analyzes information on all known commercial agreements between content providers and AI developers, mapping licensing terms across sectors and modalities and identifying where these agreements concentrate.

The significance lies in what this mapping reveals about power. As AI systems require training data, the terms on which that data becomes available shape who can build what. The licensing economy is not a neutral marketplace; it is an architecture of access and exclusion. Understanding its contours matters for anyone concerned with competition, creativity, or the distribution of AI's benefits.

Workplaces as Laboratories

The first panel, chaired by Qingqin Zhang, examined the intersection of AI, labor law, and data protection. The presentations revealed workplaces as sites where AI governance questions become viscerally concrete.

Ines Neves of the University of Porto argued for moving beyond a purely risk-based reading of the AI Act. Her focus on industrial management and ergonomics highlighted AI systems deployed to optimize logistics, occupational safety, and worker well-being. These systems pursue objectives that warrant regulatory treatment beyond simple prohibition or rigid risk classification. The question becomes: how do legitimate workplace uses of AI get governed?

Tomasz Mirosławski of the University of Miskolc examined what he calls "technological subordination," where constant software-based monitoring replaces the instructions of a human supervisor. The transformation of working environments into analyzable digital data raises questions about compatibility with existing labor codes and data protection regimes. Mirosławski framed these issues through the lens of employee dignity, a concept that resists easy quantification but remains essential.

Neil Saddington of the University of Glasgow presented research on "refracted transparency," examining how workplace transparency and the right to information can be abused. Drawing on performative transparency theory and analysis of Article 15 of the GDPR, Saddington highlighted the risk of transparency becoming a mechanism of control rather than accountability.

Perhaps most striking was José Miguel Diéguez Rodríguez's presentation on neurodata-driven algorithmic management. AI-enabled workplace neurotechnologies used for occupational safety challenge the stability of core GDPR classifications. When does brain data become health data under Article 9? The research identified persistent uncertainty regarding legal classification and an absence of explicit national frameworks authorizing neurodata use. The proposal under the Digital Omnibus Directive may represent progress, but the gap between technological capability and legal clarity remains wide.

Consumer Protection and the Design of Choice

The second panel, chaired by Weiwei Yi, addressed AI in consumer protection contexts. Beny Saputra of Central European University examined AI robo-advisers on digital investment platforms and the risk of consumer manipulation through dark patterns. The legal fragmentation he identified points to a recurring theme: AI systems operate across regulatory boundaries that were drawn for different purposes.

Amanda Horzyk of the University of Edinburgh explored how law affects practice and interacts with AI system design. Her presentation covered frameworks from the UK, EU, and international bodies, but focused on the particular difficulty of incorporating transparency and explainability principles into system design. The challenge is not merely technical; it involves translation between legal concepts and engineering practices that operate with different assumptions about what counts as an explanation.

The Transatlantic Divergence

The conference took place against a backdrop of significant regulatory divergence. Recent analysis of U.S. federal AI policy shows the White House pushing for a uniform national standard while states continue legislating at scale. Executive Order 14365 directs the Attorney General to establish an AI Litigation Task Force to challenge state laws that may be preempted by federal law. The stated goal is avoiding "50 discordant State" standards.

The European approach, by contrast, has moved toward comprehensive frameworks like the AI Act. A comprehensive review in the Journal of Economy and Technology emphasizes that effective AI governance must balance innovation and economic competitiveness with societal safeguards, ethical standards, and legal accountability. The review integrates perspectives from economics, computer science, and law, noting that the challenge of defining AI itself complicates regulatory scope.

What the Glasgow conference revealed is a generation of researchers thinking across these divergent approaches, asking not which jurisdiction has the right answer but what questions each approach foregrounds or obscures.

What Gets Naturalized

The conference's methodology workshop, led by Kristofer Erickson and Amy Thomas, introduced empirical methods for AI regulation research. The emphasis on evidence-based approaches reflects a commitment to grounding regulatory debates in observable patterns rather than abstract principles alone.

This matters because regulatory frameworks do not merely respond to technologies; they shape what becomes normal. The early career researchers at Glasgow are asking what gets naturalized when particular rules are adopted. Which harms become visible? Which remain invisible? Whose interests get encoded as defaults?

These are not soft questions. They determine whether AI governance serves as a genuine constraint on power or merely a legitimation of existing arrangements. The next generation of regulatory scholars appears to understand this. The question is whether policymakers are listening.

Frequently Asked Questions

Q: What was the AI Regulation Early Career Researchers Conference 2026?

A: A conference hosted by CREATe at the University of Glasgow on 31 March and 1 April 2026, bringing together PhD students, postdoctoral researchers, and early career lecturers to present research on AI regulation across labor law, data protection, consumer protection, and intellectual property.

Q: What is technological neutrality in AI regulation?

A: The principle that regulation should target outcomes rather than specific technologies. Keynote speaker Silvia de Conca questioned this approach, arguing it may shift normative power away from regulators toward those with technological expertise.

Q: How does algorithmic management affect worker rights?

A: Algorithmic management replaces human supervisors with software-based monitoring and control, raising questions about compatibility with labor codes, data protection regimes, and worker dignity. Researchers at the conference examined these issues across multiple jurisdictions.

Q: What is the AI licensing economy?

A: Research by Martin Kretschmer and Amy Thomas mapping commercial agreements between content providers and AI developers. The project analyzes licensing terms across sectors to understand how access to training data shapes who can build AI systems.

Q: How does U.S. AI regulation differ from the EU approach?

A: The U.S. is pursuing a uniform national standard through executive orders and litigation against state laws, while the EU has adopted comprehensive frameworks like the AI Act. Executive Order 14365 established an AI Litigation Task Force to challenge state regulations.

Q: What are the challenges of regulating neurodata in workplaces?

A: AI-enabled workplace neurotechnologies create uncertainty about legal classification under GDPR categories like health data. Researchers identified an absence of explicit national frameworks authorizing neurodata use for occupational safety purposes.
