The Room Where It Happens
There's a particular quality to academic workshops that address urgent questions. The Humanities Institute at University College Dublin has that quality today – a seminar room on the second floor, afternoon light, the kind of space where ideas get tested before they become policy.
The workshop "Minorities in AI: Who shapes AI ethics?" brings together an unusual constellation: Silvia Ivani from UCD, Anna Croon from Umeå University, Leda Berio from UCD, and Asmelash Teka Hadgu from the DAIR Institute (Distributed AI Research Institute), the organization co-founded by Timnit Gebru after her departure from Google. The event is funded by the Centre for Ethics in Public Life and organized by the Minorities and Philosophy UCD Chapter.
What makes this gathering worth attention isn't the prestige of the speakers – though that's considerable. It's the premise. As the organizers, Delia Lodi Rizzini and Agnese Casellato, frame it: the mainstream discourse on AI treats problems of privacy, fairness, and bias as technical challenges to be solved. A growing number of voices in ethics see them differently – as symptoms of tacit philosophical commitments that remain unexamined.
The Architecture of Assumptions
Consider what happens when an AI system is designed. Every choice – what data to train on, what outcomes to optimize for, how to define "fairness" mathematically – encodes assumptions about what matters and who matters. These assumptions don't announce themselves. They become infrastructure.
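To make that concrete, here is a minimal sketch in Python – with invented numbers, not drawn from the workshop or any real system – showing how two common mathematical definitions of fairness can disagree about the same set of decisions. Demographic parity asks whether groups receive positive decisions at equal rates; equal opportunity asks whether the qualified members of each group do.

```python
# A toy illustration, not code from the workshop or any real system:
# the same predictions can satisfy one mathematical definition of
# "fairness" while violating another. All numbers below are invented.

def rate(values):
    """Fraction of 1s in a list of 0/1 values."""
    return sum(values) / len(values) if values else 0.0

# Hypothetical outcomes for two groups, A and B (1 = approve).
group  = ["A"] * 10 + ["B"] * 10
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0] + [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0] + [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

def selection_rate(g):
    """Share of group g that received a positive decision."""
    return rate([p for p, gr in zip(y_pred, group) if gr == g])

def true_positive_rate(g):
    """Share of *qualified* members of group g that were approved."""
    return rate([p for p, t, gr in zip(y_pred, y_true, group)
                 if gr == g and t == 1])

# Demographic parity: equal selection rates across groups.
print(selection_rate("A"), selection_rate("B"))          # 0.4 0.4   -> satisfied

# Equal opportunity: equal approval rates among the qualified.
print(true_positive_rate("A"), true_positive_rate("B"))  # 0.75 1.0  -> violated
```

The same predictions pass one test and fail the other, so deciding which test counts as "fairness" is a value judgment, not a technical one – exactly the kind of tacit commitment the workshop wants examined.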
The workshop's intellectual framework draws on what might be called non-reductionist principles: ethics of care, enactivism (the theory that cognition arises through dynamic interaction between an organism and its environment), user privacy, and phenomenology (the philosophical study of structures of experience and consciousness). These aren't decorative additions to technical work. They represent fundamentally different starting points for thinking about what AI should do and for whom.
Research by Abeba Birhane and colleagues, published in the ACM Digital Library, examined papers from the two premier AI ethics conferences – FAccT (Fairness, Accountability, and Transparency) and AIES (AI, Ethics, and Society) – and found something troubling: although the goals of most papers were commendable, their consideration of negative impacts on traditionally marginalized groups remained shallow. The field, they argued, would benefit from approaches sensitive to structural and historical power asymmetries.
This is the gap the Dublin workshop addresses. Not by adding marginalized voices as an afterthought, but by asking whether the entire framework needs reconstruction.
What Gets Naturalized
Pay attention to what becomes normal. That's perhaps the most important diagnostic habit for anyone watching AI development. The workshop's inclusion of student reactions – built into the schedule as formal sessions – signals something about method. The next generation of ethicists isn't being asked to absorb received wisdom. They're being asked to respond, to challenge, to articulate what they notice.
The speakers represent different angles on the same problem. Silvia Ivani has worked on structural challenges of AI in healthcare – a domain where algorithmic decisions can determine who receives treatment and who waits. Anna Croon brings perspectives from Umeå University's work on feminist AI. Leda Berio addresses questions of social cognition and trust. Asmelash Teka Hadgu, joining from the DAIR Institute, represents an organization explicitly founded to challenge the concentration of AI research in a handful of powerful institutions.
What connects these perspectives isn't a shared political position. It's a shared methodological commitment: that AI ethics cannot be done properly without attending to who is doing it, from what position, with what assumptions, and with what consequences for whom.
The European Context
This workshop doesn't exist in isolation. A 2024 meeting at Dublin City University on the ethics of AI, held under the auspices of the European Future Talks, brought together over sixty participants including church representatives, government officials, academic experts, and civil society actors. One of their conclusions: "AI should not reinforce inequality. The idea that the 'maximisation of shareholder value' is justified by collateral social benefits does not seem adequate as a guiding principle."
The European Union's AI Act provides regulatory scaffolding. But regulation is not the same as ethics. The Act establishes what's prohibited and what requires oversight. It doesn't – cannot – determine what values should guide development in the first place. That work happens in rooms like the one at UCD today.
Philosophy departments across universities are launching dedicated AI ethics courses, recognizing that technical education alone doesn't prepare developers to navigate the moral terrain they're creating. As one student in a University of Colorado Denver course put it: "I think every comp-sci person should take a class like this. They need to take that hard look and ask, do I really want to do that?"
The Question That Lingers
The workshop runs from 14:20 to 18:00 GMT, ending with a reception. Wine and snacks. The social architecture of academic life, where conversations continue past the formal program.
But the question posed in the title – "Who shapes AI ethics?" – doesn't resolve with a reception. It's a question about power, about whose experiences count as data, whose concerns count as legitimate, whose frameworks become the default.
The answer, right now, is uncomfortable. AI ethics is shaped predominantly by well-resourced institutions in wealthy countries, by researchers with access to publication venues and policy conversations, by companies with the resources to fund ethics teams (and the discretion to ignore them). The margins – the communities most affected by algorithmic systems, the scholars working outside dominant paradigms, the voices that don't fit neatly into existing frameworks – remain margins.
What the Dublin workshop represents is a refusal to accept that arrangement as natural. The organizers describe their goal as providing "an interdisciplinary platform bridging academia and the industry, current ethicists and future ones, voices in the minorities and changes that reach the mainstream."
That's not a solution. It's a practice. And practices, repeated and refined, can become cultures. Cultures can become norms. Norms can become – eventually – the infrastructure that shapes what AI does and for whom.
The question is whether enough people are paying attention to make that happen.
Frequently Asked Questions
Q: What is the "Minorities in AI" workshop at UCD?
A: It's a workshop held on April 10, 2026, at University College Dublin's Humanities Institute, bringing together philosophers, ethicists, and AI practitioners to examine who shapes AI ethics discourse. The event features speakers from the DAIR Institute, Umeå University, and UCD, along with interactive student sessions.
Q: What is the DAIR Institute?
A: The Distributed AI Research Institute (DAIR) is an independent AI research organization co-founded by Timnit Gebru. It focuses on AI research that centers the experiences and concerns of communities most affected by algorithmic systems, rather than concentrating research in large corporate or academic institutions.
Q: How does this workshop relate to the EU AI Act?
A: The EU AI Act provides regulatory requirements for AI systems, but the workshop addresses a different layer: the underlying philosophical assumptions and values that should guide AI development. Regulation establishes legal boundaries; ethics work determines what values inform design choices within those boundaries.
Q: What are "non-reductionist principles" in AI ethics?
A: These include ethics of care, enactivism, phenomenology, and user privacy frameworks – approaches that resist reducing ethical questions to technical optimization problems. They emphasize relational, experiential, and contextual dimensions of ethics rather than purely computational definitions of fairness or bias.
Q: What did research find about marginalized groups in AI ethics papers?
A: Research by Abeba Birhane and colleagues examining FAccT and AIES conference papers found that while most papers had commendable goals, their consideration of negative impacts on traditionally marginalized groups remained shallow, suggesting the field needs more attention to structural and historical power asymmetries.
Q: Who organized the UCD workshop?
A: The workshop was organized by Delia Lodi Rizzini (head of the minorities and AI committee) and Agnese Casellato (co-chair of MAP) from the Minorities and Philosophy UCD Chapter, with funding from the Centre for Ethics in Public Life at University College Dublin.