When AI Ethics Meets the Translation Problem
In Brief: AIOLIA is a €3 million EU Horizon project running from February 2025 to January 2028 that aims to bridge the gap between high-level AI ethics principles and practical engineering implementation. Led by CEA (France) with 20 partners across Europe, Asia, and North America, the project creates actionable guidelines, training materials, and global networks to operationalize the EU AI Act and international AI regulations. The initiative represents a significant attempt to move AI governance from abstract values to concrete design choices.
The question of how to translate ethical principles into engineering practice will be central to discussions at Human x AI Europe in Vienna on May 19, where practitioners and policymakers will work through exactly these implementation challenges.
The Gap Everyone Acknowledges, Few Know How to Close
A curious pattern emerges in AI governance debates. Nearly everyone agrees that AI systems should be "fair," "transparent," and "human-centric." The EU AI Act says so. The OECD principles say so. UNESCO's recommendations say so. And yet, when an engineer sits down to build a medical imaging system or a hiring algorithm, these words offer remarkably little guidance.
What does "fairness" mean when a diagnostic AI performs differently across demographic groups? Which trade-offs are acceptable? Who decides?
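To see why the question is hard, it helps to see what "fairness" looks like once it is forced into code. A minimal sketch, using illustrative data and one common formalization (the gap in true-positive rates across groups, in the spirit of equalized odds); nothing here is an AIOLIA artifact, and other formalizations would give different answers:

```python
# One way "fairness" becomes concrete: measure how much a diagnostic
# model's true-positive rate (sensitivity) differs between demographic
# groups. The data below is purely illustrative.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives (label 1) the model correctly flags."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def tpr_gap_by_group(y_true, y_pred, groups):
    """Per-group TPRs, plus the largest gap between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Illustrative labels: 1 = disease present / model flagged
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = tpr_gap_by_group(y_true, y_pred, groups)
print(rates)  # {'A': 0.666..., 'B': 0.5} — group A's cases are caught more often
print(gap)    # 0.1666... — the gap a guideline might ask engineers to bound
```

The sketch makes the ethical question concrete without answering it: the code can report that the gap is 0.17, but not whether 0.17 is acceptable, or whether sensitivity is even the right quantity to equalize. That residue is precisely what "who decides?" points at.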
This is not a disagreement about facts, nor simply a disagreement about values. It is something more fundamental: a translation problem. The vocabulary of ethics and the vocabulary of engineering operate in different registers, and the bridge between them remains under construction.
AIOLIA, a new EU Horizon project launched in February 2025, represents one of the more ambitious attempts to build that bridge. The project's premise deserves examination not because it will definitively solve the translation problem, but because its approach reveals something important about where AI governance currently stands.
What AIOLIA Actually Proposes
According to the Karlsruhe Institute of Technology's project description, AIOLIA operates on three tiers: guidance, training, and networking.
The guidance component takes a bottom-up approach. Rather than starting with abstract principles and working downward, the project begins with real-world use cases in human cognition and behavior, then works backward to identify what ethical considerations actually matter in those specific contexts. The resulting guidelines are co-created by academic, policy, and industry partners representing diverse professional and geographic contexts.
The training component uses the ADDIE methodology (Analysis, Design, Development, Implementation, Evaluation) to create modular materials hosted on the Embassy of Good Science platform. The formats range from traditional lectures and videos to podcasts, TikToks, and a chatbot designed to teach AI ethics. This diversity of formats reflects an understanding that different stakeholders learn differently.
The networking component connects seven research ethics and integrity networks with three computer science networks, creating channels for disseminating guidelines to ethics experts, early-stage researchers, and policymakers.
ERCIM's project overview notes that the total budget is €2,999,895, with the project running 36 months from February 2025 to January 2028.
The Consortium Structure Tells a Story
The AIOLIA consortium includes 20 partners, and the composition is revealing. CEA (France's atomic energy commission) leads the project. CEPS (Centre for European Policy Studies) provides policy expertise. The Karlsruhe Institute of Technology brings technology assessment capabilities. Sheffield Hallam University contributes through its CENTRIC security research center.
The international dimension is notable. McGill University represents Canada. Osaka University represents Japan. The Chinese Academy of Science and Technology for Development (CASTED) represents China. The Science and Technology Policy Institute (STEPI) represents South Korea. This geographic spread suggests an attempt to create guidelines that can function across different regulatory and cultural contexts.
The inclusion of EURACTIV, a media organization, signals an awareness that guidelines only matter if people know about them. The presence of Oxipit, a Lithuanian medical AI company, and NIT Institute from Serbia suggests an effort to ground the work in actual implementation challenges.
The Harder Question: Will Translation Work?
The project's ambition is clear. The question is whether the approach can succeed where previous efforts have struggled.
Consider the challenge more precisely. When the EU AI Act requires that high-risk AI systems be "transparent," what does this mean for a deep learning model with millions of parameters? The model's decision-making process may be mathematically describable but practically incomprehensible to human reviewers. Does transparency require explainability? If so, what level of explanation satisfies the requirement?
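One candidate answer engineers reach for is post-hoc explanation: treat the model as a black box and measure how much each input actually drives its outputs. A minimal sketch of ablation-style feature importance, with a toy stand-in model and invented data (none of this comes from the project materials):

```python
# Post-hoc "explanation" for an opaque model: replace one input feature
# with its average value and measure how much accuracy drops. A large
# drop means the model leans heavily on that feature. Model and data
# are illustrative stand-ins.

def black_box(row):
    # Stand-in for an opaque model: reviewers see only inputs and outputs.
    return 1 if 2.0 * row[0] > 2.0 else 0

def accuracy(rows, labels):
    return sum(black_box(r) == y for r, y in zip(rows, labels)) / len(labels)

def ablation_importance(rows, labels, feature):
    """Accuracy lost when `feature` is flattened to its column mean."""
    base = accuracy(rows, labels)
    mean_v = sum(r[feature] for r in rows) / len(rows)
    ablated = [list(r) for r in rows]
    for r in ablated:
        r[feature] = mean_v
    return base - accuracy(ablated, labels)

rows = [[1.2, 0.1], [0.3, 5.0], [1.5, -2.0], [0.1, 0.2], [0.9, 3.0], [2.0, 0.0]]
labels = [black_box(r) for r in rows]  # labels agree with the model exactly

imp0 = ablation_importance(rows, labels, 0)
imp1 = ablation_importance(rows, labels, 1)
print(imp0)  # 0.5 — the model depends heavily on feature 0
print(imp1)  # 0.0 — feature 1 turns out to be irrelevant
```

Even this tiny example exposes the regulatory ambiguity: the importance scores are an explanation of a sort, but whether a table of numbers satisfies a legal transparency requirement is exactly the question the statute leaves open.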
AIOLIA's bottom-up approach has theoretical advantages here. By starting with specific use cases rather than abstract principles, the project may identify which aspects of transparency actually matter in particular contexts. A medical imaging AI might require different transparency mechanisms than a hiring algorithm, even if both fall under the same regulatory category.
The risk, of course, is that context-specific guidelines become so specific that they fail to generalize. If every use case requires its own ethical framework, the project produces a collection of case studies rather than actionable guidance.
The Global Dimension Complicates Everything
AIOLIA's international partnerships introduce both opportunities and tensions. The project explicitly aims to create guidelines that can inform "key international AI dialogues and processes," utilizing UNESCO's platform to reach Africa and South Asia.
This ambition raises a question the project materials do not fully address: to what extent can AI ethics be operationalized across fundamentally different regulatory philosophies?
The EU AI Act takes a risk-based approach, categorizing AI systems by their potential for harm. China's AI regulations emphasize state oversight and content control. The United States has largely relied on sector-specific guidance rather than comprehensive legislation. Japan and South Korea occupy yet other positions.
Creating guidelines that work across these contexts requires either finding genuine common ground or accepting that "operationalized ethics" will mean different things in different jurisdictions. The project's success may depend on which of these paths proves viable.
What Success Would Look Like
If AIOLIA achieves its goals, what changes?
The most concrete outcome would be training materials that actually help engineers make better decisions. Not better in some abstract sense, but better in the sense of producing AI systems that comply with regulations, avoid predictable harms, and maintain public trust.
A second outcome would be networks that persist beyond the project's January 2028 end date. The connections between ethics researchers, computer scientists, and policymakers could become infrastructure for ongoing dialogue rather than a three-year collaboration.
A third outcome, perhaps the most ambitious, would be guidelines that influence international AI governance discussions. If AIOLIA's work informs UNESCO processes or bilateral AI dialogues, the project's impact extends well beyond its immediate participants.
The Debate Worth Having
The deeper question AIOLIA raises is whether AI ethics can be operationalized at all, or whether the translation problem is inherent to the enterprise.
One position holds that ethical principles are necessarily abstract because they must apply across contexts. Any attempt to make them concrete risks losing their essential character. On this view, the gap between ethics and engineering is a feature, not a bug.
The opposing position holds that ethics without implementation is merely aspiration. If principles cannot guide action, they serve primarily rhetorical functions. On this view, the translation problem must be solved, even if imperfectly.
AIOLIA implicitly takes the second position. The project's existence is a bet that the gap can be closed, that guidelines can be both principled and practical.
Whether that bet pays off will depend on factors the project cannot fully control: the willingness of engineers to adopt new practices, the capacity of regulators to enforce requirements, and the evolution of AI technology itself. But the attempt is worth watching, because the alternative is a governance framework that remains permanently disconnected from the systems it purports to govern.
Frequently Asked Questions
Q: What is AIOLIA and who funds it?
A: AIOLIA (Operationalizing AI Ethics for Learning and Practice: A Global Approach) is a €2,999,895 EU Horizon Coordination and Support Action funded under Grant Agreement 101187937. It runs from February 2025 to January 2028.
Q: How many partners are involved in AIOLIA?
A: The consortium includes 20 partners across Europe, Asia, and North America, led by CEA (France) and including institutions from Germany, the UK, the Netherlands, Belgium, Greece, Sweden, Serbia, Lithuania, Italy, Spain, Canada, China, Japan, and South Korea.
Q: What problem does AIOLIA aim to solve?
A: AIOLIA addresses the gap between high-level AI ethics principles (as expressed in the EU AI Act and international frameworks) and their practical application in engineering and system design. The project creates actionable, context-specific guidelines.
Q: Where will AIOLIA's training materials be hosted?
A: Training materials will be hosted on the Embassy of Good Science platform, including lectures, videos, mock reviews, podcasts, TikToks, and a chatbot designed to teach AI ethics.
Q: What is AIOLIA's approach to creating ethics guidelines?
A: AIOLIA uses a bottom-up approach, starting with real-world use cases in human cognition and behavior rather than abstract principles, then translating findings into contextual guidelines co-created by academic, policy, and industry partners.
Q: When does AIOLIA conclude and what are its expected outputs?
A: The project concludes in January 2028. Expected outputs include operationalized ethics guidelines, modular training materials, and established networks connecting ethics researchers, computer scientists, and policymakers across multiple continents.