A curriculum framework filed in Bologna contains a requirement that would strike most education policymakers as unusual: every module must demonstrate how AI literacy integrates with ethical reasoning, safety protocols, and privacy safeguards — before any technical content is introduced. The framework is called Multiplanetary Education®. Its author is Luciana Villanti.
That sequence matters. In most AI education programmes, ethics arrives as an afterthought — a compliance layer bolted onto technical instruction after the models are already running. Villanti inverts the architecture. The human layer comes first. The machine layer serves it. And now she brings that inversion to the Human × AI Conference in Vienna.
The Infrastructure Question
Villanti's career resists easy categorisation. Three decades spanning education, healthcare, and energy systems. Three continents. An MBA from Bologna Business School that she describes not as a pivot but as a bridge — connecting her technical background to global infrastructure models and educational ecosystems.
The through-line is a question that most AI deployment strategies fail to ask: what happens when the connectivity drops out?
Her answer is the AI Cells concept — a distributed, offline AI infrastructure designed for environments where bandwidth is unreliable, institutional support is thin, and the communities that need intelligent systems most are precisely the ones least equipped to access them through conventional cloud architectures. The concept operates at the intersection of two constraints that European AI policy tends to treat separately: infrastructure access and human agency.
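The source gives no implementation details for AI Cells, but the pattern it describes — local processing that never depends on the network, with opportunistic sync when a link appears — is a recognisable offline-first architecture. The following is a minimal illustrative sketch of that pattern only; every name in it (`AICell`, `local_model`, `sync`) is a hypothetical assumption, not Villanti's design.

```python
from collections import deque

class AICell:
    """Hypothetical offline-first node: answers requests locally and
    queues results until connectivity allows a sync upstream."""

    def __init__(self, local_model):
        self.local_model = local_model  # callable that runs entirely on-device
        self.outbox = deque()           # results awaiting upstream aggregation
        self.online = False             # link state, toggled by the environment

    def handle(self, request):
        # Inference never touches the network: the local model answers.
        result = self.local_model(request)
        self.outbox.append(result)      # retained for later sync
        return result

    def sync(self, upstream):
        # Opportunistic: flush queued results only when a link exists.
        if not self.online:
            return 0
        sent = 0
        while self.outbox:
            upstream.append(self.outbox.popleft())
            sent += 1
        return sent

# A cell keeps serving while offline, then flushes once the link returns.
cell = AICell(local_model=lambda text: text.upper())
cell.handle("lesson plan")   # served with no connectivity
upstream = []
cell.sync(upstream)          # no link yet: nothing is sent
cell.online = True
cell.sync(upstream)          # link restored: queued results flush
```

The design choice the sketch tries to capture is the one the paragraph names: the system degrades to "sync later," never to "stop working," which is what makes it viable where bandwidth is unreliable.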
The Education Layer
Bologna International Homeschool, which Villanti founded, is the institutional vehicle for these ideas. But calling it a school understates the proposition. It is a deployment framework for human-centered AI education — one designed to function in contexts where the standard assumptions of Western ed-tech (reliable internet, institutional procurement budgets, centralised learning management systems) do not hold.
Multiplanetary Education®, the STEAM curriculum she developed, integrates AI literacy with systems thinking, ethical reasoning, and privacy safeguards. The word "multiplanetary" is not marketing. It is a design constraint: build educational systems robust enough to function beyond the infrastructure assumptions of a single planet. That engineering discipline — designing for the hardest deployment environment first — produces frameworks that are, by construction, more resilient in the environments we actually have.
By mapping how human agency, institutional readiness, and access intersect, Villanti analyses how AI interprets our world and identifies where solutions are needed most. The method is diagnostic before it is prescriptive.
Why Vienna
The Human × AI Conference is built around a premise that Villanti's work makes concrete: the deployment pathway matters as much as the model. Europe's AI sovereignty conversation tends to focus on compute capacity, regulatory frameworks, and industrial policy. Those are necessary conditions. They are not sufficient ones.
Villanti occupies the space between infrastructure and adoption — the question of how intelligent systems reach the communities, classrooms, and healthcare facilities where they generate the most value. Her experience across three continents gives her a comparative lens that most European AI practitioners lack: she has seen what deployment looks like when you cannot assume that the grid stays on, the school has a procurement budget, or the nearest data centre is within a thousand kilometres.
That perspective is precisely what the conference's discussion of AI ecosystems needs. Sovereign infrastructure means little if the deployment pathways exclude the populations that stand to benefit most.
Implications
- For education policymakers: Villanti's Multiplanetary Education® framework demonstrates that AI literacy and ethical reasoning can be integrated from the ground up — not retrofitted as a compliance layer after deployment.
- For infrastructure strategists: The AI Cells concept challenges the assumption that AI deployment requires persistent cloud connectivity — opening a design space for distributed, offline-capable systems in underserved regions.
- For conference attendees: Expect a practitioner's perspective on what human-centered AI looks like when stripped of the comfortable assumptions of Western tech infrastructure — and what Europe can learn from that constraint.
Luciana Villanti joins Human × AI on May 19, 2026, in Vienna.