A boardroom in Paris. A regulated enterprise — financial services, healthcare, critical infrastructure — wants to deploy AI at scale. The use case is clear. The business case is compelling. Then the compliance team arrives. Risk assessment frameworks. Impact evaluations. Transparency requirements. The EU AI Act classification. The room divides into two camps: the builders who want to ship, and the lawyers who want to stop. The project stalls.
Cristian Santibanez has seen this scene play out dozens of times. He has also seen what happens when the framing changes — when governance is not bolted on at the end but wired into the system from day one. The result, consistently, is not slower deployment but faster. Not lower performance but higher. Not less ambitious AI but more. This is the thesis behind Ethiqs.ai, the AI strategy, training, and governance firm he founded to help regulated organisations deploy AI that is compliant by design and high-performance by consequence.
The Performance Paradox
The intuition that ethics constrains performance is rooted in a specific mental model: that governance is a filter applied after the system is built, catching problems at the cost of speed. Santibanez inverts the architecture. In his framework, governance is not a filter but a design constraint — and design constraints, as any engineer knows, produce better systems. A bridge designed for a specified load tolerance is not weaker than one over-engineered for every conceivable load. It is more efficient, more reliable, and more buildable.
The same principle applies to AI in regulated environments. An agentic AI system designed from the outset to meet EU AI Act requirements — with documented decision pathways, auditable outputs, and human oversight mechanisms — does not just satisfy regulators. It satisfies users. It satisfies procurement teams. It satisfies the enterprise customer who needs to explain to their board why they are trusting a machine with consequential decisions. Compliance becomes a feature, not a tax.
Santibanez’s track record makes the argument concrete. Over fifteen years at the intersection of technology, strategy, and impact, he has moved between worlds that rarely speak to each other: co-founding Autonomy, a mobility innovation platform; leading digital and IoT products at HyperloopTT, where he built lab teams and advised the CEO on AI strategy; lecturing at Sciences Po, where policy meets technology in the curriculum. French-Chilean, fluent in six languages, based in Paris — his perspective is structurally multilateral, which turns out to be exactly what AI governance requires.
The Training Gap
There is a specific failure mode that Santibanez encounters repeatedly in enterprise AI deployments: the leadership team understands the technology but not the regulatory landscape, or understands the regulations but not what agentic AI systems actually do. The gap between these two forms of literacy is where most deployment failures originate — not in the models themselves, but in the organisational decisions about how to use them.
This is why training is central to Ethiqs.ai’s model. Not training in the machine learning sense, but executive education: bringing over a thousand leaders across industries to the point where they can make informed decisions about AI deployment in regulated contexts. The curriculum is not abstract ethics. It is operational governance: how to classify AI systems under the EU AI Act, how to design human oversight that is genuine rather than theatrical, and how to build audit trails that satisfy regulators without crippling engineering velocity.
The distinction matters. Most AI ethics programmes teach principles. Santibanez teaches architecture — how to structure AI systems so that compliance emerges from the design rather than being imposed on it afterward. The difference is the same as between a building that passes a fire inspection because someone added extinguishers to the corridors, and a building that passes because the architect designed the exits, materials, and ventilation correctly from the start.
Why Vienna
The Human × AI Conference sits at a particular intersection that makes Santibanez’s work directly relevant: the point where European AI ambition meets European regulatory reality. The continent has produced the world’s most comprehensive AI regulatory framework. It has also produced a widespread anxiety that this framework will slow European AI development relative to less regulated competitors in the United States and China.
Santibanez’s argument dissolves this anxiety by reframing the question. The issue is not whether regulation constrains AI deployment. It is whether the constraint is structural or incidental. Incidental constraints — compliance bolted on after the fact — genuinely slow you down. Structural constraints — governance embedded in the architecture — make the system better. The founders and enterprise leaders who understand this distinction are not handicapped by European regulation. They are weaponising it.
His session will be operational, not philosophical. Drawing on twenty-plus bespoke AI deployments across regulated industries, he will show what compliant-by-design AI architecture looks like in practice — how to structure agentic systems that meet EU AI Act requirements without sacrificing the autonomy and performance that make them valuable in the first place.
Implications
- For enterprise leaders: AI governance is not a cost centre — it is a design decision. Systems built compliant-by-design ship faster, scale better, and survive regulatory scrutiny that retroactively governed systems cannot. The choice is not between speed and ethics but between two architectures, one of which performs better on every dimension.
- For AI practitioners: The EU AI Act creates a new class of technical requirement — auditability, transparency, human oversight — that must be addressed at the system architecture level, not the policy layer. Engineers who understand how to embed these requirements natively will define the next generation of enterprise AI deployment.
- For conference attendees: Expect an operational framework, not a philosophy lecture. Santibanez brings fifteen years of cross-sector experience and over a thousand executive training sessions to a single question: how do you build AI that is simultaneously high-performance, compliant, and deployable in the most regulated markets on earth?
Cristian Santibanez joins Human × AI on May 19, 2026, in Vienna.