Somewhere in a Semperit factory in Austria, a humanoid robot is learning to handle industrial polymer products. It does not learn the way a chatbot learns — by processing text scraped from the internet. It learns by watching, reaching, gripping, failing, and watching again. Each training cycle requires data that does not exist in any public dataset: the precise torque needed to rotate a rubber seal without deforming it, the visual pattern of a correctly stacked pallet under warehouse lighting, the recovery sequence when a conveyor belt changes speed unexpectedly.
Christian Tauber builds the systems that generate this data. His company, NEOALP, occupies a position in the AI landscape that most industry observers have not yet mapped: the infrastructure layer between general-purpose AI models and the physical world where those models must actually perform.
The Data Desert
The success of large language models created a dangerous illusion: that AI progress is primarily a function of model architecture and compute power. In the domain of physical AI — robots that must see, interpret, plan, and act in unstructured environments — the binding constraint is neither architecture nor compute. It is data.
LLMs benefited from a windfall that will not repeat. The internet contained trillions of words of human-generated text, freely available for training. Robotics has no equivalent corpus. Every grasp, every navigation decision, every object manipulation must be recorded in specific industrial contexts, under specific lighting conditions, with specific materials. A model trained on warehouse logistics in Munich is not automatically useful in a polymer factory in Vienna.
NEOALP's solution is a pipeline that combines both approaches: collect real training data on the factory floor, then multiply it using AI-generated synthetic data — simulated environments, procedurally varied scenarios, digitally augmented edge cases that would take years to encounter organically. The models are then trained, validated against real-world performance metrics, and hardened until they meet industrial safety standards. This is not fine-tuning a foundation model. This is building a new class of AI system from the ground up.
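The multiplication step described above can be pictured as a simple augmentation loop: one real recording becomes many procedurally varied copies. The sketch below is purely illustrative, with invented field names (torque_nm, light_lux, belt_speed_mps) standing in for whatever NEOALP actually records; it is not their pipeline.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GraspSample:
    """One recorded manipulation example (all fields are illustrative)."""
    torque_nm: float       # wrist torque applied to the rubber seal
    light_lux: float       # scene illumination at capture time
    belt_speed_mps: float  # conveyor speed at capture time

def augment(sample: GraspSample, n: int, rng: random.Random) -> list[GraspSample]:
    """Multiply one real sample into n procedurally varied synthetic copies."""
    out = []
    for _ in range(n):
        out.append(replace(
            sample,
            torque_nm=sample.torque_nm * rng.uniform(0.9, 1.1),      # small physical jitter
            light_lux=sample.light_lux * rng.uniform(0.5, 2.0),      # wide lighting variation
            belt_speed_mps=sample.belt_speed_mps + rng.uniform(-0.05, 0.05),
        ))
    return out

rng = random.Random(0)
real = GraspSample(torque_nm=1.8, light_lux=450.0, belt_speed_mps=0.3)
synthetic = augment(real, n=100, rng=rng)
print(len(synthetic))  # 100 varied copies generated from a single real recording
```

The design point is the asymmetry of the ranges: physically constrained parameters (torque) are perturbed narrowly, while nuisance parameters (lighting) are varied aggressively, which is how a pipeline covers edge cases that would take years to encounter organically.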
Vision-Language-Action
Tauber works with a model category that most of the AI industry has not yet internalised: Vision-Language-Action Models (VLAs). Where LLMs process text and vision models process images, VLAs must process visual input, understand linguistic instructions, and generate physical actions — all in real time, all in environments where a miscalculation can damage equipment, halt production, or injure a human colleague.
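The interface contract of a VLA, one fused pass from camera frame plus language instruction to a motor command on every control tick, can be sketched with toy stand-ins. Everything here is hypothetical: vla_policy is a hand-written placeholder where a real system would run a learned network, and the types are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    image: list[list[int]]  # stand-in for a camera frame
    instruction: str        # natural-language task description

@dataclass
class Action:
    joint_deltas: list[float]  # per-joint position changes
    gripper_closed: bool

def vla_policy(obs: Observation) -> Action:
    """Placeholder for a VLA model: maps (image, instruction) -> action.

    A real model fuses both modalities in one learned forward pass;
    this stub only illustrates the input/output contract."""
    close = "grip" in obs.instruction.lower()
    return Action(joint_deltas=[0.0] * 6, gripper_closed=close)

def control_loop(obs_stream, max_ticks: int = 3) -> list[Action]:
    """Closed-loop execution: observe, infer, act, then re-observe."""
    actions = []
    for tick, obs in enumerate(obs_stream):
        if tick >= max_ticks:
            break
        actions.append(vla_policy(obs))  # one inference per control tick
    return actions

frames = [Observation(image=[[0]], instruction="grip the seal")] * 3
acts = control_loop(frames)
print([a.gripper_closed for a in acts])  # [True, True, True]
```

The loop structure is the point: unlike a chatbot, which answers once, a VLA is re-queried every tick against fresh sensor data, which is why real-time latency and miscalculation cost dominate the engineering.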
The integration challenge is as much organisational as it is technical. NEOALP does not simply deliver a robot and leave. The company works across the full project lifecycle: from initial technical consulting through system architecture, integration with existing building-management, logistics, and automation platforms, and ongoing operational support. As the official integration partner of Innok Robotics in Austria, NEOALP connects autonomous mobile robots with digital twin technology and factory management systems, creating closed-loop environments where robots and human workers share operational authority.
Why Vienna
The Human × AI Conference exists at the intersection of technology and European strategic ambition. Tauber represents a category of founder that the conference's framing is specifically designed to highlight: the builder whose work is too applied for academic AI conferences and too technical for business summits, but whose infrastructure is load-bearing for the entire European AI ecosystem.
NEOALP is embedded in Vienna's AI Factory Austria (AI:AT) initiative, a collaborative environment connecting startups, research institutions, and industrial partners to accelerate applied AI deployment across the country. That positioning is itself a data point for the conference's broader argument: that Europe's AI advantage lies not in competing with US hyperscalers on model scale, but in building the integration infrastructure that turns general-purpose AI into domain-specific industrial capability.
Tauber brings to Vienna the practitioner's answer to a question the AI industry has been deferring: what does it actually take to make a robot useful? Not a demo robot, not a research robot, but a robot that shows up to a factory shift and does useful work alongside human colleagues, day after day, without supervision. His answer — rooted in data pipelines, synthetic training environments, and the patient engineering of trust between humans and machines — may be the most consequential contribution to the conference's enterprise transformation conversation.
Implications
- For industrial enterprises: The gap between AI demos and industrial deployment is a data problem, not a model problem. NEOALP's approach — combining real and synthetic data in domain-specific pipelines — offers a roadmap for companies evaluating humanoid or autonomous robotics for their operations.
- For AI founders: The next wave of AI value creation may not come from building better models but from building the data and integration infrastructure that connects existing models to physical-world applications. NEOALP's positioning as an integration layer suggests a category of startup that is structurally underrepresented in European VC portfolios.
- For conference attendees: Expect a ground-level view of what physical AI deployment actually requires — the data engineering, the safety validation, the organisational change management — from a founder who is deploying humanoid robots in European factories today, not in a research paper.
Christian Tauber joins Human × AI on May 19, 2026, in Vienna.