Public vision-language models were unreliable for interpreting games. So we built our own — a model that actually plays the game to find the bugs.
— Sabtain Ahmad

Sabtain Ahmad is CTO and Co-Founder of ManaMind, a London- and Vienna-based AI startup building autonomous agents that test video games. The company's proprietary vision-language models play games to detect bugs, aiming to cut studio testing costs by up to 80%.
ManaMind secured USD 1.1 million in pre-seed funding to launch its AI game-testing platform, addressing one of the most labour-intensive and costly stages of game production: quality assurance.
Ahmad holds a PhD in Artificial Intelligence from TU Wien, where he researched scalable and privacy-preserving distributed machine learning. His doctoral work focused on optimising edge intelligence for smart environmental monitoring — a framework for energy and communication efficiency that earned him the Critical Infrastructure Award from the Austrian Academy of Sciences.
Before founding ManaMind, he spent over three years building AI systems for industrial automation and completed a research collaboration on distributed machine learning with Umeå University in Sweden.
Ahmad's path from TU Wien to a venture-backed AI startup reflects a pattern increasingly visible in Europe's AI ecosystem: deep technical research finding commercial application in domains the US startup scene has overlooked. Game QA is a multi-billion-dollar problem that still relies heavily on manual human testing — precisely the kind of structured, repetitive, high-stakes environment where autonomous agents can deliver transformative value.