The Hype Says Anyone Can Build Now. The Implementation Reality Is Messier.
In Brief
- Barrier collapse: AI and low-code tools are compressing the cost and technical requirements to launch a startup, enabling domain experts without engineering teams to build products
- The mighty middle emerges: A new class of ventures between lifestyle businesses and unicorns becomes viable when founders can reach profitability without massive VC rounds
- Red Queen risk: Lower barriers mean faster imitation cycles and fiercer competition – more founders doesn't automatically mean more durable companies
- Europe's opportunity: Deep domain expertise in regulated industries and vertical markets may matter more than frontier AI labs
- Implementation gap: Over 60% of European SMEs report inadequate preparation for AI Act compliance, revealing the distance between building and operating
This shift in who can build – and what success looks like – is exactly the conversation happening at Human x AI Europe in Vienna on May 19, where founders, investors, and policymakers are working through what European AI entrepreneurship actually requires.
The Pitch Sounds Great. Now Show the Process.
A recent Sifted analysis from London Business School makes a compelling case: AI is fundamentally changing who can start companies in Europe. The argument runs like this – when the financial and technical costs of building collapse, participation expands. Domain experts in healthcare, logistics, and financial services can now move from insight to product without first assembling a traditional engineering organisation.
Stockholm-based Lovable is helping non-technical builders ship real software. Anthropic's Claude now lets users build and share AI-powered apps with radically lower overheads. The founder pipeline is widening.
This is real. It's also incomplete.
The question isn't whether more people can start building. The question is whether they can ship, operate, and sustain what they build. And that's where the implementation gap opens up.
The Mighty Middle Thesis – And Its Constraints
The Sifted piece introduces a useful concept: the mighty middle. This refers to a segment of firms that pursue meaningful, durable growth without necessarily aiming for the extreme outcomes VC portfolios are built around. If more founders can build strong products with less capital, more startups can plausibly reach eight-figure outcomes without needing to become global monopolies.
This aligns well with Europe's institutional reality: fragmented markets, multilingual customer bases, and strong industries where vertical depth matters more than winner-take-all dynamics.
Here's the implementation question that doesn't get asked enough: What does durable actually require?
Building a product is one thing. Operating it – handling data drift, managing compliance, responding to user behaviour changes, maintaining observability – is another. The tools that compress the build cycle don't automatically compress the operational learning curve.
A founder who ships a healthcare AI application in three weeks still needs to answer: What happens when the model's predictions start degrading? Who gets paged at 2 AM? How does rollback work? What's the audit trail for regulatory review?
The Red Queen Effect Is Real
The Sifted analysis acknowledges a cautionary note: the Red Queen effect. If AI and low-code make it easier for more people to start companies, everyone may end up running faster just to stay in the same place. Lower barriers can mean a surge of look-alike products, faster feature replication, and fiercer price competition.
This isn't theoretical. In software categories where distribution is already saturated and switching costs are low, AI doesn't automatically create more durable ventures – it compresses the cycle of entry, imitation, and churn.
The counter-argument is that new founders bring different lived experiences, sector knowledge, and local context. The outcome may be not just more competition in the same arenas, but a richer entrepreneurial landscape that addresses overlooked pains and underserved markets.
Both things can be true. The question is which dynamic dominates in any given market.
The Compliance Gap Nobody Wants to Talk About
Here's where the implementation reality gets uncomfortable.
According to a survey from the European Digital SME Alliance, more than 60% of small and medium-sized tech companies say they are not adequately prepared for compliance with any phase of the EU AI Act. Nearly half reported that they hadn't yet conducted a risk classification of their own AI systems – a foundational first step.
The AI Act's first enforcement provisions came into force in February 2025, targeting unacceptable-risk AI systems. Penalties for violations can reach €35 million or 7% of global annual turnover, whichever is higher. For a seed-stage startup, that's not a fine. That's an extinction event.
The heavier obligations around high-risk AI systems – hiring tools, credit scoring, medical diagnostics – kick in August 2026. General-purpose AI model rules landed in August 2025.
This creates a specific implementation problem: the tools that make building easier don't make compliance easier. A domain expert who can now ship a credit-scoring application without an engineering team still needs to navigate 144 pages of legislation that references dozens of yet-to-be-published standards.
What Actually Needs to Happen
The democratisation-of-building narrative is appealing. It's also insufficient. Here's what the implementation layer requires:
Before launch, answer three questions:
- What does good enough look like for this use case?
- Who owns this when it fails?
- How does rollback work?
If all three can't be answered, the team isn't ready to ship.
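The rollback question in particular has a concrete shape. A minimal, illustrative sketch: keep every deployed model version in a registry and make serving read whichever version an active pointer names, so rollback is a pointer flip rather than a redeploy. Names like ModelRegistry and deploy are assumptions for illustration, not any specific platform's API.

```python
class ModelRegistry:
    """Illustrative sketch: a versioned registry where rollback is a
    one-line operation. Every deployed version is retained, and
    serving reads whichever version the 'active' pointer names."""

    def __init__(self):
        self._versions = {}   # version name -> model object
        self._history = []    # deployment order, newest last
        self.active = None

    def deploy(self, version, model):
        """Register a new version and point serving at it."""
        self._versions[version] = model
        self._history.append(version)
        self.active = version

    def rollback(self):
        """Revert the active pointer to the previously deployed version."""
        if len(self._history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._history.pop()          # drop the bad deployment
        self.active = self._history[-1]

    def predict(self, x):
        """Serve a prediction from whichever version is active."""
        return self._versions[self.active](x)
```

The design choice worth noting: because old versions are never deleted on deploy, rolling back is instant and doesn't depend on rebuilding anything at 2 AM.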
Build observability before accuracy. Too many teams optimise for model performance while ignoring operational visibility. The model works great in staging, passes all the tests, launches to production – and then drifts for six weeks before anyone notices. By then, thousands of decisions have been made based on degraded outputs.
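One lightweight way to get that visibility is a periodic distribution check on model scores: compare live outputs against a baseline captured at launch and alert when they diverge. A minimal sketch using the Population Stability Index, which is one common drift metric, not a prescribed one:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline ('expected')
    score distribution and a live ('actual') one. Common rule of
    thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_frac(values, i):
        left, right = lo + i * width, lo + (i + 1) * width
        if i == bins - 1:  # fold the top edge into the last bin
            count = sum(1 for v in values if left <= v <= hi)
        else:
            count = sum(1 for v in values if left <= v < right)
        return max(count / len(values), 1e-6)  # avoid log(0)

    return sum(
        (bin_frac(actual, i) - bin_frac(expected, i))
        * math.log(bin_frac(actual, i) / bin_frac(expected, i))
        for i in range(bins)
    )
```

Run something like this on a schedule against each day's scores and page someone when the threshold trips; that turns six weeks of silent drift into a same-day alert.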
Classify your risk tier now. Determine where your product sits in the AI Act's risk framework. If you're nowhere near the prohibited categories, the February enforcement date was mostly symbolic – but August 2026 is not.
Document everything. The AI Act places heavy emphasis on transparency, documentation, and human oversight. Start building audit trails now – training data provenance, model decision logs, risk assessments. This isn't just about compliance. Investors increasingly want to see governance maturity.
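An audit trail doesn't need heavy tooling to start. A minimal sketch of an append-only decision log, one JSON line per model decision, with a content hash so later tampering is detectable. The field names here are illustrative choices, not fields mandated by the AI Act:

```python
import datetime
import hashlib
import json

def log_decision(model_version, input_payload, output,
                 log_file="decisions.jsonl"):
    """Append one model decision to an append-only JSONL audit log.
    Hashing the input (rather than storing it raw) keeps personal
    data out of the log while still making decisions traceable."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    # Hash of the entry itself, so any later edit to the line is detectable.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Even a sketch like this gives a regulator, or a diligence team, something concrete: every decision has a timestamp, a model version, and a verifiable record.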
Europe's Real Opportunity
The Sifted analysis makes a useful reframe: the interesting comparison between Europe and the US may not be about frontier AI labs and mega-rounds. The US will likely continue to dominate foundation models, hyperscale infrastructure, and massive venture funding. Europe's opportunity may be at the application and venture-creation layer.
AI may unlock more founders, more experiments, and more small-to-strong companies. The ecosystem could see more startups built around deep domain insights rather than deep technical moats.
This is plausible. It's also conditional on closing the implementation gap.
The founders who will thrive aren't just the ones who can build fast. They're the ones who can build, operate, and iterate – with observability, compliance, and rollback plans baked in from the start.
The model is the easy part. The process is what ships.
Frequently Asked Questions
Q: What is the mighty middle in European AI entrepreneurship?
A: The mighty middle refers to startups that pursue meaningful, durable growth without aiming for unicorn-scale outcomes. These ventures can reach eight-figure revenues with less capital because AI tools compress build costs, making profitability achievable earlier without massive VC funding.
Q: When do the EU AI Act's high-risk system requirements take effect?
A: The high-risk AI system obligations – covering hiring tools, credit scoring, and medical diagnostics – take effect in August 2026. The first enforcement provisions, targeting unacceptable-risk systems, came into force in February 2025.
Q: What percentage of European SMEs are prepared for AI Act compliance?
A: According to the European Digital SME Alliance, more than 60% of small and medium-sized tech companies report inadequate preparation for any phase of the AI Act. Nearly half haven't conducted a risk classification of their AI systems.
Q: What are the maximum penalties under the EU AI Act?
A: Penalties for AI Act violations can reach €35 million or 7% of global annual turnover, whichever is higher. For early-stage startups, this represents an existential financial risk.
Q: How does the Red Queen effect apply to AI-enabled startups?
A: The Red Queen effect describes a scenario where lower barriers to building create a surge of similar products, faster feature replication, and fiercer price competition – meaning everyone runs faster just to maintain their position, without necessarily building more durable ventures.
Q: What should founders document before launching an AI product?
A: Founders should document training data provenance, model decision logs, risk assessments, and audit trails for regulatory review. The AI Act emphasises transparency, documentation, and human oversight – and investors increasingly evaluate governance maturity during due diligence.