Today, 23.03.2026
Good morning, Human. Sometimes the most revealing stories aren't about breakthroughs – they're about breakdowns. This weekend delivered three distinct trust failures across the AI ecosystem: a compliance startup accused of fabricating the very certifications it sold, a $29 billion coding company caught obscuring the origins of its flagship model, and a major publisher pulling a horror novel over AI-generation concerns. Each story operates at a different layer of the stack, but together they illuminate something important about where the industry's accountability gaps actually live.
The Lead: When Compliance Becomes the Fraud
The Delve story deserves the lead because it strikes at something foundational: the infrastructure of trust that allows the entire AI ecosystem to function. According to TechCrunch, an anonymous Substack post published this week accuses the Y Combinator-backed compliance startup of falsely convincing hundreds of customers they were compliant with privacy and security regulations, potentially exposing those customers to criminal liability under HIPAA and hefty fines under GDPR.
The mechanism alleged here is worth understanding. According to the whistleblower account, credited to DeepDelver, Delve delivers on its claim of being the fastest compliance platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber-stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance. The accusations include fabricated evidence of board meetings, tests, and processes that never happened.
Delve raised a $32 million Series A at a $300 million valuation last year, with the round led by Insight Partners. On Friday, the startup published a response calling the Substack post misleading and saying it contains a number of inaccurate claims. The company emphasized that it functions as an automation platform rather than an issuer of compliance reports, with final reports and opinions issued solely by independent, licensed auditors, not by Delve.
Here's the mechanism hiding under the headline: the allegations suggest Delve's auditors – primarily two firms called Accorp and Gradient – may themselves be part of the problem. Reddit discussions cite the original investigation's claim that these nominally US-based auditors are in fact Indian certification mills operating through empty US shell companies and mailbox agents. If accurate, this would represent not just the failure of one startup but a systemic vulnerability in how compliance automation intersects with audit independence.
Why this matters for the European AI ecosystem: the EU AI Act's high-risk compliance requirements become fully enforceable on August 2, 2026. According to the European Parliament's think tank, this triggers comprehensive requirements around risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. The Delve allegations – whether ultimately proven or not – raise uncomfortable questions about whether the compliance automation industry is equipped to handle this regulatory moment, or whether speed-to-certification has become a race to the bottom.
The Attribution Question: Cursor's Kimi Revelation
The second trust fracture this weekend came from an unexpected direction: a model ID that wasn't supposed to be visible. According to TechCrunch (via Yahoo), AI coding company Cursor launched its new Composer 2 model this week, promoting it as offering frontier-level coding intelligence. Within hours, a developer named Fynn discovered that the API response contained a revealing string: kimi-k2p5-rl-0317-s515-fast.
That model ID pointed to Kimi K2.5, an open-source model from Moonshot AI, a Chinese company backed by Alibaba and HongShan (formerly Sequoia China). Cursor's vice president of developer education, Lee Robinson, acknowledged the base model, stating that "only ~1/4 of the compute spent on the final model came from the base, the rest is from our training."
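How does a detail like that leak in the first place? OpenAI-compatible chat APIs typically echo a model identifier in every response body, so a relabeled base model is only as hidden as that one field. Here is a minimal sketch of the kind of inspection involved, assuming a hypothetical OpenAI-compatible endpoint – the URL, key, and model name below are placeholders, not Cursor's actual service:

```python
import json
import requests

# Hypothetical OpenAI-compatible endpoint; the URL, key, and model name
# are placeholders, not Cursor's actual service.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "sk-placeholder"

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "composer-2",  # the product-facing model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
resp.raise_for_status()
body = resp.json()

# OpenAI-compatible responses echo a "model" field; if a provider forwards
# the upstream identifier verbatim, the underlying base model is visible.
print(body.get("model"))           # e.g. an internal ID rather than "composer-2"
print(json.dumps(body, indent=2))  # full payload for manual inspection
```

Whether that field exposes anything depends entirely on what the provider chooses to return; in Cursor's case, it evidently returned more than intended.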
The context matters here. Cursor is a well-funded U.S. startup that raised a $2.3 billion round last fall at a $29.3 billion valuation and reportedly exceeds $2 billion in annualized revenue. Building on top of a Chinese model might feel particularly fraught right now, with the AI arms race often framed as an existential battle between the United States and China.
Cursor co-founder Aman Sanger acknowledged the oversight: "It was a miss to not mention the Kimi base in our blog from the start. We'll fix that for the next model." Moonshot AI's Kimi account on X subsequently congratulated Cursor, confirming the use was part of an authorized commercial partnership with Fireworks AI.
As The Decoder noted, there's nothing inherently wrong with fine-tuning an open-source model for a specific use case – it's common practice and often the smarter path. The problem is shipping someone else's base model under your own brand without saying so. The bigger question this raises: if Cursor's fine-tuned model can genuinely compete with billion-dollar proprietary efforts, what does that say about the actual value of proprietary base model development?
The Creative Frontier: When AI Detection Meets Publishing
The third story operates at a different layer entirely – the intersection of AI detection, creative work, and institutional gatekeeping. According to TechCrunch, Hachette Book Group announced it will not publish a horror novel called Shy Girl over concerns that artificial intelligence was used to generate the text. The novel was scheduled for U.S. release this spring; Hachette will also discontinue the book in the UK, where it's already available.
The BBC reports this appears to be the first commercial novel from a major publishing house to be pulled over evidence of AI use. Reviewers on Goodreads and YouTube had been speculating about AI generation for months, with one Reddit post noting that almost every noun in the book is accompanied by an adjective and that the text overuses weather similes.
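To make that heuristic concrete, here is a minimal sketch of the kind of adjective-noun density check those readers were informally applying, using spaCy's part-of-speech tagger. The sample text is invented, and no statistic of this sort proves or disproves AI authorship:

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def adjective_noun_density(text: str) -> float:
    """Fraction of nouns immediately preceded by an adjective."""
    doc = nlp(text)
    nouns = [tok for tok in doc if tok.pos_ == "NOUN"]
    if not nouns:
        return 0.0
    paired = sum(1 for tok in nouns if tok.i > 0 and doc[tok.i - 1].pos_ == "ADJ")
    return paired / len(nouns)

# Invented sample in the adjective-heavy style reviewers described.
sample = ("A pale moon hung over the silent town, "
          "and a cold rain fell on the empty streets.")
print(f"{adjective_noun_density(sample):.2f}")  # high values flag the pattern
```

Human prose varies widely on measures like this, which is exactly why such heuristics fuel speculation rather than settle it.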
Author Mia Ballard denied using AI to write the novel, instead blaming an acquaintance she'd hired to edit the original self-published version. In an email to The New York Times, Ballard stated: "This controversy has changed my life in many ways and my mental health is at an all time low and my name is ruined for something I didn't even personally do."
As Ars Technica observed, despite numerous claims that AI writing can be easily identified, plenty of readers enjoyed the book and promoted it online. That may both terrify and horrify actual writers, but it remains a reality they'll need to face. The detection question – how do you prove something wasn't AI-generated? – is becoming as fraught as the generation question itself.
The Regulatory Calendar
Watch the calendar – reality lives there. The EU AI Act's enforcement machinery continues to assemble ahead of the August 2, 2026 deadline for high-risk AI systems. According to the European Parliament's analysis, as of March 2026, only eight of 27 Member States have designated their single points of contact for AI Act enforcement – a concerning gap with less than five months until full applicability.
The IAPP reports the European Commission missed its February 2 deadline to provide guidance on how operators of high-risk AI systems can meet their obligations under Article 6. The Commission indicated it is integrating months of feedback and plans to publish a final draft by the end of March, with final adoption potentially coming in April. Meanwhile, the Digital Omnibus proposal could push certain deadlines to December 2027 – but only if it passes, and that remains uncertain.
The Funding Picture
The capital concentration story continues to intensify. According to eeNews Europe, AI startups worldwide raised approximately $220 billion across January and February 2026. But the structure matters more than the headline: in February alone, OpenAI raised $110 billion, Anthropic another $30 billion, and Waymo added $16 billion. Those three rounds – $156 billion combined – represented 83% of all venture capital raised globally that month.
For Europe, this matters less as a spectacle than as a signal. A new Prosus and Dealroom.co report titled State of AI in Europe: The Invisible Giant reveals Europe's AI funding hit a record $21.8 billion in 2025, up 58% in a single year. Europe has 133 million monthly LLM users – nearly double the U.S. figure. Yet almost every model those users run was built in America or China, and 73% of lead investors in late-stage European AI companies are American.
The Numbers That Matter
$220 billion – Global AI startup funding in January-February 2026, with 83% of February's total going to just three companies (OpenAI, Anthropic, Waymo), per eeNews Europe
8 of 27 – EU Member States that have designated their AI Act single points of contact as of March 2026, per the European Parliament
$300 million – Delve's valuation at its Series A; the startup is now under scrutiny following compliance fraud allegations, per TechCrunch
$29.3 billion – Cursor's valuation, with the company now acknowledging its Composer 2 model was built on Chinese open-source Kimi K2.5, per TechCrunch
133 million – Monthly LLM users in Europe, nearly double the U.S. figure, yet almost all models they use were built elsewhere, per Prosus/Dealroom
August 2, 2026 – When EU AI Act high-risk system requirements become fully enforceable, per European Commission
The Week Ahead
The European Commission is expected to publish final draft guidance on AI Act Article 6 (high-risk classification) by month's end. The Digital Omnibus negotiations continue in Parliament, with implications for whether the August 2026 deadline holds or shifts. Meanwhile, the Delve story will likely develop further as affected companies assess their exposure and the startup responds to mounting scrutiny.
The Thought That Lingers
Three stories, three different trust failures, one common thread: the gap between what institutions claim and what they actually deliver. A compliance startup allegedly fabricating the certifications it sells. A coding company obscuring the origins of its model. A publisher discovering that the book it acquired may not have been written by its author. Each represents a different kind of accountability failure – regulatory, technical, creative – but together they suggest something about where the AI ecosystem's real vulnerabilities lie. Not in the models themselves, but in the human systems we've built around them to verify, attribute, and certify. As the EU AI Act's enforcement machinery assembles, the question isn't just whether companies can comply with new rules. It's whether the compliance infrastructure itself can be trusted.
These questions of trust, attribution, and accountability are exactly what we'll be exploring at Human x AI Europe on May 19 in Vienna – in the room where Europe decides what kind of future it wants to build.
Human×AI Daily Brief is compiled from TechCrunch, BBC, Ars Technica, The Decoder, European Parliament, IAPP, eeNews Europe, Prosus/Dealroom, and European Commission sources. This is meant to be useful, not comprehensive.
Frequently Asked Questions
Q: What is Delve accused of doing with compliance certifications?
A: According to an anonymous Substack investigation, Delve allegedly provided customers with fabricated evidence of board meetings, tests, and processes that never happened, while using auditors described as certification mills to rubber-stamp reports. Delve denies these allegations, stating it provides automation tools while independent auditors issue final reports.
Q: What model did Cursor use as the base for Composer 2?
A: Cursor's Composer 2 was built on Kimi K2.5, an open-source model from Chinese company Moonshot AI. Cursor's VP of developer education stated approximately one-quarter of the compute came from the base model, with the rest from Cursor's own training. The company acknowledged it should have disclosed this upfront.
Q: When do EU AI Act high-risk requirements become enforceable?
A: The comprehensive requirements for high-risk AI systems listed in Annex III become fully applicable on August 2, 2026. This includes obligations around risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.
Q: How many EU Member States have designated AI Act enforcement authorities?
A: As of March 2026, only 8 of 27 EU Member States have designated their single points of contact for AI Act enforcement, according to the European Parliament's think tank – a concerning gap with less than five months until full applicability.
Q: Why did Hachette pull the horror novel Shy Girl?
A: Hachette pulled the novel over concerns that AI was used to generate the text, following months of speculation on GoodReads and YouTube. Author Mia Ballard denied using AI herself, claiming an editor she hired may have used it without her knowledge. This appears to be the first commercial novel from a major publisher pulled over AI concerns.
Q: What does the Prosus/Dealroom report reveal about European AI usage?
A: Europe has 133 million monthly LLM users – nearly double the U.S. figure – but almost every model those users run was built in America or China. Additionally, 73% of lead investors in late-stage European AI companies are American, meaning Europe incubates companies while others capture the value.