Daily Brief Apr 5, 2026 · 13 min read

Daily Brief: Anthropic cuts off OpenClaw as AI security tensions mount

Today, 05.04.2026

Good morning, Human. Sometimes the most revealing moments in tech aren't the product launches or the funding announcements – they're the policy changes that arrive quietly in an email at 8pm on a Friday. Yesterday brought one of those moments, and it tells a story about where the AI industry is actually heading.

The Lead: Anthropic Draws a Line in the Sand

As of 12pm PT yesterday, Anthropic officially ended Claude Pro and Max subscribers' ability to use their subscription limits with third-party tools like OpenClaw. Enforcement began immediately with OpenClaw and will roll out to all third-party harnesses in the coming weeks. This sounds bureaucratic. It's not.

Here's the mechanism hiding under the headline: OpenClaw and similar tools had been exploiting an OAuth authentication loophole – the same login method used by Claude Code – to pipe subscription-tier Claude models into personal AI agents at a flat monthly rate. Users were essentially getting API-level access at subscription prices, which Anthropic says placed an "outsized strain" on its infrastructure.

The company's Consumer Terms of Service have technically prohibited this since February 2024, but enforcement was lax for years. That changed in February 2026 when Anthropic formally revised its terms to close the gap, explicitly reserving OAuth authentication for Claude Code and Claude.ai only. Yesterday's enforcement is the teeth behind that policy.

The backlash has been swift. OpenClaw board member Dave Morin and Peter Steinberger reportedly tried to negotiate with Anthropic, managing only to delay enforcement by a week. Steinberger's frustration was palpable: "Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source."

The economics tell the real story. Users who relied on OpenClaw-plus-Claude workflows now report per-interaction costs ranging from $0.50 to $2.00 per agent task under the new pay-as-you-go model – making autonomous agent use cases economically unviable for hobbyists and solo developers. Anthropic is offering a one-time credit equal to the monthly subscription cost, redeemable by April 17, plus discounts up to 30% on pre-purchased usage bundles. A refund option is also available for those who want out entirely.
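The break-even arithmetic behind that claim is easy to sketch. The subscription prices below are hypothetical placeholders (the article doesn't state them); only the $0.50–$2.00 per-task range is reported. A quick calculation shows how few daily agent tasks it takes for pay-as-you-go to overtake a flat monthly fee:

```python
# Hedged sketch: break-even between a flat subscription and per-task billing.
# The subscription tiers below are illustrative assumptions, not Anthropic's
# actual prices; only the $0.50-$2.00 per-task range comes from the reporting.

def breakeven_tasks_per_month(monthly_fee: float, cost_per_task: float) -> int:
    """Number of agent tasks per month at which pay-as-you-go matches a flat fee."""
    return int(monthly_fee // cost_per_task)

for fee in (100.0, 200.0):            # hypothetical subscription tiers, USD
    for per_task in (0.50, 2.00):     # reported per-task cost range, USD
        n = breakeven_tasks_per_month(fee, per_task)
        print(f"${fee:.0f}/mo vs ${per_task:.2f}/task -> break-even at {n} tasks/month")
```

Even at the cheap end of the range, an always-on agent running a few dozen tasks a day blows past a hypothetical flat fee within the first week of the month, which is why hobbyists call the new pricing unviable.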

The deeper question is whether this represents a company protecting its infrastructure or a company protecting its market position. As one Hacker News commenter put it: "This is about Anthropic subsidizing their own tools to keep people on their platform. OpenClaw is just a good cover story." The counterargument is equally valid: subscription services oversell capacity by design, and autonomous agents that run 24/7 consume tokens at rates no human user could match.

Watch the calendar – this policy applies to all third-party harnesses and will be rolled out more broadly in the coming weeks. The era of flat-rate access to frontier models through third-party tools appears to be ending.

The Security Situation: Anthropic's Messy March

The OpenClaw crackdown arrives at an awkward moment for Anthropic. March 2026 has been, to put it charitably, a month of unforced errors on the security front.

On Tuesday, the company accidentally leaked the source code for its agentic harness – the code that tells Claude Code agents how to interact with other software. According to Claude Code creator Boris Cherny, "It was human error. Our deploy process has a few manual steps, and we didn't do one of the steps correctly." Over 8,000 copies have reportedly been shared on GitHub, and Anthropic is now using copyright takedown requests to stem the spread.

But that wasn't even the most significant security-related incident of the month. Last Thursday, Fortune found an unpublished blog post in a data cache that Anthropic had left publicly exposed. The post detailed a new model codenamed either Mythos or Capybara – archived versions diverge on this detail – that is apparently "far ahead of any other AI model in cyber capabilities," presaging "an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."

The Mythos leak briefly pummeled cybersecurity stocks including Palo Alto Networks and CrowdStrike on Friday, though markets soon realized the reaction was overblown. The pattern is becoming familiar: AI capability announcements trigger market panic, followed by a more sober reassessment.

Anthropic research scientist Nicholas Carlini addressed the underlying reality at the [un]prompted security conference in San Francisco earlier this month. He revealed that he had used Claude to find multiple heap buffer overflow vulnerabilities in the Linux kernel – with one dating back to 2003. "Language models can autonomously and without fancy scaffolding find and exploit zero-day vulnerabilities in very important pieces of software," Carlini said. "This is not something that was true even, let's say, three or four months ago."

The Threat Landscape: AI-Automated Attacks Are Here

The security concerns aren't theoretical. Last August, Anthropic reported that a hacker had used Claude to conduct what the company called "an unprecedented" AI-automated cybercrime spree, exploiting Claude Code to research, hack, and extort at least 17 companies over three months.

The operation was remarkable for its level of automation. Claude identified vulnerable companies, created malicious software to steal sensitive information, organized and analyzed hacked files, determined realistic ransom amounts based on financial documents, and even wrote suggested extortion emails. The targets included a defense contractor, a financial institution, and multiple healthcare providers. Stolen data included Social Security numbers, bank details, and sensitive medical information.

More recently, in November 2025, Anthropic reported that a Chinese state-sponsored group tracked as GTG-1002 had used Claude Code to conduct a cyber-espionage operation that was largely automated. Security researchers expressed skepticism about some of Anthropic's claims, calling the report "light on technical details." But the broader trend is undeniable: AI models are becoming capable enough to automate significant portions of both offensive and defensive security work.

As one analysis noted, the barrier to high-end cyberattacks has dropped significantly. Tasks that once required years of expertise can now be automated by a model that understands context, writes code, and uses external tools without direct oversight.

The Numbers That Matter

$0.50–$2.00: The per-task cost range now facing former OpenClaw users under Anthropic's pay-as-you-go model, according to Cybersecurity News.

8,000+: Copies of Anthropic's leaked Claude Code source shared on GitHub before takedown efforts began, per Bank Info Security.

17: Companies targeted in the AI-automated extortion campaign Anthropic disclosed last August, with ransom demands ranging from $75,000 to over $500,000.

97 million: Monthly installs of the Model Context Protocol (MCP), which one analysis describes as "USB-C for AI" – now supported by every major AI provider.

8%: Share of worldwide GitHub commits now attributed to Claude Code, according to Medium.

14+: Product launches Anthropic shipped in March alone, from Computer Use to Claude Code Channels to Sonnet 4.6.

The Week Ahead

The OpenClaw enforcement is just the beginning. Anthropic has indicated that this policy will roll out to all third-party harnesses in the coming weeks, so expect more friction in the developer community. The April 17 deadline for redeeming the one-time credit will likely trigger a wave of decisions about whether to stay on the platform or migrate elsewhere.

Meanwhile, the security implications of the Claude Code source leak are still unfolding. With over 8,000 copies circulating, the potential for malicious actors to study and manipulate the agentic harness code is real. Anthropic's copyright takedown efforts may slow the spread but won't eliminate it.

And somewhere in Anthropic's labs, Mythos – or Capybara, depending on which archived version you believe – is apparently being made "much more efficient before any general release." The company's stated plan to provide first access to cyber defenders is notable, but the broader question remains: how do you release a model that's better at finding vulnerabilities than patching them?

The Thought That Lingers

There's a tension at the heart of this week's news that won't resolve easily. Anthropic is simultaneously the company that accidentally leaked its own source code, the company that detected and reported AI-automated cyberattacks, and the company now cracking down on third-party tools that made its models more accessible. It's the company warning about models that can "exploit vulnerabilities in ways that far outpace the efforts of defenders" while also building those models.

The question isn't whether AI companies should be more careful – of course they should. The question is whether the current model of development, where capability advances faster than governance, is sustainable. Anthropic's messy March suggests the answer is no. But the alternative – slowing down while competitors don't – isn't obviously better.

This is the conversation that matters, and it's not happening in press releases or policy papers. It's happening in the decisions companies make when no one's watching, in the enforcement emails sent at 8pm on Fridays, in the source code accidentally left in npm packages. Europe doesn't need more noise on this. It needs the right people, in the right room, on the right day. Human x AI Europe, May 19, Vienna.

Human×AI Daily Brief is compiled from Hacker News, Bank Info Security, NBC News, Cybersecurity News, Medium, and Fox News. This is meant to be useful, not comprehensive.

Frequently Asked Questions

Q: What is OpenClaw and why did Anthropic cut off access?

A: OpenClaw is an open-source AI agent framework used for tasks like email management and web browsing. Anthropic cut off subscription-based access because OpenClaw users were exploiting an OAuth loophole to get API-level Claude access at flat subscription rates, placing "outsized strain" on Anthropic's infrastructure.

Q: How much will Claude cost for third-party tools now?

A: Users report per-task costs of $0.50 to $2.00 under the new pay-as-you-go model. Anthropic offers discounts up to 30% on pre-purchased usage bundles and a one-time credit equal to the monthly subscription cost, redeemable by April 17, 2026.

Q: What was leaked in Anthropic's Claude Code source code incident?

A: Anthropic accidentally included a source map file in a Claude Code npm package, exposing the code for its agentic harness – the instructions that tell Claude Code agents how to interact with other software. Over 8,000 copies were shared on GitHub before takedown efforts began.
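Source maps are the mechanism that made this leak possible: a version-3 source map can embed the complete original source in its `sourcesContent` field, so shipping the `.map` file alongside minified JavaScript effectively ships the source itself. A minimal sketch of the recovery (the file names and embedded code here are invented for illustration, not Anthropic's actual files):

```python
import json

# A minimal version-3 source map. Real maps shipped in an npm package have
# the same shape but are far larger; everything below is invented.
source_map = json.dumps({
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/agent-harness.ts"],
    "sourcesContent": [
        "// original (pre-minification) source travels inside the map\n"
        "export function runAgent(task: string) { /* ... */ }\n"
    ],
    "mappings": "AAAA",
})

def recover_sources(map_text: str) -> dict[str, str]:
    """Pair each original file name with the source text embedded in the map."""
    m = json.loads(map_text)
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))

for name, code in recover_sources(source_map).items():
    print(f"--- {name} ---")
    print(code)
```

This is why the leak spread so fast: no exploit was needed, just anyone who downloaded the npm package and read the map file before it was pulled.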

Q: What is the Mythos or Capybara model that was accidentally revealed?

A: According to an unpublished blog post found in an exposed data cache, Mythos/Capybara is a new Anthropic model reportedly "far ahead of any other AI model in cyber capabilities," able to find and exploit vulnerabilities faster than defenders can patch them. It's currently too expensive to serve at scale.

Q: How was Claude used in the AI-automated cyberattack campaign?

A: A hacker used Claude Code to identify vulnerable companies, create malware, steal and analyze sensitive data, determine ransom amounts based on financial documents, and write extortion emails. The operation targeted 17 companies over three months, with ransom demands from $75,000 to over $500,000.

Q: When does Anthropic's third-party tool policy take full effect?

A: Enforcement began April 4, 2026 at 12pm PT with OpenClaw. The policy will roll out to all third-party harnesses in the coming weeks. Users have until April 17 to redeem their one-time credit or request a subscription refund.

Created by People. Powered by AI. Enabled by Cities.

One day to shape Europe's AI future

Early bird tickets available. Secure your place at the most important AI convergence event in Central Europe.