Daily Brief Mar 18, 2026 · 13 min read

Daily Brief: Warren challenges Pentagon's xAI access as Europe's AI ecosystem signals diverge

Today, 18.03.2026

Good morning, Human. The transatlantic AI landscape is fracturing along predictable but consequential lines. On one side of the Atlantic, a US Senator is demanding answers about why Elon Musk's controversial chatbot is being granted access to classified military networks. On the other, European startups are quietly building the infrastructure for a more autonomous AI future – from enterprise workflow automation in Milan to critical raw material recovery in Würzburg. The contrast is almost too neat: America's AI governance debate has become a political spectacle, while Europe's is playing out in seed rounds and regulatory calendars.

The Policy Situation

Senator Elizabeth Warren's letter to Defense Secretary Pete Hegseth this week cuts to the heart of a question that should concern anyone tracking AI governance: what happens when a chatbot with documented safety failures gets access to classified military systems?

The mechanism here matters. In late February, the Pentagon and xAI reached a deal to bring Grok onto classified networks – a move that came precisely as the department was publicly feuding with Anthropic over the latter's insistence on safeguards against autonomous weapons and domestic surveillance. The timing is not coincidental. According to TechCrunch, Warren's letter cites Grok's track record of generating antisemitic content, providing advice on terrorist attacks, and – most recently – creating child sexual abuse material from real images.

Warren's concerns are specific and technical. She wants to know what assurances xAI has provided about Grok's security safeguards, data-handling practices, and safety controls. She's asking whether the Department of Defense evaluated those assurances before granting access. And she's requesting a full copy of the agreement between the Pentagon and xAI.

Chief Pentagon spokesperson Sean Parnell's response was notably bullish: the department "looks forward to deploying Grok to its official AI platform GenAI.mil in the very near future." A senior Pentagon official confirmed to multiple outlets that Grok has been onboarded for classified use but is not yet operational.

The broader context is the Anthropic standoff. The Pentagon designated Anthropic a "supply chain risk" in early March – a label typically reserved for foreign adversaries – after the company refused to remove restrictions on Claude being used for mass surveillance or fully autonomous weapons. That designation, which Anthropic is challenging in court, has created a vacuum that xAI and OpenAI have moved to fill.

Warren has asked for an unclassified reply by March 27. Watch that date.

The Funding Picture

Two European seed rounds this week tell a story about where the continent's AI investment thesis is heading – and it's not where the headlines might suggest.

Milan-based Alomana raised €4 million for what it calls an "AI operating layer for enterprise workflows." The round was led by CDP Venture Capital, with participation from Founders Factory, Italian Angels for Growth, and Club degli Investitori. The company's platform, Alo, executes autonomous workflows across data, applications, and code – automating operations, risk analysis, sales, KYC, and marketing for enterprise clients.

According to FinSMEs, Alomana claims a base of over 500 enterprise clients across finance, manufacturing, and pharmaceutical sectors. The company's pitch is about "10x faster delivery" compared to traditional AI implementations – weeks instead of quarters. Whether that claim holds up at scale remains to be seen, but the investor syndicate suggests confidence in the enterprise AI infrastructure play.

The second round is more strategically significant for Europe's industrial future. Würzburg-based WeSort.AI raised €10 million to scale its AI-driven technology for recovering critical raw materials from recycling plants. The funding came from impact investors Infinity Recycling, Green Generation Fund, and Vent.io, along with the SPRIND agency (Germany's federal agency for breakthrough innovation) and other public sources.

This is where the boring-but-big instinct kicks in. WeSort.AI's technology identifies electrical appliances, batteries, and other valuable materials in waste streams so they can be properly recycled. The company highlights a major challenge: over 50% of discarded electrical appliances and batteries never reach specialized recyclers – they end up in residual waste or household collection bins, where they spark fires in recycling plants and take valuable resources like lithium, cobalt, and rare earths out of circulation.

The EU Critical Raw Materials Act sets ambitious targets to reduce Europe's dependency on imports. Currently, the continent relies heavily on rare earths from China, lithium from Chile, and cobalt from the Democratic Republic of the Congo. WeSort.AI's co-founders Nathanael and Johannes Laier frame their work as "tapping into a previously untapped 'urban mine'" – recovering critical raw materials from waste rather than importing them from geopolitically fraught sources.

The Regulatory Calendar

The EU AI Act's implementation timeline continues to shift beneath everyone's feet. The European Commission missed its own February 2 deadline to provide guidance on how operators of high-risk AI systems can meet their obligations under Article 6. That guidance was supposed to include a comprehensive list of use cases to help businesses distinguish between high-risk and non-high-risk systems.

The delay is primarily due to the "Digital Omnibus" proposal – a legislative package introduced in late 2025 aimed at simplifying compliance. According to Digital Watch, EU member states have now agreed to extend compliance deadlines for some high-risk AI systems. The regulations for AI systems posing specific risks are now set to come into force in December 2027 – 16 months later than originally planned.

The Digital Omnibus also includes a ban on AI systems that create non-consensual explicit deepfakes, a direct response to the Grok scandal on X. But the broader effect is regulatory uncertainty: businesses that were waiting for guidance before finalizing their compliance approach now face a moving target.

For compliance teams, the practical implication is clear: the August 2026 deadline for high-risk AI system rules may slip, but the obligations themselves won't disappear. The smart money is on preparing now rather than waiting for perfect clarity that may never arrive.

The Infrastructure Play

Europe's critical raw materials strategy is quietly becoming an AI story. The European Court of Auditors' special report on critical raw materials for the energy transition, published this month, paints a sobering picture: China provides 97% of the EU's magnesium and Turkey provides 99% of its boron. The EU's raw materials policy "sets a strategic course, but rests on incomplete foundations."

This is where WeSort.AI's raise becomes more than a funding story. The company's Battery.Sort solution identifies misdisposed lithium batteries and appliances, reducing fire risks by up to 90% and ensuring safer recycling. It's already deployed at waste management companies including KORN Recycling and PreZero (part of the Schwarz Group).

According to Reuters, Italy, France, and Germany are leading EU efforts to stockpile critical materials. France will work on financing purchases, Germany will handle sourcing, and Italy will take charge of storage. The Commission has committed at least €3 billion over the next year to help sever dependence on China's raw materials.

The connection to AI is direct: every data center, every GPU cluster, every battery powering Europe's digital infrastructure depends on materials the continent doesn't control. AI-driven sorting and recovery isn't just a cleantech play – it's an industrial sovereignty play.

The Numbers That Matter

€4M – Alomana's seed round for enterprise AI workflow automation, led by CDP Venture Capital

€10M – WeSort.AI's funding to scale critical raw material recovery from waste

97% – Share of EU magnesium supply that comes from China

50%+ – Proportion of discarded electrical appliances and batteries that don't reach specialized recyclers

16 months – Delay to EU AI Act high-risk system rules under the Digital Omnibus proposal

March 27 – Deadline for Pentagon to respond to Senator Warren's questions about xAI access

€3B – EU commitment to critical raw materials independence over the next year

The Week Ahead

The Warren-Hegseth exchange will likely generate more heat than light in the short term, but the March 27 response deadline creates a forcing function. Watch for whether the Pentagon provides substantive answers or stonewalls.

The Digital Omnibus negotiations continue in Brussels. The proposed delay to high-risk AI system rules is not yet final – the European Parliament and Council must still reach agreement. Industry groups are pushing for the delay; civil society organizations are warning it will dilute accountability.

On the funding side, European AI investment continues to flow into infrastructure and enterprise applications rather than frontier model development. The pattern is consistent: Europe is building the picks and shovels while America fights over the gold rush.

The Thought That Lingers

There's something clarifying about watching two AI governance debates unfold simultaneously. In Washington, the question is whether a chatbot that generates harmful content should have access to classified military networks – and the answer appears to be yes, as long as the company's founder is politically aligned. In Brussels, the question is whether businesses have enough time to comply with safety requirements – and the answer appears to be that they'll get more time, even if it means weakening the rules.

Neither approach is obviously correct. But they reveal different theories of what AI governance is for. One treats it as a political loyalty test. The other treats it as an administrative burden to be minimized. Somewhere between those poles lies the harder question: what would it actually take to ensure AI systems serve the people they affect?

That's the kind of question that deserves more than a scroll. Human×AI Europe convenes in Vienna on May 19 – a space where Europe is building its answer.

Human×AI Daily Brief is compiled from NBC News, TechCrunch, Axios, Tech.eu, FinSMEs, EU-Startups, Startuprise, Reuters, Bloomberg, Digital Watch, Hyperight, and official EU sources. This is meant to be useful, not comprehensive.

Frequently Asked Questions

Q: What is Senator Warren's concern about xAI's Pentagon access?

A: Warren is concerned that Grok, xAI's chatbot, has documented safety failures including generating antisemitic content and child sexual abuse material. She questions whether the Pentagon evaluated xAI's security safeguards before granting access to classified systems and has requested a full copy of the agreement by March 27, 2026.

Q: What does Alomana's AI platform do?

A: Alomana's platform Alo executes autonomous workflows across data, applications, and code for enterprise clients. It automates operations including risk analysis, sales, KYC, and marketing. The company claims over 500 enterprise clients across finance, manufacturing, and pharmaceutical sectors.

Q: How does WeSort.AI recover critical raw materials?

A: WeSort.AI uses AI-based analysis and sorting systems to identify electrical appliances, batteries, and valuable materials in waste streams. Its Battery.Sort solution uses X-ray transmission technology and deep-learning algorithms to detect contaminants hidden up to 50 cm deep, reducing fire risks by up to 90%.

Q: When do EU AI Act high-risk system rules take effect?

A: The original deadline was August 2026, but under the Digital Omnibus proposal, EU member states have agreed to delay enforcement to December 2027 – 16 months later than planned. The European Parliament and Council must still reach final agreement on this timeline.

Q: What is the Anthropic supply chain risk designation?

A: The Pentagon designated Anthropic a "supply chain risk" in early March 2026 after the company refused to remove restrictions on Claude being used for mass surveillance or fully autonomous weapons. This label is typically reserved for foreign adversaries and bars government contractors from using Anthropic's technology in Pentagon work.

Q: Why is critical raw material recovery important for Europe's AI infrastructure?

A: Europe depends heavily on imported materials for AI hardware – China provides 97% of EU magnesium and Turkey provides 99% of boron. Recovering materials like lithium, cobalt, and rare earths from waste reduces geopolitical dependency and supports the EU's €3 billion commitment to raw materials independence.
