Debate · Mar 11, 2026 · 6 min read

A Roadmap for AI, If Anyone Will Listen

The timing was almost too neat. On the last Friday of February 2026, Defense Secretary Pete Hegseth designated Anthropic—whose AI already runs on classified military platforms—a supply-chain risk after the company refused to grant the Pentagon unlimited use of its technology. Hours later, OpenAI announced its own deal with the Defense Department. And somewhere in the background, a bipartisan coalition of thinkers had just finalized something the U.S. government has so far declined to produce: a framework for what responsible AI development should actually look like.

The collision of these events deserves disentangling, because what looks like a contract dispute is actually three different arguments happening simultaneously. Until those arguments are separated, the debate will keep generating more heat than light.

The Declaration: What It Actually Says

The Pro-Human AI Declaration, released in March 2026, opens with a framing that cuts through the usual techno-optimist versus techno-pessimist binary. Humanity, it argues, faces a fork in the road. One path—which the declaration calls "the race to replace"—leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other path leads to AI that massively expands human potential.

The distinction matters because it reframes the question. The debate is not "AI good or AI bad." The debate is: which AI future, governed how, accountable to whom?

The declaration's five pillars—keeping humans in charge, avoiding concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable—are not particularly radical in isolation. What makes them notable is their specificity. Among the more muscular provisions: an outright prohibition on superintelligence development until there is scientific consensus it can be done safely and with genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.

There's something quite remarkable that has happened in America just in the last four months. Polling suddenly [shows] that 95% of all Americans oppose an unregulated race to superintelligence.

Max Tegmark

The Coalition: Why It Matters

The signatories read like a deliberate exercise in strange bedfellows. According to reporting on the declaration, organizational endorsers include the American Federation of Teachers, the AFL-CIO Tech Institute, the Congress of Christian Leaders, and the G20 Interfaith Forum Association. Individual signatories include figures who rarely align publicly—Steve Bannon and Susan Rice, Yoshua Bengio and Ralph Nader, Tristan Harris and Richard Branson.

The drafting process itself was notable. Roughly 90 political, community, and thought leaders attended a closed-door meeting in New Orleans in January 2026, held under the Chatham House Rule. One of the most consequential design choices: major AI companies and Big Tech representatives were not invited. The organizers explicitly sought to center voices from civil society that experience AI disruption directly—workers, educators, families, religious communities, advocacy organizations—rather than those whose incentives are shaped by the race to deploy.

This is not neutrality. It is a deliberate choice about whose voices should shape the conversation. Whether that choice is wise depends on what problem one thinks needs solving.

The Standoff: What It Revealed

The Pentagon-Anthropic dispute exposed something the declaration's authors had been arguing for months: the complete absence of coherent rules governing artificial intelligence in the United States.

Anthropic's negotiations with the Pentagon centered on two safeguards: prohibiting the use of its tools for mass surveillance of Americans and for "fully autonomous weapons." When the company refused to remove these guardrails, the Pentagon cancelled a $200 million contract. President Trump then ordered all federal agencies to cease using Anthropic's technology. Hegseth's designation of Anthropic as a "supply chain risk"—a label ordinarily reserved for firms with ties to foreign adversaries—was, as the Council on Foreign Relations noted, "a legally dubious power play."

OpenAI moved quickly to fill the gap, announcing its own Pentagon deal. CEO Sam Altman said the agreement would include "red lines" against mass surveillance and autonomous weapons. But buried in an OpenAI FAQ was a telling acknowledgement: when asked what would happen if the government violated the terms, the company wrote, "As with any contract, we could terminate it if the counterparty violates the terms. We don't expect that to happen."

The enforcement paradox is now visible. Anthropic assumed it was free to refuse contract terms that violated its principles. The government's response demonstrated that this freedom may be illusory. If refusing a contract can result in being designated a national security risk, what leverage does any AI company actually have?

This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.

Dean Ball

The Public: What They Actually Want

The polling data on AI governance is remarkably consistent across multiple surveys. A Future of Life Institute survey of 1,004 likely U.S. voters in February 2026 found that 80% support keeping humans in charge of AI with strong oversight, clear limits, and corporate accountability—versus just 10% who favor fast, lightly regulated development. That 8-to-1 margin held across partisan lines: 73% of Trump voters and 89% of Harris voters chose the pro-human-control vision.

A separate survey by the Information Technology and Innovation Foundation found that 67% of Americans say private technology companies have a responsibility to set limits on how their products are used, even when the government disagrees. On the Anthropic dispute specifically, 50% of Americans view penalizing the company as government overreach that sets a dangerous precedent, versus 35% who say it is necessary for national security.

Gallup research found that 80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly. This preference held even though 79% of respondents said it is important for the U.S. to have more advanced AI technology than other countries.

The data suggests something that often gets lost in Washington debates: the public is capable of holding two ideas simultaneously. They want the U.S. to lead in AI. They also want guardrails. These are not contradictory positions—unless one assumes that safety and speed are inherently in tension.

The Question Worth Asking

The Pro-Human AI Declaration is not legislation. It has no enforcement mechanism. It cannot compel any company or government to do anything. Its power, if it has any, lies in creating what its organizers call "common knowledge"—making visible the breadth of opposition to unregulated AI development across ideological lines.

Whether that common knowledge translates into policy depends on factors the declaration cannot control: congressional action, executive priorities, judicial interpretation, and the behavior of AI companies themselves.

Tegmark reached for an analogy in an interview with TechCrunch: "You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe, because the FDA won't allow them to release anything until it's safe enough."

The analogy is imperfect—AI systems are not pharmaceuticals, and the regulatory challenges differ in important ways. But the underlying question is worth sitting with: What would it mean to have a pre-deployment safety regime for AI? Who would run it? What would count as "safe enough"? And who gets to decide?

The Pro-Human AI Declaration offers one set of answers. The Pentagon-Anthropic standoff offers another. The gap between them is where the real debate lives.
