The Transparency Deficit: What AV Companies Won't Tell Congress – and Why It Matters
Senator Ed Markey asked seven autonomous vehicle companies a straightforward question: How often do your vehicles rely on remote human assistance? Every single one refused to answer.
This refusal, documented in a new report from Markey's office, reveals something more interesting than corporate secrecy. It exposes a fundamental disagreement about what autonomous actually means – and who gets to define it.
Naming the Real Disagreement
The debate over remote assistance in autonomous vehicles tends to collapse into a binary: either these cars drive themselves, or they don't. But that framing obscures the actual structure of the disagreement, which has at least three distinct layers.
First, there's a definitional dispute. When Aurora, May Mobility, Motional, Nuro, Tesla, Waymo, and Zoox decline to disclose how often their vehicles receive remote input, they're implicitly arguing that such assistance doesn't compromise the autonomous label. The strongest version of this position holds that remote assistance is analogous to air traffic control – a coordination layer that doesn't diminish the autonomy of individual aircraft. Critics counter that if a vehicle cannot complete its mission without human intervention, calling it autonomous is misleading at best.
Second, there's a factual disagreement. How often does remote assistance occur? Under what circumstances? Does it involve guidance (suggesting a route) or control (actually steering the vehicle)? TechCrunch's reporting notes one revealing admission: Tesla acknowledged that its remote assistance workers are authorized to temporarily assume direct vehicle control as a final escalation maneuver. That's a materially different capability from route suggestions. Without comparable disclosures from other companies, the public cannot assess relative safety or autonomy levels.
Third, there's an incentives disagreement. Companies have obvious commercial reasons to minimize disclosure. Investors, regulators, and consumers might react differently to a vehicle that requires remote intervention once per thousand miles versus once per ten miles. But the incentive structure cuts both ways: companies also have reasons to demonstrate safety, and transparent data could differentiate leaders from laggards.
The question worth asking: which of these three disagreements is most fundamental? If the definitional question were settled – if there were a shared understanding of what autonomous means in practice – would the facts and incentives disputes resolve themselves?
The Regulatory Vacuum
Markey's investigation matters because it highlights a governance gap that extends well beyond the United States. The National Highway Traffic Safety Administration (NHTSA) currently has no standardized framework for evaluating or disclosing remote assistance practices. Markey is now calling on NHTSA to investigate and says he is working on legislation to impose strict guardrails on AV companies' use of remote operators.
This regulatory uncertainty creates asymmetric risks. Companies operating in good faith face the same scrutiny as those cutting corners. Consumers cannot make informed choices. And policymakers lack the data needed to craft evidence-based rules.
The European context adds another dimension. The EU's AI Act establishes risk-based categories for AI systems, with transportation applications potentially falling into high-risk classifications requiring conformity assessments and human oversight documentation. But the Act's implementation is still unfolding, and the specific question of remote assistance in autonomous vehicles remains underspecified.
For European policymakers watching the American debate, the lesson may be cautionary: define terms and disclosure requirements before commercial deployment scales, not after.
The Waymo Paradox
Waymo's rapid expansion – now operating commercially in ten U.S. cities with more coming – makes it the most visible target for transparency demands. The company provides 400,000 rides weekly and completed 15 million rides in 2025 alone, according to recent reporting.
But Waymo's scale creates a paradox. The company has more operational data than any competitor, which means it could provide the most comprehensive transparency – or has the most to lose from disclosure. Its recent $16 billion funding round from Alphabet demonstrates continued investor confidence, yet that confidence rests partly on assumptions about autonomy levels that remain unverified by independent parties.
The strongest argument for Waymo's position would be that premature disclosure of proprietary operational metrics could advantage competitors without improving safety outcomes. The strongest counterargument: if the technology is as capable as claimed, transparency should reinforce rather than undermine public trust.
Remote Assistance as a Feature, Not a Bug
Here's where the debate gets genuinely interesting. Some industry observers argue that remote assistance isn't a failure mode to be minimized – it's a design feature that improves safety. Human oversight, in this view, represents responsible deployment rather than technological inadequacy.
This reframe has merit. Aviation's safety record depends partly on human-machine collaboration, not pure automation. Medical AI systems increasingly emphasize human-in-the-loop designs. Why should autonomous vehicles be held to a different standard?
The problem is that this argument requires transparency to be credible. If companies want credit for responsible human oversight, they need to demonstrate how that oversight works, when it's invoked, and what outcomes it produces. Silence invites the inference that remote assistance is more frequent, more interventionist, or more essential than companies want to admit.
What Would Have to Be True
The path forward requires answering several questions that the current debate has avoided:
For companies: What disclosure framework would protect legitimate competitive interests while providing meaningful transparency? Is there a middle ground between full operational data and complete silence?
For regulators: What metrics actually matter for safety assessment? Frequency of remote assistance? Severity of situations requiring intervention? Outcomes with and without human input?
For the public: What level of human involvement is acceptable in autonomous systems? Is the goal full automation, or is supervised autonomy sufficient?
The Baidu incident in Wuhan – where robotaxis stalled throughout the city, trapping passengers for up to two hours due to a system failure – illustrates the stakes. When autonomous systems fail, the consequences are immediate and visible. When remote assistance prevents failures, the intervention remains invisible unless companies choose to disclose it.
The Silence That Speaks
Senator Markey characterized the companies' collective refusal as "a stunning lack of transparency." But silence is itself a form of communication. It signals that companies believe the costs of disclosure exceed the benefits – at least under current regulatory conditions.
That calculation could change. Markey's promised legislation, NHTSA investigations, or a high-profile incident involving undisclosed remote assistance could shift the incentive structure rapidly. Companies that establish transparency norms now may find themselves better positioned than those forced into disclosure by crisis.
The autonomous vehicle industry is asking the public to trust machines with human lives. That trust cannot be built on silence. It requires the kind of rigorous, verifiable transparency that allows informed consent – from passengers, from regulators, and from the communities where these vehicles operate.
The question is not whether transparency will come, but whether it will arrive through deliberate policy design or reactive crisis management. The companies' current silence suggests they're betting on the latter. That bet may not age well.
The governance questions raised by autonomous vehicle transparency – who decides what autonomous means, what disclosure obligations apply, how human oversight should be structured – will be central to European AI policy for years to come. These conversations deserve rooms where complexity can be engaged without tribal capture. One such room opens in Vienna on May 19 at Human x AI Europe, where the people shaping these frameworks will be working through exactly these tensions.
Frequently Asked Questions
Q: What did Senator Markey's investigation reveal about autonomous vehicle companies?
A: Markey sent letters to seven U.S. AV companies – Aurora, May Mobility, Motional, Nuro, Tesla, Waymo, and Zoox – asking how often their vehicles rely on remote human assistance. All seven refused to provide this information, which Markey called "a stunning lack of transparency."
Q: What is the difference between remote assistance and remote control in autonomous vehicles?
A: Remote assistance typically involves guidance functions like suggesting routes or providing situational awareness to the vehicle's AI system. Remote control means a human operator temporarily takes over direct steering and braking functions. Tesla acknowledged its remote workers can assume direct vehicle control as "a final escalation maneuver."
Q: What regulatory action is Senator Markey pursuing on AV transparency?
A: Markey is calling on the National Highway Traffic Safety Administration (NHTSA) to investigate companies' use of remote assistance workers. He also stated he is working on legislation to impose strict guardrails on AV companies' use of remote operators.
Q: How does the EU AI Act relate to autonomous vehicle transparency requirements?
A: The EU AI Act establishes risk-based categories for AI systems, with transportation applications potentially classified as "high-risk." Such systems require conformity assessments and human oversight documentation, though specific requirements for remote assistance disclosure in autonomous vehicles remain underspecified as implementation unfolds.
Q: How many rides does Waymo currently provide?
A: Waymo provides approximately 400,000 rides per week across its U.S. service areas. In 2025, the company completed 15 million rides total, more than tripling its annual volume from the previous year.
Q: Why might AV companies resist disclosing remote assistance data?
A: Companies may fear that disclosure could disadvantage them competitively, concern investors about autonomy levels, or invite regulatory scrutiny. However, this silence also prevents differentiation between safety leaders and laggards, and undermines public trust in claims about vehicle autonomy.