May 7, 2026 · 9 min read

The Transparency Ecosystem Takes Shape: What the EU Code of Practice Reveals About Seeing AI

When synthetic media becomes indistinguishable from the authentic, the question shifts from "what is real?" to "how do we build systems that help us know?"

In Brief: The Partnership on AI has released recommendations for the EU Code of Practice on AI-generated content transparency, advocating for multi-layered marking techniques, tiered detection access, and standardized disclosure icons. With Article 50 obligations taking effect in August 2026, the Code represents a critical attempt to operationalize transparency requirements for providers and deployers of generative AI systems. The recommendations emphasize that technical solutions alone are insufficient without user research, education, and consistent terminology across the ecosystem.

For those tracking how these transparency frameworks will actually function in practice, the conversation continues at Human x AI Europe in Vienna on May 19, where implementation questions move from policy documents to working sessions.

The Artifact as Diagnostic

Stand in front of any screen today and notice what has changed. Not the resolution, not the interface chrome, but something more fundamental: the relationship between seeing and believing has been quietly renegotiated. A video of a political candidate, an audio message from a family member, an image documenting a conflict zone. Each now carries an invisible question mark that previous generations of media never bore.

The Partnership on AI's recent recommendations to the EU Code of Practice arrive at precisely this moment of perceptual uncertainty. The document reads less like a policy brief and more like an attempt to build infrastructure for trust itself. What emerges is not a simple technical specification but something closer to a philosophy of disclosure, one that recognizes transparency as an ecosystem rather than a feature.

Three Layers, Not One

The most striking aspect of PAI's recommendations concerns the technical foundation. The organization advocates for what it calls a "multi-layered approach" to marking AI-generated content, drawing from its Synthetic Media Framework. This means not one marking technique but three: watermarking, fingerprinting, and cryptographic metadata working in concert.

The reasoning reveals something important about the current state of synthetic media detection. No single technique proves robust enough. Watermarks can be stripped. Metadata can be altered. Fingerprints can be evaded. But layered together, these approaches create redundancy that makes circumvention significantly more difficult.
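
To make the redundancy argument concrete, here is a deliberately simplified Python sketch of the three layers applied to one output. The watermark, fingerprint, and signature below are toy stand-ins, and every identifier is hypothetical: production systems use robust watermarks, perceptual hashes that survive re-encoding, and C2PA-style certificate signatures rather than these primitives.

```python
# Illustrative sketch only: toy stand-ins for the three marking layers
# (watermarking, fingerprinting, cryptographic metadata). Real systems use
# robust watermarks, perceptual hashes, and C2PA-style certificate
# signatures; everything here, including the model name, is hypothetical.
import hashlib
import hmac
import json

WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical provider pattern

def embed_watermark(pixels: bytearray) -> bytearray:
    """Toy least-significant-bit watermark over the first few pixels."""
    marked = bytearray(pixels)
    for i, bit in enumerate(WATERMARK_BITS):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked

def fingerprint(content: bytes) -> str:
    """Stand-in fingerprint: a plain content hash to register server-side.
    Production systems use perceptual hashes that survive re-encoding."""
    return hashlib.sha256(content).hexdigest()

def sign_metadata(content: bytes, key: bytes) -> dict:
    """Simplified cryptographic metadata: a provenance claim plus an HMAC.
    C2PA uses certificate-based signatures rather than a shared key."""
    claim = {
        "generator": "example-model-v1",  # hypothetical identifier
        "content_hash": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim

def mark_output(pixels: bytearray, key: bytes):
    """Apply all three layers so stripping one still leaves the others."""
    marked = embed_watermark(pixels)
    return marked, fingerprint(bytes(marked)), sign_metadata(bytes(marked), key)

# Example: mark a fake 4x4 grayscale "image".
image = bytearray(range(16))
marked, fp, meta = mark_output(image, key=b"provider-secret")
print(fp[:16], meta["signature"][:16])
```

The design point survives the simplification: an adversary must defeat all three channels at once. Scrubbing pixel-level marks leaves the registered fingerprint intact, and any edit to the provenance claim invalidates its signature.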

The EU's Code of Practice process, which began in November 2025 and concludes this month, has been structured around two working groups: one for providers (those creating generative AI systems) and one for deployers (those using these systems to generate or publish content). The distinction matters because obligations differ. Providers must ensure outputs are machine-readable and detectable as artificially generated. Deployers must disclose deepfakes and AI-generated text on matters of public interest.

The Detection Paradox

PAI's recommendations surface a tension that deserves attention. Detection tools, which identify synthetic content through unintentionally added cues, face a peculiar problem: making them publicly accessible helps bad actors evade them. The more widely available a detection method becomes, the faster adversarial techniques evolve to circumvent it.

The proposed solution involves tiered access rather than unrestricted public availability. Researchers, journalists, and platform integrity teams might receive different levels of access than the general public. This creates its own complications around equity and gatekeeping, but it acknowledges a reality that purely open approaches cannot address.
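
As a rough illustration of how tiered access might be implemented, the sketch below redacts detector output according to the requester's role. The role names, tiers, and response fields are assumptions made for illustration; neither PAI's recommendations nor the draft Code specifies such a scheme.

```python
# Minimal sketch of tiered detection access. Role names and response
# fields are hypothetical, not drawn from PAI's text or the draft Code.
from dataclasses import dataclass

ACCESS_TIERS = {
    "public": "verdict_only",        # binary synthetic / not-synthetic
    "journalist": "confidence",      # adds a calibrated score
    "platform_integrity": "full",    # adds per-technique evidence
}

@dataclass
class DetectionResult:
    is_synthetic: bool
    confidence: float
    evidence: dict  # e.g. which marking layers were detected

def redact_for_role(result: DetectionResult, role: str) -> dict:
    """Return only the fields the requester's tier may see, so public
    responses leak as little evasion-useful signal as possible."""
    tier = ACCESS_TIERS.get(role, "verdict_only")  # unknown roles get least
    response = {"is_synthetic": result.is_synthetic}
    if tier in ("confidence", "full"):
        response["confidence"] = round(result.confidence, 2)
    if tier == "full":
        response["evidence"] = result.evidence
    return response

# Example: the same result, three different views.
result = DetectionResult(True, 0.93, {"watermark": True, "fingerprint": True})
for role in ("public", "journalist", "platform_integrity"):
    print(role, "->", redact_for_role(result, role))
```

The equity concern lives in the `ACCESS_TIERS` table: whoever maintains that mapping decides who counts as a journalist or an integrity team.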

Legal analysis of the draft Code notes that providers must offer free-of-charge interfaces or publicly available tools for verification. The tension between openness and security remains unresolved, and perhaps unresolvable in any permanent sense. What works today may require revision as capabilities advance.

What Users Actually See

Technical marking means nothing if humans cannot interpret the signals. PAI's recommendations emphasize this gap between machine-readable transparency and human understanding. The organization calls for a standardized direct disclosure icon, a visual element that would serve as "a universal entry point to other transparency information."

The second draft of the Code, published in March 2026, proposes an "AI" visual icon for this purpose. But standardization alone does not guarantee comprehension. PAI recommends user research across demographic groups, particularly those most vulnerable to synthetic media harms: youth facing AI-generated image abuse, elderly populations targeted by deepfake fraud.
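
One way to picture the icon as an entry point is as the visible handle on a small machine-readable record. The schema below is purely hypothetical, assembled for illustration; the draft Code proposes the visual icon itself, not a schema like this one.

```python
# Hypothetical sketch: a machine-readable disclosure record that an "AI"
# icon could link to. All field names and URLs are illustrative; no such
# schema is specified by the draft Code or by PAI.
import json

def build_disclosure_record(content_id: str) -> str:
    """Assemble the transparency information behind the icon."""
    record = {
        "content_id": content_id,
        "label": "AI-generated",
        "marking_layers": ["watermark", "fingerprint", "signed_metadata"],
        "provider": "example-provider",                      # hypothetical
        "verification_url": f"https://example.org/verify/{content_id}",
        "human_readable": "This content was generated by an AI system.",
    }
    return json.dumps(record, indent=2)

print(build_disclosure_record("abc123"))
```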

The organization's own research has shown how transparency labels can backfire, creating false confidence or label blindness. A disclosure icon that users learn to ignore provides no protection. One that raises doubts about authentic content creates different harms.

The Model-Level Question

A notable shift occurred between drafts of the EU Code. The first version suggested that indirect disclosures should be incorporated at the model layer of the value chain. The second draft removed this requirement. PAI expresses concern about this change, noting that model-level transparency had "broad buy-in in 2023 from key players."

The arguments against model-level mandates carry weight. General-purpose infrastructure differs from application-specific systems. Open-source developers and smaller organizations face different resource constraints than major providers. But the removal of this requirement in what PAI describes as "a short engagement window" raises questions about whose interests shaped the revision.

The General-Purpose AI Code of Practice, published separately in July 2025, addresses some model-level concerns through its transparency chapter. But the relationship between these two codes, and the gaps between them, remains an area requiring attention.

Beyond the Code

PAI's recommendations acknowledge a fundamental limitation: the Code of Practice is voluntary. Major organizations like Meta have declined to participate in similar voluntary efforts, such as the separate General-Purpose AI Code of Practice. Voluntary frameworks depend on participation, and participation depends on incentives that may not align with business models built on engagement and scale.

The organization calls for parallel investments in formal standards efforts at NIST (National Institute of Standards and Technology) and C2PA (Coalition for Content Provenance and Authenticity), regulatory efforts across countries, and direct advocacy with industry teams. The Code represents one mechanism among many, not a comprehensive solution.

Article 50 obligations under the AI Act take effect on 2 August 2026. The timeline leaves little room for iteration. Providers and deployers must prepare for compliance with a framework still being finalized, using technical solutions still being evaluated, for a problem still evolving.

The Human Question

What becomes visible through these policy documents is not merely a technical challenge but a cultural one. The infrastructure being built will shape how future audiences relate to media, how trust forms and dissolves, how shared understanding becomes possible or impossible.

PAI's closing observation deserves attention: "The future of synthetic media is not just a technical challenge. It is a human one that real people must shape." The statement reads as obvious, perhaps even banal. But its implications are not. Every decision about marking techniques, disclosure icons, detection access, and model-level requirements encodes assumptions about human cognition, social trust, and democratic participation.

The transparency ecosystem taking shape through the EU Code of Practice will not solve the problem of synthetic media. No framework can. But it may create conditions under which the problem becomes manageable, where audiences have tools for assessment rather than only intuition, where the question "is this real?" has somewhere to begin.

The artifact remembers what the discourse forgets. These policy documents, these technical specifications, these working group deliberations will become the infrastructure through which future generations encounter AI-generated media. What gets built now will shape what becomes normal later.

Frequently Asked Questions

Q: When do the EU AI Act transparency obligations for AI-generated content take effect?

A: Article 50 transparency obligations become applicable on 2 August 2026. The Code of Practice is expected to be finalized in May-June 2026 to give providers and deployers time to prepare for compliance.

Q: What marking techniques does the Partnership on AI recommend for AI-generated content?

A: PAI recommends a three-layer approach combining watermarking, fingerprinting, and cryptographic metadata. The organization argues that no single technique is sufficiently robust, and multiple reinforcing mechanisms are necessary for effective transparency.

Q: What is the difference between providers and deployers under the EU Code of Practice?

A: Providers are organizations that place generative AI systems on the market and must ensure outputs are machine-readable and detectable as AI-generated. Deployers are entities using these systems to generate or publish content and must disclose deepfakes and AI-generated text on matters of public interest.

Q: Is the EU Code of Practice on AI-generated content mandatory?

A: The Code is voluntary. Organizations can sign up to demonstrate compliance with Article 50 obligations, but adherence does not guarantee compliance, and alternative routes to compliance may exist. Notable organizations like Meta are not participating.

Q: Why does PAI recommend tiered access to detection tools rather than full public access?

A: Detection tools that identify synthetic content through unintentional cues become less effective when widely accessible, as bad actors can use them to evade detection. Tiered access balances transparency with security by providing different access levels to researchers, journalists, and the general public.

Q: What happened to model-level transparency requirements between drafts of the Code?

A: The first draft suggested indirect disclosures should be incorporated at the model layer. This requirement was removed in subsequent drafts, which PAI notes occurred despite "broad buy-in in 2023 from key players." Arguments against model-level mandates cite burdens on open-source and smaller developers.
