The Commission's Transparency Consultation: What Article 50 Means for AI Providers and Deployers
In Brief:
- The European Commission opened a targeted consultation on 8 May 2026 for draft guidelines on AI transparency obligations under Article 50 of the AI Act
- Stakeholders have until 3 June 2026 to submit feedback; the rules become enforceable on 2 August 2026
- Providers must implement machine-readable marking for AI-generated content; deployers must disclose deepfakes and label AI-generated text on matters of public interest
- Non-compliance carries fines up to €15 million or 3% of global turnover
- A voluntary Code of Practice, expected in June 2026, will serve as the de facto compliance benchmark
The regulatory machinery shaping Europe's AI future will be on full display at Human x AI Europe in Vienna on 19 May, where policymakers, technologists, and industry leaders will debate exactly these implementation questions.
The Consultation Opens
On 8 May 2026, the European Commission published draft guidelines on transparency obligations for AI systems, opening a four-week window for stakeholder feedback. The timing is deliberate: Article 50 of the AI Act (Regulation EU 2024/1689) becomes enforceable on 2 August 2026, leaving providers and deployers roughly 60 days from the consultation's close to achieve compliance.
The guidelines target a specific problem. From August, people in the European Union must be informed when they interact with AI systems or encounter AI-generated content. The Commission's draft aims to clarify what that obligation means in practice, addressing scope, technical requirements, and the division of responsibilities across the AI value chain.
What Article 50 Actually Requires
Article 50 establishes transparency obligations across four distinct scenarios, each with different addressees and requirements.
For providers of AI systems:
- Systems designed for direct interaction with natural persons must inform users they are engaging with AI, unless this is obvious from context
- Generative AI systems producing synthetic audio, image, video, or text must mark outputs in a machine-readable format, detectable as artificially generated or manipulated
For deployers of AI systems:
- Emotion recognition and biometric categorisation systems require disclosure to exposed individuals
- Deepfakes (AI-generated content resembling real persons, places, or events that would falsely appear authentic) must be disclosed
- AI-generated text published on matters of public interest must be labelled, unless it has undergone human editorial review with a named person holding editorial responsibility
The distinction between providers and deployers matters. As Bird & Bird's analysis notes, companies integrating third-party models via API often assume they have no significant obligations. They are wrong. System providers remain fully liable under Article 50(2) to ensure content marking, regardless of whether they trained the underlying model.
The Technical Challenge: No Silver Bullet
The draft Code of Practice on Transparency of AI-Generated Content, published in December 2025 and now in its second iteration, makes one point explicit: no single marking technique suffices.
The Code mandates a multilayered approach combining:
Metadata embedding: Provenance information (using standards such as C2PA) added to file metadata; see the sketch after this list. This layer is fragile: metadata is easily stripped when content is screenshotted or re-uploaded.
Imperceptible watermarking: Marks interwoven with the content itself that survive compression, cropping, and format conversion. The technical bar here is high: watermarks must be robust enough to persist through common transformations while remaining imperceptible to users.
Fingerprinting or logging: A fallback mechanism for identifying content ex-post when active marking fails.
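To make the layering concrete, the following Python sketch implements two of the three layers for an image output: a simplified provenance record embedded in PNG metadata, and a SHA-256 fingerprint appended to a log for ex-post identification. This is a minimal illustration, not a C2PA implementation; the field names, file paths, and log format are assumptions, and robust imperceptible watermarking (the second layer) is deliberately omitted because it is media- and model-specific.

```python
"""Minimal sketch of two of the Code's marking layers (illustrative only):
  1. metadata embedding -- a simplified provenance record, not a C2PA manifest
  2. fingerprint logging -- a SHA-256 digest recorded for ex-post identification
Requires Pillow (pip install Pillow). All keys and paths are assumptions."""
import hashlib
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def embed_provenance(src_path: str, dst_path: str, generator: str) -> dict:
    """Layer 1: attach a machine-readable provenance record to PNG metadata."""
    record = {
        "claim": "ai_generated",          # illustrative field names
        "generator": generator,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    img = Image.open(src_path)
    meta = PngInfo()
    # This text chunk is stripped by screenshots and most re-encodes,
    # which is exactly the fragility the Code warns about.
    meta.add_text("ai_provenance", json.dumps(record))
    img.save(dst_path, pnginfo=meta)
    return record


def log_fingerprint(path: str, log_path: str = "marking_log.jsonl") -> str:
    """Layer 3: record a content hash so output can be identified ex post."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    with open(log_path, "a") as log:
        log.write(json.dumps({"sha256": digest, "file": path}) + "\n")
    return digest


if __name__ == "__main__":
    embed_provenance("generated.png", "generated_marked.png", "example-model-v1")
    print(log_fingerprint("generated_marked.png"))
```

Even this toy example shows why no single layer suffices: re-encoding the PNG as a JPEG or taking a screenshot discards the embedded record, leaving only the logged fingerprint as a fallback.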
For text, the challenge is particularly acute. Watermarking text without degrading quality or creating detectable patterns remains technically difficult. The Code permits an alternative: provenance certificates, digitally signed manifests that formally guarantee content origin without embedding marks in the text itself.
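A provenance certificate can be sketched with standard public-key signing. The example below, using the widely available `cryptography` library, signs a manifest that binds a hash of the text to a claim about its origin. The manifest schema and the choice of Ed25519 are illustrative assumptions; the Code does not prescribe a particular signature scheme.

```python
"""Sketch of a provenance certificate for AI-generated text: a digitally
signed manifest guaranteeing origin without embedding marks in the text.
Requires cryptography (pip install cryptography). Schema is an assumption."""
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def issue_certificate(text: str, generator: str, key: Ed25519PrivateKey) -> dict:
    """Sign a manifest binding the text's hash to its declared origin."""
    manifest = {
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
        "claim": "ai_generated",          # illustrative field names
        "generator": generator,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}


def verify_certificate(text: str, cert: dict, pub: Ed25519PublicKey) -> bool:
    """Check the signature, then check the text matches the signed hash."""
    payload = json.dumps(cert["manifest"], sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(cert["signature"]), payload)
    except InvalidSignature:
        return False
    return cert["manifest"]["content_sha256"] == hashlib.sha256(text.encode()).hexdigest()


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    article = "An AI-drafted article body..."
    cert = issue_certificate(article, "example-model-v1", key)
    print(verify_certificate(article, cert, key.public_key()))  # True
```

Verification recomputes the text hash and checks the signature, so the certificate travels alongside the text rather than being embedded within it.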
The Compliance Blind Spot
The consultation targets companies across the AI value chain, but the obligations fall unevenly. Jones Day's commentary identifies a structural gap: upstream model providers must implement marking techniques at the model level before placing products on the market. Downstream system providers cannot assume compliance flows automatically from their API provider.
This creates a dependency chain. If a model provider fails to implement marking by design, every downstream system provider inherits a compliance problem they cannot easily solve. The Code addresses this by requiring model providers to ensure their models include content marking before market placement, making this a contractual and technical prerequisite for downstream compliance.
Penalties and Enforcement
The enforcement architecture follows the AI Act's tiered structure. Non-compliance with Article 50 transparency requirements carries administrative fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher.
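The "whichever is higher" mechanic is worth spelling out, since it flips which cap binds depending on company size. A minimal sketch, with purely illustrative figures:

```python
def max_article_50_fine(worldwide_turnover_eur: float) -> float:
    """Upper bound of an Article 50 fine: EUR 15m or 3% of worldwide
    annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * worldwide_turnover_eur)

# A firm with EUR 1bn turnover faces a cap of EUR 30m (3% exceeds the
# fixed amount); a firm with EUR 100m turnover faces the EUR 15m floor.
print(max_article_50_fine(1_000_000_000))  # 30000000.0
print(max_article_50_fine(100_000_000))    # 15000000.0
```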
National competent authorities will conduct market surveillance, investigate complaints, and perform audits. The European AI Board coordinates enforcement across member states. For companies with global operations, the extraterritorial reach of the AI Act means these obligations apply whenever an AI system's output is used in the EU, regardless of where the provider or deployer is established.
The Voluntary Code as De Facto Standard
The Code of Practice is technically voluntary: no company is legally required to sign, and signatories commit to its measures as a way of demonstrating compliance. The practical reality is different.
Companies that sign and implement the Code's measures gain a presumption of conformity. Those choosing alternative compliance paths face a shifted burden of proof: they must demonstrate their solution is at least as effective as the Code's measures. In an enforcement scenario, market surveillance authorities will use the Code as the benchmark for state of the art.
As Slaughter and May observes, whether a company signs the Code or not, it will still need an internal compliance regime. Not signing does not avoid the work; it shifts the burden from "follow the blueprint" to "prove your alternative blueprint is equally good."
What the Consultation Asks
The Commission's draft guidelines seek feedback on several interpretive questions:
- What constitutes "obvious" AI interaction, exempting providers from disclosure?
- How should "matters of public interest" be defined for AI-generated text?
- What technical solutions meet the "effective, interoperable, robust, and reliable" standard?
- How should artistic, satirical, or fictional works be treated under the deepfake disclosure requirement?
The consultation runs until 3 June 2026. Only responses submitted through the official online questionnaire will be considered in the final summary report.
Implementation Timeline
The regulatory calendar is tight:
- 8 May 2026: Consultation opens
- 3 June 2026: Consultation closes
- June 2026: Final Code of Practice expected
- 2 August 2026: Article 50 obligations become enforceable
For providers and deployers, this leaves approximately 60 days between final guidance and live enforcement. Companies that have not begun compliance preparation face a compressed implementation window.
Implications
The transparency consultation represents the final interpretive layer before enforcement. Three dynamics warrant attention:
Supply chain liability: The Code's emphasis on marking by design creates upstream dependencies. Procurement decisions for AI models and APIs now carry compliance implications.
Technical feasibility caveats: The Code qualifies its technical requirements with the phrase "as far as technically feasible." This creates interpretive space, but also uncertainty about what regulators will accept as adequate.
Cross-border fragmentation: As Oxford Global Society's analysis notes, the EU's approach differs significantly from China's all-encompassing labelling requirements. Companies operating across jurisdictions face divergent compliance obligations with limited interoperability.
The consultation closes in less than a month. The enforcement date is fixed. The window for shaping the final guidance is narrowing.
Frequently Asked Questions
Q: When do the AI Act transparency obligations under Article 50 become enforceable?
A: Article 50 transparency obligations become enforceable on 2 August 2026. The consultation on draft guidelines runs until 3 June 2026, with final guidance expected in June 2026.
Q: What are the penalties for non-compliance with Article 50 transparency requirements?
A: Non-compliance carries administrative fines of up to €15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. For SMEs, the fine is capped at the lower of the two amounts.
Q: Who is responsible for labelling deepfakes under the AI Act?
A: Deployers of AI systems that generate or manipulate image, audio, or video content constituting deepfakes must disclose that the content has been artificially generated or manipulated. This applies to professional use; private individuals using AI for personal, non-professional purposes are not subject to these obligations.
Q: Does the Code of Practice on AI-generated content marking have legal force?
A: The Code is voluntary, but companies that sign and implement its measures gain a presumption of conformity with Article 50 obligations. Companies choosing alternative compliance paths must demonstrate their solutions are equally effective, effectively making the Code the de facto compliance benchmark.
Q: What technical marking methods does the Code of Practice require?
A: The Code mandates a multilayered approach combining metadata embedding (e.g., C2PA standards), imperceptible watermarking interwoven with content, and fingerprinting or logging as a fallback. No single technique is considered sufficient on its own.
Q: Are there exemptions from the transparency disclosure requirements?
A: Yes. Exemptions apply when AI interaction is obvious to a reasonably well-informed person and for law enforcement purposes with appropriate safeguards. For artistic, creative, satirical, or fictional works, disclosure need only be made in a manner that does not hamper the display or enjoyment of the work. AI-generated text that has undergone human editorial review with named editorial responsibility may also be exempt from disclosure requirements.