There is a particular kind of document that emerges when a community reaches a threshold. Not a press release, not a policy brief—something closer to a declaration. The European Visual Artists Manifesto on AI, drafted by European Visual Artists (EVA) and signed by collective management organisations across the continent, is one of these documents. It positions copyright not as a commercial technicality but as a fundamental human right—property as protected by the Universal Declaration of Human Rights and the EU Charter of Fundamental Rights.
This is not hyperbole. This is a reframing.
Stand in any European museum and you encounter the accumulated labor of centuries: brushstrokes, compositions, the particular way light falls across a canvas because someone decided it should fall that way. Now consider that millions of such decisions—the visual grammar of human creativity—have been scraped, processed, and absorbed into generative AI systems without consent, without compensation, without even notification. The manifesto names this extraction for what it is: a violation not just of economic interest but of something more fundamental about what it means to make things.
The Architecture of the Argument
The manifesto's core demands are deceptively simple: transparency about how works are used, prior consent before training, fair remuneration for that use, and an end to the expansion of copyright exceptions that enable mass extraction. But beneath these demands lies a more radical proposition—that the current legal framework, particularly the text and data mining exceptions in Articles 3 and 4 of the EU's Digital Single Market Directive, was never designed for this moment.
As EVA's statement on AI makes clear, the organisation considers it "inadmissible and contrary to the Charter of Fundamental Rights of the European Union" that European digital legislation would allow, directly or indirectly, the recognition of AI-generated works as works protected by copyright. The logic is precise: copyright is a human right. Machines are not humans. Therefore, extending copyright protection to machine outputs would be a category error with constitutional implications.
The practical problem is equally stark. Visual artists, as EVA notes, are "vulnerable, in a weak bargaining position and often self-financed." The opt-out mechanism provided by Article 4 of the DSM Directive—which theoretically allows creators to reserve their rights through machine-readable means—doesn't work for the visual sector because "no common standards are available and the EU does not provide any support or guidance on such standardization." Artists cannot opt out of a system that doesn't provide them the tools to do so.
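What "machine-readable" could mean in practice is itself unsettled. One candidate is the TDM Reservation Protocol (TDMRep), a W3C Community Group draft under which a site publishes its reservation in a /.well-known/tdmrep.json file. The sketch below shows how a compliant crawler might read such a file. The field names ("location", "tdm-reservation") follow the TDMRep draft as published; the first-matching-prefix logic is a simplifying assumption rather than a settled rule, which is precisely the standardisation gap the manifesto complains about.

```python
import json

def rights_reserved(tdmrep_json: str, path: str = "/") -> bool:
    """Given the contents of a site's /.well-known/tdmrep.json file
    (TDMRep, a W3C Community Group draft), report whether text-and-
    data-mining rights are reserved for the given path.

    Simplifying assumption: the first policy whose "location" prefix
    matches the path decides the outcome. Illustrative only."""
    policies = json.loads(tdmrep_json)
    for policy in policies:
        if path.startswith(policy.get("location", "/")):
            return policy.get("tdm-reservation") == 1
    # No matching entry: no machine-readable reservation expressed.
    return False

# Hypothetical site policy: images reserved, everything else not.
site_policy = json.dumps([
    {"location": "/images/", "tdm-reservation": 1},
    {"location": "/", "tdm-reservation": 0},
])
```

Note what the sketch cannot do: nothing obliges a crawler to fetch this file at all, and nothing tells an individual artist whose work sits on a platform they do not control how to publish one.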
The Parliamentary Response
The manifesto has found resonance in Brussels. In January 2026, the European Parliament's Legal Affairs Committee adopted proposals demanding full transparency and fair remuneration for rightsholders whose work is used by generative AI. The vote was decisive: 17 in favour, 3 against, 2 abstentions.
The committee's demands go further than the manifesto in some respects. MEPs want EU copyright law to apply to all generative AI systems available on the EU market, regardless of where the training takes place. They're calling for "an itemized list identifying each copyright-protected content used for training"—not the "sufficiently detailed summary" required by the AI Act, which the committee considers "completely inadequate." Failure to comply with transparency requirements, the committee suggests, could constitute copyright infringement with legal consequences.
Rapporteur Axel Voss put it plainly:
"Generative AI must not operate outside the rule of law. If copyrighted works are used to train AI systems, creators are entitled to transparency, legal certainty, and fair compensation."
The committee also rejected the idea of a global licence allowing providers to train their systems in exchange for a flat-rate payment—a model that would effectively commoditise creative work into a bulk resource. Instead, they're calling for sector-specific voluntary collective licensing agreements accessible to individual creators and small enterprises.
The Licensing Question
Here is where the manifesto's vision meets the machinery of implementation. Collective management organisations—the CMOs that have long administered rights for visual artists—are positioning themselves as the infrastructure for a new licensing ecosystem. DACS in the UK, Pictoright in the Netherlands, and their counterparts across Europe have signed the manifesto and are developing frameworks for AI licensing.
The Swedish music rights society STIM has already launched what it calls the world's first collective AI music licence, establishing a model where AI companies can train on copyrighted music lawfully with royalties flowing back to songwriters. The framework includes mandatory use of third-party attribution technology, making revenues auditable in real time. It's a proof of concept that collective licensing can work in the AI context—though whether it can scale to the visual arts, with their different market structures and rights configurations, remains to be seen.
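The arithmetic behind such a licence is simple in principle, even if the attribution technology is not. As a purely illustrative sketch (not STIM's actual mechanism, whose terms are not public at this level of detail), a flat licence fee could be split pro rata across rightsholders according to attribution weights reported by detection tooling:

```python
def distribute_royalties(fee_cents: int, attribution: dict[str, float]) -> dict[str, int]:
    """Split a licence fee pro rata across rightsholders, using
    attribution weights (e.g. fractions of AI output traced back to
    each catalogue by detection tooling). Hypothetical model, not a
    description of any real CMO's distribution rules."""
    total = sum(attribution.values())
    payouts = {name: int(fee_cents * weight / total)
               for name, weight in attribution.items()}
    # Assign the integer-rounding remainder to the largest weight
    # (one simple convention; real schemes vary).
    remainder = fee_cents - sum(payouts.values())
    payouts[max(attribution, key=attribution.get)] += remainder
    return payouts
```

The hard part is not this division but the inputs to it: the attribution weights must come from auditable detection of what the model actually used, which is exactly where the transparency demands of the manifesto and the Parliament committee bite.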
The Dutch collecting society Buma/Stemra has declared a general opt-out for its entire repertoire, meaning no AI model may be trained on affiliated music without a specific licence. This is the manifesto's logic made operational: consent first, then negotiation.
What's Being Naturalised
Pay attention to what's being naturalised. The tech industry's preferred framing treats training data as a kind of raw material—abundant, extractable, and properly governed by exceptions rather than permissions. The manifesto inverts this: creative works are not inputs to be processed but expressions of human agency that carry rights regardless of their digital form.
This matters beyond the immediate economic stakes. As thirteen leading European and international organisations representing writers, performers, musicians and other creative professionals wrote in a joint letter to the European Parliament: "Generative AI models would not exist without the works created by our members. Yet they now directly compete with these works, threatening to replace human creativity and labour with devastating economic consequences."
The manifesto also raises a question that the policy discourse often elides: what happens to works already ingested? As EVA notes, "AI systems most likely cannot 'unlearn' protected works that they have been trained with." The opt-out option "may not erase the damage already done to artists." This is why the European Parliament is asking the Commission to examine whether remuneration could apply to past use—a form of retrospective settlement that acknowledges the extraction has already occurred.
The Broader Stakes
The European Visual Artists Manifesto is not just about money, though money matters. It's about what kind of cultural ecosystem we want to build. If generative AI is trained primarily on work created without consent, the outputs will reflect that origin—not just legally but aesthetically. The manifesto's insistence on human-centred AI is also an insistence on human-centred aesthetics: art that emerges from intention, labour, and the particular choices of particular people.
A coalition of organisations representing hundreds of thousands of writers, translators, journalists, performers, composers, and visual artists has written to the European Commission warning that the current legal framework "is too often misinterpreted, insufficiently applied, poorly enforced or simply ignored by generative AI models." They're calling for a clearer and more efficient legal framework that preserves "the rights of creators and the integrity of their works."
The manifesto, then, is both a diagnostic and a demand. It names what has happened—mass extraction without consent—and articulates what should happen instead: transparency, authorisation, remuneration. It positions European visual artists not as obstacles to innovation but as stakeholders in a cultural economy that AI companies have been treating as a commons.
The European Parliament's plenary vote on the copyright and AI report is scheduled for March 2026. The outcome will shape not just the economics of creative work but the cultural logic of what we consider valuable, what we consider human, and what we consider worth protecting.
The artifact remembers what the discourse forgets. The manifesto is an artifact. It remembers that someone made these images, that making them took time and skill and intention, and that using them without permission is not innovation—it's extraction. Whether European policymakers will remember the same thing remains to be seen.