Debate Mar 26, 2026 · 13 min read

Spotify's Artist Profile Protection: A Case Study in Platform Governance Under Pressure

When a streaming platform announces that protecting artist identity has become a top priority for the year, the statement deserves parsing. What changed? What pressure made this the priority? And what does the chosen solution reveal about the underlying trade-offs?

Spotify's new Artist Profile Protection feature, now in limited beta, allows artists to review and approve releases before they appear on their profiles. The timing is not coincidental. It arrives one week after Sony Music disclosed that it has requested the removal of more than 135,000 AI-generated deepfake songs impersonating its artists from streaming services – a figure the company believes represents only a fraction of the total uploaded.

The question worth asking: is this a technical fix for a technical problem, or a governance response to a structural shift in how music gets made and distributed?

The Problem, Disaggregated

The phrase "AI slop" has become shorthand for a cluster of distinct problems that deserve separate treatment. When Spotify says music has been "landing on the wrong artist pages," the company is describing at least three different failure modes:

Metadata errors: Legitimate releases misattributed due to technical mistakes in the distribution pipeline. This has always existed; AI didn't create it.

Name confusion: Artists sharing common names inadvertently having their catalogs mixed. Also pre-AI, though the volume of new uploads exacerbates it.

Malicious impersonation: Bad actors deliberately attaching AI-generated content to established artists' profiles to siphon streams and royalties. This is the new problem – and it's scaling.

Music Business Worldwide reports that last year, AI-generated tracks mimicking a Tyler, the Creator album flooded Spotify ahead of its official release, with a fake version briefly holding the number-two spot. Father John Misty and Jeff Tweedy were among artists targeted by AI-generated fakes uploaded to their profiles without consent. After King Gizzard & the Lizard Wizard removed their catalog from Spotify, an AI-generated copycat called King Lizard Wizard appeared with songs using identical titles and lyrics.

The distinction matters because each problem has different solutions. Metadata errors need better tooling. Name confusion needs disambiguation systems. Malicious impersonation needs enforcement mechanisms and, potentially, structural changes to how open distribution works.

The Trade-Off at the Heart of Open Distribution

Spotify's announcement contains a revealing tension. The company acknowledges that open-access distribution channels have lowered the barrier for independent artists to share music with the world, promote collaborations easily, and transfer music between distributors seamlessly. Then it adds: "But that openness comes with gaps that bad actors can exploit."

This is the core trade-off. The same infrastructure that democratized music distribution – allowing anyone to upload without gatekeepers – also democratized fraud. The question is not whether to have trade-offs but which ones to accept.

The Artist Profile Protection feature represents one answer: shift verification responsibility to artists themselves. Artists who opt in receive notifications when music is delivered to their profile and can approve or decline it. Only approved releases appear on their profile, contribute to their stats, and show up in recommendations.

The strongest version of this approach: it gives artists agency over their own identity without requiring Spotify to make judgment calls about what is or isn't legitimate. It's a consent-based model that respects artist autonomy.

The strongest critique: it creates an asymmetric burden. Established artists with teams can manage incoming requests. Independent artists juggling day jobs may miss notifications. Spotify acknowledges this, noting that if an artist doesn't take action on an incoming release, it will be blocked by default – meaning legitimate collaborations could be delayed if an artist forgets to respond.
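The approval flow described above — notify the artist, then approve, decline, or block by default if no action is taken — can be sketched as a small state check. This is an illustrative sketch of the described behavior, not Spotify's implementation; all names and types here are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    APPROVED = auto()     # artist explicitly accepted the release
    DECLINED = auto()     # artist explicitly rejected it
    NO_RESPONSE = auto()  # artist never acted on the notification

@dataclass
class IncomingRelease:
    title: str
    artist_decision: Decision = Decision.NO_RESPONSE

def appears_on_profile(release: IncomingRelease) -> bool:
    """Only explicitly approved releases surface on the profile;
    silence defaults to blocked, per the behavior the article describes."""
    return release.artist_decision is Decision.APPROVED

# A forgotten notification blocks a legitimate collaboration:
collab = IncomingRelease("Joint Single")
assert not appears_on_profile(collab)

approved = IncomingRelease("New EP", Decision.APPROVED)
assert appears_on_profile(approved)
```

The default-to-block choice is the source of the asymmetric burden: the sketch makes visible that inaction and rejection produce the same outcome, which protects against fraud but penalizes artists who miss notifications.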

The Scale of the Problem

The numbers emerging from the industry suggest this is not a marginal issue. Sony Music has identified 135,000 deepfake tracks targeting its roster, with 60,000 flagged since March 2025 alone. Dennis Kooker, President of Sony's Global Digital Business, told the BBC that these deepfakes "cause direct commercial harm to legitimate recording artists and potentially damage a release campaign or tarnish the reputation of an artist."

Spotify itself reported removing over 75 million spammy tracks in the twelve months prior to September 2025. French streaming service Deezer has disclosed that it receives approximately 60,000 fully AI-generated tracks per day – around 39% of all daily deliveries. The IFPI (International Federation of the Phonographic Industry), the global trade body representing the recording industry, estimates that up to 10% of content across all streaming platforms could be fraudulent.

These figures reveal something important: the problem is not a few bad actors but a systemic incentive structure. When royalty pools grow – Spotify's total music payouts increased from $1 billion in 2014 to $10 billion in 2024 – the rewards for gaming the system grow proportionally.

Competing Approaches to the Same Problem

Spotify's Artist Profile Protection is one response among several emerging across the industry. The approaches reveal different assumptions about where responsibility should sit:

Artist-side verification (Spotify): Artists approve releases before they appear. Pros: respects autonomy, avoids platform judgment calls. Cons: creates burden on artists, may miss sophisticated attacks.

Platform-side detection (Deezer): Automated systems identify and flag AI-generated content. Deezer claims 34% of songs submitted to its service are now categorized as AI-generated. Pros: scales without artist effort. Cons: imperfect accuracy, potential for false positives.

Distributor-side disclosure (DDEX standard): Industry-wide metadata standards require disclosure of AI involvement at the point of distribution. Spotify is supporting this standard alongside partners including Believe, CD Baby, DistroKid, and others. Pros: creates transparency across platforms. Cons: relies on honest disclosure, which bad actors won't provide.

Self-reporting tags (Apple Music): Apple's Transparency Tags require labels and distributors to disclose AI use but leave enforcement to them. Pros: low platform overhead. Cons: voluntary compliance is unlikely to catch fraud.

Outright bans (Bandcamp): Some platforms prohibit AI-generated content entirely. Pros: clear policy. Cons: difficult to enforce, may penalize legitimate AI-assisted creativity.
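To make the distributor-side disclosure approach concrete, here is a hypothetical sketch of what an AI-involvement disclosure attached to a release's delivery metadata might look like, along with a simple platform-side check. The field names are illustrative only and do not reflect the actual DDEX schema.

```python
from typing import TypedDict, List

class AIDisclosure(TypedDict):
    # Field names are hypothetical, not the real DDEX schema.
    component: str        # e.g. "vocals", "instrumentation", "mastering"
    ai_involvement: str   # "none", "assisted", or "fully_generated"

def requires_ai_flag(disclosures: List[AIDisclosure]) -> bool:
    """Surface an 'AI involved' label if any credited component
    reports more than zero AI involvement."""
    return any(d["ai_involvement"] != "none" for d in disclosures)

release_credits: List[AIDisclosure] = [
    {"component": "vocals", "ai_involvement": "none"},
    {"component": "mastering", "ai_involvement": "assisted"},
]
assert requires_ai_flag(release_credits)
```

The sketch also illustrates the approach's weakness noted above: the check can only act on what the distributor chooses to declare, so a bad actor simply reports "none" everywhere.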

The question is not which approach is right but which combination of approaches creates the best balance of protection, scalability, and creative freedom.

The Deeper Governance Question

Spotify's feature addresses a symptom – fraudulent uploads reaching artist profiles – but the underlying condition is a mismatch between the speed of content generation and the speed of content verification. When anyone can generate thousands of tracks in hours, systems designed for human-paced creation break down.

This is not unique to music. Similar dynamics appear in text (AI-generated spam), images (deepfakes), and code (automated vulnerability exploitation). The pattern: generative AI lowers the cost of production faster than verification systems can adapt.

The music industry's response offers a preview of governance challenges across creative industries. The options are familiar: platform-side detection, creator-side verification, industry-wide standards, or some combination. Each involves trade-offs between scalability, accuracy, and burden distribution.

IFPI CEO Victoria Oakley's comment to the BBC captures the policy challenge: "I think we've seen a lot of governments really grappling with this issue because they are trying to square a circle: they are trying to protect creativity and at the same time encourage innovation."

The honest answer is that the circle cannot be squared – only managed. Protection and innovation exist in tension. The question is where to draw the line and who gets to draw it.

What to Watch

Several developments will shape how this plays out:

Adoption rates: How many artists actually enable Artist Profile Protection? If uptake is low, the feature becomes a liability shield for Spotify rather than a practical solution.

False positive rates: How often do legitimate collaborations get blocked? The feature's default-to-block design could create friction for artists who work with multiple collaborators.

Detection technology: Can platforms like Deezer improve AI detection accuracy enough to make artist-side verification unnecessary for most cases?

Regulatory response: Will governments mandate disclosure, detection, or both? The UK recently reconsidered proposals that would have allowed AI firms to train on copyrighted works without permission – a sign that regulatory intervention remains on the table.

Industry coordination: Will the DDEX disclosure standard achieve broad adoption, or will platforms continue with fragmented approaches?

The answers will determine whether Spotify's feature becomes a model for platform governance or a stopgap measure overtaken by events.

The tension between open distribution and identity protection is one of several governance questions that will define how AI reshapes creative industries. These are exactly the kinds of trade-offs that benefit from structured debate rather than tribal positioning. Human x AI Europe, taking place May 19 in Vienna, is convening policymakers, technologists, and industry leaders to work through these questions together – in the room where Europe decides what kind of future it wants to build.

Frequently Asked Questions

Q: What is Spotify's Artist Profile Protection feature?

A: Artist Profile Protection is a beta feature that allows artists to review and approve or decline releases before they appear on their Spotify profile. Only approved releases contribute to an artist's catalog, stats, and recommendations. Artists receive email notifications when music is delivered with their name attached.

Q: How many AI-generated deepfake songs has Sony Music identified?

A: Sony Music has requested removal of more than 135,000 AI-generated deepfake songs impersonating its artists, with approximately 60,000 flagged since March 2025 alone. The company believes this represents only a fraction of total fraudulent uploads across streaming platforms.

Q: What happens if an artist doesn't respond to a release approval request on Spotify?

A: If an artist doesn't take action on an incoming release, it will be blocked by default and won't appear on their Spotify profile. However, the release may still go live on other streaming services, so artists may need to notify their label or distributor separately.

Q: What is the DDEX AI disclosure standard?

A: DDEX (Digital Data Exchange) is developing an industry standard for AI disclosures in music credits. The standard allows artists and rights holders to indicate where and how AI played a role in track creation – whether AI-generated vocals, instrumentation, or post-production. Spotify and multiple distributors are supporting this standard.

Q: How much AI-generated content is being uploaded to streaming platforms daily?

A: French streaming service Deezer reports receiving approximately 60,000 fully AI-generated tracks per day, representing around 39% of all daily deliveries. The IFPI estimates that up to 10% of content across all streaming platforms could be fraudulent.

Q: Does Spotify's Artist Profile Protection feature detect AI-generated content?

A: No. The feature does not automatically detect AI-generated content. It gives artists the ability to manually review and approve or decline releases delivered to their profile, regardless of how the music was created. Detection of AI content remains a separate challenge addressed through other mechanisms like spam filters and industry disclosure standards.
