The Smash-and-Grab Sublime: What Seedance 2.0 Reveals About the New Cultural Economy
There's a video circulating on X. Tom Cruise fighting Brad Pitt. Neither actor was involved. Neither consented. The whole thing was conjured from a two-line prompt in Seedance 2.0, ByteDance's new AI video generator. "I hate to say it," wrote Deadpool screenwriter Rhett Reese in response. "It's likely over for us."
Pay attention to that phrase: likely over. Not definitely. Not certainly. That qualifier tells us something. We're in a moment of radical uncertainty about what creative labor means, what intellectual property protects, and whether the frameworks we've built over centuries can survive contact with systems that treat the entire visual history of cinema as raw material.
This isn't just a story about copyright infringement. It's a story about what happens when the speed of technological deployment outpaces the speed of cultural negotiation.
The Artifact
According to TechCrunch, ByteDance launched Seedance 2.0 earlier this week. The model is currently available to Chinese users through ByteDance's Jianying app, with global availability planned through CapCut. Like OpenAI's Sora, it generates videos, currently capped at 15 seconds, from text prompts.
Within a single day, the Motion Picture Association issued a statement from CEO Charles Rivkin demanding ByteDance "immediately cease its infringing activity." The language was unambiguous: "In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale."
Disney sent a cease-and-desist letter accusing ByteDance of a "virtual smash-and-grab of Disney's IP." Paramount followed suit, claiming that Seedance outputs featuring its characters are "often indistinguishable, both visually and audibly" from its original films and TV shows.
The Human Artistry Campaign—backed by Hollywood unions and trade groups—condemned Seedance 2.0 as "an attack on every creator around the world." SAG-AFTRA issued its own statement standing "with the studios in condemning the blatant infringement."
Spider-Man. Darth Vader. Baby Yoda. All apparently generated without authorization, without licensing, without the elaborate negotiations that have historically governed how cultural icons move through the world.
The Contrast
Here's where the story becomes more than a simple tale of infringement. Just two months ago, Disney and OpenAI announced a landmark three-year licensing agreement. Under this deal, OpenAI's Sora can generate short, user-prompted videos featuring more than 200 Disney, Marvel, Pixar, and Star Wars characters. Disney invested $1 billion in OpenAI and became a major customer of the company's APIs.
"Technological innovation has continually shaped the evolution of entertainment," said Disney CEO Robert Iger. "Through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works."
The contrast is almost too neat. On one side: a negotiated framework, a billion-dollar investment, shared governance structures, joint steering committees to monitor content. On the other: a model launched without apparent guardrails, generating Disney characters within hours of deployment.
But the neatness is the point. What we're witnessing isn't just a conflict between a "good" AI company and a "bad" one. We're watching the emergence of two fundamentally different models for how generative AI might relate to existing creative ecosystems.
The Fault Lines
The legal landscape is genuinely unsettled. As IP Watchdog reports, dozens of lawsuits are currently testing whether training AI models on copyrighted content constitutes infringement or falls under fair use protections. The outcomes have been mixed.
In June 2025, two judges in the Northern District of California found that using copyrighted works to train generative AI models that don't substantially reproduce the training content was "transformative fair use as a matter of law." Judge William Alsup called the technology "spectacularly transformative."
But the Thomson Reuters v. Ross Intelligence case went the other way, with the court rejecting the fair use defense because Ross sought to create a "direct market substitute" for Westlaw's offerings.
The U.S. Copyright Office's recent Generative AI Training report rejected a blanket application of fair use for AI training, emphasizing potential economic harm through "lost sales, missed licensing opportunities, and dilution of market value."
In Europe, the situation is equally complex. The EU AI Act, the world's first comprehensive AI regulation, was supposed to protect creators. Evidently it hasn't done enough: just last month, the European Parliament's Legal Affairs Committee adopted proposals demanding "full transparency and fair remuneration of rightsholders for the use of copyrighted work by generative artificial intelligence."
And yet, a coalition of 40 creative industry organizations representing nearly 17 million European professionals has called the AI Act's implementation a "betrayal"—arguing that the final measures "fail to address the core concerns" of creators and favor AI providers over rights holders.
The Deeper Question
What does this feel like? That's not a soft question.
Watch a Seedance-generated video of Spider-Man doing something Spider-Man has never done in any Marvel film. Notice the uncanny precision. Notice how the character moves with the weight and rhythm you recognize from years of watching these films. Notice that no animator drew these frames, no director blocked these shots, no actor performed these movements.
Something is being bypassed. Not just legal frameworks—though those too—but the entire apparatus of creative labor that has historically mediated between imagination and artifact. The writers' rooms, the storyboard sessions, the motion capture stages, the rendering farms, the color grading suites. All of it compressed into a two-line prompt.
The question isn't whether this is legal. Courts will decide that. The question is what kind of cultural economy we're building, and who gets to participate in it.
Two Models, One Future
The Disney-OpenAI deal and the Seedance controversy represent two possible futures for creative AI.
In one model, major rights holders negotiate directly with AI companies, creating licensed ecosystems where intellectual property flows through contractual channels. Disney gets equity, governance rights, and control over how its characters appear. OpenAI gets legitimacy and access to some of the most valuable IP in entertainment history. Fans get to play with characters they love within defined parameters.
As the Kluwer Copyright Blog notes, this model represents "private ordering"—contracts and platform governance stepping in where statutory law has lagged. It's efficient. It's pragmatic. It may be inevitable.
But it's not without critics. The Writers Guild of America East has voiced concerns about how such deals "may sideline human creators and devalue creative labor." When Disney licenses characters to OpenAI, who benefits? The corporation that owns the IP, certainly. The AI company that gains access to it. But what about the animators who developed those characters' visual language? The writers who gave them voice? The performers whose movements were studied and synthesized?
The other model—the one Seedance 2.0 represents—is simpler and more brutal. Train on everything. Generate anything. Let the lawyers sort it out later. This approach treats the entire history of visual culture as a commons to be mined, regardless of who created it or under what terms.
Neither model is stable. The first concentrates power among those who already hold it—major studios, major tech companies. The second threatens to dissolve the economic foundations that have historically supported creative work.
What's Being Naturalized
Here's what I keep returning to: the speed.
Seedance 2.0 launched this week. Within days, it was generating recognizable versions of some of the most protected intellectual property on earth. The cease-and-desist letters are already flying. But the model is already deployed. The videos are already circulating. The precedent—whatever it turns out to be—is already being set.
This is the new tempo of cultural change. Not the slow accretion of norms through case law and legislative deliberation, but the rapid deployment of capabilities that outpace our ability to negotiate their meaning.
What's being naturalized isn't just a technology. It's a relationship between creation and reproduction, between authorship and synthesis, between the labor that produces culture and the systems that consume it.
The artifact remembers what the discourse forgets. Years from now, we may look back at this week—at the Tom Cruise/Brad Pitt video, at the cease-and-desist letters, at the scramble to define what's permissible—and recognize it as the moment when something shifted. Not the technology itself, but our collective understanding of what images are, where they come from, and who has the right to make them.
The Question That Remains
Rhett Reese wrote that it's "likely over" for screenwriters. But likely isn't certain. That qualifier leaves room for negotiation, for resistance, for the slow work of building frameworks that might distribute the benefits of these technologies more equitably.
The European Parliament is pushing for transparency requirements and fair remuneration. Courts are working through the fair use doctrine case by case. Creative unions are organizing. Some AI companies are signing licensing deals; others are deploying first and asking forgiveness later.
The outcome isn't predetermined. But it will be shaped by the choices made now—by policymakers, by courts, by companies, by creators, and by the publics who consume what all of them produce.
What kind of cultural economy do we want? That's the question Seedance 2.0 forces us to ask. Not because it answers it, but because it makes the asking unavoidable.
The biggest shift is always what becomes normal. Pay attention to what's being naturalized.