China's Seedance ignites dispute over how far AI can go in fiction

By Lee Jung-woo Posted : February 25, 2026, 17:25 Updated : February 25, 2026, 17:25
On Feb. 11, 2026, film director Ruairí Robinson uploaded to his X account a 15-second video, created using Seedance 2.0, showing Brad Pitt fighting Tom Cruise on the rooftop of an abandoned building. Screenshot of the video uploaded by Ruairí Robinson
SEOUL, February 25 (AJP) - The release of Seedance 2.0, an AI-powered video generator developed by ByteDance, has triggered a fresh wave of copyright disputes between technology firms and the global entertainment industry, exposing how blurred the legal boundaries of the fictional realm remain.

After users began circulating AI-generated clips featuring characters resembling Spider-Man and Deadpool, major studios including Disney and Paramount Pictures issued cease-and-desist letters, accusing the platform of enabling large-scale copyright infringement.

To assess the legal and regulatory implications, AJP spoke with leading legal scholars in Asia, Europe, Australia, and the United States. 

Their views suggest that the controversy sits at the intersection of unsettled law, rapid technological change, and growing pressure for industry self-regulation.

Training Data vs. Copyrighted Expression

Park Kyung Sin, a professor of law at Korea University, draws a crucial distinction between learning from copyrighted material and reproducing it.

“Whether AI can claim authorship is a separate question from whether it should be allowed to train on copyrighted works,” Park said. “Copyright protects expressions, not ideas.”

In his view, most AI training resembles human learning.

“AI does not retain books, films, or images as such. It retains statistical relationships between tokens. Just as a human can read a book and absorb its ideas without infringing copyright, a machine should be able to do the same.”

However, Park cautions that the technical process itself may pose legal problems.

“The tokenization process involves copying copyrighted works. Technically, that can constitute infringement. Courts are now debating whether this copying qualifies as fair use.”

U.S. courts have reached mixed conclusions, while some European rulings have taken a more restrictive stance.

More troubling, Park argues, is Seedance’s apparent ability to reproduce recognizable characters.

“If Seedance retains and reproduces complete images of Spider-Man on demand, that is clearly infringement,” he said. “ByteDance can and should install filters to block outputs that replicate training material.”

He added that while parody and satire can sometimes justify reuse, mass automated reproduction does not.

Who Owns AI-Created Works?

Ilanah Fhima, professor of intellectual property law at University College London, points to deep international uncertainty over authorship.

“There is no global consensus on whether copyright should exist in AI-generated works,” she said. “Most copyright laws were written before this technology existed.”
 
This picture taken on February 5, 2026 shows advertising promoting ByteDance's cloud and AI service platform 'Volcano Engine' and chatbot 'Doubao' at the Beijing Capital International airport in Beijing. AFP-Yonhap
Copyright traditionally rests on human originality. That assumption is now under strain.

“The U.S. has refused to recognize AI as an author,” Fhima noted. “The UK, however, has long recognized ‘computer-generated works,’ although this has not yet been tested in modern AI cases.”

China has issued rulings suggesting limited protection for AI-generated content, while other jurisdictions remain undecided.

Another unresolved question concerns users.

“There is an ongoing debate about whether the person who designs the prompt should receive copyright, based on their creative input,” Fhima said.

These differences mean that cases against platforms like ByteDance could set precedents far beyond any single country.

Fair Use and Platform Responsibility

David Super, a professor at Georgetown University, sees multiple legal vulnerabilities in AI video tools.

“First, training often involves physical copying,” he said. “That likely counts as copying under copyright law.”

Whether such copying qualifies as fair use remains unclear.

“Fair use usually requires that the copying not undermine the market for the original work,” Super explained. “If AI-generated videos substitute for movies or licensed clips, that argument weakens.”

Beyond training data, Super highlighted the issue of secondary liability.

“Selling technology that enables illegal copying can itself create liability,” he said, citing landmark U.S. cases involving video recorders and file-sharing software.

While earlier rulings protected technologies with legitimate uses, later decisions imposed liability when companies encouraged infringement.

“Many AI tools advertise themselves as ‘uncensored,’” Super said. “Courts may see that as encouraging copyright violations.”

If so, manufacturers could be held responsible for how users deploy their products.

Cultural Change and Platform Regulation

Akshaya Kamalnath, professor at the Australian National University College of Law, argues that the debate extends beyond courtroom doctrine.

“This is also a cultural shift,” she said. “AI clips often function more like memes than traditional films.”

That distinction, she suggests, may push regulators toward platform-based solutions rather than relying solely on copyright lawsuits.

Actors and performers, meanwhile, face separate risks.

“Individuals can object to the unauthorized use of their likeness,” Kamalnath noted.

“This overlaps with deepfake regulation.”

In India, platforms must remove deepfakes within hours of receiving notices. Similar rules are emerging elsewhere.
 
A worker sweeps next to an AI hoarding at the AI Impact Summit 2026 at Bharat Mandapam in New Delhi, India, 18 February 2026. India hosted the AI Impact Summit 2026 from 16 to 20 Feb. 2026. EPA-Yonhap
She also points to competitive pressures.

“Companies are racing to release products first and fix problems later. This reflects competition not only between firms, but between countries.”

The Rise of Industry Guardrails

James Grimmelmann, professor of digital and information law at Cornell University, observes that many AI firms are now moving toward self-regulation.

“There are still no definitive court rulings,” he said. “But most companies are building copyright guardrails into their systems.”

These systems aim to block outputs involving identifiable characters, celebrities, or copyrighted scenes.

“Seedance launched without strong safeguards,” Grimmelmann said. “ByteDance quickly realized that was risky and began adding them.”

He believes such measures are now becoming standard.

“Companies understand that without guardrails, they face serious legal exposure.”

A Test Case for AI and Entertainment

Taken together, the experts' views suggest that the most immediate legal danger for platforms like Seedance lies not in training data, but in output.

Reproducing recognizable characters, enabling large-scale imitation, and marketing tools in ways that encourage infringement all carry significant risk.

ByteDance has promised new safeguards, but has not paused the rollout of Seedance 2.0.

At the same time, lawsuits against other AI firms signal that the conflict is far from isolated.

The coming years are likely to determine whether courts accept the idea that AI “learns like humans,” or conclude that mass automated generation fundamentally alters creative markets.

For now, the Seedance controversy stands as an early test of how law, culture, and technology will negotiate the boundaries of creativity in the age of machines.

Copyright ⓒ Aju Press All rights reserved.