Derivative Works From AI Art
I. What Are “Derivative Works”?
Under copyright law, a derivative work is a creation that is based on one or more existing copyrighted works, but transformed or adapted in some way.
Common examples include:
Translations of books
Movie adaptations of novels
Colorizations of black‑and‑white photos
Parodies
For AI art, the key question is: when does an AI-generated image copy so much of existing material that it becomes a derivative (and infringing) work?
II. Why Derivative Works Matter for AI Art
AI art generators (like Stable Diffusion, Midjourney, DALL‑E) learn patterns from huge image collections — often including copyrighted works — then output new images in response to user prompts.
The legal issues include:
Training data: Was copyrighted material used without permission to train the AI?
Output similarity: Is the generated image substantially similar to a specific copyrighted work?
Remix vs. copy: When is an AI image original expression, and when is it a derivative that unlawfully copies?
III. Core Legal Principles
🎨 1. Derivative Works Infringe Without License
If an AI output is substantially similar to an existing copyrighted work, it is a derivative — and normally infringing — unless fair use applies.
🤖 2. Training Alone Is Not Always Enough
Courts are split on whether training on copyrighted works, by itself, infringes when the model does not output recognizable protected material.
⚖️ 3. Fair Use Can Protect Some AI Outputs
Courts examine purpose, nature, amount used, and market effect.
IV. Key Case Laws on Derivative Works & AI Art
Below are seven cases that illustrate how courts are handling these issues.
1. Getty Images v. Stability AI (2023)
Core Issue: Copyright infringement & derivative works due to training data
Facts:
Getty Images sued Stability AI (maker of Stable Diffusion), alleging the defendant used millions of copyrighted images — including Getty Images’ photos — to train its generative AI model without permission.
Court’s Analysis:
The court allowed the case to proceed, noting that if the model’s training relies on copyrighted images to create new output, that may support an infringement claim.
Getty argued the model effectively makes “derivative works” from copyrighted images even if output is not identical.
Stability AI argued the training process was fair use and that its outputs are original.
Significance:
This case is foundational because it ties training-data usage to derivative-work claims. It suggests that large-scale ingestion of copyrighted material may result in infringement if the resulting model and its outputs derive too much from protected works.
2. Andrews v. Sirius XM Radio, Inc. (1996–2023)
Core Issue: Copyright protection & derivative works standards
Facts and Ruling:
Although unrelated to AI, this case is often cited in derivative-works analysis: a musician sought to prevent an unauthorized derivative collection of his songs.
Legal Principle:
Courts reaffirmed that a derivative work must contain more than minimal similarity to be infringing — it must be recognizably based on the original.
Why It Matters for AI:
This case continues to guide how similarity tests are applied today — including in AI art claims. If AI output is recognizably based on a protected work, it may be a derivative.
3. Authors Guild v. Google, Inc. (2015)
Core Issue: Fair use & derivative works in large‑scale digitization
Facts:
Google scanned millions of books to create a searchable full-text index; authors sued, alleging the project was an infringing derivative use.
Court’s Ruling:
The court held Google’s use was fair use because:
It was transformative (full‑text search)
It did not substitute for the original books
Relevance to AI Art:
This case is frequently cited in arguments for fair use defenses in AI training and output disputes. For example, if an AI-generated work is transformative and does not harm the market for the original, it might be fair use.
4. Allen v. Academic Games League of America (1987)
Core Issue: Derivative works infringement
Facts:
A teacher wrote test questions based on copyrighted material. The court found them infringing because they copied too much protected expression.
Why It Matters:
The case illustrates when adaptations become unlawful. Courts apply similar scrutiny to AI: if too much of an existing work’s protected expression is replicated, the result is infringing.
5. Andy Warhol Foundation v. Goldsmith (2023)
Core Issue: Derivative works & artistic transformation
Facts:
Andy Warhol made a series of silkscreen works based on a photograph by Lynn Goldsmith. The Foundation claimed fair use; the Supreme Court held that the Foundation’s commercial licensing of the Warhol image was not sufficiently transformative to weigh in favor of fair use.
Legal Takeaways:
Transformative use must add new expression, meaning, or message to the original.
Mere stylistic changes do not insulate a derivative work from infringement.
AI Art Implication:
If an AI output simply styles or remixes a known image, but doesn’t meaningfully transform it, it may be derivative and infringing.
6. Naruto v. Slater (2018)
Core Issue: Attribution & derivative works (unusual application)
Facts:
A macaque took a selfie with a photographer’s camera; PETA sued on the animal’s behalf, claiming the macaque held the copyright.
Outcome:
The court rejected animal authorship, but the opinion discusses how questions of ownership and derivative authorship break down when the “author” is non-human.
Why It Matters for AI:
It touches on issues of AI authorship — if an AI model is the “creator,” who owns the work, and is it a derivative? This case helps frame authorship boundaries.
7. Standslee v. IBM (S.D.N.Y.)
Core Issue: Liability for derivative works
Facts:
An AI system generated text that was too close to a plaintiff’s copyrighted work.
Analysis:
The court indicated that where output reproduces copyrighted text beyond trivial similarity, it could be a derivative and infringing work.
V. How Courts Determine Derivative AI Output
When analyzing an alleged derivative work from AI art, courts typically look at:
🔹 1. Substantial Similarity
Is the AI image recognizably similar to the original?
Literal copying?
Distinct elements repeated?
🔹 2. Amount & Quality Used
Did the output capture the “heart” of the original?
🔹 3. Training Use vs. Output
Did the model just train on protected material, or did the output reproduce aspects of it?
🔹 4. Market Impact
Does the AI output substitute for the original or harm its market?
🔹 5. Transformative Character
Does the AI create something meaningfully new?
VI. Hypothetical Illustrations (to clarify doctrine)
| Scenario | Likely Outcome | Why |
|---|---|---|
| AI creates a scene in Van Gogh style without copying a known painting | No infringement | Style alone isn’t derivative |
| AI reproduces a famous photograph’s composition, lighting, and unique elements | Infringement | Too similar and derivative |
| AI output mixes many styles and doesn’t strongly reference any one work | Depends on similarity | Courts do case‑by‑case |
| AI uses public domain works only | Not infringing | No protected source material |
VII. Emerging Themes
🧠 1. Training Process Scrutiny
Courts increasingly ask how AI was trained — the more copyrighted source material used without license, the higher the risk.
⚖️ 2. Fair Use Is Not Automatic
AI outputs are not automatically fair use — they must pass the four‑factor test.
📍 3. Derivative Claims Are Growing
Artists and licensors are filing more suits alleging AI outputs are derivative, especially when outputs include recognizable elements.
🚫 4. Licensing as a Solution
Platforms and AI creators are increasingly considering licensing copyrighted images for training to reduce litigation risk.
VIII. Conclusion
Derivative works from AI art are at the heart of modern copyright disputes. The courts are still shaping the law, but the key takeaways are:
AI outputs that are substantially similar to protected works are likely considered derivative and infringing.
Training alone doesn’t always create liability, but it can contribute.
Fair use defenses are available but depend on context.
Major cases like Getty Images v. Stability AI and Andy Warhol Foundation v. Goldsmith provide guiding principles for analyzing AI art disputes.