The copyright case against AI art generators just got stronger with more artists and evidence
As the first year of the generative AI era passes into the history books, the question of whether generative AI models — which are trained on large volumes of human-created works and data, typically scraped from the internet without the creators’ express consent — infringe copyright largely remains to be determined.
But there’s been a major new development in one of the leading lawsuits by human artists against AI image and video generator companies, including the popular Midjourney, DeviantArt, Runway, and Stability AI, the last of which created the Stable Diffusion model powering many currently available AI art generation apps.
VentureBeat uses Midjourney and other AI art generators to create article artwork. We’ve reached out to the companies named as defendants in the case for their response to this latest filing, and will update if and when we hear back.
Artists’ case initially suffered a setback
Recall that back in October, U.S. District Court Judge William H. Orrick of the Northern District of California ruled to dismiss much of the initial class-action lawsuit filed against the AI companies by three visual artists — Sarah Anderson, Kelly McKernan, and Karla Ortiz.
Orrick’s reasoning was that many of the artworks cited as being infringed by the AI companies had not actually been registered for copyright by the artists with the U.S. Copyright Office. However, Orrick’s decision left the door open for the plaintiffs (the artists) to refile an amended complaint.
That they have now done, and while I am no trained lawyer, the case seems to have gotten much stronger as a result.
New plaintiffs join
In the amended complaint filed this week, the original plaintiffs are joined by seven additional artists: Hawke Southworth, Grzegorz Rutkowski, Gregory Manchess, Gerald Brom, Jingna Zhang, Julia Kaye, and Adam Ellis.
Rutkowski’s name may be familiar to some readers of VentureBeat and our colleagues at GamesBeat: he is a Polish artist known for creating works for video games, roleplaying games, and card games, including Horizon Forbidden West, Dungeons & Dragons, and Magic: The Gathering.
As early as a year ago, news outlets covered Rutkowski’s complaints that AI art apps built on the Stable Diffusion model were replicating his fantastical and epic style, sometimes by name, allowing users to generate new works resembling his for which he received zero compensation. Nor did these apps ask him ahead of time for permission to use his name.
Yesterday, Rutkowski posted on his Instagram and X (formerly Twitter) accounts about the amended complaint, stating: “It’s a freaking pleasure to be on one side with such great artists.”
Another one of the new plaintiffs, Jingna Zhang, a Singaporean American artist and photographer whose fashion photography has appeared in such prestigious places as Vogue magazine, also posted on her Instagram account @zemotion announcing her participation in the class action lawsuit, and writing: “the rapid commercialization of generative AI models, built upon the unauthorized use of billions of images—both from artists and everyday individuals—violates that [copyright] protection. This should not be allowed to go unchecked.”
Zhang further urged “everyone to read the amended complaint—just google stable diffusion litigation or see link in my bio—it breaks down the tech behind image gen AI models & copyright in a way that’s easy to understand, gives a clearer picture on what the lawsuit is about, & sets the record straight on some misleading headlines that have been in the press this year.”
New evidence and arguments
On to the new evidence and arguments presented in the amended complaint, which appear to me — with the heavy disclaimer I have no training in law or legal matters beyond my research of them as a journalist — to make for a stronger case on behalf of the artists.
First up, the complaint notes that even works not registered for copyright may automatically be eligible for copyright protections if they include the artists’ “distinctive mark,” such as their signature, which many of the works at issue do contain.
Secondly, the complaint notes that any AI companies that relied upon the widely used LAION-400M and LAION-5B datasets — which do not contain copyrighted works themselves, only links to them and other metadata about them, and were made available for research purposes — would have had to download the actual images to train their models, thus making “unauthorized copies.”
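To make that point concrete, here is a minimal Python sketch of what training on a LAION-style dataset implies in practice. The record contents and URLs below are hypothetical stand-ins, since the actual datasets ship only links and captions, and any trainer must fetch the images itself:

```python
import io

import requests
from PIL import Image

# A LAION-style record holds a link and metadata, not the image itself.
# These rows are hypothetical stand-ins for real dataset entries.
records = [
    {"url": "https://example.com/artwork1.jpg", "caption": "a fantasy castle at dusk"},
    {"url": "https://example.com/artwork2.jpg", "caption": "studio portrait photo"},
]

training_pairs = []
for rec in records:
    # To train on the image, the pipeline must download an actual copy,
    # the step the amended complaint characterizes as making "unauthorized copies."
    resp = requests.get(rec["url"], timeout=10)
    image = Image.open(io.BytesIO(resp.content)).convert("RGB")
    training_pairs.append((image, rec["caption"]))
```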
Perhaps most damningly for the AI art companies, the complaint notes that the very architecture of diffusion models — in which visual “noise” is added to a training image over many steps, and the model then learns to reverse the process and recover an image close to the original — is itself designed to come as close as possible to replicating the initial training material.
As the complaint summarizes the technology: “Starting with a patch of random noise, the model applies the steps in reverse order. As it progressively removes noise (or “denoises”) the data, the model is eventually able to reveal that image, as illustrated below:” (At this point, the complaint includes an illustrative figure of the step-by-step denoising process.)
Later, the complaint states: “In sum, diffusion is a way for a machine-learning model to calculate how to reconstruct a copy of its training image…Furthermore, being able to reconstruct copies of the training images is not an incidental side effect. The primary objective of a diffusion model is to reconstruct copies of its training images with maximum accuracy and fidelity.”
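For readers who want the mechanics behind that claim, here is a minimal NumPy sketch of the standard DDPM-style forward and reverse relationship the complaint is describing. The noise schedule, toy image, and stand-in noise prediction are illustrative assumptions, not code from any defendant’s model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a flat array of pixel values. Real models use large tensors
# of pixels or latents; the math is the same.
x0 = rng.uniform(0.0, 1.0, size=64)

# A simple linear variance schedule (an illustrative assumption, not any
# vendor's actual values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def add_noise(x0, t):
    """Forward process: blend the training image with Gaussian noise at step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

# Training teaches a network to predict the added noise, which is equivalent
# to learning to recover x0. Here a perfect prediction stands in for the network.
t = 500
xt, true_eps = add_noise(x0, t)
predicted_eps = true_eps  # stand-in for a trained denoiser's output
x0_estimate = (xt - np.sqrt(1.0 - alphas_bar[t]) * predicted_eps) / np.sqrt(alphas_bar[t])

# A perfect noise prediction reconstructs the original training image exactly,
# which is the sense in which the objective "reconstructs the training set."
assert np.allclose(x0_estimate, x0)
```

In practice, a trained network’s prediction is imperfect, which is why outputs usually resemble rather than duplicate training images; the complaint’s argument turns on the objective itself.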
The complaint also cites Nicholas Carlini, a research scientist at Google DeepMind and co-author of a January 2023 research paper, “Extracting Training Data from Diffusion Models,” which states that “diffusion models are explicitly trained to reconstruct the training set.”
In addition, the complaint cites another scientific paper, from researchers at MIT, Harvard, and Brown, published in July 2023, which states that “diffusion models—and Stable Diffusion in particular—[are] exceptionally good at creating convincing images resembling the work of specific artists if the artist’s name is provided in the prompt.”
This is definitely the case, though some AI companies, such as DeviantArt and OpenAI (not a defendant in this case), have created systems that allow artists to opt out of having their works used to train AI models.
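The mechanism those researchers describe is simply the text prompt. As a hedged illustration using the open-source diffusers library (the model ID is a common public example, the bracketed name is a placeholder, and a CUDA-capable GPU is assumed):

```python
# Illustrative only: shows how a name in the prompt can steer a model's style.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA-capable GPU

# The pattern the July 2023 paper studied: appending "in the style of <artist>"
# to an otherwise ordinary prompt.
image = pipe("a dragon over a burning city, in the style of [artist name]").images[0]
image.save("styled_output.png")
```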
The complaint also admits there remains an unanswered question that Carlini and his colleagues brought up: “[d]o large-scale models work by generating novel output, or do they just copy and interpolate between individual training examples?”
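Researchers probe that question empirically. One approach in the extraction literature is to generate many samples and flag any that nearly duplicate a known training image; here is a rough Python sketch of that comparison step, with hypothetical file names and a much cruder distance measure than the paper actually uses:

```python
import numpy as np
from PIL import Image

def near_duplicate(path_a: str, path_b: str, size=(64, 64), threshold=0.05) -> bool:
    """Crude memorization check: mean squared pixel distance after downscaling
    to grayscale. (Carlini et al. use stronger measures; this is a sketch.)"""
    a = np.asarray(Image.open(path_a).convert("L").resize(size), dtype=np.float32) / 255.0
    b = np.asarray(Image.open(path_b).convert("L").resize(size), dtype=np.float32) / 255.0
    return float(np.mean((a - b) ** 2)) < threshold

# Hypothetical file names: one model output vs. one known training image.
if near_duplicate("generated_sample.png", "training_image.png"):
    print("Possible memorized training example")
```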
The answer to this question — or the lack of one — may be the deciding factor in this case. And it is clear from using AI art generators ourselves here at VentureBeat that they are capable of mimicking existing artwork, though not exactly, and the result ultimately depends on the text prompt provided by the user. Providing Midjourney, for example, with the prompt “the mona lisa” turns up four images, only one of which even closely resembles the actual world-famous painting by Leonardo da Vinci.
As with many technologies, the results of AI art generators come down to how people use them. Those who seek to copy existing artists can find a willing partner; those who use them to create new imagery can do so as well. What is unambiguous, however, is that the AI art generators did rely on human-made artworks, likely including some copyrighted ones, to train their models. Whether this is covered by fair use or qualifies as copyright infringement will ultimately be decided by the court.