Artists Suffer Setback In Landmark AI Copyright Case

Companies do not need to store artists' images to train AI models, instead extracting parameters that characterise these works.

The burden has fallen firmly on artists to protect their works and prove infringement.

A California judge has reduced the scope of a lawsuit by a number of artists against Stability AI, Midjourney, and DeviantArt. The plaintiffs argue that the AI companies misused their work in the process of training and using generative artificial intelligence (GAI) platforms.

Judge Orrick ruled that the companies had not directly infringed copyright—in part because data collection was carried out on their behalf by a third party. Additionally, the way the systems work means that copies of images are not necessarily stored on the companies' servers.

Fortunately for the artists, they are allowed to amend and refile their cases. The judge also declined to throw out the case against Stability entirely.

Training GAI

One of the major issues creators have had with GAI is that the systems are trained on huge amounts of content scraped from the web. However, this does not mean that billions of images are stored on the companies' servers; had they been, that would have been a smoking gun for the plaintiffs.

Stability has claimed that its Stable Diffusion product is trained by collecting attributes from works available on the web, rather than wholesale copying those images. For example, the training process derives information about lines, colors, shades, and other parameters. This muddies the question of whether AI companies "copied" works used to train their models.
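To illustrate the distinction in a simplified, hypothetical form (this is not Stability's code or Stable Diffusion's actual architecture), a trained model is persisted as a set of numerical weights, with no pixel data from the training images inside it. The toy PyTorch sketch below trains a tiny denoising network on a stand-in "image" and then prints what the saved checkpoint actually contains.

```python
import torch
import torch.nn as nn

# Toy stand-in for a generative model. It never stores training images,
# only learned weights (illustrative sketch, not Stable Diffusion itself).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

# A "training image": a 16-value tensor standing in for pixel data.
image = torch.rand(16)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    noisy = image + 0.1 * torch.randn_like(image)  # add noise
    loss = ((model(noisy) - image) ** 2).mean()    # learn to denoise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# What gets saved after training is only parameter tensors of floats
# (e.g. "0.weight", "0.bias"), not the image that was trained on.
checkpoint = model.state_dict()
for name, tensor in checkpoint.items():
    print(name, tuple(tensor.shape))
```

The checkpoint holds nothing but floating-point weight tensors; whether producing such an abstraction from artists' works amounts to "copying" under the Copyright Act is precisely the question the court left open.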

Orrick raised the issue of whether these companies could be considered liable for direct infringement of artists' copyrights if an AI system "contains only algorithms and instructions that can be applied to the creation of images that include only a few elements" of copyrighted works. If copyright was infringed, how did that happen, and at what point did it occur?

Orrick commented, "Even Stability recognizes that determination of the truth of these allegations – whether copying in violation of the Copyright Act occurred in the context of training Stable Diffusion or occurs when Stable Diffusion is run – cannot be resolved at this juncture."

"Black Box"

The judge also dismissed two claims because the artists had not registered their works with the US Copyright Office, which is required when bringing a copyright lawsuit.

Screenshot from haveibeentrained.com: one plaintiff had to rely on the website to establish that her works had been used to train AI.

A third plaintiff, Sarah Anderson, had registered her works and so was allowed to continue. There is a high degree of opacity in how AI models are trained and what they do with the data they are fed, which again makes it difficult to know which images have been used, and how. To prove that Stable Diffusion had used her artworks, Anderson relied on haveibeentrained.com, which enables artists to search their names and learn whether their works have been used to train AI models.

The case is not over, since the plaintiffs can amend and resubmit their lawsuits. However, it will be of concern that the burden falls primarily on artists to protect their works by registering them, and to prove that AI models have infringed copyright by using them.


