Nightshade – An AI Tool for Artists from the Makers of Glaze

Generative AI art tools such as DALL-E, Midjourney and Stable Diffusion are incredibly powerful, but they are not exactly beloved by the artists whose work may be powering these machine learning models, generally without permission. Until now, artists haven't really had much they could do to fight back. There have been some lawsuits, such as this one against Midjourney, Stable Diffusion and DeviantArt, as well as Getty Images suing the creators of Stable Diffusion.

There is one tool, however: Glaze, developed at the University of Chicago. Glaze is described as:

Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style. For example, human eyes might find a glazed charcoal portrait with a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style, a la Jackson Pollock. So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.

You can read more about Glaze here and here.
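To make the intuition behind that description more concrete, here is a minimal sketch of the general idea of a "style cloak": optimize a small, bounded perturbation so that a pretrained vision encoder's view of the image drifts toward a different target style, while the pixels stay nearly unchanged to human eyes. This is not Glaze's actual code; the ResNet-50 encoder, the feature loss, and the pixel budget `eps` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = ResNet50_Weights.DEFAULT
encoder = resnet50(weights=weights).to(device).eval()
encoder.fc = torch.nn.Identity()       # use penultimate features as a crude style proxy
for p in encoder.parameters():
    p.requires_grad_(False)
preprocess = weights.transforms()      # resize + normalize, differentiable on tensors

def features(x: torch.Tensor) -> torch.Tensor:
    return encoder(preprocess(x))

def cloak(artwork: torch.Tensor, style_target: torch.Tensor,
          steps: int = 200, eps: float = 0.03, lr: float = 0.01) -> torch.Tensor:
    """artwork, style_target: float tensors in [0, 1] with shape (1, 3, H, W)."""
    artwork = artwork.to(device)
    target_feat = features(style_target.to(device)).detach()
    delta = torch.zeros_like(artwork, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cloaked = (artwork + delta).clamp(0, 1)
        # Pull the encoder's view of the cloaked image toward the target style...
        loss = F.mse_loss(features(cloaked), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # ...while keeping every pixel change within a small, near-invisible budget.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (artwork + delta).clamp(0, 1).detach()
```

The point of the sketch is the trade-off Glaze describes: the loss pushes the image's machine-readable features toward another style, while the clamp keeps the human-visible change small.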

The same team at the University of Chicago has gone from the defensive to the offensive with an upcoming tool, Nightshade:

Nightshade exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data—in this case, images that have been hoovered from the internet. Nightshade messes with those images. 

Artists who want to upload their work online but don’t want their images to be scraped by AI companies can upload them to Glaze and choose to mask it with an art style different from theirs. They can then also opt to use Nightshade. Once AI developers scrape the internet to get more data to tweak an existing AI model or build a new one, these poisoned samples make their way into the model’s data set and cause it to malfunction. 

Poisoned data samples can manipulate models into learning, for example, that images of hats are cakes, and images of handbags are toasters. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample. 

The researchers tested the attack on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. When they fed Stable Diffusion just 50 poisoned images of dogs and then prompted it to create images of dogs itself, the output started looking weird—creatures with too many limbs and cartoonish faces. With 300 poisoned samples, an attacker can manipulate Stable Diffusion to generate images of dogs to look like cats. 
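To make the poisoning mechanism concrete, here is a hedged sketch of how a single mislabeled training pair might be assembled. It reuses the hypothetical cloak() helper from the Glaze sketch above, this time steering the image toward a different concept (a cat) rather than a different style, while the caption still says "dog". The file paths, captions and helper are illustrative assumptions, not Nightshade's real pipeline.

```python
from dataclasses import dataclass
import torch
from torchvision.io import read_image, ImageReadMode

@dataclass
class ScrapedSample:
    pixels: torch.Tensor   # (3, H, W), float in [0, 1]
    caption: str           # the text a scraper pairs with the image

def load(path: str) -> torch.Tensor:
    # Decode to RGB and add a batch dimension: (1, 3, H, W) in [0, 1].
    return read_image(path, ImageReadMode.RGB).float().div(255).unsqueeze(0)

dog_art = load("my_dog_illustration.png")   # what human viewers (and the caption) see
cat_ref = load("reference_cat.png")         # the concept the model is nudged toward

poisoned = cloak(dog_art, cat_ref)          # small perturbation toward "cat" features
sample = ScrapedSample(pixels=poisoned.squeeze(0), caption="a drawing of a dog")
# Enough samples like this in a scraped training set and, per the article,
# the model's notion of "dog" starts drifting toward "cat".
```

The artist only publishes the poisoned image with its honest-looking caption; the corruption happens later, when a scraper folds the pair into a training set and the model learns the mismatched association.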

You can learn more about Glaze and Nightshade, perhaps the first tools that let artists fight back against generative AI systems using their work without authorization, in the video below.
