New tools help artists fight AI by directly disrupting the systems

An illustration of a scraper being misdirected by Kudurru. (Kurt Paulsen / Kudurru)

Artists have been fighting back on a number of fronts against artificial intelligence companies that they say steal their works to train AI models — including launching class-action lawsuits and speaking out at government hearings.

Now, visual artists are taking a more direct approach: They're starting to use tools that contaminate and confuse the AI systems themselves.

One such tool, Nightshade, won't help artists combat existing AI models that have already been trained on their creative works. But Ben Zhao, who leads the research team at the University of Chicago that built the soon-to-be-launched digital tool, says it promises to break future AI models.

"You can think of Nightshade as adding a small poison pill inside an artwork in such a way that it's literally trying to confuse the training model on what is actually in the image," Zhao says.

How Nightshade works

AI models like DALL-E or Stable Diffusion usually identify images through the words used to describe them in the metadata. For instance, a picture of a dog pairs with the word "dog." Zhao says Nightshade confuses this pairing by creating a mismatch between image and text.

"So it will, for example, take an image of a dog, alter it in subtle ways, so that it still looks like a dog to you and I — except to the AI, it now looks like a cat," Zhao says.

Examples of images generated by Nightshade-poisoned AI models and the clean AI model. (Glaze and Nightshade team at University of Chicago)

Zhao says he hopes Nightshade will be able to pollute future AI models to such a degree that AI companies will be forced to either revert to old versions of their platforms — or stop using artists' works to create new ones.

"I would like to bring about a world where AI has limits, AI has guardrails, AI has ethical boundaries that are enforced by tools," he says.

Nascent weapons in an artist's AI-disrupting arsenal

Nightshade isn't the only nascent weapon in an artist's AI-disrupting arsenal.

Zhao's team also recently launched Glaze, a tool that subtly changes the pixels in an artwork to make it hard for an AI model to mimic a specific artist's style.

"Glaze is just a very first step in people coming together to build tools to help artists," says fashion photographer Jingna Zhang, the founder of Cara, a new online community focused on promoting human-created (as opposed to AI-generated) art. "From what I saw while I tested with my own work, it does interrupt the final output when an image is trained on my style." Zhang says plans are in the works to embed Glaze and Nightshade in Cara.

And then there's Kudurru, created by the for-profit company Spawning.ai. The resource, now in beta, tracks scrapers' IP addresses and blocks them or sends back unwanted content, such as an extended middle finger, or the classic "Rickroll" Internet trolling prank that spams unsuspecting users with the music video for British singer Rick Astley's 1980s pop hit, "Never Gonna Give You Up."

"We want artists to be able to communicate differently to the bots and the scrapers used for AI purposes, rather than giving them all of their information that they would like to provide to their fans," says Spawning co-founder Jordan Meyer.

Artists are thrilled

Artist Kelly McKernan says they cannot wait to get their hands on these tools.

"I'm just like, let's go!" says the Nashville-based painter and illustrator and single mom. "Let's poison the datasets! Let's do this!"

McKernan says they have been waging a war on AI since last year, when they discovered their name was being used as an AI prompt, and then that more than 50 of their paintings had been scraped into LAION-5B, a massive image dataset used to train AI models.

Earlier this year, McKernan joined a class-action lawsuit alleging Stability AI and other such companies used billions of online images to train their systems without compensation or consent. The case is ongoing.

"I'm right in the middle of it, along with so many artists," McKernan says.

In the meantime, McKernan says the new digital tools help them feel like they're doing something aggressive and immediate to safeguard their work in a world of slow-moving lawsuits and even slower-moving legislation.

McKernan adds they are disappointed, but not surprised, that President Joe Biden's newly signed executive order on artificial intelligence fails to address AI's impact on the creative industries.

"So, for now, this is kind of like, alright, my house keeps getting broken into, so I'm gonna protect myself with some, like, mace and an ax!" they say of the defensive opportunities afforded by the new tools.

Debates about the efficacy of these tools

While artists are excited to use these tools, some AI security experts and members of the development community are concerned about their efficacy, especially in the long term.

"These types of defenses seem to be effective against many things right now," says Gautam Kamath, who researches data privacy and AI model robustness at Canada's University of Waterloo. "But there's no kind of guarantee that they'll still be effective a year from now, ten years from now. Heck, even a week from now, we don't know for sure."

Social media platforms have also lit up lately with heated debates questioning how effective these tools really are. The conversations sometimes involve the creators of the tools.

Spawning's Meyer says his company is committed to making Kudurru robust.

"There are unknown attack vectors for Kudurru," he says. "If people start finding ways to get around it, we're going to have to adapt."

"This is not about writing a fun little tool that can exist in some isolated world where some people care, some people don't, and the consequences are small and we can move on," says the University of Chicago's Zhao. "This involves real people, their livelihoods, and this actually matters. So, yeah, we will keep going as long as it takes."

An AI developer weighs in

The biggest AI industry players — Google, Meta, OpenAI and Stability AI — did not respond to, or turned down, NPR's requests for comment.

But Yacine Jernite, who leads the machine learning and society team at the AI developer platform Hugging Face, says that even if these tools work really well, that wouldn't be such a bad thing.

"We see them as very much a positive development," Jernite says.

Jernite says data should be broadly available for research and development. But AI companies should also respect artists' wishes to opt out of having their work scraped.

"Any tool that is going to allow artists to express their consent very much fits with our approach of trying to get as many perspectives into what makes a training data set," he says.

Jernite says several artists whose work was used to train AI models shared on the Hugging Face platform have spoken out against the practice and, in some cases, asked that the models be removed. The developers don't have to comply.

"But we found that developers tend to respect the artists' wishes and remove those models," Jernite says.

Still, many artists, including McKernan, don't trust AI companies' opt-out programs. "They don't all offer them," the artist says. "And those that do, often don't make the process easy."

Audio and digital stories edited by Meghan Collins Sullivan. Audio produced by Isabella Gomez-Sarmiento.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Chloe Veltman
Chloe Veltman is a correspondent on NPR's Culture Desk.