AI poisoning tool aims to deter image thieves


Researchers have developed a free tool that lets artists embed misleading information in their images, with the aim of sabotaging the training of AI systems.

The Nightshade software alters images in subtle ways that are barely noticeable to humans but can significantly disrupt AI models trained on them. This is meant to let artists protect their works from being used as training data without their consent.

The tool is intended to deter companies and individuals who use artistic images to train AI models without first obtaining permission from the creator. This practice is becoming increasingly common as AI relies heavily on large image databases.

A targeted attack on scrapers

Nightshade alters images in a way that confuses AI systems that train on them, while the images look almost unchanged to human viewers, the researchers said.

For example, a poisoned photo of a cow can look unchanged yet trick an AI into seeing a leather handbag instead. The distortions are precisely calculated to maximize the damage to the AI without visibly altering the image, so even a human reviewer should not notice that the training data has been tampered with. For the attack to work, the pictures have to be published online, and it helps to repeat the underlying tags in the metadata or image captions. The researchers advise against labeling the poisoned images as such when publishing them, because such material can quickly be filtered out.
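To make the general idea more concrete, the following is a minimal, purely illustrative sketch of targeted feature-space poisoning; it is not the published Nightshade algorithm. It assumes PyTorch and torchvision are available, uses a ResNet-18 backbone as a stand-in for whatever vision model a scraper might train, and treats the file names, the perturbation budget `eps`, and the number of optimization steps as placeholder assumptions.

```python
# Illustrative sketch only: NOT the published Nightshade algorithm.
# It demonstrates the general idea of a small, targeted perturbation that
# pushes an image's feature representation toward a different concept
# (e.g. "cow" -> "leather handbag") while keeping pixel changes bounded.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen feature extractor standing in for a scraper's vision backbone
# (an assumption made for this sketch).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep penultimate-layer features
backbone.eval().to(device)

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Placeholder files: the image to protect and an example of the target concept.
source = prep(Image.open("cow.jpg").convert("RGB")).unsqueeze(0).to(device)
target = prep(Image.open("handbag.jpg").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    target_feat = backbone(target)

delta = torch.zeros_like(source, requires_grad=True)   # the "poison"
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 0.03                                              # max per-pixel change (assumed budget)

for step in range(200):
    poisoned = (source + delta).clamp(0, 1)
    feat = backbone(poisoned)
    # Pull the poisoned image's features toward the target concept...
    loss = F.mse_loss(feat, target_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # ...while hard-clamping the perturbation so the change stays barely visible.
    with torch.no_grad():
        delta.clamp_(-eps, eps)

# Still looks like a cow to a person, but its features "read" closer to a handbag.
poisoned_image = (source + delta).clamp(0, 1)
```

The actual tool reportedly optimizes its perturbations far more carefully and against models used in generative training pipelines; this sketch only conveys the shape of the technique.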

This targeted poisoning makes models unreliable and provides a strong disincentive for scraper developers to use artists' images without asking their permission. Further details about the technology have been published in a research paper.

The tool is customizable and lets artists choose which tags should have their conceptual boundaries blurred for AI systems that ingest their works. The researchers say Nightshade can be adapted over time as new techniques emerge that might neutralize it.

Artists and academics develop defense measures

Nightshade comes from the same group at the University of Chicago that developed another defense tool, Glaze, which focuses on disrupting AI's ability to imitate artistic styles. According to the researchers, Nightshade lets artists go on the offensive against the unauthorized use of their content by AI.

(Source: Golem.de)