cross-posted from: https://lemmy.world/post/7258145

The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models. It is intended as a way to fight back against AI companies that use artists’ work to train their models without the creators’ permission.

ARTICLE - Technology Review

ARTICLE - Mashable

ARTICLE - Gizmodo

The researchers tested the attack on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. When they fed Stable Diffusion just 50 poisoned images of dogs and then prompted it to create images of dogs, the output started looking weird: creatures with too many limbs and cartoonish faces. With 300 poisoned samples, an attacker can manipulate Stable Diffusion into generating images of dogs that look like cats.
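The general idea behind this kind of poisoning is to add small, bounded perturbations to an image so that a model's feature extractor "sees" a different concept than a human does, while the caption stays unchanged. Below is a minimal, hypothetical sketch of that idea in PyTorch, not the researchers' actual Nightshade algorithm: the `poison` helper, the ResNet-18 stand-in encoder, and the pixel budget `eps` are all illustrative assumptions.

```python
# Hypothetical sketch of embedding-shift poisoning (NOT Nightshade's actual
# algorithm). We nudge a "dog" image, within a small pixel budget, so a
# frozen feature extractor embeds it near a "cat" image; the caption stays
# "dog", so a model trained on the pair learns a skewed dog concept.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen feature extractor standing in for the victim model's image encoder.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # drop the classifier head, keep embeddings
encoder.eval().to(device)
for p in encoder.parameters():
    p.requires_grad_(False)

def embed(x):
    return encoder(x)

def poison(dog_img, cat_img, eps=8 / 255, steps=200, lr=0.01):
    """Return dog_img plus a bounded perturbation whose embedding is
    pushed toward cat_img's embedding (projected gradient descent)."""
    dog_img, cat_img = dog_img.to(device), cat_img.to(device)
    with torch.no_grad():
        target = embed(cat_img)
    delta = torch.zeros_like(dog_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(embed(dog_img + delta), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep the change imperceptible and valid
            delta.clamp_(-eps, eps)
            delta.copy_((dog_img + delta).clamp(0, 1) - dog_img)
    return (dog_img + delta).detach()

# Usage with random stand-in tensors (real use would load dog/cat photos):
dog = torch.rand(1, 3, 224, 224)
cat = torch.rand(1, 3, 224, 224)
poisoned_dog = poison(dog, cat)
```

Because the perturbation is clamped to a few pixel values, the poisoned image still looks like an ordinary dog to a human curator, which is what lets it slip past cursory dataset filtering.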

  • dejected_warp_core@lemmy.world · 6 days ago

    What’s old is new again.

    https://en.wikipedia.org/wiki/Fictitious_entry

    Fictitious or fake entries are deliberately incorrect entries in reference works such as dictionaries, encyclopedias, maps, and directories, added by the editors as copyright traps to reveal subsequent plagiarism or copyright infringement. There are more specific terms for particular kinds of fictitious entry, such as Mountweazel, trap street, paper town, phantom settlement, and nihilartikel.[1]

    Disregarding the obscene amount of automation at play, the underlying problem and its remedy remain somewhat the same. Put bad data into a very large dataset, such that it evades cursory scans, and the unaware plagiarists are eventually caught red-handed.
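    As a toy illustration of that remedy, here is a hypothetical sketch of a text-side copyright trap: plant a unique nonsense string in your published work, then check whether it surfaces in a scraped corpus. Every name here (`make_canary`, `plant`, `appears_in`) is invented for illustration.

    ```python
    # Hypothetical sketch of the fictitious-entry idea applied to a text
    # dataset: embed a unique "Mountweazel" string in published work, then
    # probe a scraped corpus for it. Function names are illustrative.
    import secrets

    def make_canary(author: str) -> str:
        """Build a nonsense phrase vanishingly unlikely to occur elsewhere."""
        return f"{author}-mountweazel-{secrets.token_hex(8)}"

    def plant(document: str, canary: str) -> str:
        # Tuck the trap into low-visibility footer text or metadata.
        return document + f"\n<!-- {canary} -->"

    def appears_in(corpus: list[str], canary: str) -> bool:
        # An exact hit downstream means the page was ingested verbatim.
        return any(canary in doc for doc in corpus)

    canary = make_canary("dejected_warp_core")
    page = plant("My original article text...", canary)

    # Later: scan a leaked or published training-data dump for the trap.
    scraped_dump = [page, "unrelated text"]
    print(appears_in(scraped_dump, canary))  # True -> the page was scraped
    ```

    Like a trap street on a map, the entry costs the author almost nothing and means nothing to anyone using the data legitimately, but it is damning evidence when it resurfaces.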