Manor Lords and Terra Invicta publishers Hooded Horse are imposing a strict ban on generative AI assets in their games, with company co-founder Tim Bender describing it as an “ethics issue” and “a very frustrating thing to have to worry about”.

“I fucking hate gen AI art and it has made my life more difficult in many ways… suddenly it infests shit in a way it shouldn’t,” Bender told Kotaku in a recent interview. “It is now written into our contracts if we’re publishing the game, ‘no fucking AI assets.’” I assume that’s not a verbatim quote, but I’d love to be proven wrong.

The publishers also take a dim view of using generative AI for “placeholder” work, or indeed any ‘non-final’ aspect of game development. “We’ve gotten to the point where we also talk to developers and we recommend they don’t use any gen AI anywhere in the process because some of them might otherwise think, ‘Okay, well, maybe what I’ll do is for this place, I’ll put it as a placeholder,’ right?” Bender went on.

  • MountingSuspicion@reddthat.com · 19 hours ago

    I don’t think training on all public information is super ethical regardless, but to the extent that others may support it, I understand that Stack Overflow may be seen as fair game. To my knowledge, though, all the big AI models have been trained on GitHub regardless of any individual project’s license.

    It’s not about proving individual code theft; it’s about recognizing that the model itself is built from theft. Just because an AI image output might not resemble any preexisting piece of art doesn’t mean it isn’t based on theft. Can I ask what you used that was trained on just a project’s documentation? Considering the amount of data usually needed for coherent output, I would be surprised if it didn’t need additional data.

    • Katana314@lemmy.world · 18 hours ago

      The example I gave was more about “context” than “model”: data supplied alongside the question, not the model’s training history. I would ask the AI to design a system that interacts with XYZ, and it would be thoroughly confused and have no idea what to do. Then I would ask again, linking it to the project’s documentation page and granting it explicit access to fetch relevant webpages, and it would give a detailed response. That suggests to me it’s working only off the documentation.
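
      Something like this minimal sketch is what I mean by “context”: fetch the docs and hand them to the model alongside the question. This assumes an OpenAI-style chat API; the URL, model name, and prompts are placeholders, not my exact setup.

      ```python
      # Hypothetical sketch: ground the model in a project's docs by pasting
      # the fetched page into the prompt ("context"), instead of relying on
      # whatever ended up in its training data ("model").
      import requests
      from openai import OpenAI

      DOCS_URL = "https://example.com/project/docs"  # placeholder docs page

      # Fetch the documentation page the model should work from.
      docs = requests.get(DOCS_URL, timeout=30).text

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder model choice
          messages=[
              {"role": "system",
               "content": "Answer using only the documentation below.\n\n" + docs},
              {"role": "user",
               "content": "Design a system that interacts with this project."},
          ],
      )
      print(response.choices[0].message.content)
      ```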

      That said, AIs are not strictly honest, so I think you have a point that the original model training may have ingested data like that at some point regardless. If most AI models don’t track or cite the sources used for each generation, be it artwork on DeviantArt or licensed GitHub repos, I think it’s fair to say the makers of those models should be legally liable; more so if there are ways of demonstrating “copying-like” reproduction of the original.