But how can we really tell what’s AI content and enforce it? I feel like there are only these tiny telltales that are getting harder and harder to spot. It’s gotten to the point that legitimate content (certain writing styles and images) is getting “called out” as AI even if the image has been around longer than DALL-E has.
Eh, it’ll be fine:
- “Slop” is lazy, obvious spam. This is the vast majority of AI posts. That’s why it’s called slop! Fediverse mods will purge these with glee.
- Fake images have the same problem as (say) a Twitter screenshot: they aren’t a reputable source. We fix this by enforcing good sourcing, which communities should be doing anyway, with or without AI.
- If a little ML is used in a complicated workflow for some art post… is it really slop? At some point it’s just digital art. And these workflows tend to use open-weight models that actually support stuff like ControlNet (see the sketch after this list), whereas Big Tech models tend to be dumb prompting windows and produce more obvious slop.
- There’s basically no Tech Bro culture here.
- There isn’t as much incentive to karma farm on the Fediverse, since karma isn’t actually used to gate anything… Still, this is likely the biggest issue. We’ll just have to look out for farmers, but I think it’s doable.
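For the curious, here’s a minimal sketch of the kind of guided workflow I mean, using Hugging Face’s diffusers library with an open-weight Stable Diffusion checkpoint plus a Canny-edge ControlNet. The model IDs and the `sketch.png` input are placeholders; swap in whatever you actually run locally.

```python
# Minimal ControlNet sketch with open-weight models via diffusers.
# Assumes: pip install diffusers transformers accelerate opencv-python torch
# "sketch.png" is a hypothetical reference image you want to guide the output with.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract Canny edges from the artist's own reference image;
# the ControlNet forces the diffusion model to follow these lines.
src = cv2.imread("sketch.png")
gray = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Open-weight checkpoints: a Canny ControlNet plus a base SD 1.5 model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt steers style and content, but the composition comes from the edges --
# exactly the kind of control a bare prompting window doesn't give you.
result = pipe(
    "ink and watercolor illustration of a lighthouse at dusk",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("output.png")
```

The point isn’t this exact pipeline; it’s that the human supplies the composition and the model fills in rendering, which is a very different thing from typing a one-liner into a Big Tech prompt box.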
EDIT:
Upon further consideration, we should encourage users to sign up as mods/helpers, and to try to organize communities well. Otherwise they’ll get overwhelmed filtering slop posts. And they won’t enforce good sourcing, which is already kind of a problem (see: tabloids/Twitter screenshots on the front page).
It’s doable, but it’d be easy for that to slip away, too.
Wait, my relentless FOSS evangelism doesn’t make me a tech bro? Lemmy is one of the techs for which I’m a bro.
You don’t wanna be a “tech bro”.
When protecting our house from robbers, we just need more security than our neighbors.
When running away from a bear, we just need to be faster than our friend.
When reducing AI posts on a platform, we just need to call it out more than reddit.
Honestly, if both Lemmy and Reddit can keep calling it out, that would be great. Let 9gag become the toxic dumping ground.
Hard to tell with writing, but most AI “art” is watermarked. Other AI can see the watermark; humans can see it with tools. Google has a free one, and so do many other sites.
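To be clear, invisible watermarks like Google’s SynthID need the vendor’s own detector. But a surprising amount of AI imagery also carries plain metadata you can check yourself. Here’s a minimal sketch, assuming Pillow; the marker list is illustrative only, and this won’t catch stripped metadata or pixel-level watermarks.

```python
# Quick-and-dirty metadata check for common AI-generation markers.
# This does NOT detect invisible watermarks like Google's SynthID;
# those need the vendor's own detector. It only reads metadata that
# many generators (and provenance-signing tools) leave in the file.
# Assumes: pip install Pillow
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical example markers; real tooling would use a fuller list.
MARKERS = ("stable diffusion", "midjourney", "dall-e", "comfyui", "c2pa")

def looks_ai_generated(path: str) -> bool:
    img = Image.open(path)
    # PNG text chunks: e.g. the A1111 Stable Diffusion webui stores its
    # generation settings under a "parameters" key in img.info.
    blobs = [f"{k}={v}" for k, v in img.info.items() if isinstance(v, str)]
    # EXIF "Software" tag, where some tools identify themselves.
    exif = img.getexif()
    blobs += [str(v) for k, v in exif.items() if TAGS.get(k) == "Software"]
    haystack = " ".join(blobs).lower()
    return any(m in haystack for m in MARKERS)

print(looks_ai_generated("upload.png"))
```

Something like this could run as a mod bot that flags (not removes) suspicious uploads, leaving the judgment call to humans.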