The ways they say they are going to use AI is exactly what they said was causing harm. If that isn’t hypocrisy, what is? They call out the issues, only to completely ignore those issues in their own use.
I disagree. The examples of traffic no longer going to websites and of content being stolen to train models are two harms among several that they list at the very beginning of the video, as an introduction to how AI slop is invading many different sectors of the internet.
Starting at 01:43: “While this is sad and frustrating, what’s even worse is that generative AI truly has the potential to break the internet irreversibly. By making it harder and harder to tell what is true.”
The opening section reads to me as a list of things they consider bad, but not what they consider the worst thing about AI. To me, their main concern is that AI hallucinates and makes things up, so, as you say, they won’t use it for research and writing. But they will let their animators use AI programming tools to, for example, speed up writing expressions for use in After Effects.
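For context on the kind of mundane task that covers: After Effects expressions are small JavaScript-based snippets attached to layer properties, and drafting them is exactly the sort of rote code an animator might hand to an AI tool. Here is a minimal sketch of a typical damped-bounce expression, rewritten as a plain function so it runs outside After Effects (the name `bounce` and the parameter values are illustrative, not from the video):

```javascript
// Illustrative sketch: a damped "bounce", the kind of thing animators
// often write as an After Effects expression. Inside AE this math would
// be applied to a layer property (e.g. position) relative to the last
// keyframe; here it is a standalone function so it runs anywhere.
function bounce(t, amp, freq, decay) {
  // Damped sine wave: oscillates at `freq` cycles per second while the
  // amplitude decays exponentially at rate `decay`.
  return amp * Math.sin(freq * t * 2 * Math.PI) / Math.exp(decay * t);
}

// A quarter-cycle in (t = 0.125 s at freq = 2) the bounce is near its
// first peak; the same phase half a second later has decayed sharply.
const early = bounce(0.125, 20, 2, 5);
const late = bounce(0.625, 20, 2, 5);
```

In After Effects itself the equivalent expression would read `time` and `value` from the layer rather than taking `t` and `amp` as arguments; the point is only that this is rote, easily checkable code, the category of AI use their policy allows.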
However, the line they add at the end about using AI as a “faster google alternative” is very open-ended and gives me pause. I’m very curious what exactly they mean by that, because on first listen it could sound like a slippery slope into not fact-checking things. So I checked the sources link they always include in the video description, emphasis theirs:
One key driver in the development of “AI Slop” is a lack of oversight. Whether intentionally (to save money, or to mislead) or unintentionally, if generative AI is put on a task and the results are not checked for quality and factuality, low-quality content is the typical result. But the good news is that we can oversee it, and check/change/edit the results before we share them with the world. And then the output quality can be much improved, turning an AI-slop generator into an amazing tool for humans.
TL;DR (but this is still pretty long):
What I take away from their video is that they think misinformation is what will contribute to the end of the web and the end of their channel: people using AI to pump out misleading and untrue content at a pace and scale that no human content creator or educator can keep up with. Essentially, the way I believe they see it, the tools are already out there and won’t be going away, so they are going to try to use them responsibly to help with mundane tasks. I don’t know if I would consider that ethical, but I disagree that ‘the ways they say they are going to use AI is exactly what they said was causing harm.’