This question has been rolling around in my mind for a while, and there are a few parts to it. I’ll need to step through how I got to these questions.
I have used AI as a tool in my own art pieces before. For example, I took a painting I had made more than a decade ago and used a locally hosted AI to enhance it. The content of the final image is still my original concept, just enhanced with additional details and reworked into a 32:9 ultrawide wallpaper for my monitor.
I then sent that enhanced image through my local AI again (a different workflow) to generate a depth map and a normal map. I also separated the foreground, midground, and background.
Then I took all of that and loaded it into Wallpaper Engine (if you don’t know what that is, it’s an application for creating animated wallpapers). I compiled each of the images and proceeded to manually animate, track, and script everything to bring it to life. The end product is something I really enjoy, and I even published it on the Wallpaper Engine Steam Workshop for others to enjoy as well.
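For anyone curious, the depth-map step above can be reproduced with an off-the-shelf monocular depth-estimation model. Here’s a minimal sketch, assuming the Hugging Face transformers pipeline and the Intel/dpt-large checkpoint (just an example, not necessarily my exact local workflow; filenames are placeholders):

```python
# Minimal sketch of the depth-map step, assuming the Hugging Face
# `transformers` pipeline and the Intel/dpt-large checkpoint (illustrative
# only; not necessarily the exact local workflow described above).
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

# Placeholder filename for the AI-enhanced source image.
image = Image.open("enhanced_painting.png")

result = depth_estimator(image)          # dict containing a "depth" PIL image
result["depth"].save("depth_map.png")    # greyscale depth map, ready for layering
```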
However, with all the AI slop being generated endlessly and the stigma that AI carries in the art community as a whole, the following questions came to mind:
1. Is the piece that I painted, then reworked with AI, and then manually reworked further, still my art?
2. Going one step further: I didn’t build any of the tools used to make the original painting, I didn’t create the programming or scripting languages, and I didn’t fabricate the PCBs or chipsets in the computer that runs all of those tools. The list of things I use that I didn’t create goes on and on, and it would be impossible to credit every single person involved. So, is any artwork that I make actually mine? Or does it belong to the innumerable giants on whose shoulders we all stand?
3. Those questions led me to the main question of this post. Say a real human grew up having only ever seen AI slop, and so can only reference that experience; if that human creates something with their own hands, is the piece they create still art? Is it even a piece they can claim as their own?
I’m curious to see what thoughts people have on this.
I think the problem here is that the terms used don’t mean the same thing when applied to a human as when applied to an AI. Terms like training, learning, creating, etc. are anthropomorphisms, since we don’t yet have simple terms for how those things actually happen in an AI.
For instance, someone trained to draw isn’t shown millions of examples of drawings and told to somehow make something that looks like them. They are taught the mechanics of putting ink to paper so they can create whatever image they want.
Even people trying to imitate existing art spend more time perfecting the techniques used than examining the works.
I think that as far as AI as we know it and art are concerned, the main issue the majority have is people typing a prompt and letting a computer do the rest. No matter what, that will never count.
-
Yes, it’s your art; you used AI as a tool to realize your concept, from your original ideas. If the tool is trained to reproduce others’ work (as generative models and LLMs are), it’s another story.
-
A painting doesn’t belong to the brush factory or the pigment maker, but neither the brush nor the paint is the artistry of it. The grey area of AI tool usage is when the tool takes away the art, the statement, the concept and/or the craftsmanship. A photographer can create art with a camera, but cameras can also be used for things that are clearly not art.
-
To my mind, the art comes not from the school or the medium, but from the artist challenging, provoking and/or expressing something human. With skill you can delve deeper into the human condition and conceptualise deeper truths, and with mastery of tools and/or craft become better at conveying them.
An AI, not having an understanding of human-ness, can never create art, only mimic it. Studying AI art can surely be used as inspiration for technique and/or reflection, but trying to replicate generated images will probably be a difficult path towards creating art.
Then again, I would contrast art and creatives. Many ad creatives, fonts, decorations, and even wall paint swatches have very little artistic value to them, even though they require creativity and craftsmanship to realise.
-
I think there’s a bit of a misconception about what exactly AI is. Despite what techbros try to make it seem, AI is not thinking in any way. It doesn’t make decisions, because there is no “it” making them; it is not an entity. It is an algorithm.
Specifically, it is a statistical algorithm. It is designed to associate an input with an output. When you do this with billions of input-output pairs, you can use the power of statistics to interpolate and extrapolate, so you can guess what the output might be for a new input you haven’t seen before. In other words, you could perfectly replicate any AI with a big enough sheet of paper and enough time and patience.
That is why AI outputs can’t be considered novel. Inherently, it is just a tool that processes data. As an analogy, you haven’t generated any new data by taking the average of 5 numbers in Excel; you have merely processed the existing data.
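To make that concrete, here’s a deliberately toy sketch of the idea: fit a simple statistical model to a handful of input-output pairs, then “guess” the output for an input that was never in the data. (This is plain NumPy curve-fitting with made-up numbers, nothing remotely like a real generative model; it just illustrates interpolation over existing data.)

```python
# Toy illustration of "statistics in, guesses out" - not a real model.
import numpy as np

# Pretend these are our "training" input-output pairs.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.1, 0.9, 4.2, 8.8, 16.1])   # roughly y = x^2

# Fit a simple statistical model (a quadratic) to the pairs.
coeffs = np.polyfit(x_train, y_train, deg=2)

# "Generate" an output for an input we never saw: pure interpolation.
x_new = 2.5
y_guess = np.polyval(coeffs, x_new)
print(f"Guess for {x_new}: {y_guess:.2f}")       # lands between the known points
```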
Even if a human learns from AI-generated art, their art is still art, because a human is not a deterministic algorithm.
The problem arises when someone uses generative AI for a significant portion of their workflow. At that point, it is essentially equivalent to applying a filter to someone else’s artwork and calling it new. The debate lies in the fact that there is no clear line between an appropriate and an inappropriately large share of a person’s workflow being AI…
I think you’re mostly right about how AI works, but I think some of the conclusions go a bit further than what the mechanics alone really show.
Yes, AI is an algorithm and it’s statistical. It learns patterns and maps inputs to outputs. I don’t really disagree with that part. Where I start to disagree is the idea that this automatically means the output can’t be novel or meaningful. A human brain is also a physical system processing information according to rules. Saying AI is “just an algorithm” only really works as a dismissal if humans aren’t doing something similar, which I’m not convinced is true.
The Excel average comparison also feels a little off to me. Averaging collapses information. Generative models don’t really do that; they explore and recombine patterns across a large possibility space, which feels a lot closer to how people learn and create than to how a spreadsheet works. It’s true you could replicate an AI with enough paper and time, but the same applies to any finite physical system, including a human brain. That feels more like a point about computability than about creativity or authorship.

I do agree that how AI is used matters a lot. If someone is mostly prompting and picking outputs, that’s closer to curation than creation. But that isn’t really unique to AI. We’ve had similar debates about photography, sampling, filters, and procedural art. Art has never just been about manual effort anyway; it’s more about intent and judgment.
So I think where we aren’t lining up is less about what AI is, and (as some others have noted here) more about where we draw the line for authorship and responsibility in how it’s actually used. I do appreciate your perspective on it, and it’s definitely a very grey philosophical area to discuss.
“Explore and recombine” aren’t really the words I would use to describe generative AI. Remember that it is a deterministic algorithm, so it can’t really “explore.” I think it would be more accurate to say that it interpolates patterns from its training data.
As for the comparison to humans, you bring up an interesting point, but one that I think is somewhat oversimplified. It is true that human brains are physical systems, but just because something is physical does not mean it is deterministic. No computer comes even close to modeling a mouse brain, let alone a human brain.
And sure, you could make the argument that you could strip out all extraneous neurons from a human brain to make it deterministic. Remove all the unpredictable elements: memory neurons, mirror neurons, emotional neurons. In that case, sure - you’d probably get something similar to AI. But I think the vast majority of people would then agree that this clump of neurons is no longer a human.
A human uses their entire lived experience to weigh a response. A human pulls from their childhood experience of being scared of monsters in order to make horror. An AI does not do this. It creates horror by interpolating between existing horror art to estimate what horror could be. You are not seeing an AI’s fear - you are seeing other people’s fears, reflected and filtered through the algorithm.
More importantly, a human brain is plastic, meaning it can learn and change. If a human is told they are wrong, they will correct themselves next time. That is not what happens with an AI. The only way an AI can “learn” is by adding to its training data and then retraining the algorithm. It’s not really “learning”; it’s more accurate to say you’re deleting the old model and creating a new one that holds more training data. Applied to humans, it would be as if you grew an entirely new brain every single time you learned something new. Sounds inefficient? That’s because it is. Why do you think AI is using up so much electricity and resources? Prompting and generating doesn’t consume much; it’s the training and retraining that eats up so many resources.
To summarize: AI is a tool. It’s a pretty smart tool, but it’s a tool. It has some properties that are analogous to human brains, but lacks others that would make it truly similar. It is in techbros’ best interests to hype up the similarities and hide the dissimilarities, because hype drives up stock prices. That’s not to say AI is completely useless. Just as you said in your comment, I think it can be used to help make art, in a similar way that cameras have been used to help make art.
But in the end, when you cede the decision-making to the AI (that is, when you rely on AI for too much of your workflow), my belief is that the product is no longer yours. How can you claim a generated piece as yours if you didn’t choose to paint a little easter egg in the background? If you didn’t decide to use the color purple for this object? If you didn’t accidentally paint the lips slightly skewed? Even supposing an AI were completely human-like, the art still wouldn’t be yours, because at that point you’re basically commissioning an artist, and you certainly aren’t the author of art you’ve commissioned.
To be clear, this is my stance on other tools as well, not just AI.

My hot take:
Slop is slop based on context.
It’s SEO spam. It’s thumbnails and autogenerated video for attention farming, it’s lazy Twitter posts parroting Sam Altman’s Ghibli meme, it’s disinformation. It’s faking and lying for internet points or actual money, like a low-effort version of old-school art scams and spamming.
If you spend hours tweaking your original image with some ControlNet workflow so complex it puts Photoshop layers to shame, and post it somewhere unmonetized just because, how is that slop? That’s just digital art.
I think it’s fair to say not all AI is AI slop.
If you automatically and by default call it slop, I’m going to assume your art is slop.
If you use AI as a tool to create art, then of course it’s art and you thought it up.
LLMs are just tools. It ain’t that complicated.
To me, it’s like saying CGI isn’t art because it takes away from using paintbrushes or colored pencils or whatever. When computer graphics first started coming out, were people calling it CGI slop? I don’t remember; probably too long ago to know.
I don’t recall the term CGI slop during my college days of game and graphic design, but I do recall people hating movies with CGI in them. Then as they became better, that rhetoric faded for a few years, but then it came back with AI slop.
Humans can create slop in all aspects of life too. If we didn’t, we would probably be living in a utopia. The problem now is that we create slop faster than ever, because it is like a get-rich-quick scam.
I treat AI as a tool, rather than a crutch. With the millions of people that are using AI for everything though, using it as a crutch is, unfortunately, far more common.
This is probably one of the few exceptions I have with AI.
It can be used as an assistance tool.
That’s a common issue. People simply enjoy neat boxes and categories, but the world is actually really complex and chaotic, and that’s why these categories are very problematic. While you can create arbitrary categories for everything, these definitions will inevitably be flawed. They’re still useful but far from perfect, with areas of ambiguity and contradictions.
Consider, for example, where one species ends and another begins. It’s a messy and fuzzy situation, so we simply draw an arbitrary boundary. Similarly, what even constitutes “living”? Draw a line and don’t worry about the details. Yes, it’s indecently hard, because humans really love clear definitions with a burning passion. Unfortunately, the world doesn’t really support that notion.
The same problem arises in art. Who created this painting? Well, it was primarily the work of Mister A, but he received significant assistance from his apprentices B, C, D and E. It’s complicated. Let’s just draw a line and stop worrying about the specifics.
What even is art? It’s very messy. Expect uncertainty and contradictions within these fuzzy categories. Yes, but is this slop? Yet another category problem. Same answer.
The best definition of Art that I have heard is “an object/piece that makes one feel things”
However, AI slop makes many people feel anger, so I don’t know if that definition can really fit or not. Probably not.
There is art, such as music, with the intent of making people angry or frustrated too. So it is a grey area and, as you said, very messy.
In the age of romanticism, art usually depicted idealized and beautiful things. Then realism emerged, and artists also started painting poor and ugly people. In social realism, the art was supposed to make you feel a bit uncomfortable. All of that was still clearly art.
I think art requires an intention. When you paint a picture of a seagull covered in oil, you may want the viewer to feel something about the petrochemical industry. When you take a photo of Chinese children working in a toy factory, you might want your audience to feel what the children are going through.
When you’re painting using digital tools, you may draw the same line 20 times to get it just right. As an artist, you have a goal in mind, and you will keep pressing undo until each line in the drawing meets your criteria. If you generate a hundred pictures with an AI and pick the one that fits your goals, you’re essentially acting as a curator of art. There’s a goal and an intention behind the selection process. That’s why the one picture that didn’t get deleted is art.
What if there’s zero human involvement? If there is no selection process guided by goal or intention, is that still art? Maybe. What if the viewer still feels something when looking at the result? Maybe that could make it art. What if you just look at the clouds drifting in the sky, and that makes you feel something? Is that art too? This is where it gets really messy and the categories fall apart.
I’m no art expert or anything but here’s my view.
1 and 2: Yes. You used tools to create the art, not all that different from using a pencil or paintbrush, or a tablet you didn’t create yourself. It was your imagination that created it. It wasn’t generated from a couple of sentences; what was created was (depending on skill, of course) what you pictured it should be. You didn’t just generate 1000 possibilities, pick the prettiest one, and call it yours.
3. It would still require human imagination to create the art, so I would still deem it art.
Look, this community is for seemingly STUPID questions. I have the dumb today and I didn’t come here to brain.
In your case, AI was used as an intermediate step in a larger workflow. You maintained the final creative say over the output. You aren’t selling the output, and (I assume) you are disclosing your use of AI, or at least not trying to hide it. IMO this is just about the best-case scenario for AI use.
When there is no input but prompts, no QC, and the result is sold as human art to people who don’t know any better (or worse, who don’t care), that’s where artistic merit dies.
Anybody can bash on some keys. Piano, PC…
I do openly disclose my use of AI, and I have no intention of selling them.
While anyone can bash on some keys, it is becoming more difficult to even prove something wasn’t created by AI.
So that spurs another question: if someone made it their goal to generate something fake and fool everyone into believing they created it themselves, while the artwork was actually generated and not their own, and the intention was to fool everyone in order to make a statement… would the deception be a form of art?
Not in my eyes. Fraud is fraud. Gives off prankster “social experiment” vibes and I don’t consider prank videos art either.
That’s a good point, it would be fraud, but art is very interpretive.
It reminds me of that Banksy painting where he built a shredder into the frame before it went to auction. When the painting sold for $1.4 million, it proceeded to shred itself.
I don’t know if that Banksy painting would be considered fraud or not, but he definitely made a statement.
Yes
This is my favorite way to use AI - as a mentor
Being able to quickly learn new coding languages, translate what I know from one to another, and get advice on code structure, etc. has been super helpful; meanwhile, I remain the one writing the actual code.
this is not a mentor, this is a translator
I believe it depends on what you define as art. Is art with us here in this room?
“Is it art?” is a question that’s been asked over and over throughout history.
It changes from person to person and from time to time. Cubism, photography, found art, aleatoricism, algorithmic art, interpretive dance, it’s all gone through “it’s not art” at some point. A banana taped to a wall. An “invisible” sculpture. A tin of the artist’s poop. Jackson Pollock’s dribbles.
The answer doesn’t really matter. It’s right, it’s wrong, who cares?