I love art. That’s why I hate AI. There was no effort put in, no time, no practice, nothing human that makes art great. Generative AI is stupid.
I like art. But tying effort, time or practice to art is absolute bullshit. Creative expression. Creating something out of nothing. Putting part of your own mind onto a medium. It’s beautiful.
And then we get AI going full Frankenstein monster on art.
Okay but creative expression and creating something out of nothing is effort, time, and practice.
The amount of effort, time, and practice may vary but it’s still obviously present in actual art.
Using generative AI to create content is devoid of even the slightest effort.
Yeah, but I do not perceive effort as something crucial, that’s my point. You are able to put effort into creating a perfect query for AI to create a piece of art - does that make it valid?
In my opinion, no, because what it does is regurgitate the art - the soul, emotion, and ideas - of all the people it stole art from before, and mixes it into…“art”. There’s no meaning, no soul in it. Thus, not creative; thus, for me, not art. Another commenter wrote that the prompts used are actually more creative and artistic than the output, and I honestly agree.
Agreed.
Effort, time, and practice can be important to the art itself. Filmmakers love long one-shot scenes because it’s an impressive technical achievement and the end result is often made more interesting because of just how it was made. There are authors and sculptors and filmmakers and composers who created masterpieces that were only made possible by the decades of experience they’ve accumulated. For example, Tolkien’s pre-LOTR career is a fascinating look at how he eventually acquired all the tools to be able to create a compelling narrative in a world he created.
But it’s by no means required, or always better, to have the high effort or high skill option over a lower effort or lower skill artwork. Sometimes additional effort is a waste, or counterproductive. Sometimes there’s beauty in the low skill or constrained or rushed option.
Art is a creative process, and any of the little factors can matter, but very few of the factors always matter. It’s a “you know it when you see it” thing.
This makes me think of an anecdote from Rory Sutherland about “30 years of experience delivered in 30 seconds” when some ad exec drew a logo for a large international company right at the meeting where she was hired and someone thought to question her rates.
So the anecdote was from this guy https://youtube.com/shorts/_2KCzBMz1R0
I don’t like marketing people in general, but Rory is more of a mass psychologist, in my opinion, than just a tech bro out for money.
Yeah, it’s not so much about the difficulty or the practice, it’s in the innate lack of precision that AI offers to manifest any meaningfully specific artistic vision. Whatever vision the creator might have has to be compromised to accept the closest match AI could come up with.
A definition of art that I liked basically said that art is about creative expression through a medium. Whether that expression is done poorly or well is irrelevant. AI “art” doesn’t really express anything except “here’s a possible thing I could make in response to this prompt”.
The prompts themselves are more creative and more artistic than any of the outputs, and I wonder if they could even be copyrightable.
Right, sounds like someone writing a book versus making a movie adaptation. The book has me fill in the gaps, which can in many cases be more satisfying than whatever the film adaptation comes up with, but sometimes a film adaptation executed well adds something. However a hypothetical book-to-film AI would be utterly mind numbingly uninspiring, and I would just take the prompt and use my own imagination to generate what I’d like more.
It’s been fairly obvious to me for some time that a website where people shared the prompts instead of (or alongside of) the output, and that allowed you to hit a generate button to get different versions would be way fucking cooler than a seemingly infinite scroll of uninspired, uncanny, single generation outputs.
I’m conflicted here. Your first paragraph opens the loophole that “AI art” could be the medium in which the artist is expressing themselves. So poor prompting could be beautiful too, in a way. I’m sure photographers of the past felt this way about software post-editing when it became popular.
The results may be good to many viewers but appalling to anyone who can tell the difference. If the results don’t matter, does it matter if AI slop is “bad”?
The prompt may have been beautiful, and the process of learning and finding the right tools (i.e. choosing the right model) is akin to the struggles of any artist in learning their craft.
/devil’s advocate
I mean I didn’t say all art defined that way would be good art. But I also don’t find the outputs artistic at all.
I was afraid going down this road too far would make me sound like I was an advocate for this technology. So I typed out something similar multiple times in my initial reply and then deleted it, but that was kind of my thinking as well.
I read an article where a writer was using autocomplete – in the before times, when it wasn’t called AI – to try to write a piece about a relative who had recently died, and the article itself fully included the usage of the tool. That’s the closest thing to AI-assisted art I’ve ever read. Now I wish I had saved it.
You could say the same about limitations in skill or material properties or…
With a human, a limitation in skill or material properties will tend to manifest in a vague way that lets me fill in the gaps, or else leave non-essential details out.
With GenAI, some details will generally be added, but without any intent behind them.
As another commenter pointed out, I’d probably rather read the prompt and treat it like a book than look at the GenAI output stemming from that prompt. With text content I can just read it, instead of listening to voice acting and looking for meaning where there’s no intent behind the voice. With human-made visual art, I know there’s intent behind whatever details are there, so I don’t need to pay too much attention to stuff that doesn’t matter.
Creativity is one part, but knowing the artist put tremendous effort into making it adds a crucial piece.
Is it out of nothing, though? I have very strong feelings against current copyright laws. This is based on my belief that human artists do not create things out of nothing. They are standing on the shoulders of society, which gave them the experience, framework and tools to create art. And thus everything they created should, after some period, belong again to society. There are still enough other reasons to hate AI.
The artists look at the blank canvas - be it digital, real, or metaphorical, like a piece of stone or a 3D space - and then proceed to express themselves through it. Their ideas, emotions, vision, their perception of the world.
You, my dear, do not have hold of any of it. You may have a hold on their canvas, their tools of creation, the sources they learned their craft from (although with the self-taught, not even that may be true), but you do not hold any of the main reasons art happens - their soul, let’s say. They do not owe that to anyone or anything.
Sorry if I sound aggressive, it’s just…the notion that society owns something as personal as art? No.
Only in terms of copyright and for published art. And I absolutely stand by my belief that your copyright needs to be limited to like 10 years and after that it becomes public domain.
I do draw myself, on a purely amateur level. And I don’t like this notion that artists create something out of thin air. That’s simply bullshit. Any artist is deeply influenced by their upbringing and personal tastes and that’s what ends up on whatever medium they choose.
I hate AI because when art is created by people, the artist is relying upon previous influence, their experiences, their artistic vision, etc.
Current AI just vomits everything. AI doesn’t have the ability to actually think.
I’d argue that there was human effort put in to develop it and make it learn the way it’s learning now - but it shouldn’t be used to copy others’ art and make “art” on its own. Instead, I’d really like to see it help diagnose and treat diseases, prevent crime, and things like that. Ultimately it should be a tool to enhance human reasoning, not something to replace human creativity.
The issue with ‘AI’ is that it is so broad.
So we have Generative AI and other AI.
So when you talk about developing disease treatments, to the extent that AI is involved, it’s not generative AI but other machine learning techniques, with limitations. E.g. AlphaFold is pretty good at predictions for some proteins but will fall apart for certain classes. Useful, with limitations.
When you talk about helping diagnose, then maybe you are in generative AI territory, and maybe it’s useful for finding relevant medical research the doctor could not have kept up with on their own. However, it shouldn’t be a crutch, and getting caught up in trying to get an answer out of an LLM can be just as bad here as for anything else. So it’s maybe useful if doctors treat it as supremely stupid, but let it surface actually relevant source material for an unrecognized problem. Beyond LLMs, the more ‘traditional’ AI approaches can help with things like a quick check on imaging that might have otherwise been skipped (if we actually had enough quality, labeled data for asymptomatic problems in scans, which I don’t think we do). They might be able to identify more complex patterns in bloodwork, but again, they would have to be trained in nuanced ways I don’t think we are equipped to do.
Preventing crime is a tough one. I don’t think I’ve seen anything resembling success above and beyond a human understanding of crime frequencies in an area, which is generally self-evident from a map of incident reports without any AI saying anything. I know they tried to predict recidivism based on data about a subject, but that was a colossal failure.
The general conundrum is that generative AI is unreliable and not generally more magical than a pretty dumb human looking at fairly obvious visualizations. You need use cases with some potential improvement that wasn’t worth human attention before. For example, hypothetically, if you needed to search for a literal needle in a haystack - an effort that wouldn’t be worth a human’s time - an AI approach could maybe help you find it. It might flag a hundred straws of hay as needles, and may even miss the needle entirely, but there’s at least some chance it brings the problem within practical reach of a human, so long as it’s not that important if the needle can’t be found anyway.
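That needle-in-a-haystack triage idea is easy to sketch numerically: even a classifier with terrible precision can shrink the pool a human has to review by orders of magnitude. A toy Python sketch - the flag rates are hypothetical, purely for illustration:

```python
import random

random.seed(42)

# Toy haystack: 100,000 straws, exactly one needle.
N = 100_000
needle = random.randrange(N)

def noisy_classifier(item: int) -> bool:
    """Flags the needle 90% of the time, but also falsely flags
    about 0.1% of ordinary straws (made-up rates for illustration)."""
    if item == needle:
        return random.random() < 0.9
    return random.random() < 0.001

flagged = [i for i in range(N) if noisy_classifier(i)]

# Human review now covers roughly a hundred items instead of
# 100,000 -- most flags are wrong, and sometimes the needle is
# missed entirely, but the review is at least feasible now.
print(f"flagged {len(flagged)} of {N} items")
print("needle in flagged set:", needle in flagged)
```

The point of the sketch is the ratio, not the classifier: nearly every flag is a false positive, yet the flagged set is about a thousand times smaller than the haystack, which is exactly the “brings the problem within practical reach” scenario described above.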
There are medical efforts using AI. I recently heard a story about AI helping doctors diagnose cancer faster.