I’m wondering if it’s a legitimate line of argumentation to draw the line somewhere.
If someone uses an argument and then someone else uses that same argument further down the line, can you reject the first argument’s logic but accept the second argument’s logic?
For example, someone argues that AI-generated music isn’t real music because it samples and rips off other artists’ music, and another person points out that this is logically the same argument as the one used against DJs in the ’90s.
I agree with the first argument but disagree with the second, because even though they use the same logic, I have to draw a line in my definition of music. Does this track logically, or am I going wrong somewhere in my reasoning?


The entirety of your argument boils down to you arbitrarily deciding that music needs to derive from human intentionality.
That’s not an actual argument about whether or not AI is capable of creating music; that’s you redefining music to make sure the answer is no.
That’s not a redefinition, lol; music is a human construct. Nature has lovely noises and birds chirp, but by itself, even if it consists of notes and waves, that isn’t music. Honestly, the whole convo is semantically confused, because there’s no ghost in the machine when it comes to “AI”: these systems are algorithms and datasets, and if the data is actual music, then whatever the “AI” comes out with could be considered an on-demand musical collage/regurgitation. There WAS human intentionality behind it, in the datasets, after all.
A gorilla or ape can’t sing or make music? Could a Neanderthal? Homo floresiensis? Homo erectus? What is it specifically about Homo sapiens that gives us the unique ability to make music and sing, that no other animal has?
Again, if you predefine music as something made by humans, then you’re not engaging in a discussion or logical debate; you’re just arbitrarily setting the goalposts to guarantee that you’re right.
People need to get over the idea that algorithms can’t be intelligent because they’re algorithms. Algorithms can model the behaviour of the neurons in your brain, meaning that they can model your brain and intelligence. We are obviously not there yet with LLMs, but just saying ‘numbers and math = not intelligent’ is quite frankly dumb and just shows that you don’t understand math, physics, biology, neuroscience, etc.
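(Purely as an illustration of that point, here’s a minimal sketch of a leaky integrate-and-fire neuron in Python. The function name and parameter values are made up for the example, and real neuron models are far richer, but it shows that a neuron’s basic behaviour can be written down as a short algorithm.)

```python
# Minimal leaky integrate-and-fire neuron: a toy illustration that neuron
# dynamics can be expressed as a simple algorithm. All parameter values
# here are arbitrary demo numbers, not a serious biophysical model.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Return the membrane-voltage trace and spike times for an input-current trace."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_threshold:                 # membrane potential crosses threshold
            spike_times.append(step * dt)    # record a spike...
            v = v_reset                      # ...and reset the membrane potential
        voltages.append(v)
    return voltages, spike_times

# Example: a constant input current for 100 time steps produces regular spiking.
volts, spikes = simulate_lif([2.0] * 100)
print(f"{len(spikes)} spikes at t = {spikes}")
```

Scaling that up to a whole brain is obviously an enormous open problem, but the point is that there’s nothing about the basic mechanism that can’t be computed.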
I said human because we haven’t found another free-willed, conscious individual that does this, but of course they’d be included here too. Aliens could make music. AI is not “making anything”; it’s regurgitating combinations of previous stuff on command. And idk what you’re talking about, I think therefore I am, and “AI” simply isn’t. You don’t understand what thinking and free will are, so you think you’re on the same level as some word calculator, lol, go ahead my guy.
Even current-day LLMs are doing more than just regurgitation, even if they fall far short of human intelligence.
And at a fundamental level, there’s no reason to think that simulated neurons running on computer chips can’t be as intelligent as us, if we can figure out the right way of wiring them, so to speak.
There’s no inherent law of the universe that says that only biological humans can be intelligent and can thus create music.
My man, you’re talking sci-fi, not what we currently have. Furthermore, both philosophically and materially, the notion that consciousness cannot be computed is more than just gaining traction. If humans ever make something with free will and volition, something that isn’t just doing things on command but has its own wants, sure. But we might never get there, and that’s a real possibility. Intelligence isn’t in solving equations but in imagining the math problems.
The biggest current LLM models contain roughly the same number of parameters as we have neurons. It’s not a 1:1 mapping, because parameters are closer to neuronal connections than to neurons, but from a pure-numbers standpoint we are operating at the scale where we can start creating true simulated intelligences, even if not at human scale just yet.
This doesn’t mean current LLMs are that intelligent, just that it’s not sci-fi to think we could create a simulated intelligence now.
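(A rough back-of-envelope comparison of that scale claim, using approximate, commonly cited estimates rather than exact figures; the model size is just one example.)

```python
# Back-of-envelope scale comparison. All figures are approximate,
# commonly cited estimates used purely for illustration.
human_neurons  = 8.6e10   # ~86 billion neurons in a human brain
human_synapses = 1.0e14   # ~100 trillion synaptic connections (order of magnitude)
llm_parameters = 1.75e11  # e.g. a GPT-3-scale model with 175 billion parameters

print(f"parameters vs. neurons:  {llm_parameters / human_neurons:.1f}x")   # ~2x
print(f"parameters vs. synapses: {llm_parameters / human_synapses:.4f}x")  # ~0.002x
```

So parameter counts sit in the neighbourhood of neuron counts while still being a few orders of magnitude below synapse counts, which is what the “not a 1:1 mapping” caveat is pointing at.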
Is it? Do you have any sources / do they have any explanation for why neurons can’t be simulated?
I mean, we’re talking about whether or not an AI could make music. If it creates a new song, with lyrics and a melody that never existed before, and people listen to it and sing it and dance to it and enjoy it, how would it not be music?