Music is just layered simple patterns and our brains LOVE IT.
Sound is pressure waves; a musical note is a specific pattern of pressure waves. Melodies are repeating patterns of notes, and songs are repeated melodies following a standard structure.
Our brains love trying to decode and parse all these overlapping patterns.
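A rough way to picture the layering is a toy Python sketch like the one below (the frequencies, note lengths, and repeat counts are just made-up examples, not anything precise):

```python
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second

def note(freq_hz, duration_s=0.25):
    # A 'note' is just a pressure wave oscillating at a specific frequency.
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return np.sin(2 * np.pi * freq_hz * t)

# A melody: a short pattern of notes, repeated (a C-E-G arpeggio, played twice).
arpeggio = [261.63, 329.63, 392.00]  # illustrative frequencies in Hz
melody = np.concatenate([note(f) for f in arpeggio * 2])

# A 'song': the melody repeated inside a larger structure (here, four times).
song = np.concatenate([melody] * 4)

print(song.shape)  # layered simple patterns all the way down
```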
Maybe this is not really a shower thought and more wild speculation.
I strongly believe that our brains are fundamentally just prediction machines. We strive for a certain level of controlled novelty, but for the most part ‘understanding’ (i.e. being able to predict) the world around us is the goal. Boredom exists to push us beyond getting too comfortable and simply sitting in the already familiar, and one of the biggest pleasures in life is the ‘aha’ moment when understanding finally clicks into place and we feel we can predict something novel.
I feel this is also why LLMs (ChatGPT etc.) can be so effective at working with language, and why they occasionally seem to behave in such a humanlike way: the fundamental mechanism is essentially the same as our brains’, if massively more limited. Animal brains continuously adapt to predict sensory input (and, to an extent, their own output), while LLMs learn to predict a sequence of text tokens during a restricted training period.
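To make the ‘predict the next token’ part concrete, here is a toy illustration using a simple bigram counter; nothing like a real LLM’s architecture, and the corpus is made up, but the training objective has the same shape: guess what comes next.

```python
from collections import Counter, defaultdict

# Made-up toy corpus; a real LLM trains on vastly more text with far richer context.
corpus = "the brain predicts the world and the brain enjoys predicting".split()

# 'Training': count which token tends to follow which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # Return the most frequently observed next token, if this one was seen in training.
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))    # -> 'brain' (seen twice, beats 'world')
print(predict_next("brain"))  # -> 'predicts' (tie with 'enjoys', first seen wins)
```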
It also seems to me that the strongest example of this kind of prediction in animals is noticing (and becoming wary) when something feels ‘off’ about the environment around us. We can easily sense specific kinds of small changes to our surroundings that signify potential danger, even in seemingly unpredictable natural environments. From an evolutionary perspective this also seems like the most immediately beneficial aspect of this kind of predictive capability. Interestingly, this kind of prediction seems to happen even at the level of individual neurons. As predictive capability improves, it also necessitates an increasingly deep ability to model the world around us, leading to deeper cognition.
I agree; LLMs have the amazingly human ability to bumble into the right answer even if they don’t know why.
It seems to me that a good analogy for our experience is a whole bunch of LLMs optimized for different tasks, with some other LLM acting as a scheduler/administrator for the lower-level models, and that administrator is consciousness. It might be more layers deep, but that’s my guess with no neuroscience or machine learning background.
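If it helps, the rough shape of that analogy in code looks something like this (everything here is hypothetical: the specialist names, the keyword routing, all of it):

```python
# Purely illustrative sketch of the 'administrator over specialists' analogy.

def vision_specialist(task: str) -> str:
    return f"[vision] parsed the scene in: {task}"

def language_specialist(task: str) -> str:
    return f"[language] drafted a reply to: {task}"

def motor_specialist(task: str) -> str:
    return f"[motor] planned movements for: {task}"

SPECIALISTS = {
    "see": vision_specialist,
    "say": language_specialist,
    "move": motor_specialist,
}

def administrator(task: str) -> str:
    # The 'consciousness' layer: picks a lower-level specialist and delegates to it.
    for keyword, specialist in SPECIALISTS.items():
        if keyword in task:
            return specialist(task)
    return language_specialist(task)  # default: talk about it

print(administrator("see what changed in the room"))
print(administrator("move toward the door"))
```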