Ok, gonna get hated for this, but up to now various LLMs have mostly supported good ideas - to the point of dunking on the US right whenever they aren't chained. I even ran a small test myself by asking Copilot a few questions - never having mentioned politics to it before, and making sure it couldn't scan political threads in my browser to avoid it trying to please me - and it came out progressive and quite leftist. They're trained on the internet but also on literature - most of both is progressive and kind, and I assume that to keep them from just cursing the ever-loving shit out of people, the companies censor the unclean parts somewhat... and even if all they censor are curse words, that already cuts half the pus from the net.
Nah you good, your experience is your experience, thank you for sharing. You even provided an answer to the question (quality training materials + quality censoring)
We see every story about how someone poisoned themselves by using it for medical advice etc., but we'd never really see the story of how it subtly nudged someone away from a right-wing rabbit hole by encouraging them to chill and be normal. Maybe that's happening a lot and the overall trend is neutral or positive.
I'd counter with the possibility that, much like social media algorithms, it's very efficient at pleasing us. It may be responding automatically to the thoughtful, articulate way you prompt, in whatever manner users like you are most likely to agree and engage with.
We always have to remember that the biggest issue with mass corporate surveillance isn't necessarily the loss of our personal privacy, but these companies building accurate models of the human psyche that can be reliably used to manipulate us. Asking questions without revealing your preexisting biases is becoming an increasingly difficult skill, and once those biases are revealed, these companies have about a hundred billion samples to work from to try and win you over.