Sounds good. Then they’ll finally move away from AI, and we will all stop having AI shoved down our throats. I’m sick and tired of all these AI chatbots in places where we don’t even need them.
“Instead of looking for other avenues for growth, though, PwC found that executives are worried about falling behind by not leaning into AI enough.”
Sunk cost fallacy at work
Oh no, they are shit-scared of what happened to the companies that didn’t survive the shift to digital around the 2000s.
The truth is, many companies didn’t attempt that transition and disappeared, or went from their peak to second-class status. But lots of companies also poured large amounts of money in the wrong way, and the same thing happened. Guess history repeats itself, and every CEO is finding out they didn’t get where they are because they’re smarter than their peers, the way they so strongly believed.
It’s gambling all the way down: each one thinking they will be the one that wins big while everyone else fails.
I was thinking about this recently… in the early 2000s, for a short time, there was this weird chatbot craze on the internet… everyone was adding them to web pages like MySpace and free hosting sites…
I feel like this has been the resurrection of that, but on a whole other level… I don’t think it will last. It will find its uses, but shoving glorified auto-suggest down people’s throats is not going to end up anywhere helpful…
An LLM has its place in an AI system… but without reasoning it’s not really intelligent. It’s like deciding what to say next in a sentence, but without the logic behind it.
The logic is implicit in the statistical model of the relationships between words, built by ingesting training material. Essentially, the logic comes from the source material provided by real human beings, which is why we even talk about hallucinations: most of what is output is actually correct. If it were mostly hallucinations, nobody would use it for anything.
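As a rough illustration of how apparent “logic” can fall out of pure word statistics, here’s a toy next-word predictor. It’s a hypothetical bigram counter, nothing like a real LLM’s architecture, but it shows the same idea: the model only counts which words followed which in its training material, and the “knowledge” in its output is whatever the source text encoded.

```python
from collections import Counter, defaultdict

# Toy stand-in for "training material" (hypothetical example text)
corpus = "the cat sat on the mat . the cat ran .".split()

# Count which word follows which: a minimal statistical model of the
# relationships between words
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    # Predict the continuation seen most often after `word` in training
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # → "cat" (seen twice vs. "mat" once)
```

Nothing here understands cats or mats; the prediction is correct only because the source text was.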
No, you can’t use logic based on old information…
If information changes between variables, a language model can’t account for that, because it doesn’t understand.
If your information relies on x being true, then when x isn’t true the AI will still say it’s fine, because it doesn’t understand the context.
Just like it doesn’t understand instructions like “don’t do that.”
Most of what you want to know is old information, some of it thousands of years old and still valid. If you need judgement based on current info, you inject current data.
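The “inject current data” step can be sketched as building the prompt around freshly retrieved facts, so the model’s answer is grounded in current info rather than stale training data. Everything below (the function name, the fact keys, the example values) is a hypothetical illustration of the pattern, not any particular product’s API:

```python
from datetime import date

def build_prompt(question: str, current_facts: dict) -> str:
    """Minimal sketch of injecting current data into a prompt.
    `current_facts` is a hypothetical stand-in for a live data source."""
    context = "\n".join(f"- {k}: {v}" for k, v in current_facts.items())
    return (
        f"Today is {date.today().isoformat()}.\n"
        f"Current data:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the current data above."
    )

prompt = build_prompt(
    "Is the service healthy?",
    {"error_rate": "0.2%", "latency_p99": "340 ms"},  # hypothetical values
)
print(prompt)
```

The model still doesn’t “know” today’s numbers; it just completes text in which those numbers happen to appear.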
Well, people use it and don’t care about hallucinations.
And they never add anything of use. They are like an incredibly sophisticated version of Clippy… and just as useless.
Clippy being useless was okay because it was the 2000s. In this day and age, though? Meh.
Also, people HATED Clippy. They always hated AI.