The chatbots are trained in the “yes and” model of conversation. They can’t (?) say “no, you are a lunatic and this is insane,” or even milder variations of it; they only say “yes, you are right.” This stokes one’s ego, but obviously creates problems.
I’ve only used ChatGPT, but right off the bat it will almost always tell you if you’re wrong. You have to go down the rabbit hole to get it agreeing to insane shit.
Odd, I used ChatGPT as well and got an insane “yes, sure, you can do it like this” from the get-go (I asked about a git problem and got git commands that either didn’t exist or didn’t do what ChatGPT claimed they did - it turns out what I wanted to do could not be done with git).