AI is so dumb, nothing it tells me is more than a regurgitation of common sense. Everything you need even a few brain cells for, it gets consistently wrong.
Reverse engineering its output takes three times as long as just doing the work yourself. Why does everyone prefer fixing the AI’s hallucinations instead of thinking for themselves?
Thinking LLMs are intelligent because they sometimes reproduce correct statements is like believing that books themselves can think because some books contain the thoughts of smart people.
My workplace is basically forcing everyone to use it because we’re trying to be part of the bubble. So I’ve had some recent experience with the most up-to-date models.
For stupidly simple coding operations it works fairly well. But just as you’re describing here, for anything even slightly complicated you still have to do it yourself, because otherwise it will just lie to you and claim it did a thing when the code doesn’t work at all.
I’m so happy that all our AI usage is completely optional and not even encouraged.
I would have thought that to be part of the AI bubble you’d need to create and sell something that has to do with AI, not just be a user.
Where is the money in that? Being able to fire half your workforce?
Have I mentioned how happy I am that I’m not forced to use AI?!
I am being tracked on my usage of AI, so I fiddle with it. My conclusion: worst intern ever.
It sometimes gets things right, just like you said - but I’ve read what a coding assistant claimed it did, been reasonably impressed, then glanced at the actual code and discovered it hadn’t even done what it claimed. It wasn’t just buggy - it lied about itself.
My previous worst intern just didn’t do the work. This nonsense wastes my time.
It’s often not even useful common sense. I see it more like a horoscope: if you really want to believe it’s useful, you find something in the answer you can relate to and imagine it was useful. But if you stay objective, the picture is different.
Thinking hard. Brain tired. Using it so unpleasant 😭
Sigh…
LLMentalist effect