I have managers that get angry if you tell them about problems with their ideas. So we have to implement their ideas despite the fact that they will cause the company to lose money in the long run.
Management isn’t run by bean counters (if it was, it wouldn’t be so bad), management is run by egos in suits. If they’ve staked their reputation on AI, they will dismiss any and all information that says their idea is stupid.
AI is strongly biased towards agreeing with the user.
Human: “That’s not right, 1+2+3=7”
AI: “Oh, my bad, yes I see that 1+2+3=15 is incorrect. I’ll make sure to take that on board. Thank you.”
Human: “So what’s 1+2+3?”
AI: “Well, let’s see. 1+2+3=15 is a good answer according to my records, but I can see that 1+2+3=7 is a great answer, so maybe we should stick with that. Is there anything else I can help you with today? Maybe you’d like me to summarise the findings in a chart?”
You have more faith in people than I do.
This is my theory as to why the C-suite loves it. It’s the ultimate “yes man.” The ultimate ego-stroking machine.
Yeah. Exactly. You put it so much clearer than I did.