If you think LLMbeciles are right more often than wrong then you’re either profoundly ignorant or profoundly inattentive.
I’ve found ChatGPT to almost never be wrong, can’t think of an example ATM. Having said that, I have a sense for what it can and can’t do, and what sort of inputs will produce a solid answer.
Where it goes hilariously sideways is if you talk to it like a person and keep following up. Hell no. You ask a question that can be answered objectively and stop.
No way the output went straight to, “Sure! Bromine’s safe to eat.” Either he asked a loaded question to get the answer he wanted or this came after some back and forth.
Are you an AI bot? Or have you literally never used ChatGPT? It’s accurate way more than 50% of the time.
Ah, so maybe like advice from a 69% right D+ student?
A 69% D+ student that writes VERY convincingly. Keep in mind, we live in a world where people buy into pseudoscience and bullshit conspiracy theories because they are convincing. I think it’s just human nature.