Well, that just made me realise something. In my department we have folks with a master's and folks with a PhD.
All the pro-AI people (“what about using AI to reduce marking load”, “do you have any AI use in your course”, etc) lack PhDs.
It’s a strange overlap, and now I’m wondering why.
People who do actual research and understand the necessity of things like peer review know how dangerous it is to have something that's straight-up confidently wrong 20% of the time, yet treated by many as the ultimate authority. These people have likely seen AI be stunningly stupid in fields they know well, and can extrapolate from there that it's probably stunningly stupid about most things.
People think AI is great for subjects they're not experts in. Experts will tell you AI is trash in their area of expertise.
AI is really good at sounding smart to the unknowledgeable.