Look at this shit (first paragraph of body of article):
“Automatic watches are admired for their intricate mechanics and timeless appeal, but even the finest timepieces can sometimes run fast or slow. If you’ve noticed your automatic watch gaining or losing time, you’re not.”
First search result on DDG. Thank you, automatic watch expert and real person Hnin Oo Thazin. We may need a whole ass new internet.
https://mtscwatch.com/blog/why-your-automatic-watch-runs-fast-or-slow-and-how-to-fix-it


As someone who knows a thing or two about automatic watches, I decided to give it a read. The info is actually not as awful as I expected (no hallucinations, surprisingly), but I still wouldn't recommend it as a guide, even if it were written by a human. One part especially bothered me:
While, yes, a magnetized movement will cause a watch to suddenly run fast, demagnetizing a movement isn't completely harmless. If you're wrong and the movement isn't magnetized but has some other kind of damage, attempting to demagnetize it will actually magnetize it, and a magnetized movement on top of whatever damage was already making it run fast could be far worse. I think an added disclaimer would be helpful here. There are ways to test whether a movement is magnetized, and those should be mentioned. Blanket-recommending demagnetizing a movement without a disclaimer is dangerous. I know this was written by AI and not a human, but that part really bugged me.
Well, that's the thing: it's crap written by an LLM. I read the first paragraph, then went and found an article written by a person, but I'd say a good percentage of people would read the whole damn thing and take it as fact.
Personally, I've always objected to the term "hallucination" for these LLMs fucking up. It's not hallucinating; it's shitty computer code outputting shitty output.
For what it's worth, I just needed to manually wind my watch a bunch. It was just bequeathed to me and I didn't know anything about automatic watches. It's keeping time now, of course.
Considering you gave the article a high factuality rating, could it be supervised AI output?
I mean, it's still missing a bunch of important info; it's just not necessarily saying anything incorrect. In general, though, I'd always steer clear of LLMs regardless of the situation.