Just your daily reminder not to trust, or at the very least to fact-check, whatever ChatGPT spews out, because not only does it blatantly lie, it also makes stuff up way more often than you'd want to believe.
(btw, Batrapeton doesn't exist; it's a fictional genus of Jurassic amphibians that I made up for a story I am writing. They never existed in any way, shape, or form, and there is no trace of info about them online, yet here we are with ChatGPT going "trust me bro" about them lol)
Actually it's correct. If you're at all familiar with the Permian of New Mexico you would surely have heard of Batrapeton. Maybe read more?
You just asked it to imagine what the nonexistent word would mean, then complained that it did its job?
lmao
Like, I thought this community was for people sharing the hate for cheap corpo hype over AI, not trying to hype up hate for an otherwise useful instrument. You're swaying from one extreme to another.
Nobody asked it to imagine anything. "What would X mean in Y" is a common phrasing.
Yes, they did. OP instructed it to fill in the blank by asking "what would it mean", not whether it knows what it is. If you instead ask, "Do you know what 'batrapeton' means in a paleontological context?", it does a quick search and responds like this:
AI output hidden from delicate eyes (/s Actually, it's just long)
I could not find any credible reference in the paleontological literature for the term “batrapeton” (or very close variants) as a recognized taxon, feature, or concept.
It’s possible that:
The term is a typographical or transcription error (e.g. a mis-spelling of a known genus or concept).
It’s an informal, local, or unpublished name (a “nomen nudum”) used in a manuscript but never formally erected.
It might be a fictional or invented name (as some discussions online suggest) with no real scientific usage.
One possibly related genus is Batropetes, which is a valid extinct genus of microsaur (a kind of small early amphibian) from the Early Permian (Germany). (Wikipedia)
If “batrapeton” was intended to be “Batropeton” (or something like that), then the user might have meant “Batropetes”. But “batrapeton” as spelled does not seem to match any known paleontological entity.
If you like, I can help you check whether “batrapeton” appears in niche literature (theses, old reports) or whether it’s a mis-rendering of another name — would you like me to look further?
I just asked Gemini and it got the wrong answer even after Google searching. Plus, what I said was that "what would <something> mean in <some field>" is a normal way of asking "what does <something> mean in <some field>", which a non-pedantic English speaker would understand.
Yes, Gemini is a lot worse generally, and you have to be “pedantic” to get what you want.
Works as intended (if not as advertised)
just as always
I'm no AI proponent, but phrasing is important. "Would" should be replaced with "does". "Would" implies a request for speculation specifically, or even actively creative output.
As in, if it existed, what would…
Because AI is a predictive transformer/generator, not an infinite knowledge machine.
Whenever someone confidently states “I asked ChatGPT…” in a conversation, I die a little inside. I’m tired of explaining this shit to people.
Same. I just quit trying to correct them after a point.
First time? This is indeed how LLMs work.
Bro, read the text under the GPT chat box. Lmao
I read it. I don’t think a community called “Fuck AI” needs a daily reminder. We all know it sucks!
I’m here for exactly these memes.
LLMs can’t say they don’t know. It’s better for the business to make up some bullshit than just say “I don’t know” because it would show how useless they can be.
You're right, and there's a further reason as well. The way these models are trained is by "taking tests" over and over. Wrong answers, as well as saying "I don't know", both score a 0. Only the right answer scores a 1.
So it might get the question right by making stuff up/guessing, but will always be punished for admitting a gap in knowledge.
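Just to put numbers on it, here's a toy sketch; the 30% hit rate below is an assumption for illustration, not from any real benchmark:

```python
# Toy illustration: under 0/1 test scoring, guessing always beats abstaining.
# Assumption for illustration only: the model lucks into the right answer 30%
# of the time when it guesses on questions it doesn't actually know.

p_correct_when_guessing = 0.30

expected_score_guess = p_correct_when_guessing * 1 + (1 - p_correct_when_guessing) * 0
expected_score_abstain = 0.0  # "I don't know" is scored the same as a wrong answer

print(f"guessing:   {expected_score_guess:.2f}")    # 0.30
print(f"abstaining: {expected_score_abstain:.2f}")  # 0.00
# Any nonzero chance of a lucky guess makes guessing the higher-scoring policy,
# so optimizing for this kind of score never rewards admitting ignorance.
```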
All LLMs act like improv artists: they almost never stop riffing, because they always say "yes, and".
But they’re not funny :(
Your specific wording is telling it to make up an answer.
What “would” this word mean? Implying it doesn’t mean anything currently, so guess a meaning for it.
But yes, in general always assume they don’t know what they are saying, as they aren’t really capable of knowing. They do a really good job of mimicking knowledge, but they don’t actually know.
Yes, that is true, and thanks for pointing it out. If I'm being honest, I wasn't even sure whether Batrapeton was a valid name. The reason I was searching it up was to find a blatantly amphibian-coded name that wasn't already a real creature someone had named and described; otherwise I would have had to go look for a different name, and every name I could come up with seemed to already be taken and described by someone or other. So I decided to Google it just in case, saw that there was nothing on them, and realized ChatGPT had just made it up. I wish AI had a thing where it could inform the user "this is what it would possibly be, but it doesn't actually exist" instead of just guessing like that.
They always return an answer.
I’ve had LLMbeciles make up an entire discography, track list, and even lyrics of “obscure black metal bands” that don’t exist. It doesn’t take much to have them start to spew non-stop grammatically correct gibberish.
I've also had them make up lyrics for bands and songs that actually exist. Specifically, completely made-up lyrics for the song "One Chord Wonders" by The Adverts. And then, when I quoted the actual lyrics to correct them, they incorporated that into their never-ending hallucinations by claiming that was a special release for a television special, but that the album had their version.
Despite their version and the real version having entirely different scansion.
These things really are just hallucination machines.
Someone here said that LLM chatbots are always "hallucinating" and it stuck with me. They happen to be correct a lot of the time, but they are always making stuff up. That's what they do; that's how they work.
They pin values to data and use a bit of magical stats to decide if two values are related in any way and are relevant to what was asked. Then they fluff the data up with a bit of natural language, and there you go.
It's the same algorithms that decide whether you want to see an advert about dog food influencers or catalytic converters in your area.
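If you want a rough picture of the "magical stats" part, here's a toy sketch; the words and the three-number vectors are made up for illustration (real embeddings have hundreds or thousands of dimensions), but "are these two things related" really does come down to a similarity score like this:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 means pointing the same way, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Made-up toy "embeddings"; real systems assign these numbers automatically.
embeddings = {
    "amphibian":           [0.9, 0.1, 0.3],
    "temnospondyl":        [0.8, 0.2, 0.4],
    "catalytic converter": [0.1, 0.9, 0.7],
}

query = embeddings["amphibian"]
for word, vec in embeddings.items():
    print(f"{word:>20}: {cosine_similarity(query, vec):.3f}")
# Higher score = "related enough" to get pulled into the answer (or the ad feed),
# regardless of whether the resulting claim is actually true.
```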
Algorithmic Interpolation
Artificial Imagination
Yes. One of the original examples of this is to make up a saying and ask it what it means. Like "you can't shave a cat until it has had its dinner." It'll make up what it means.
ChatGPT learns from your previous threads.
If you’re using ChatGPT for your writing, it probably used that as information to answer the question.
When I asked it a similar question, it answered in a similar way.
When asked for sources, it spat out information about a name that's very similar, which it seems to have also used to describe the fictional species.
When pressed a little more, it even linked this very post.
I didn't ask it about amphibians, writing, or any extinct species at all. I was trying to see whether a name I wanted to use for a work of fiction wasn't already in use, and whether said name would make sense in the context I wanted to use it in.
I have to give it props for dropping in "dissorophoid temnospondyl", which I figured even odds on also being made up, but it is not!
Yup, I didn't expect it either. It's like it searched up to a certain point to gather info but couldn't find anything conclusive, so it made up the closest thing to what it found and called it a day. It does bullshit, but it does so very well.
For a while I was thinking I might eventually use AI for more than a code completer. But that looks less likely every day.