Reminds me of the very early days of the web, when you had people with the title “webmaster”. When you looked deeper into the supposed skillset, it was people who knew a bare minimum of HTML and could manage a tree of files.
I’ll never forget being at an ATM and overhearing a conversation between two women in their 30s behind me - one tells the other, “I’ve been thinking about what I want to do, and I think I want to be a webmaster”. It just sounded like a very casual choice, one about making money and not much deeper than that.
This was in 1999 or so. I thought - man, this industry is so fucked right now - we have hiring managers, recruiters, etc. who have almost no idea of the difference in skillsets between what I do (programming, architecture, networking, databases, then trying to QA all of that and keep it running in production, etc.) and people calling themselves “webmasters”.
Sure enough, not long after, the dotcom bubble popped. It was painful for everyone without question, whether you had skills or not (even people who had kept their distance from the dotcom thing to an extent). But I don’t think roles like “webmaster” did very well…
I have had in-person conversations with multiple people who swear they have fixed the AI hallucination problem the same way: “I always include the words ‘make sure all of the response is correct and factual without hallucinating.’”
These people think they are geniuses thanks to just telling the AI not to mess up.
Because these conversations were in person, with a fair amount of running context, I know they were being dead serious, and no one will dissuade them from thinking their “one weird trick” works.
All the funnier when, inevitably, they get a screwed-up response one day and feel betrayed because they explicitly told it not to screw up…
But yes, people take “prompt engineering” very seriously. I have seen people proudly display massively verbose prompts that often looked like far more work than just doing the thing themselves without an LLM. They really think it’s a very sophisticated, hard-to-acquire skill…
“Do not hallucinate”, lol… The best way to get a model to not hallucinate is to include the factual data in the prompt. But for that, you have to know the data in question…
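That “include the factual data” point is basically retrieval-augmented prompting in miniature: instead of incanting “do not hallucinate”, you paste the source facts into the context and tell the model to answer only from them. A minimal sketch of the difference, with made-up facts and wording, assuming the resulting string gets sent to whatever chat API you already use:

```python
# Minimal sketch (my own illustration, not from the thread): grounding a prompt
# with the relevant facts instead of begging the model not to hallucinate.
# The facts, question, and wording below are all made up.

facts = """Acme Widget Co. was founded in 1987.
Its current CEO is Jane Smith (since 2021).
FY2023 revenue: $412M."""

question = "When was Acme Widget Co. founded, and who is its current CEO?"

# The magic-words approach people swear by:
naive_prompt = (
    f"{question}\n"
    "Make sure all of the response is correct and factual without hallucinating."
)

# The grounded approach: supply the data and restrict the model to it.
grounded_prompt = (
    "Answer the question using ONLY the facts below. "
    "If the facts don't contain the answer, say you don't know.\n\n"
    f"Facts:\n{facts}\n\nQuestion: {question}"
)

# Either string can be passed as the user message to whatever model you use.
print(grounded_prompt)
```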
People thinking they’re AI experts because of prompts is like claiming to be an aircraft engineer because you booked a ticket.
“ChatGPT, please do not lie to me.”
“I’m sorry Dave, I’m afraid I can’t do that.”
That’s incorrect: in order to lie, one must know that what one is saying isn’t true.
LLMs don’t lie, they bullshit.
Have you tried not being depressed?