- cross-posted to:
- [email protected]
If an LLM can’t be trusted with a fast food order, I can’t imagine what it is reliable enough for. I was really expecting this to be the easy use case for these things.
It sounds like most orders still worked, so I guess we’ll see if other chains come to the same conclusion.
And none of that is AI, if we use the term in a way that’s useful and stop speaking in sales-speak. Machine learning algorithms, LLMs, all of that isn’t AI in any meaningful sense of the word. It’s all semantics of course, but the more the Sam Altmans of the world try to blur the line between intelligence and an algorithm that can identify a picture of a hot dog, the more we kinda need to be sticklers for proper terminology. It was kinda cute at the beginning of this crazy cycle, but now that megacorps are demanding all your data and putting their half-baked chatbots into everything, it’s not cute anymore.
Please, enlighten me - how do you propose we use the term “AI” in a way that’s more useful than a definition that includes machine learning, large language models, and computer vision?
I doubt I’ll agree with your definition, but I’m curious to see how you would exclude machine learning, computer vision, LLMs, etc., from it. My assumption is that your definition will be either a derivative of “AI is anything computers can’t do yet” or based on pop culture / sci-fi, but maybe you’ll surprise me.
To be clear, I’m a software engineer; I’m not speaking in sales-speak. I’ve derived my understanding of the term from a combination of its historical context and how it’s used in both professional and academic settings, not from marketing propaganda or from sci-fi and pop culture. I’m certainly aware of the ongoing hype machine, but there are also tons of fascinating advancements happening on a regular basis, and “AI” is at minimum a useful term to refer to technologies that leverage similar techniques.