Couldn’t have happened to a worse company! Hope it hurts even worse later on and fractures the Execucultist’s will to shill AI further. 😈
300 billion on OpenAI? Why? LLMs in general are trash, but ChatGPT isn’t even the best LLM
ChatGPT is the name-recognition brand. Like calling all electric cars a Tesla
The only good LLM is one that is being used by a highly specialized field to search useful information and not in consumer hands in the form of a plagiarism engine otherwise known as “AI”. Techbros took something that once had the potential to be useful and made it a whole shitty affair. Thanks, I hate it.
idk man LLMs help me code bro
Oof, sad, but you do you…
Normie here. Which one is?
The one that I developed and costs $300 a week. Want it to gaslight you? Done. Make up shit? Done. Shout at you? Done. Randomly stop working while still taking your money? Done.
Not sure, but I hear the Claude Super Duper Extreme Fucking Pro ($200/month) is like the Ferrari of LLM assisted coding
As someone who works in network engineering support and has seen Claude completely fuck up people’s networks with bad advice: LOL.
Literally had an idiot just copying and pasting commands from Claude into their equipment and brought down a network of over 1000 people the other day.
It hallucinated entire executables that didn’t exist. It asked them to create init scripts for services that already had one. It told them to bypass the software UI that already had the functionality they needed and add routes directly to the kernel routing table.
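One cheap guard against the hallucinated-executable failure mode is to verify a binary actually exists before pasting anything that invokes it. A minimal sketch (the binary name here is invented for illustration; `command -v` is standard POSIX shell):

```shell
# Invented name standing in for whatever the LLM made up.
cmd="netctl-daemon"

# Refuse to run commands whose executable isn't actually installed.
if command -v "$cmd" >/dev/null 2>&1; then
  echo "found $cmd"
else
  echo "refusing to run: $cmd not found"
fi
```

Trivial, but it would have caught this exact incident before anything hit the equipment.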
Every LLM is the same bullshit guessing machine.
Functions with arguments that don’t do anything… hey Claude why did you do that? Good catch…!
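The dead-argument pattern described above might look like this (function and names invented for illustration): the parameter makes the call site look configurable, but the body never touches it.

```python
# Hypothetical sketch of LLM-generated code with an inert parameter:
# `retries` is accepted but never referenced in the body.
def fetch_config(host: str, retries: int = 3) -> str:
    # `retries` does not appear below, so callers get no retry behavior.
    return f"config for {host}"

# Looks robust; changes nothing.
print(fetch_config("router1", retries=10))
```

Passing `retries=10` produces exactly the same result as the default, which is why these only get caught when someone asks "why did you do that?"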
AI is incredibly powerful and incredibly easy to use, which means it’s a piece of cake to use AI to do incredibly stupid things. Your guy is just bad with AI, which means he doesn’t know how to talk to a computer in his native language
no, AI just sucks ass with any highly customized environment, like network infrastructure, because it has exactly ZERO capacity for on-the-fly learning.
it can somewhat pretend to remember something, but most of the time it doesn’t work. and then people are so, so surprised when it spits out the most ridiculous config for a router, because all it did was string together the top answers on Stack Overflow from a decade ago, strip out any and all context that made them make sense, and present it as a solution that seems plausible but absolutely isn’t.
LLMs are literally designed to trick people into thinking what they write makes sense.
they have no concept of actually making sense.
this is not an exception, or an improper use of the tech.
it’s an inherent, fundamental flaw.
whenever someone says AI doesn’t work they’re just saying that they don’t know how to get a computer to do their work for them. they can’t even do laziness right
As a dev: lol. Do it again, you are good at entertaining
yeah, no… that’s not at all what i said.
i didn’t say “AI doesn’t work”, i said it works exactly as expected: producing bullshit.
i understand perfectly well how to get it to spit out useful information, because i know what i can and cannot ask it about.
I’d much rather not use it, but it’s pretty much unavoidable now, because of how trash search results have become, specifically for technical subjects.
what absolutely doesn’t work is asking AI to perform highly specific, production critical configurations on live systems.
you CAN use it to get general answers to general questions.
“what’s a common way to do this configuration?” works well enough.
“fix this config file for me!” doesn’t work, because it has no concept of what that means in your specific context. and no amount of increasingly specific prompts will ever get you there. …unless “there” is an utter clusterfuck, see the OP top of chain (should have been more specific here…) for proof…
So expensive, looks great, takes significant capital to maintain, and anyone who has one uses something else when they actually need to do something useful.
it literally doesn’t cost as much as a ferrari
What’s with tech people always stating (marketing) things as akin to high end sports cars. The state of AI is more like arguing over which donkey is best, lol.
GPT goes beyond chat; Copilot’s code generation is also based on it. They also have generative visual stuff, like Sora.
Then there is brand recognition I guess, tech bros and finance bros seem to love OpenAI.
Brand recognition cannot be overstated.
If there was a better-than-YouTube alternative right now, YouTube would still dominate.
If there was a phone OS superior to Android and iOS, they would both still dominate.
If there was a search engine that worked far better than Google, Google would still dominate.
The average person won’t look into LLM reasoning benchmarks. They’ll just use the one they know, ChatGPT.
But Windows and Google can shove it in your face because you’re already on their platform, and they are doing that. You have to go to the OpenAI website.
What they know is Google though. Most normal people doing a search now just take the Gemini snippet at the top. They don’t know or care what AI even is really. I don’t know how OpenAI can possibly compete with web search defaults.
I don’t think OpenAI is as well-known as Google.
ChatGPT might be, which is the point.
You are comparing very well established brands to a company in a sector that is far less established. Yes, OpenAI is the most well known, but not to the degree of $300B.
OpenAI is pretty well established.
I know Lemmy users avoid it, but a lot of people use LLMs, and when most people think LLMs, they think ChatGPT. I doubt the average person could name many or even any others.
That means whenever these people want to use an LLM, they automatically go to OpenAI.
As for to the degree of $300bn, who knows. Big tech has had crazy valuations for a long time.
I totally agree with you. In fact, I know people who use ChatGPT exclusively and don’t touch the web anymore. Who knows who will have the best models, but they are definitely capturing a lot of people early.
I mean, it’s an easy answer to name the other 3 main ones: Gemini, Copilot, and MechaHitler
Copilot is just an implementation of GPT. Claude’s the other main one, at least as far as performance goes.
OpenAI isn’t very good in any of those categories and they still have no business model. Subscriptions would have to be ridiculously high for them to turn a profit. Users would just leave. But to be fair that goes for all AI companies at the moment. None of their models can do what they promise and they’re all bleeding money.
Yeah, I figured brand recognition was part of it. Everyone’s heard of ChatGPT (hell, last time I checked, ChatGPT was the number 1 app on the planet), but Claude isn’t nearly as popular, even though (in my opinion) it’s a lot better with code. It’s just a lot more thorough than the slop ChatGPT spits out