As someone who works in network engineering support and has seen Claude completely fuck up people’s networks with bad advice: LOL.
Literally had an idiot just copying and pasting commands from Claude into their equipment and brought down a network of over 1000 people the other day.
It hallucinated entire executables that didn’t exist. It asked them to create init scripts for services that already had them. It told them to bypass the software UI, which had the functionality they needed, and start adding routes directly to the system kernel.
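To make the routing anecdote concrete, here’s a sketch of the kind of command being pasted blindly (addresses and interface names are made up for illustration, not from the actual incident):

```shell
# Adding a static route straight into the kernel routing table with iproute2,
# bypassing the appliance's own management UI:
ip route add 10.20.0.0/16 via 192.168.1.254 dev eth0

# The management UI never sees this route, so its view of the config and the
# kernel's actual state now disagree -- the next config sync or reboot can
# silently drop the route, or the UI can push something that conflicts with it.
```

This is exactly why “the UI already had that functionality” matters: the UI is the source of truth for the box’s state, and going around it splits that state in two.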
Every LLM is the same bullshit guessing machine.
Functions with arguments that don’t do anything… hey Claude, why did you do that? “Good catch…!”
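A made-up but typical example of that failure mode: a generated function with a parameter that looks load-bearing and does nothing at all.

```python
def fetch(url, timeout):
    # Stand-in for a real HTTP call, so the sketch is self-contained.
    return None if "flaky" in url else f"response from {url}"

def retry_request(url, timeout=5, max_retries=3, backoff=2.0):
    """Fetch a URL with retries and 'exponential backoff'."""
    for attempt in range(max_retries):
        result = fetch(url, timeout)
        if result is not None:
            return result
        # Classic generated-code bug: `backoff` is accepted and documented,
        # but never used -- nothing ever sleeps or multiplies by it, so the
        # "exponential backoff" is pure decoration.
    return None
```

It runs, it even works, and it passes a skim review, which is precisely the problem.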
AI is incredibly powerful and incredibly easy to use, which means it’s a piece of cake to use AI to do incredibly stupid things. Your guy is just bad with AI, which means he doesn’t know how to talk to a computer in his native language.
no, AI just sucks ass with any highly customized environment, like network infrastructure, because it has exactly ZERO capacity for on-the-fly learning.
it can somewhat pretend to remember something, but most of the time it doesn’t work, and then people are so, so surprised when it spits out the most ridiculous config for a router, because all it did was string together the top answers on stack overflow from a decade ago, stripping out any and all context that makes it make sense, and presents it as a solution that seems plausible, but absolutely isn’t.
LLMs are literally designed to trick people into thinking what they write makes sense.
they have no concept of actually making sense.
this is not an exception, or an improper use of the tech.
it’s an inherent, fundamental flaw.
whenever someone says AI doesn’t work they’re just saying that they don’t know how to get a computer to do their work for them. they can’t even do laziness right
As a dev: lol. Do it again, you are good at entertaining
yeah, no… that’s not at all what i said.
i didn’t say “AI doesn’t work”, i said it works exactly as expected: producing bullshit.
i understand perfectly well how to get it to spit out useful information, because i know what i can and cannot ask it about.
I’d much rather not use it, but it’s pretty much unavoidable now, because of how trash search results have become, specifically for technical subjects.
what absolutely doesn’t work is asking AI to perform highly specific, production critical configurations on live systems.
you CAN use it to get general answers to general questions.
“what’s a common way to do this configuration?” works well enough.
“fix this config file for me!” doesn’t work, because it has no concept of what that means in your specific context. and no amount of increasingly specific prompts will ever get you there. …unless “there” is an utter clusterfuck, see the OP top of chain (should have been more specific here…) for proof…

idk what you’re talking about because i’m an amateur, i’ve never pushed anything to prod. sounds like AI slop is especially bad when you write code for a living, but me…idk i just dick around on my computer all day so it’s like the wild west over here, lmaooooo (i can’t work because i had a TBI)
yes, that’s exactly the point of everything I’ve said:
to an inexperienced user/developer/admin the output LLMs produce looks perfectly valid, and for relatively trivial tasks it might even work out…but when the task gets more specialized it fails spectacularly and it gets extremely obvious just how limited of a system it really is.
which is why there is so much pushback from professionals. actually that’s pretty much all professionals, not just in IT.
well i got a bacon number app with django and a .db that was like 2 gigs to work and i vibe coded it. i guess that’s the most complicated thing i’ve put together. i mostly use LLMs for instruction, debugging, and (sadly) writing my code for me when i’m too lazy to read the documentation
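For anyone curious, the core of a “bacon number” app is just a breadth-first search over an actor co-appearance graph. A minimal self-contained sketch with a toy dataset (not the commenter’s actual Django code, which presumably built the graph from that 2 GB database):

```python
from collections import deque

# Toy co-appearance graph: actor -> actors they've shared a film with.
GRAPH = {
    "Kevin Bacon": ["Tom Hanks", "Gary Sinise"],
    "Tom Hanks": ["Kevin Bacon", "Meg Ryan"],
    "Gary Sinise": ["Kevin Bacon"],
    "Meg Ryan": ["Tom Hanks", "Billy Crystal"],
    "Billy Crystal": ["Meg Ryan"],
}

def bacon_number(actor, graph=GRAPH, source="Kevin Bacon"):
    """Shortest number of co-appearance hops from `actor` to Kevin Bacon."""
    if actor == source:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        current, dist = queue.popleft()
        for neighbor in graph.get(current, []):
            if neighbor == actor:
                return dist + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None  # not connected to Kevin Bacon at all
```

This is exactly the kind of small, well-trodden, self-contained problem LLMs handle fine, which is the point being conceded here.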
sure, and that works at small scales and as long as no change is required.
when either of those two changes (large projects where interdependent components become inevitable and frequent updates are necessary) it becomes impossible to use AI for basically anything.
any change you make then has to be carefully considered and weighed against its consequences, which AIs can’t do, because they can’t absorb the context of the entire project.
look, I’m not saying you can’t use AI, or that AI is entirely useless.
I’m saying that using AI is the same as any other tool; use it deliberately and for the right job at the right time.
the big problem, especially in commercial contexts, is people using AI without realizing these limitations, thinking it’s some magical genie that can do everything.