So, I have an automation that checks the month and applies a scene accordingly. I could never get it to work quite right, though. Every time it was supposed to trigger a specific scene, it would just fall back to a base scene. I ran it through DDG AI and it found the single line that was causing the “choose” to fail and fall through to the default scene. Not a fan of LLMs/AI, but this actually helped.
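For anyone curious what that kind of automation looks like, here’s a minimal sketch in Home Assistant YAML. The entity and scene names are made up for illustration, and the actual broken line from my config isn’t shown, but a classic way a choose branch silently fails is a condition that never matches (e.g. comparing `now().month`, an integer, against a quoted string), which sends every run to the default:

```yaml
# Hypothetical month-to-scene automation (Home Assistant YAML).
# All names (scene.winter, scene.base) are made up for illustration.
- alias: "Seasonal scene by month"
  trigger:
    - platform: time
      at: "00:05:00"
  action:
    - choose:
        - conditions:
            # now().month is an integer; comparing it against a quoted
            # string like '12' would always be false, silently skipping
            # this branch on every run.
            - condition: template
              value_template: "{{ now().month == 12 }}"
          sequence:
            - service: scene.turn_on
              target:
                entity_id: scene.winter
      # When no branch matches, choose runs the default sequence --
      # exactly the "always get the base scene" symptom described above.
      default:
        - service: scene.turn_on
          target:
            entity_id: scene.base
```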
Using AI as a tool like any other is fine. It’s the blind trust that it can do everything for you that is problematic (not to mention the people who hide that something “they created” is actually AI). Just like with any other computer system: garbage in, garbage out.
Once we summit the peak of inflated expectations and the bubble bursts, hopefully we’ll get back to evaluating the technology on its merits.
LLMs definitely have some interesting properties, but they are not universal problem solvers. They are great at parsing and summarising language. Their ability to vibe code is entirely based on how closely your needs match the (vast) training data. They can synthesise tutorials and Stack Overflow answers much faster than you can. But if you are writing something new or specialised, the limits of their “reasoning” soon show up in dead ends and sycophantic “you are absolutely right, I missed that” responses.
More than the technology, the social context is the challenge. We are already seeing humans form dangerous parasocial relationships with token predictors, with some tragic results. If you abdicate your learning to an LLM you are not really learning, and that could have profound impacts on the current cohort of learners, who might be assuming they no longer need to learn because the computer can do it for them.
We are certainly experiencing a very fast technological disruption event and it’s hard to predict where the next few years will take us.
LLMs are great at language. I often use them to generate syntax for a language I don’t know and probably won’t use again. While the short snippets may not do exactly what I want, I can edit a snippet fairly easily. Writing one with no knowledge of the language would take me far longer.
Just remember that being good at language is not the same as intelligence. LLMs are good at mimicking thought, but they cannot problem-solve or optimise. You still have to do that bit.
I sometimes use LLMs to help me troubleshoot. I usually don’t ask for solutions, but rather “what is wrong here?” type stuff.
It has often saved me hours of troubleshooting, but it is occasionally wrong and sees flaws where there are none.
Checking for existing errors is a completely different thing from saying “do this.”
We use a lot of automation at work, but it’s stuff like that or “pull this number and put it there” where a human might make a typo.
But still, way different than actually making something from a handful of vague suggestions.
And at that point it’s not really an LLM, it’s just botting.
At my last job, they were trying to put LLMs in charge of doing data stuff to avoid typos, and it was provably just making up data.
Automation is not LLMs/GenAI.
I fully agree that as a tool LLMs are amazing. Throw in a config file or some code where you know 99% of what it should be but can’t find what’s wrong… and I’d say there’s a good 70% chance it will find it, maybe chasing down one or two red herrings before it solves it.
The bad rap, of course, comes down to two main factors:

- Idiots who use it to do the entire coding job, and thus wind up with something they themselves don’t have even a basic understanding of how it fits together, so they can’t spot when it does something horrifically wrong.
- The overall reality that, no matter how you slice it, it costs an absurd amount to run these things. So while the AI companies are letting us use them for free or on really cheap plans, it’s actually costing real money to process, and realistically there’s no sign of it reaching a point where there’s actually a fair trade of value…
The right tool for the right job… LLMs can’t do a lot, and can make a lot of things worse when misapplied, but that doesn’t mean the technology is wholly useless.
AI is best used for prompting and troubleshooting when it comes to code work, imo. It can give ideas, find a small bug, or just help get you out of a corner. I never use the generated code directly, but instead type out what I actually want to use, both so I’m sure I know what it’s doing and to not atrophy my skills.
The issue isn’t with LLMs being used for what they ARE good at (quickly sorting and correlating data), but with everyone trying to make them out to be BIGGER than that, and making awful products accordingly.
Square Peg, Circular Hole.