• SlopppyEngineer@lemmy.world
    6 months ago

    I’ve seen a junior use ChatGPT to do the job without really understanding what was going on, and in the end it was a big mess that didn’t work. After I told him to read a “for dummies” book and he started to think for himself, he got something decent out of it. It’s no replacement for skill and thinking.

    • melroy@kbin.melroy.org
      6 months ago

      Exactly what I expected. It will only get worse, since those juniors can’t tell good code from bad. They just assume whatever ChatGPT says is correct; they have no benchmark to compare against.

    • Alphane Moon@lemmy.worldOP
      6 months ago

      Had a very similar experience in pretty niche use cases. LLMs are great if you understand what you are dealing with, but they are no magical automation tool (at least in somewhat niche, semi-technical use cases where seemingly small errors can have serious consequences).

    • jj4211@lemmy.world
      6 months ago

      That’s been my experience so far: it’s largely useless for knowledge-based stuff.

      In programming, you can have it take pseudocode and output actionable code in more tedious languages, but you have to audit it. Ultimately I find traditional autocompletion just as useful.
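      A toy sketch of the workflow being described (hypothetical example, not anyone’s actual prompt): terse pseudocode goes in as a comment, and the model hands back explicit code that you still have to audit before trusting.

      ```python
      # Pseudocode handed to the model (hypothetical):
      #   "sum the even numbers in xs"
      #
      # The kind of "actionable code" a model might return.
      # Auditing point: check the parity test and the empty-list case
      # rather than assuming the output is correct.
      def sum_evens(xs):
          total = 0
          for x in xs:
              if x % 2 == 0:
                  total += x
          return total
      ```

      Even on something this small, the audit step matters: an off-by-one in the condition (`x % 2 == 1`) would pass a casual glance.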

      I definitely see how it helps people cheat on homework, though, and how it extends “stock photography” to the point of really limiting the market for photographers or artists making bland business assets.

      I see how people find it useful for their “professional” communications, but I hate it, because people who used to be nice and to the point are starting to explode their messages into a big LLM mess.