• stoy@lemmy.zip
    20 hours ago

    I am glad I realized just how bad AI is early on, I have sometimes had it help me write some simple HTML/CSS code, but it is mostly annoying to use.

    It makes me lose track of what does what in my code, and it also takes away my initiative to try changing the code myself.

    When it comes to general information, it mostly generates decent responses, but it keeps getting enough things wrong that you just can’t trust it.

    Combine that with the fact that AIs are trained to always accommodate the user and almost never tell the user a straight “No”: they keep the user engaged, never get angry, and focus on reinforcement and validation of whatever arguments are given to them.

    I feel dumber after I’ve used an AI.

    • Jankatarch@lemmy.world
      15 hours ago

      I am starting to appreciate all the times stackoverflow people told me my question itself is wrong and I am stupid.

      Well, the first part mainly.

    • Jesus_666@lemmy.world
      14 hours ago

      There are things LLMs are genuinely useful for.

      Transforming text is one. To give examples, a friend of mine works in advertising and routinely asks an LLM to turn a spec sheet into a draft for ad copy; another person I know works as a translator and uses DeepL as a first pass to take care of routine work. Yeah, you can get mentally lazy doing that, but it can be useful for handling boilerplate.

      Another one is fuzzy data lookup. I occasionally use LLMs to search for things where I don’t know how to turn them into concise search terms. A vague description can be enough to get an LLM onto the right track and I can continue from there using traditional means.

      Mind you, all of that should be done sparingly and with the awareness that the LLM can convincingly lie to you at any time. Nothing it returns is useful as anything but a draft that needs revision, and any information must be verified. If you simply rely on its answers, you’ll get something reasonably useful much of the time, but you’ll also get mentally lazy, and sometimes you’ll act on complete bullshit without knowing it.

      • OneWomanCreamTeam@sh.itjust.works
        10 hours ago

        This is a little beside the point, but even in those use cases LLMs have the fatal flaw of being obscenely resource-intensive. They require huge amounts of electricity and cooling to keep operating. Not to mention most of them are trained on stolen data.

        Even when they’re an effective tool for a given task, they’re still not an ethical one.

        • Jesus_666@lemmy.world
          9 hours ago

          That’s true; I didn’t touch on those points, but I very much agree. (Yes, even while I occasionally use it. It’s easy to ignore the implications of what you’re doing for a moment.)

    • WalnutLum@lemmy.ml
      20 hours ago

      There have been studies reporting the same thing: relying on an AI for too long actively makes you dumber.