I’ve known for a while now that I’m not interested in A.I., but I think I can finally put it into words.

A.I. is supposed to be useful for things I’m not skilled at or knowledgeable about. But since I’m not skilled at or knowledgeable about the thing I’m having A.I. do for me, I have no way of knowing how well A.I. accomplished it for me.

If I want to know that, I have to become skilled at or knowledgeable about the thing I need to do. At that point, why would I have A.I. do it, since I know I can trust myself to do it right?

I don’t have a problem delegating to people who are more skilled at or more knowledgeable about something than me, because I can hold them accountable if they are faking.

With A.I., it’s on me if I’m duped. What’s the use in that?

  • rumschlumpel@feddit.org · 2 days ago

    For the tasks that LLMs are actually decent at, like writing letters, the idea is that you save time even if you’re knowledgeable enough to do it yourself, and even if you still need to make some corrections (and you’re right that you shouldn’t use AI for a task you’re not knowledgeable about — those corrections are crucial). One of the big issues with LLMs is that they are being sold as a solution for lots of tasks they’re horrible at.

    • Zachariah@lemmy.world (OP) · 2 days ago

      Maybe I’m just too particular about things.
      I cannot imagine an LLM would write the way I want it to be written.

      • alternategait@lemmy.world · 1 day ago

        Maybe not for letters to your grandma, but for sending out 1,000 “your benefits have changed” letters in 600 subtly different ways.

        • Hetare King@piefed.social · 1 day ago

          I would argue that’s actually the last situation you’d want to use an LLM. With numbers like that, nobody’s going to review each and every letter with the attention that text generated by an untrustworthy agent ought to get. This sounds to me like it calls for a template. Actually, it would be pretty disturbing to hear that letters like that aren’t being generated from a template and driven by changes in the system.

      • rumschlumpel@feddit.org · 2 days ago

        TBH I don’t have much experience with it, because of the myriad other issues that plague LLMs, but style and tone are generally considered the things they’re good at.

          • BaroqueInMind@piefed.social · 2 days ago

            I almost didn’t believe the words “mewling” and “quim” existed in real language, and had to look them up to make sure you didn’t write that comment with an LLM.

        • RedstoneValley@sh.itjust.works · 2 days ago

          I have some experience on the letter-receiving side to share. A work colleague of mine recently decided it was a good idea to answer inquiries in MS Teams or email with LLM-generated text. It was very obvious: the wording was too business-polished and polite, too verbose, and didn’t sound like anything you would ever write to a colleague. While the content was technically fine, the tone missed by a mile. The generous use of the infamous em dash and unnecessary exclamation marks also gave it away immediately.

          That poses a problem. If you do that to a person you’re working with, and they immediately know you’re serving them AI slop because you’re too lazy to be bothered with basic human interaction, they WILL be offended. The same goes for customers if they know you personally or expect a human on the other side.

          Humans are getting better at identifying AI garbage faster than LLMs improve, because humans are still excellent at intuitive pattern recognition. Noticing intuitively that something is off is an evolutionary advantage that might save our asses.

        • OfficeMonkey@lemmy.today · 2 days ago

          Style and tone MIGHT be something they can mimic, but they are phenomenally bad at nuance. An LLM loses information when it is constructed, and it similarly loses detail when it’s asked to elaborate on a point.

    • the_q@lemmy.zip · 2 days ago

      That’s the problem, though: no one needed or asked for a technology that writes letters badly.

      • ethaver@kbin.earth · 2 days ago

        Eh. Sometimes I let it put in platitudes if I’m emailing someone I know that matters to. Otherwise my “hi, I want you to do X; in exchange I will Y; here is the information you need to do it or make me a quote” sometimes ruffles feathers. I understand that some people need the little fluff words to feel respected, and it’s important to me that they feel respected, but man, do I suck at it.