I’ve known I wasn’t interested in A.I. for a while now, but I think I can finally put it into words.

A.I. is supposed to be useful for things I’m not skilled at or knowledgeable about. But since I’m not skilled at or knowledgeable about the thing I’m having A.I. do for me, I have no way of knowing how well A.I. accomplished it for me.

If I want to know that, I have to become skilled at or knowledgeable about the thing myself. And at that point, why would I have A.I. do it, when I know I can trust myself to do it right?

I don’t have a problem hiring people who are more skilled at or more knowledgeable about something than me, because I can hold them accountable if they’re faking it.

With A.I., it’s on me if I’m duped. What’s the use in that?

  • alternategait@lemmy.world · 1 day ago

    Maybe not letters to your grandma, but to send out 1000 “your benefits have changed” in 600 subtly different ways.

    • Hetare King@piefed.social · 1 day ago

      I would argue that’s actually the last situation where you’d want to use an LLM. With numbers like that, nobody is going to review each and every letter with the attention that output from an untrustworthy generator ought to get. This sounds to me like a job for a template. Frankly, it would be pretty disturbing to hear that letters like that weren’t being generated from a template, driven by the actual changes recorded in the system.
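
For what it’s worth, the template approach the comment describes is a few lines of code, with no untrustworthy generation step to review. This is only a minimal sketch: the field names, wording of the notice, and records are all made up for illustration; in practice the records would come straight from the system that recorded the benefit change.

```python
from string import Template

# Hypothetical notice text; placeholder names ($name, $benefit, etc.) are invented.
NOTICE = Template(
    "Dear $name,\n"
    "Your $benefit benefit changes on $date. "
    "Your new monthly amount is $amount.\n"
)

# In a real system these records would be exported from the benefits database.
records = [
    {"name": "A. Smith", "benefit": "housing", "date": "2025-07-01", "amount": "$410"},
    {"name": "B. Jones", "benefit": "food assistance", "date": "2025-07-01", "amount": "$187"},
]

# substitute() raises KeyError on a missing field instead of sending a broken letter,
# which is exactly the kind of guarantee an LLM can't give you.
letters = [NOTICE.substitute(record) for record in records]
```

Reviewing one template once is tractable; reviewing 1000 “subtly different” generated letters is not, which is the point the comment is making.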