• cub Gucci@lemmy.today · 12 hours ago

    I don’t use LLMs often, but I haven’t seen a single clear example of a hallucination in the last six months. I’m inclined to believe this recursive-calls approach works.

    • Lfrith@lemmy.ca · 7 hours ago

      I’ve gotten hallucinations when trying to find a book I’d read but couldn’t remember the title of, hallucinated NBA playoff results with the wrong team winning, and basic math calculations that were just wrong.

      It’s a language model, so its purpose is to string together words that sound like sentences, but it can’t be fully trusted to be accurate. The best it can do is give you a source so you can go straight to that resource and read it instead.

      It’s decent at generating basic code, which you can test yourself to see whether it outputs what you want. But I don’t trust it as a source of information when it has handed me wrong facts about something as simple as sports results.

      • Holytimes@sh.itjust.works · 6 hours ago

        There are three things I’ve found search-engine LLMs useful for. The first is laptop shopping, since they’re absurdly good at finding weird regional models or odd configurations that aren’t on the main pages of most shops.

        Like, my current laptop wasn’t on Newegg, Amazon, or even MSI’s own shop. It was on a fucking random-ass page on their website that nothing linked to, some weird model that wasn’t even searchable.

        The second was generating a metric crapload of boilerplate JSON files for a mod.

        The third is bad D&D roleplaying while I’m bored at work. There, the hallucinations are an upside, lol.

    • DireTech@sh.itjust.works · 12 hours ago

      Either you’re using them rarely or just not noticing the issues. I mainly use them for looking up documentation and recently had Google’s AI screw up how sets work in JavaScript. If it makes mistakes on something that well documented, how is it doing on other items?
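
      For what it’s worth, the basics it should have gotten right are well documented and trivial to check in a console. A rough sketch of that documented Set behavior (illustrative only, not the exact answer the AI botched):

      ```typescript
      // Documented JavaScript Set basics (illustrative sketch).
      const s = new Set<number>([1, 2, 2, 3]);

      console.log(s.size);    // 3 -- duplicates are dropped on construction
      s.add(2);
      console.log(s.size);    // still 3 -- adding an existing value is a no-op
      s.delete(2);
      console.log(s.has(2));  // false

      // Objects are compared by reference, not by contents.
      const objs = new Set<{ id: number }>();
      objs.add({ id: 1 });
      objs.add({ id: 1 });
      console.log(objs.size); // 2 -- two distinct references
      ```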

      • SocialMediaRefugee@lemmy.world · 8 hours ago

        I use them at work to get instructions for running processes, and no matter how detailed I am (“it is version X, the OS is Y”), they still give me commands that don’t work on my version, bad error-code analysis, and so on.

      • cub Gucci@lemmy.today · 11 hours ago

        A hallucination is not just any mistake, if I understand it correctly. LLMs make mistakes, and that is the primary reason I don’t use them for my coding job.

        About a year ago, ChatGPT made up a Python library, with a made-up API, to solve the particular problem I asked about. The last hallucination I can recall was it claiming that “manual” is a keyword in PostgreSQL, which it is not.
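
        That kind of claim is easy to check against the server itself, since PostgreSQL exposes its own keyword list through pg_get_keywords(). A rough sketch using the node-postgres client (the connection string is a placeholder):

        ```typescript
        // Sketch: check a claimed keyword against PostgreSQL's own keyword list.
        import { Client } from "pg";

        async function isKeyword(word: string): Promise<boolean> {
          // Placeholder connection string; point it at your own server.
          const client = new Client({ connectionString: process.env.DATABASE_URL });
          await client.connect();
          try {
            const res = await client.query(
              "SELECT 1 FROM pg_get_keywords() WHERE word = $1",
              [word.toLowerCase()]
            );
            return (res.rowCount ?? 0) > 0;
          } finally {
            await client.end();
          }
        }

        isKeyword("manual").then((found) => console.log(found)); // false
        ```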

        • Holytimes@sh.itjust.works · 6 hours ago

          It’s more that the hallucinations come from the fact that we’ve trained them to be unable to admit failure or ignorance.

          Humans produce the exact same “hallucinations” if you give them a job and then tell them they are never allowed to admit to not knowing something, for any reason.

          You end up with only the people willing to lie, bullshit, and sound incredibly confident.

          We literally reinvented the politician with LLMs.

          None of the big models are trained to be actually accurate, only to give results no matter what.

        • DireTech@sh.itjust.works · 10 hours ago

          What is a hallucination if not AI being confidently mistaken by making up something that is not true?