• khánh@lemmy.zip · 24 up, 1 down · 1 day ago

    Sounds good. Then they’ll finally move away from AI and we’ll all stop having AI shoved down our throats. I’m sick and tired of all these AI chatbots in places where we don’t even need them.

    • themachinestops@lemmy.dbzer0.com · 13 up · 1 day ago

      “Instead of looking for other avenues for growth, though, PwC found that executives are worried about falling behind by not leaning into AI enough.”

        • trolololol@lemmy.world · 6 up · 1 day ago

          Oh no, they’re scared to death of what happened to the companies that didn’t survive the shift to digital around the 2000s.

          The truth is, many companies didn’t attempt that transition and disappeared, or went from their peak to second class. But plenty of companies also poured large amounts of money in the wrong way, and the same thing happened. History repeats itself, and every CEO is finding out they didn’t get where they are because they’re smarter than their peers, the way they so strongly believed before.

    • Joeffect@lemmy.world · 6 up · 1 day ago

      I was thinking about this recently… in the early 2000s, for a short time, there was this weird chatbot craze on the internet… everyone was adding them to web pages like MySpace and free hosting sites…

      I feel like this is the resurrection of that, but on a whole other level… I don’t think it will last. It will find its uses, but shoving glorified auto-suggest down people’s throats is not going to end up anywhere helpful…

      An LLM has its place in an AI system… but without reasoning it’s not really intelligent. It’s like how you decide what to say next in a sentence, but without the logic behind it. The toy sketch below shows what I mean by “glorified auto-suggest.”
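      To make the “glorified auto-suggest” point concrete, here is a deliberately crude next-word predictor built from nothing but word-pair counts. It is only an illustrative sketch (real LLMs use neural networks over tokens, not bigram tables), but the framing is the same: pick a statistically plausible next word, with no model of whether the result is true.

      ```python
      # Toy "auto-suggest": predict the next word purely from how often word
      # pairs appeared in some training text. No reasoning, no notion of truth,
      # just statistics. (Real LLMs use neural networks over tokens, but the
      # "predict the next word" framing is the same.)
      import random
      from collections import Counter, defaultdict

      training_text = (
          "the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog . the dog chased the cat ."
      )

      # Count which word follows which (a bigram table).
      follows = defaultdict(Counter)
      words = training_text.split()
      for current, nxt in zip(words, words[1:]):
          follows[current][nxt] += 1

      def suggest_next(word: str) -> str:
          """Sample a next word in proportion to how often it followed `word`."""
          counts = follows[word]
          return random.choices(list(counts), weights=list(counts.values()))[0]

      # Generate a "sentence" one word at a time, auto-suggest style.
      word = "the"
      sentence = [word]
      for _ in range(8):
          word = suggest_next(word)
          sentence.append(word)
      print(" ".join(sentence))
      ```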

      • michaelmrose@lemmy.world · 3 up, 4 down · 1 day ago

        The logic is implicit in the statistical model of the relationships between words, built by ingesting training material. Essentially, the logic comes from the source material provided by real human beings. That’s why we even talk about hallucinations: most of what is output is actually correct. If it were mostly hallucinations, nobody would use it for anything.

        • Joeffect@lemmy.world · 3 up, 1 down · 1 day ago

          No, you can’t use logic based on old information…

          If information changes between variables, a language model can’t account for that, because it doesn’t understand.

          If your information relies on X being true, and X isn’t true, the AI will still say it’s fine, because it doesn’t understand the context.

          Just as it doesn’t understand things like being told not to do something.

            • michaelmrose@lemmy.world · 1 up, 1 down · 2 hours ago

              Most of the things you want to know are old information, some of it thousands of years old and still valid. If you need judgement based on current info, you inject current data.
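              As a hedged sketch of what “inject current data” can look like in practice: you fetch fresh facts at question time and paste them into the prompt, so the model works from them rather than from its training snapshot. The names below (build_prompt, call_llm, the example facts) are made up for illustration, not any particular API.

              ```python
              # Sketch of "injecting current data": the model's training data is old,
              # so fresh facts are fetched at question time and placed in the prompt.
              # All names here are illustrative; call_llm stands in for whatever
              # model API is actually used.

              def build_prompt(question: str, current_facts: list[str]) -> str:
                  """Prepend up-to-date facts so answers are grounded in them."""
                  facts_block = "\n".join(f"- {fact}" for fact in current_facts)
                  return (
                      "Answer using only the facts below; say if they are insufficient.\n"
                      f"Facts (retrieved today):\n{facts_block}\n\n"
                      f"Question: {question}\n"
                  )

              # The "current data" would come from a search index, database, etc.
              facts = [
                  "The v2 API was deprecated on 2024-06-01.",
                  "The v3 endpoint requires an API key header.",
              ]
              prompt = build_prompt("Which API version should new integrations use?", facts)
              print(prompt)
              # response = call_llm(prompt)  # hypothetical model call
              ```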

        • Auli@lemmy.ca · 1 up, 2 down · 1 day ago

          Well, people use it and don’t care about hallucinations.

      • khánh@lemmy.zip · 1 up · 10 hours ago

        Clippy being useless was okay because it was the 2000s. In this day and age, though? Meh.

        Also, people HATED Clippy. They’ve always hated AI.