• Clent@lemmy.dbzer0.com · 2 days ago

      I’m convinced this is why people seem so impressed with AI. It’s smarter than the average person because the average person is that ignorant. To these people these things are ungodly smart, and because of the puffery they don’t feel talked down to, which increases their perception of its intelligence; it tells them how smart and clever they are in a way no sentient entity ever would.

      • cecilkorik@piefed.ca · 2 days ago

        Yes. Remember that these things have been largely designed and funded by companies who make their money from advertising and marketing. The purpose of advertising and marketing is to convince people of something, whether that thing is actually true or not. They are experts at it, and now they have created software designed to convince people of things, whether it is actually true or not. Then they took all the combined might of their marketing and advertising expertise and infrastructure, including the AI software itself, and set it to the task of convincing people that AI is good and is going to change the world.

        And everyone was convinced. Whether it is actually true or not.

    • LemmyKnowsBest@lemmy.world · 2 days ago

      Hey, don’t insult the average voter like that. This post demonstrates that AI has achieved the dementia level of Trump.

      • LemmyKnowsBest@lemmy.world · 13 hours ago

        A third-pound burger for the same price as a Quarter Pounder? Hell yeah, count me in! (~1980s me)

        Instead of advertising it as “a third,” maybe they should’ve dumbed it down a bit:

        “More meat, better value!”

        But according to the article you linked, those third-pound burgers are still available at A&W after all these years. So at least the failed marketing campaign didn’t destroy them.

  • altphoto@lemmy.today · 2 days ago

    Obviously it’s 2012 again!

    2013 never happened. We just keep repeating 2012 over and over to see if we can make the world end this time around.

      • altphoto@lemmy.today · 12 hours ago

        It’s tenochtitlstlepekalpanamixtlatxiuatl. Go straight ten blocks, then turn right next to the Damian Gonzales y Pavon de la Huerta Porfirio Ybarra hardware store. Ask for Inez; she’s the one who’s requetetlalpan. It’s just over the little hill from there; you can see it from there, that’s the place.

        What I can’t stand are the loincloths.

  • queermunist she/her@lemmy.ml · 2 days ago

    It’s pretty obvious how this happened.

    All the data it was trained on said “next year is 2026” and “2027 is two years from now,” and now that it is 2026, the training data doesn’t change. It doesn’t know what year it is; it only knows how to regurgitate answers it was already trained on.
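    A toy sketch of the idea (the mini corpus and the counts below are entirely made up):

```python
from collections import Counter

# Invented stand-in for training data: statements the model has "seen".
# Text written during 2024 and 2025 dominates; early-2026 text is rare.
corpus = (
    ["next year is 2025"] * 400    # written during 2024
    + ["next year is 2026"] * 900  # written during 2025
    + ["next year is 2027"] * 30   # written in early 2026
)

def most_likely_completion(prefix: str, docs: list[str]) -> str:
    """Answer by frequency in the training text, not by checking a clock."""
    counts = Counter(d[len(prefix):].strip() for d in docs if d.startswith(prefix))
    return counts.most_common(1)[0][0]

# Asked in 2026, the "model" still answers from 2025's data.
print(most_likely_completion("next year is", corpus))  # -> 2026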

    • Drew@sopuli.xyz · 2 days ago

      Nah, training data is not why it answered this (it has training data from many different years, far more of it than from 2025).

      • queermunist she/her@lemmy.ml · 2 days ago

        There are recency weights on the data, so past a certain point “next year is 2026” stops being weighted over “next year is 2027.”

        It’s early in the year, so that threshold hasn’t been crossed yet.
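        A rough sketch of that threshold; the counts, the accumulation rate, and the decay constant are all invented for illustration:

```python
import math

def score(count: int, age_years: float, decay: float = 2.0) -> float:
    """Exponentially down-weight older data (made-up decay constant)."""
    return count * math.exp(-decay * age_years)

# Invented numbers: 900 "next year is 2026" statements from mid-2025,
# while "next year is 2027" statements accumulate at ~100 per month in 2026.
for month in (1, 4, 8):  # how far into 2026 we are
    old = score(900, age_years=0.5 + month / 12)    # the mid-2025 data
    new = score(100 * month, age_years=month / 24)  # average age of new data
    print(f"month {month}: next year is {'2026' if old > new else '2027'}")
```

        Early in the year the stale statements still outweigh the trickle of fresh ones; a few months in, the balance flips.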

    • tauonite@lemmy.world · 2 days ago

      It also happened last year if you asked whether 2026 was next year, and that was at the end of last year, not the beginning.

    • just_an_average_joe@lemmy.dbzer0.com · 2 days ago

      This instance actually looks more like “context rot.” I suspect Google is shoving everything into the context window because their engineering team likes to brag about 10M-token windows, but the reality is that models get pretty bad when you throw too much at them.

      I would expect even very small models (4B parameters or fewer) to get this question right.
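      A back-of-the-envelope sketch of that dilution; the snippet count and strings are made up:

```python
# How little of a stuffed context window is the one fact that matters.
# Numbers are invented; the point is the proportion, not the values.
relevant_fact = "Current date: 2026-02-10."
snippets = ["<retrieved web result, hundreds of tokens>"] * 5000

def build_prompt(fact: str, chunks: list[str], question: str) -> str:
    """Naive concatenation, the way a 10M-token window tempts you to use it."""
    return "\n".join([fact, *chunks, question])

prompt = build_prompt(relevant_fact, snippets, "Is it 2027 next year?")
print(f"{len(relevant_fact) / len(prompt):.4%} of the prompt is the date")
```

      With the one load-bearing line that diluted, it’s not shocking the model loses track of it.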

    • buddascrayon@lemmy.world · 2 days ago

      This is actually why it will never become general AI: they’re not training it with logic, they’re training it with gobbledygook from the internet.

      • kkj@lemmy.dbzer0.com · 2 days ago

        It can’t understand logic anyway. It can only regurgitate its training material. No amount of training will make an LLM sapient.

          • edible_funk@sh.itjust.works · 2 days ago

            Math, physics, the fundamental programming limitations of LLMs in general. If we’re ever gonna actually develop an AGI, it’ll come about along a completely different pathway than LLMs and algorithmic generative “AI”.

          • kkj@lemmy.dbzer0.com · 2 days ago

            Based on what LLMs are. They predict token (roughly, word-piece) probability. They can’t think, they can’t understand, and they can’t question things. If you ask one for a seahorse emoji, it has a seizure instead of just telling you that no such emoji exists.
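            A minimal sketch of that single step; the candidate tokens and their scores are invented:

```python
import math, random

# Made-up scores a model might assign to next tokens after "Next year is".
# Nothing here consults a calendar; the answer is whatever scores highest.
logits = {"2026": 4.1, "2027": 2.3, "2025": 1.0, "soon": -0.5}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw scores into a probability distribution over tokens."""
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

probs = softmax(logits)
print(probs)  # "2026" dominates regardless of the actual date
print(random.choices(list(probs), weights=list(probs.values()))[0])
```

            Everything downstream (sampling, beam search, “reasoning”) is built on top of that one step.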

  • WorldsDumbestMan@lemmy.today · 2 days ago

    This isn’t just the machine being ignorant or wrong.

    This is a level of artificial stupidity that is downright eldritch and incomprehensible.

  • Janx@piefed.social · 2 days ago

    Yeah, this “sequence guesser” is definitely something we should have do all the coding and control the entire Internet…

  • SpaceCowboy@lemmy.ca · 2 days ago

    Yeah, “AI” is just statistical analysis. There’s more data in its training set indicating that 2027 is not next year and only a few days’ worth of data indicating that it is. Since more data says 2027 is not next year, it picks that as the correct answer.

    LLMs are a useful tool if you know what they are and what their strengths and weaknesses are. But they’re not intelligent and don’t understand how things work. If you have some fuzzy data you want analyzed and you validate the results, they can save you some time getting to a starting point. It’s kind of like Wikipedia in that way: you get to a starting point faster, but you have to take things with a grain of salt and put in some effort to make sure they’re accurate.

    • pez@piefed.blahaj.zone · 2 days ago

      Google pulled the AI Overview from the search, but “is it 2027 next year ai overview” was a suggested search because this screenshot is making the rounds. The AI Overview now has all the discussion of this error in its data set and mentions it in its reply, but it still fails.

  • buddascrayon@lemmy.world · 2 days ago

    Talking to AI is like talking to an 8-year-old. Has just enough information to be confidently wrong about everything.

    • jj4211@lemmy.world · 2 days ago

      Well, that’s part of it. Broadly speaking, they want to generate more content in the hope that it will latch onto something correct, which is of course hilarious when it’s confidently incorrect. For example: Is it 2027 next year?

      Not quite! Next year will be 2026 + 1 = 2027, but since we’re currently in 2026, the next year is 2027 only after this year ends. So yes—2027 is next year

      Here it got it wrong based on training, then generated what would be a sensible math problem, sent it off to be calculated, then wrapped words around the mathy stuff; the words that followed were the probabilistic follow-up for generating a number that matches the number in the question.

      So it got it wrong, and in the process of generating more words to explain the wrong answer, it ended up correcting itself (without ever realizing it screwed up, because that discontinuity is not really something it trained on). This is also the basis of “reasoning chains”: generate more text and then present only the last bit, because in the process of generating more text it has a chance of rerolling things correctly.
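      A cartoon of that reroll effect; continue_text() below is a hand-rigged stand-in for the model, wired to behave like the quoted answer:

```python
def continue_text(so_far: str) -> str:
    """Stand-in for one generation pass. Once the arithmetic
    '2026 + 1 = 2027' is sitting in the text, the most probable
    next tokens agree with the number just produced."""
    if "2026 + 1 = 2027" in so_far:
        return "So yes: 2027 is next year."
    # First pass: the wrong answer from training, plus "explaining"
    # that happens to emit real arithmetic.
    return "Not quite! Next year will be 2026 + 1 = 2027. "

chain = "Is it 2027 next year? "
for _ in range(2):
    chain += continue_text(chain)

print(chain)                  # the wrong start is still in the chain
print(chain.split(". ")[-1])  # a "reasoning" mode shows only this bit
```

      Rig the weights differently and the chain never self-corrects; presenting only the last bit raises the odds, it doesn’t guarantee anything.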

  • Mulligrubs@lemmy.world · 2 days ago

    This is worth at least 500 trillion dollars!

    We have a virtual parrot; it’s not “intelligence” in any way. So many suckers.