Archive link: https://archive.ph/GtA4Q

The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.

A joke that people made when Google and Reddit announced their data sharing agreement was that Google’s AI would become dumber and/or “poisoned” by scraping various Reddit shitposts and would eventually regurgitate them to the internet. (This is the same joke people made about AI scraping Tumblr). Giving people the verbatim wisdom of Fucksmith as a legitimate answer to a basic cooking question shows that Google’s AI is actually being poisoned by random shit people say on the internet.

Because Google is one of the largest companies on Earth and operates with near impunity and because its stock continues to skyrocket behind the exciting news that AI will continue to be shoved into every aspect of all of its products until morale improves, it is looking like the user experience for the foreseeable future will be one where searches are random mishmashes of Reddit shitposts, actual information, and hallucinations. Sundar Pichai will continue to use his own product and say “this is good.”

  • pkmkdz@sh.itjust.works · 6 months ago

    And then they just slap a small disclaimer at the bottom of the page, “AI may make mistakes,” and they're safe legally. I hope there's a class action lawsuit against them some day regardless, or that this shit gets regulated before anyone hurts themselves.

    • NotMyOldRedditName@lemmy.world · 6 months ago

      Air Canada tried this and lost in court.

      The AI gave wrong advice on a policy, the person acted on it, and then Air Canada said, nah dude, the AI was wrong, tough shit.

      • can@sh.itjust.works · 6 months ago

        More info

        Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot.

        In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot’s bad advice by claiming the online tool was “a separate legal entity that is responsible for its own actions.”

        “This is a remarkable submission,” Civil Resolution Tribunal (CRT) member Christopher Rivers wrote.

        “While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”