• Xerxos@lemmy.ml · 2 days ago

    The business model isn’t to make money from a chat bot. The aim is to get to AGI and make money that way. Replacing workers with AI would be a gigantic money maker. Nearly all the big players bank on that.

    If that is even possible is still unclear. A big bet with billions of dollars.

    • ZDL@lazysoci.al · 8 hours ago

      Uh… Replacing workers with AI would be the destruction of an economy which is literally the opposite of “money maker”.

      Economies work by the circulation of some form of currency (whatever form that might take). That means people need to spend. And people with low-paying jobs spend small amounts. People with no jobs spend nothing. (Also people who hoard wealth damage or even destroy economies as well. It’s why you want inflationary currency, not static or deflationary.)

    • Jhex@lemmy.world · 2 days ago

      Using LLMs to get to AGI is like teaching a dog tricks and expecting that, if you work hard enough at it, the dog will eventually earn a law degree.

    • AppleTea@lemmy.zip · 2 days ago

      Well, we know absolutely for certain that consciousness is just complex computation… right?

      Because it would be very very silly to have bet all these billions of dollars on a convenient assumption.

    • Seth Taylor@lemmy.world · 2 days ago

      So if AGI is the key to replacing (most) workers, then AGI cannot exist without democratic socialism else we’ll all starve.

      • BeardededSquidward@lemmy.blahaj.zone · 2 days ago

        Yes it can, it’s just that the wealthy don’t care if we eat each other to survive. This is why EVERYONE who isn’t a billionaire, or at least a multi-millionaire many times over, should be opposed to AGI. It harms the common man, period.

        • luciferofastora@feddit.org · 2 days ago

          Alternatively (or additionally), we should all support a (livable) universal basic income funded from corporate profits. If an AI takes my job, make my (former) employer pay me anyway. If there is (almost) no work left for humans to do, work should no longer be necessary to live.

          • Xerxos@lemmy.ml · 2 days ago

            True. Productivity would rise enormously.

            But if we look at how rises in productivity no longer translate into higher salaries, it’s easy to guess what would happen if we got AGI: the profits go to the rich and the poor are left to fend for themselves.

            Would the 1% care if most of us lost everything? If you truly don’t know the answer, here’s a hint:

            The super rich built luxurious bunkers instead of fighting climate change.

            The only thing that will get us to UBI is a massive revolt against the ultra-rich and the politicians they have bought.

            We are already past the point where our resources are enough for everyone’s basic needs to be met: food, shelter and healthcare for everyone, and then some.

            The reason we don’t live in a utopia is that the system is rigged to make rich people richer and everyone else poorer. There is enough money for everyone; it’s just unfairly distributed.

            The hope that the rich will change their modus operandi just because more people suffer is naïve.

    • FlashMobOfOne@lemmy.world · 2 days ago

      LLMs are cool and all, but you can’t use them for anything requiring real precision without allocating human work time to validate the output, unless you want to end up on the national news for producing something fraudulent.

      And making it so their image generator can generate porn isn’t going to change that.

      • funkless_eck@sh.itjust.works · 2 days ago

        I had to correct my boss this morning because they didn’t read the AI output that told our client our services were worthless.

        • ZDL@lazysoci.al · 8 hours ago

          I bitched out Baidu’s LLMbecile because Baidu has lost all capacity for searching in favour of the slop. It literally told me that Baidu was useless for search and recommended several of its competitors over Baidu.

          Oopsie!

      • Xerxos@lemmy.ml · 2 days ago

        Yes, currently AI isn’t reliable enough to use in place of a human. All the big AI businesses are betting that this will change, either through training on more data or some technological breakthrough.

        • FlashMobOfOne@lemmy.world · 2 days ago

          Could be they’re right.

          They made that same bet with Theranos, because Elizabeth Holmes’ machine could correctly identify four viruses.

          Presumably LLMs have already been trained on the entirety of human knowledge and communication and still produce buggy output, so I’m skeptical that it’ll work out the way the VCs expect, but we’ll see.