• FaceDeer@fedia.io · 16 hours ago

    And then AI will just go away and everything will go back to normal again, yes? It’ll suddenly stop working and so people will stop using it for all the things they’re currently using it for.

    • arararagi@ani.social · 5 hours ago

      I mean, even NFTs are still technically around; the tech didn’t go away, it just settled into its own niche once grifters stopped trying to push it on normal people. I think the same will happen to AI, since even if everyone who used ChatGPT paid its monthly fee, OpenAI would still lose money.

      • FaceDeer@fedia.io · 5 hours ago

        OpenAI has an enormous debt burden from having developed this tech in the first place. If OpenAI went bankrupt the models would be sold off to companies that didn’t have that burden, so I doubt they’d “go away.”

        As I mentioned elsewhere in this thread I use local LLMs on my own personal computer and the cost of actually running inference is negligible.

    • Jason2357@lemmy.ca · 6 hours ago

      Absolutely not. AI tech will continue to be pushed by C-suites convinced they can eventually make a fraction of their workforce redundant. The change will be that most of the investments in AI companies will disappear overnight and most will go belly-up. It will erase a significant fraction of everyone’s pension funds, and federal governments around the world will pour public funds into propping up the larger companies so that they don’t go under too. Heads they win, tails you lose.

    • philosloppy@lemmy.world · 11 hours ago

      Consider the Dot Com Bubble: the internet obviously didn’t disappear, but that doesn’t mean there weren’t serious economic consequences.

        • nyan@lemmy.cafe · 7 hours ago

          The dot-com bubble? A whole bunch of investment money was poured into businesses operating over the Internet from around the time dial-up became widely available. A few years later, investors realized that “on the Internet” wasn’t necessarily the key to making a crapton of money and the stock market crashed. A bunch of companies (many of which never made it to profitability) went under, and a fair number of people lost their jobs. Pets.com was one of the more notable victims.

          This doesn’t, however, mean that no business is done over the Internet today.

    • very_well_lost@lemmy.world · 14 hours ago

      people will stop using it for all the things they’re currently using it for

      They will when AI companies can no longer afford to eat their own costs and start charging users a non-subsidized price. How many people would keep using AI if it cost $1 per query? $5? $20?

      OpenAI lost $5 billion last year. Billion, with a B. Even their premium customers lose them money on every query, and eventually the faucet of VC cash propping this whole thing up is gonna run dry when investors inevitably realize that there’s no profitable business model to justify this technology. At that point, AI firms will have no choice but to pass their costs on to the customer, and there’s no way the customer is going to stick around when they realize how expensive this technology actually is in practice.
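      The framing above can be sketched with a back-of-envelope calculation. The $5 billion loss is the figure cited in the comment; the query volume is a pure assumption for illustration, since the real number is unknown here:

```python
# Implied per-query subsidy: the reported ~$5B annual loss spread over
# a *hypothetical* query volume (the real volume is an assumption).
ANNUAL_LOSS_USD = 5_000_000_000    # loss figure cited in the comment
QUERIES_PER_DAY = 1_000_000_000    # assumed volume, purely illustrative

queries_per_year = QUERIES_PER_DAY * 365
subsidy_per_query = ANNUAL_LOSS_USD / queries_per_year
print(f"Implied subsidy: ${subsidy_per_query:.4f} per query")
```

      The result scales inversely with the assumed volume, and it only spreads the operating loss; it says nothing about what a profitable, debt-servicing price would be.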

      • Womble@piefed.world · 14 hours ago

        There are free open models you can go and download right now that are better than SOTA from 12–18 months ago, and that cost less to run on a gaming PC than playing COD does. Even if OpenAI, Anthropic, et al. disappeared without a trace tomorrow, AI wouldn’t go away.

        • baggachipz@sh.itjust.works · 9 hours ago

          And those are useful tools, which will always be around. The current “AI” industry bubble is predicated on total world domination by an AGI, which is not technically possible given the underpinnings of the LLM methodology. Sooner or later, the people with the money will realize this. They’re stupid, so it may take a while.

          • Womble@piefed.world · 2 hours ago

            The post I was replying to was saying

            people will stop using it for all the things they’re currently using it for

            They will when AI companies can no longer afford to eat their own costs and start charging users a non-subsidized price.

            i.e., people will stop using AI when users have to pay the “real” price (what that price actually is is left unspecified, as an exercise for the reader). My point was that even if the price from those providers went to infinity, AI usage wouldn’t drop to zero like they imply.

        • arararagi@ani.social · 5 hours ago

          That’s fine though; people aren’t mad about models you can run locally, even though it now takes 30 seconds instead of 5 to get a response from my ERP bot.

      • thaklor@lemmy.world · 8 hours ago

        I remember this happening with Uber too. All that VC money dried up, their prices skyrocketed, people stopped using them, and they went bankrupt. A tale as old as time.

        • AFaithfulNihilist@lemmy.world · 7 hours ago

          A lot of those things have a business model that relies on putting the competition out of business so you can jack up the price.

          Uber broke taxis in a lot of places. It completely broke that industry by simply ignoring the laws. Uber had a thing that it could actually sell that people would buy.

          It took years before it started making money, in an industry that already made money.

          LLMs don’t even have a path to profitability unless they can either functionally replace a human job or at least reliably perform a useful task without human intervention.

          They’ve burned all these billions and they still don’t have something that functions as well as the search engines that preceded them, no matter how much they want to force you to use it.

          • FaceDeer@fedia.io · 5 hours ago

            And yet a great many people are willingly, voluntarily using them as replacements for search engines and more. If they were worse then why are they doing that?

            • AFaithfulNihilist@lemmy.world · edited · 4 hours ago

              These kinds of questions are strange to me.

              A great many people are using them voluntarily; a lot of people are using them because they don’t know how to avoid using them and feel that they have no alternative.

              But the implication of the question seems to be that people wouldn’t choose to use something that is worse.

              In order to make that assumption you have to first assume that they know qualitatively what is better and what is worse, that they have the appropriate skills or opportunity necessary to choose to opt in or opt out, and that they are making their decision on what tools to use based on which one is better or worse.

              I don’t think you can make any of those assumptions. In fact I think you can assume the opposite.

              The average person doesn’t know how to evaluate the quality of research information they receive on topics outside of their expertise.

              The average person does not have the technical skills necessary to engage with non-AI augmented systems presuming they want to.

              The average person does not choose their tools based on what is the most effective at producing the correct truth but instead on which one is the most usable, user friendly, convenient, generally accepted, and relatively inexpensive.

              50 million cigarette smokers can't be wrong!

              • FaceDeer@fedia.io · 4 hours ago

                In order to make that assumption you have to first assume that they know qualitatively what is better and what is worse, that they have the appropriate skills or opportunity necessary to choose to opt in or opt out, and that they are making their decision on what tools to use based on which one is better or worse.

                I don’t think you can make any of those assumptions. In fact I think you can assume the opposite.

                Isn’t that what you yourself are doing, right now?

                The average person does not choose their tools based on what is the most effective at producing the correct truth but instead on which one is the most usable, user friendly, convenient, generally accepted, and relatively inexpensive.

                Yes, because people have more than one single criterion for determining whether a tool is “better.”

                If there was a machine that would always give me a thorough well-researched answer to any question I put to it, but it did so by tattooing the answer onto my face with a rusty nail, I think I would not use that machine. I would prefer to use a different machine even if its answers were not as well-researched.

                But I wasn’t trying to present an argument for which is “better” in the first place, I should note. I’m just pointing out that AI isn’t going to “go away.” A huge number of people want to use AI. You may not personally want to, and that’s fine, but other people do and that’s also fine.

                • AFaithfulNihilist@lemmy.world · 4 hours ago

                  A lot of people want a good tool that works.

                  This is not a good tool and it does not work.

                  Most of them don’t understand that yet.

                  I am optimistic enough to think that they will have the opportunity to find that out in time to not be walked off a cliff.

                  I’m optimistically predicting that when people find out how much it actually costs and how shit it is, they will redirect their energies to alternatives, if there are still any alternatives left.

                  A better tool may come along, but it’s not this stuff. Sometimes the future of a solution doesn’t just look like more of the previous solution.

                  • FaceDeer@fedia.io · 3 hours ago

                    This is not a good tool and it does not work.

                    For you, perhaps. But there are an awful lot of people who seem to be finding it a good tool and are getting it to work for them.

            • badgermurphy@lemmy.world · 5 hours ago

              I suspect it’s because search results require manually parsing through them for what you are looking for, with the added headwind of the widespread, and in many ways intentional, degradation of conventional search.

              Searching with an LLM AI is thought-terminating and therefore effortless. You ask it a question and it authoritatively states a verbose answer. People like it better because it is easier, but have no ability to evaluate if it is any better in that context.

              • FaceDeer@fedia.io · 4 hours ago

                So it has advantages, then.

                BTW, all the modern LLMs I’ve tried that do web searching provide citations for the summaries they generate. You can indeed evaluate the validity of their responses.

      • FaceDeer@fedia.io · 14 hours ago

        I run local LLMs and they cost me $0 per query. I don’t plan to charge myself more than that at any point, even if the AI bubble bursts.

        • Nephalis@discuss.tchncs.de · 14 hours ago

          Really? I get what you want to say, but at the very least the power consumption of the machine you run the model on will be yours forever. Depending on your energy price, it is not $0 per query.

          • FaceDeer@fedia.io · 13 hours ago

            It’s so near zero it makes no difference. It is not a noticeable factor in my decision on whether to use it or not for any given task.

            The training of a brand new model is expensive, but once the model has been created it’s cheap to run. If OpenAI went bankrupt tomorrow and shut down, the models it had trained would just be sold off to other companies, and they’d run them instead, free from the debt burden that OpenAI accrued from the research and training costs that went into producing them. That’s actually a fairly common pattern for first movers: they spend a lot of money blazing the trail, and then other companies follow along afterwards and eat their lunch.
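            The “near zero” claim is easy to sanity-check. Every input below is an assumed stand-in (GPU draw, generation time, electricity price), not a measurement:

```python
# Electricity cost of one local-LLM query, with all inputs assumed.
GPU_WATTS = 300          # assumed draw of a gaming GPU under load
SECONDS_PER_QUERY = 20   # assumed time to generate one response
PRICE_PER_KWH = 0.30     # assumed electricity price in USD

kwh_per_query = GPU_WATTS * SECONDS_PER_QUERY / 3600 / 1000
cost_per_query = kwh_per_query * PRICE_PER_KWH
print(f"~${cost_per_query:.5f} per query")
```

            Under these assumptions a query costs a small fraction of a cent: nonzero, as the parent comment points out, but small enough to ignore for any individual task.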

            • wewbull@feddit.uk · 8 hours ago

              It’s cheap to run for one person. Any service running it isn’t cheap when it has a good number of users.

        • very_well_lost@lemmy.world · edited · 13 hours ago

          That’s great if they actually work. But my experience with the big, corporate-funded models has been pretty freaking abysmal after more than a year of trying to adopt them into my daily workflow. I can’t imagine the performance of local models is better when they’re running on much, much smaller datasets and with much, much less computing power.

          I’m happy to be proven wrong, of course, but I just don’t see how it’s possible for local models to compete with the Big Boys in terms of quality… and the quality of the largest models is only middling at best.

          • FaceDeer@fedia.io · 13 hours ago

            You’re free to not use them. Seems like an awful lot of people are using them, though, including myself. They must be getting something out of using them or they’d stop too.

            • expr@programming.dev · 7 hours ago

              Just because a lot of people are using them does not necessarily mean they are actually valuable. Your claim assumes that people are acting rationally regarding them, but that’s an erroneous assumption to make.

              People are falling in “love” with them. Asking them for advice about mental health. Treating them like they are some kind of all-knowing oracle (or even having any intelligence whatsoever), when in reality they know nothing and cannot reason at all.

              Ultimately they are immensely effective at creating a feedback loop that preys on human psychology and reinforces a dependency on it. It’s a bit like addiction in that way.

      • Jason2357@lemmy.ca · 5 hours ago

        The dot-com bubble didn’t build the internet. The internet still would have been built up if pension funds were not buying toiletpaper.com for millions of dollars. Bubbles, pretty much by definition, are specifically about the part of the economy where huge sums are invested into things that are not worth anything (i.e., full of air).

        LLMs would still be developed without a trillion-dollar bubble. Slower, sure, but all the crazy investment isn’t about developing tech, it’s about speculating on who will stumble on AGI and suddenly be able to run companies with 1% of the workforce of traditional companies. It’s gambling. When the gamblers figure out that a casino doesn’t pay out, they all leave at once.

        • Ulrich@feddit.org · 5 hours ago

          I’m not sure what any of that is supposed to mean. The point is AI isn’t going anywhere. It’s been here before ChatGPT and it will be around long after, it just won’t be crammed down your throat day in and day out.

          Think of Siri and the unnamed Google Assistant. All of these “assistants” have been almost completely useless since their inception, but they’ve still been around for decades, even without the trillions in investment.

          • arararagi@ani.social · 5 hours ago

            Seems like you guys are being pedantic on purpose. Again, no one wants machine learning or anything like that to vanish; those tools are extremely useful and have been for decades.

            What people are mocking, and pointing out the uselessness of, are LLMs and the promise of AGI; that won’t happen in our lifetime.

            • Ulrich@feddit.org · edited · 5 hours ago

              No one is being pedantic but you. Those won’t go anywhere either.