people will stop using it for all the things they’re currently using it for
They will when AI companies can no longer afford to eat their own costs and start charging users a non-subsidized price. How many people would keep using AI if it cost $1 per query? $5? $20?
OpenAI lost $5 billion last year. Billion, with a B. Even their premium customers lose them money on every query, and eventually the faucet of VC cash propping this whole thing up is gonna run dry when investors inevitably realize that there’s no profitable business model to justify this technology. At that point, AI firms will have no choice but to pass their costs on to the customer, and there’s no way the customer is going to stick around when they realize how expensive this technology actually is in practice.
There are free open models you can go and download right now, that are better than SOTA 12-18 months ago, and that cost you less to run on a gaming PC than playing COD does. Even if OpenAI, Anthropic et al. disappeared without a trace tomorrow, AI wouldn't go away.
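For what it's worth, here's a minimal sketch of what "download and run" looks like, assuming the llama-cpp-python bindings and a quantized GGUF model you've already downloaded (the file name below is a placeholder, not a specific model):

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumes a quantized GGUF model is already on disk; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-open-model.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("Explain why the sky is blue in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])
```

No API key, no per-query bill; once the weights are on disk, the only marginal cost is electricity.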
And those are useful tools, which will always be around. The current “AI” industry bubble is predicated on total world domination by an AGI, which is not technically possible given the underpinnings of the LLM methodology. Sooner or later, the people with the money will realize this. They’re stupid, so it may take a while.
The post I was replying to was saying that people will stop using AI once companies can no longer eat their own costs and have to charge a non-subsidized price, i.e. people will stop using AI when users have to pay the "real" price (what that price actually is is left unspecified, as an exercise for the reader). My point was that even if the price from those providers went to infinity, AI usage wouldn't drop to zero like they imply.
that’s fine though, people aren’t mad about models you can run locally, even though now it takes 30 seconds instead of 5 to get a response from my ERP bot.
I remember this happening with Uber too. All that VC money dried up, their prices skyrocketed, people stopped using them, and they went bankrupt. A tale as old as time.
A lot of those things have a business model that relies on putting the competition out of business so you can jack up the price.
Uber broke taxis in a lot of places. It completely broke that industry by simply ignoring the laws. Uber had a thing that it could actually sell that people would buy.
It took years before it started making money, in an industry that already made money.
LLMs don't even have a path to profitability unless they can either functionally replace a human job or at least reliably perform a useful task without human intervention.
They've burned all these billions and they still don't even have something that can function as well as the search engines that preceded them, no matter how much they want to force you to use it.
And yet a great many people are willingly, voluntarily using them as replacements for search engines and more. If they were worse then why are they doing that?
These kinds of questions are strange to me.
A great many people are using them voluntarily; a lot of people are using them because they don't know how to avoid using them and feel that they have no alternative.
But the implication of the question seems to be that people wouldn’t choose to use something that is worse.
In order to make that assumption you have to first assume that they know qualitatively what is better and what is worse, that they have the appropriate skills or opportunity necessary to choose to opt in or opt out, and that they are making their decision on what tools to use based on which one is better or worse.
I don’t think you can make any of those assumptions. In fact I think you can assume the opposite.
The average person doesn’t know how to evaluate the quality of research information they receive on topics outside of their expertise.
The average person does not have the technical skills necessary to engage with non-AI-augmented systems, presuming they even want to.
The average person does not choose their tools based on which is the most effective at getting at the truth, but instead on which one is the most usable, user-friendly, convenient, generally accepted, and relatively inexpensive.
Isn’t that what you yourself are doing, right now?
Yes, because people have more than one single criterion for determining whether a tool is “better.”
If there were a machine that would always give me a thorough, well-researched answer to any question I put to it, but it did so by tattooing the answer onto my face with a rusty nail, I think I would not use that machine. I would prefer a different machine even if its answers were not as well-researched.
But I wasn’t trying to present an argument for which is “better” in the first place, I should note. I’m just pointing out that AI isn’t going to “go away.” A huge number of people want to use AI. You may not personally want to, and that’s fine, but other people do and that’s also fine.
A lot of people want a good tool that works.
This is not a good tool and it does not work.
Most of them don’t understand that yet.
I am optimistic enough to think that they will have the opportunity to find that out in time to not be walked off a cliff.
I’m optimistically predicting that when people find out how much it actually costs and how shit it is that they will redirect their energies to alternatives if there are still any alternatives left.
A better tool may come along, but it’s not this stuff. Sometimes the future of a solution doesn’t just look like more of the previous solution.
For you, perhaps. But there are an awful lot of people who seem to be finding it a good tool and are getting it to work for them.
I suspect it's because search results require manually parsing through them for what you are looking for, with the added headwind of the widespread, and in many ways intentional, degradation of conventional search.
Searching with an LLM AI is thought-terminating and therefore effortless. You ask it a question and it authoritatively states a verbose answer. People like it better because it is easier, but have no ability to evaluate if it is any better in that context.
So it has advantages, then.
BTW, all the modern LLMs I’ve tried that do web searching provide citations for the summaries they generate. You can indeed evaluate the validity of their responses.
That was the intended path.
I run local LLMs and they cost me $0 per query. I don’t plan to charge myself more than that at any point, even if the AI bubble bursts.
Really? I get what you want to say, but at least the power consumption of the machine you need to run the model on will be yours forever. Depending on your energy price it is not $0 per query.
It’s so near zero it makes no difference. It is not a noticeable factor in my decision on whether to use it or not for any given task.
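For a rough sense of scale, here's a back-of-envelope calculation; the wattage, generation time, and electricity price are all assumed, illustrative numbers, so plug in your own:

```python
# Rough marginal electricity cost of one local query, with assumed numbers.
gpu_watts = 350          # assumed GPU draw while generating
seconds_per_query = 30   # assumed time to generate a response
usd_per_kwh = 0.30       # assumed electricity price

kwh_per_query = gpu_watts * seconds_per_query / 3_600_000  # W*s -> kWh
print(f"${kwh_per_query * usd_per_kwh:.4f} per query")     # ~$0.0009
```

Under those assumptions it comes out to a few hundredths of a cent; real numbers vary, but it's hard to get anywhere near even one cent per query.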
The training of a brand new model is expensive, but once the model has been created it's cheap to run. If OpenAI went bankrupt tomorrow and shut down, the models it had trained would just be sold off to other companies and they'd run them instead, free from the debt burden that OpenAI accrued from the research and training costs that went into producing them. That's actually a fairly common pattern for first movers: they spend a lot of money blazing the trail, and then other companies follow along afterwards and eat their lunch.
It’s cheap to run for one person. Any service running it isn’t cheap when it has a good number of users.
That's great if they actually work. But my experience with the big, corporate-funded models has been pretty freaking abysmal after more than a year of trying to adopt them into my daily workflow. I can't imagine the performance of local models is better when they're trained on much, much smaller datasets and run with much, much less computing power.
I’m happy to be proven wrong, of course, but I just don’t see how it’s possible for local models to compete with the Big Boys in terms of quality… and the quality of the largest models is only middling at best.
You’re free to not use them. Seems like an awful lot of people are using them, though, including myself. They must be getting something out of using them or they’d stop too.
Just because a lot of people are using them does not necessarily mean they are actually valuable. Your claim assumes that people are acting rationally regarding them, but that's an erroneous assumption to make.
People are falling in "love" with them. Asking them for advice about mental health. Treating them like they are some kind of all-knowing oracle (or like they have any intelligence whatsoever), when in reality they know nothing and cannot reason at all.
Ultimately they are immensely effective at creating a feedback loop that preys on human psychology and reinforces dependency. It's a bit like addiction in that way.
Turns out very few people use it that way. Most people use it for far more practical things.
And even if they were mostly using it for that, who are you to decide what is “valuable” for other people? I happen to think that sports are a huge waste of time, does that mean that stadiums are not valuable?