The business model isn’t to make money from a chat bot. The aim is to get to AGI and make money that way. Replacing workers with AI would be a gigantic money maker. Nearly all the big players bank on that.
Whether that is even possible is still unclear. It's a big bet with billions of dollars.
Uh… Replacing workers with AI would be the destruction of the economy, which is literally the opposite of a “money maker”.
Economies work by the circulation of some form of currency (whatever form that might take). That means people need to spend. And people with low-paying jobs spend small amounts. People with no jobs spend nothing. (People who hoard wealth damage or even destroy economies as well. It’s why you want an inflationary currency, not a static or deflationary one.)
Using LLMs to get to AGI is like teaching a dog tricks and expecting that, if you work hard enough at it, the dog will eventually get a law degree.
Well, we know absolutely for certain that consciousness is just complex computation… right?
Because it would be very very silly to have bet all these billions of dollars on a convenient assumption.
So if AGI is the key to replacing (most) workers, then AGI cannot exist without democratic socialism, or else we’ll all starve.
Yes it can; it’s just that the wealthy don’t care if we eat each other to survive. This is why EVERYONE who isn’t a billionaire, or at least a multimillionaire several times over, should be opposed to AGI. It harms the common man, period.
Alternatively (or additionally), we should all support a (livable) universal basic income funded from corporate profits. If an AI takes my job, make my (former) employer pay me anyway. If there is (almost) no work left for humans to do, work should no longer be necessary to live.
True. Productivity would rise enormously.
But if we look at how rises in productivity no longer raise salaries, it’s easy to guess what would happen if we got AGI: the profits go to the rich and the poor have to fend for themselves.
Would the 1% care if most of us lost everything? If you truly don’t know the answer, here’s a hint:
The super rich built luxurious bunkers instead of fighting climate change.
The only thing that will get us to UBI is a massive revolution against the ultra rich and the politicians they have bought.
We are already past the point where our resources are enough for everyone to have their basic needs met: food, shelter, and healthcare for everyone, and then some.
The reason we don’t live in a utopia is that the system is rigged to make rich people richer and everyone else poorer. There is enough money for everyone; it’s just unfairly distributed.
The hope that the rich will change their modus operandi just because more people suffer is naïve.
No no no you’ve got it all wrong.
YOU’LL all starve. No problem! /S
LLMs are cool and all, but you can’t use them for anything requiring real precision without allocating human work time to validate the output, unless you want to end up on the national news for producing something fraudulent.
And making it so their image generator can generate porn isn’t going to change that.
I had to correct my boss this morning because they didn’t read the AI output that told our client our services were worthless.
I bitched out Baidu’s LLMbecile because Baidu has lost all capacity for searching in favour of the slop. It literally told me that Baidu was useless for search and recommended several of its competitors over Baidu.
Oopsie!
So sad that I totally believe that.
Yes, currently AI isn’t reliable enough to use instead of a human. All the big AI businesses bet that this will change, either through training with more data or through some technological breakthrough.
Could be they’re right.
They tried that with Theranos because Elizabeth Holmes’ machine could correctly identify four viruses.
Presumably LLMs have already been trained on the entirety of human knowledge and communication and they still produce buggy information, so I’m skeptical that it’ll work out the way the VCs expect, but we’ll see.
This thread is not about AI, it’s about pattern-prediction snake oil.
Can’t wait for society to pick up the tab on that bet ;)