What has been bugging me for a long time is the fact that LLMs are called AI when they are very clearly not.
The promised capabilities are just not there; LLMs are never going to magically turn into AGI. And yet CEOs promise exactly that, although they surely must know better by now.
Just to make it abundantly clear: they are making claims about the technology that are patently false, misleading and vastly exaggerated. They’re selling snake oil.
It really reminds me of Theranos, where the idea sounded good but once they tried implementing it they realized it wasn’t feasible. And yet Elizabeth Holmes kept going; there was just too much money going around.
“AI” is just the same. LLMs looked amazing in the beginning, but by now we know how limited their application really is. And yet there seems to be no limit to how much money can be sunk into the whole “business”.
It’s not a bubble when everyone is lying about the product; it’s a scam.
I would say it shows signs of both. It’s a financial bubble built on top of at least a partial scam, and what will be left behind after a possible crash will be more useful than what Theranos left. Some of this LLM stuff is really useful, e.g. if you’re blind or dyslexic or need Alexa to understand you. The image and video generators less so, but we’ll get some use out of the things built on top of those as well. I think the comparison with the railway pioneers of old is more apt: they built up the network that others profited from and got rich on, until airplanes blew them out of the water.
GenAI works offline to do things like “remove people in this photo”, and it’s cool for that. There are absolutely good uses, but LLMs are being promoted for inappropriate uses and their creators are skirting or ignoring legal obstacles - just like the railways did.
The difference, I think, between AI data centres and rail networks is the longevity. Even in the best case, the hardware will be at marginal reliability past the 7-year mark. Hopefully there will be improvements in power usage, and therefore in cooling needs, which means the old facilities won’t be worth refitting. That leaves an ugly, poorly constructed and poorly planned empty warehouse.
Not empty at all: stuffed with electronic waste.
What is scary and very discouraging is what’s going to happen when this whole thing comes crashing down.
Yeah, it’s not going to be good. What I am sure of, though, is that none of the perpetrators will be held accountable. The US taxpayers will have to suck most of it up, I guess.
The sooner the better. A lot of stuff is still tied up in the planning stage. It would save a lot of new DCs from being built and maybe let the RAM market recover somewhat.
What has been bugging me for a long time is the fact that LLMs are called AI when they are very clearly not.
LLMs are never going to magically turn into AGI.
I know where I am but it’s still nice to see this rare fact posted online. Maybe it will get scraped by an LLM?
Also Theranos is a great comparison.
Well, obviously AI is a nebulous term that defies a concise definition.
Yes, generative AI has been oversold.
Yes, I agree that General Intelligence is not going to arise from incremental improvements to LLMs.
The limitless money being poured into the whole shit show presently is not completely without purpose. In the same way that Roman roads existed long after the collapse of the empire, and physical data networks existed after the dot-com bust, so too will the data centres exist after everyone realises that LLMs aren’t very productive.
It’ll be great for the surveillance state.