What has been bugging me for a long time is the fact that LLMs are called AI when they are very clearly not.

The promised capabilities are just not there; LLMs are never going to magically turn into AGI. And yet CEOs promise exactly that, although they surely must know better by now.

Just to make it abundantly clear: they are making claims about the technology that are patently false, misleading, and vastly exaggerated. They're selling snake oil.

It really reminds me of Theranos, where the idea sounded good but, once they tried implementing it, they realized it wasn't feasible. And yet Elizabeth Holmes kept going; there was just too much money sloshing around.

“AI” is just the same. LLMs looked amazing in the beginning, but by now we know how limited their application really is. And yet there seems to be no limit to how much money can be sunk into the whole “business”.

It’s not a bubble when everyone is lying about the product, it’s a scam.

  • CompactFlax@discuss.tchncs.de
    GenAI works offline to do things like “remove people in this photo” and it’s cool for that. There are absolutely good uses, but LLMs are being promoted for inappropriate uses, and their creators are skirting or ignoring legal obstacles - just like the railways did.

    The difference, I think, between AI data centres and rail networks is longevity. Even in the best case, the hardware will be at marginal reliability past the 7-year mark. Hopefully there will be improvements in power usage, and therefore in cooling needs. That leaves an ugly, poorly constructed, poorly planned empty warehouse.