Bubble behavior
Every fucking day this guy is in the news literally begging for everyone to keep pretending the Emperor is totally wearing the sickest fit.
All these fucking AI CEOs out there lately begging the world to fuel their fantasies just a little longer.
They know the bubble is gonna burst; they’ve known this for years, but their desperation of late seems to indicate that it’s bursting soon.
Dear smart people,
Stop telling the dummies with the money that this shit is fake.
From,
The dude that wants to take all your grant money, taxes, and healthcare benefits, and cook the whole planet.
Terrified their bullshit pump-and-dump's gonna be revealed before they've extracted maximum value and shifted all the liability to shell companies.
They lost control of the narrative. All their millions in PR and advertising and it cannot cover up the terrible idea that is unchecked corporate GenAI. This bubble can’t pop fast enough.
They’re like my dad. You can’t just describe everything as “perfect/really good” even contrary to reality. That doesn’t just magically gaslight everyone around you into believing what you say, all it does is make those words lose meaning when you say them and your opinion loses credibility.
Then when something is actually really good, no one takes your word for it because that’s what you say when something is total shit. It’s completely insincere, and it’s so easy to see through.
Now I want a PhD so I can properly trash AI.
In fairness, I think you can do the same at home with slightly fewer details and significantly less debt.
In Germany, PhD students are (mostly/often, depending on the discipline) employed on a public pay scale. That is: you are (or ought to be) reducing any previous debt as you work.
You don’t need one, you can just prompt an LLM to trash AI for you /s
I have a Ph.D. The stuff people are currently calling “AI” is trash.
I regularly do peer review for manuscripts sent to a scientific journal. The number of them “written” with LLM tools is staggering. It’s really disheartening to read several pages of slop filled with very confident-sounding but astonishingly wrong citations.
But it is pretty satisfying to be able to return a review that basically says “Hey editor, they wrote this with a trash robot. I recommend rejection.”
At least it’s an easy reject…
Says the guy who's wearing my mother's couch from the 80s as a jacket.
Buddy is all hat no cattle.
Snake Oil Salesman doesn’t want people criticizing Snake Oil.
Oh great. “Doomer narrative.” The new “woke”: anything I disagree with or dislike is a “doomer narrative.”
“Enough of this doomer narrative” said the people with their foot on the pedal accelerating the whole world towards its demise…
I’m so tired. I wish these assholes would recede into the crevices they climbed out of.
Well that just made me realise something. In my department we have folks with a masters and folks with a PhD.
All the pro-AI people (“what about using AI to reduce marking load”, “do you have any AI use in your course”, etc) lack PhDs.
It’s a strange overlap, and now I’m wondering why.
People who are used to doing actual research and who understand the necessity of things like a peer review process understand how dangerous it is to have something that’s straight up confidently wrong 20% of the time, and yet have people who treat it like the ultimate authority on things. These people have likely seen AI be stunningly stupid in fields they know about and can extrapolate from there that it’s probably stunningly stupid about most things.
People think AI is great for subjects they are not experts in. Experts will say AI is trash for their area of expertise.
AI is really good at sounding smart to the unknowledgeable.
Boy, as soon as Trump came around he just dove head first into the fascist cesspit, didn’t he?
Always a piece of shit, but now an extra special piece of shit, what with not wanting the filthy “intellectuals” criticizing his grand designs. That certainly doesn’t sound familiar at all.
Man, first Satya at Microsoft and now Jensen at nVidia. Lots of scared talk by tech leaders heavily invested in AI. I wonder if the bottom is going to drop out soon.
That’s what I’m thinking. It will be glorious. The downfall of Babylon, and its merchants will weep.
Sure are showing a lot of faith in your product there, champ.
Exactly. If his product were any good, it would speak for itself. He wouldn’t need to beg and grovel for people to like it. Ridiculous and pathetic.
Gosh, tech CEOs are so pathetic.
The people I know who are most skeptical of LLMs and other genAI tools actually work with machine learning and statistical models. They know the limitations and pitfalls, and they know that getting ‘AI’ right 80% of the time is the EASY part. Getting it right the remaining 20% of the time might be impossible with the current approach.
I have limited exposure to ML for maintenance modeling in manufacturing. LLMs are sort of like a parrot with improved guessing abilities and an insane amount of guessing power. It will never move beyond being a parrot, because that’s what it is.