cross-posted from: https://fed.dyne.org/post/822710
Salesforce has entered a phase of public reckoning after senior executives publicly admitted that the company overestimated AI’s readiness.
Not that I don’t love this for them.
But this source is really odd: it’s not a reputable news source, has no citations, and is very much an opinion-blog type of site.
Is there a better source for the story?
Cool, now fire the entire executive staff. Replace them.
C-suite is always clueless, and C-suite gets no consequences for their ineptitude.
If I were a shareholder, I’d be pushing my fellow shareholders to replace the inept.
I picture a room with 5 execs desperately typing into ChatGPT.
I already had a good few years of academic and professional experience in both NLP and ML before ChatGPT came out, and since then I’ve been doing some consulting in gen AI.
I don’t feel safe posting or commenting anything about AI on LinkedIn because of the sheer strength of the cult of “if you criticise AI you’re a Luddite who doesn’t understand the modern world, and should be shunned professionally”. Pointing out that the Emperor has no clothes makes you unemployable in the eyes of at least half the hiring managers in my contacts.
Is there anything that’s not a cult in the US?
Empathy for human beings.
Who couldn’t have seen that coming? It really brings home how stupid most of these company leaders are. Their lack of ethics and reduced morals allow them to rise to the top of the organization, but that in no way selects for intelligence. AI has them blinded because their dream is to have everyone replaced by machines. Just like that Twilight Zone episode.
I want to see the delusion in the HN comments, really. “Actually, he just didn’t prompt hard enough.”
Fr. But I’m trying to not go there anymore lol
“We assumed the technology was further along than it actually was,” one executive said privately.
I wonder how many executives made these types of decisions knowing the technology could not perform as intended, but wanted to boost their quarterly or yearly bonuses, versus how many were just gullible morons.
It’s also showing that they do not regret firing the employees; they would do it again if the technology was that far along.
ML techniques have a lot of productive uses. Perhaps even LLMs and other generative approaches will find their useful place one day. It takes effort and grit to find those productive uses and make them pay, which has been the case for every new technology I’ve seen come to the fore over the past good few decades. Chasing quick profits never delivered the results, and it never will.
Is it a good solution if you have to work hard to find a problem for it to solve?
I’d say “yes”, because it means you’re pushing beyond the current limits. It becomes bad when you have to manufacture a problem for it to solve.
Maybe. If a task takes 8 hours, and you have to do that task weekly, how much time should you invest to make that task take less time? What if it’s once a month instead? What if it’s once a year? What if you can reduce it by an hour? What if you can eliminate the work completely?
Ignoring AI for a moment you could probably find someone who could estimate using current tools and answer the question as above. If you invest 20 hours to eliminate an 8 hour task once a week, that quickly pays for itself. If you invest 200 hours to reduce an 8 hour task once a year to 4 hours, that will likely never pay for itself by the time the requirements change.
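For the curious, here’s a minimal sketch of that break-even math in Python (the function name and structure are mine, purely illustrative):

```python
def occurrences_to_break_even(invest_hours: float, hours_saved_each_time: float) -> float:
    """How many repetitions of the task before the time invested pays for itself."""
    return invest_hours / hours_saved_each_time

# 20 hours to eliminate an 8-hour weekly task: break-even in 2.5 weeks.
print(occurrences_to_break_even(20, 8))       # 2.5

# 200 hours to cut an 8-hour yearly task down to 4: break-even in 50 years.
print(occurrences_to_break_even(200, 8 - 4))  # 50.0
```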
But AI is a new tool. With any new tool you have to figure it out before you can make a good estimate and figure out what is worth it. But even worse, AI isn’t getting estimated; it’s just being thrown into existing tasks.
Now, IF AI were truly magical and delivered amazing task reduction, then just throwing it at things wouldn’t be a terrible idea. IF AI can just immediately improve 10 things, even if it fails at a few others, it might be worth it.
AI also has a shitton of money riding on it, so the first entity to figure out how to make money with it also wins big.
> But AI is a new tool. With any new tool you have to figure it out before you can make a good estimate and figure out what is worth it.
Literally every successful new tool in history was made because there was a problem the tool was meant to solve. (Don’t get me wrong, a lot of unsuccessful tools started the same way. They were just ineptly made, or leap-frogged by better tools.) This is such an ingrained pattern that “a solution in search of a problem” is a disparaging way to talk about things that have no visible use.
LLMs are very much a solution in search of a problem. The only “problem” the people who made them and pitched them had was “there’s still some money out there that’s not in my pocket”. They were made in an attempt to get ALL TEH MONEEZ!, not to solve an actual problem that was identified.
Every piece of justification for LLMs at this point is just bafflegab and wishful thinking.
I completely agree that LLMs are a solution in search of a problem. I’m just trying to explain why someone might look at it and think it’s worth something.
The biggest reason really is just that a bunch of money is involved. The first entity to find a way to make money with it is going to make a killing. The problem, of course, is that that day may never come.
Exactly.
Are we about to witness a technological revolution on the scale of broadband access for the masses? Yes.
Are we in a financial bubble the size of the dotcom and subprime mortgages combined? Also yes.
The answer to your first question is actually “no”.
At least with dotcom and mortgages we had an asset bubble that didn’t have a shelf life of 5 years. It’s not like the capacity we are building now will be useful after the end of the decade.
I really don’t think AI is going to be anywhere near as influential as you think it will be.
We found a mathematical function that is good enough to be called a universal estimator. Even better, our current computation technology is enough to implement these ideas algorithmically and compute them in real-enough time. This allows a “do first, figure out later” approach rather than “hard work first, fruits later”.
It’s just not magic, so yeah, we have to find where it makes sense to deploy it and where it doesn’t.
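A toy illustration of the “universal estimator” point, if it helps: a one-hidden-layer network fit to sin(x) with plain numpy gradient descent (all the hyperparameters here are arbitrary choices of mine, nothing from the article):

```python
import numpy as np

# One hidden layer of tanh units; the universal approximation theorem says
# enough such units can approximate any continuous function on a compact set.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden, lr = 32, 0.1
W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y                        # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2        # gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1

print("max abs error:", float(np.abs(pred - y).max()))  # shrinks as it trains
```

The point isn’t the toy itself; it’s that nothing in that loop knows anything about sine waves, which is both the power and the trap.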
Anecdote: I wasn’t really going for accuracy (we were looking at hidden layers more than the output layer) but the small model I was working with was able to predict cell state (sick with the thing I’m looking for?) from simple RNA assay data with 80-95% accuracy even with all the weird and bizarre regularization functions we were throwing at it.
For some things, it makes sense. For others, we need more research. For the rest, we’ve got an apple when what we need is oranges.
I think a lot of the hype with AI comes from the sincere shock that throwing more compute at a really simple algorithm resulted in a program that kicked the Turing test’s ass. If we can’t recognize the significance of that, then we must truly have lost our sense of wonder and curiosity.
But the hype is focusing a little too much on the LLM side of things. The real gains are going to come from models that specialize on other kinds of data, especially data that humans are bad at working with.
Lowest-level technical/call support is about the only thing I see: that area where you’re just waiting/trying to get the customer to shut up and tell you what their actual issue is.
And yet, despite fucking up royally, CEO Marc Benioff won’t lose his job or probably any remuneration. And there’s the problem.
If any of us fucked up like this they’d have security marching us out.
Prayers and best wishes for Luigi and all of his followers.
Well, they probably had record earnings for a few quarters, then their backlog caught up to their inability to deliver.
Got their bonuses and left scorched earth. The business strategy of private equity and capital.
I’m still holding out hope that they get half-drowned Clockwork Orange-style, despite how unlikely it is…

Well, those are the people ruling the world and calling the shots.
Nobody will bring consequences to them except us.
It’s amazing how stupid these people are.
How much longer will anyone believe the lie that success is earned and not luck?
Unfortunately it’s part of their plan.
Now they will hire people for cheaper.
Yep. I hope their employees are not dumb and will unionize, make them pay hard.
Oh no!!! The dumb business plan that has no proven way of working didn’t work!?
It did work. They wanted to fire people and put the blame on AI so they can now hire other people cheaper. AI is only the excuse.
That would be a really good smokescreen, but I kinda feel like they actually thought it might work, they were so delusional.
Yeah, they phrase it a bit differently. It was “premature”. 😅 Basically the same thing you said, with some added innuendo about how they’re gonna try again…
But modern high-performance computing is so awesome?
What did people bet on how long this would take? I think I was around 1 year?