cross-posted from: https://fed.dyne.org/post/822710
Salesforce has entered a phase of public reckoning after senior executives publicly admitted that the company overestimated AI’s readiness
Is it a good solution if you have to work hard to find a problem for it to solve?
I’d say “yes” because it means you’re pushing past the current limits. It becomes bad when you have to manufacture a problem for it to solve.
Maybe. If a task takes 8 hours, and you have to do that task weekly, how much time should you invest to make that task take less time? What if it’s once a month instead? What if it’s once a year? What if you can reduce it by an hour? What if you can eliminate the work completely?
Ignoring AI for a moment, you could probably find someone who could make that estimate for current tools and answer the questions above. If you invest 20 hours to eliminate an 8-hour weekly task, that quickly pays for itself. If you invest 200 hours to reduce a once-a-year 8-hour task to 4 hours, that will likely never pay for itself before the requirements change.
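Here’s that break-even arithmetic as a quick sketch (Python, using the hypothetical numbers from the examples above):

```python
# Rough back-of-the-envelope break-even calculation.
# All numbers are hypothetical, taken from the examples in the comment.

def payback_occurrences(investment_hours: float,
                        task_hours: float,
                        reduced_hours: float) -> float:
    """How many runs of the task before the investment pays off."""
    savings_per_run = task_hours - reduced_hours
    if savings_per_run <= 0:
        return float("inf")  # no time saved, never pays off
    return investment_hours / savings_per_run

# 20 hours invested to eliminate an 8-hour weekly task:
# pays off after 2.5 runs, i.e. under three weeks.
print(payback_occurrences(20, 8, 0))   # 2.5

# 200 hours invested to halve an 8-hour once-a-year task:
# pays off after 50 runs, i.e. 50 years. Requirements change first.
print(payback_occurrences(200, 8, 4))  # 50.0
```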
But AI is a new tool, and with any new tool you have to figure it out before you can make a good estimate of what it’s worth. Even worse, AI isn’t getting estimated at all; it’s just being thrown into existing tasks.
Now IF AI were truly magical and delivered amazing task reduction, then just throwing it at things wouldn’t be a terrible idea. IF AI could immediately improve 10 things, even if it failed at a few others, it might be worth it.
AI also has a shitton of money riding on it, so the first entity to figure out how to make money with it wins big.
Literally every successful new tool in history was made because there was a problem the tool was meant to solve. (Don’t get me wrong, a lot of unsuccessful tools started the same way. They were just ineptly made, or leap-frogged by better tools.) This is such an ingrained pattern that “a solution in search of a problem” is a disparaging way to talk about things that have no visible use.
LLMs are very much a solution in search of a problem. The only “problem” the people who made them and pitched them had was “there’s still some money out there that’s not in my pocket”. They were made in an attempt to get ALL TEH MONEEZ!, not to solve an actual problem that was identified.
Every piece of justification for LLMs at this point is just bafflegab and wishful thinking.
I completely agree that LLMs are a solution in search of a problem. I’m just trying to explain why someone might look at it and think it’s worth something.
The biggest reason really is just that a bunch of money is involved. The first entity to find a way to make money with it is going to make a killing. The problem, of course, is that that day may never come.