cross-posted from: https://fed.dyne.org/post/822710
Salesforce has entered a phase of public reckoning after senior executives publicly admitted that the company overestimated AI’s readiness.
ML techniques have a lot of productive uses. Perhaps even LLMs and other generative approaches will find their useful place one day. It takes effort and grit to find those productive uses and make them pay, which has been the case for every new technology I’ve seen come to the fore over the past good few decades. Chasing quick profits never delivered results, and it never will.
Is it a good solution if you have to work hard to find a problem for it to solve?
I’d say “yes” because it means you’re pushing beyond the current limits. It becomes bad when you have to manufacture a problem for it to solve.
Maybe. If a task takes 8 hours, and you have to do that task weekly, how much time should you invest to make that task take less time? What if it’s once a month instead? What if it’s once a year? What if you can reduce it by an hour? What if you can eliminate the work completely?
Ignoring AI for a moment, you could probably find someone who could estimate this with current tools and answer the questions above. If you invest 20 hours to eliminate an 8-hour task that happens once a week, that quickly pays for itself. If you invest 200 hours to reduce an 8-hour task that happens once a year down to 4 hours, that will likely never pay for itself before the requirements change.
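To make that arithmetic concrete, here’s a rough back-of-the-envelope sketch using the same hypothetical numbers (the function name and figures are just for illustration, not anyone’s actual methodology):

```python
# Rough break-even sketch for automating a recurring task.
# The hours below are the hypothetical figures from the comment above.

def weeks_to_break_even(investment_hours, hours_saved_each_time, occurrences_per_year):
    """How many weeks until time invested in automation pays for itself."""
    hours_saved_per_year = hours_saved_each_time * occurrences_per_year
    years_to_break_even = investment_hours / hours_saved_per_year
    return years_to_break_even * 52  # convert years to weeks

# 20 hours invested to eliminate an 8-hour weekly task:
print(weeks_to_break_even(20, 8, 52))   # ~2.5 weeks -- pays off almost immediately

# 200 hours invested to shave 4 hours off an 8-hour yearly task:
print(weeks_to_break_even(200, 4, 1))   # ~2600 weeks (~50 years) -- probably never pays off
```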
But AI is a new tool. With any new tool you have to figure it out before you can make a good estimate and work out what’s worth doing. Even worse, AI isn’t getting estimated at all; it’s just being thrown at existing tasks.
Now, IF AI were truly magical and delivered amazing task reduction, then just throwing it at things wouldn’t be a terrible idea. IF AI can immediately improve 10 things, even if it fails at a few others, it might be worth it.
AI also has a shitton of money riding on it, so the first entity to figure out how to make money with it also wins big.
Literally every successful new tool in history was made because there was a problem the tool was meant to solve. (Don’t get me wrong, a lot of unsuccessful tools started the same way. They were just ineptly made, or leap-frogged by better tools.) This is such an ingrained pattern that “a solution in search of a problem” is a disparaging way to talk about things that have no visible use.
LLMs are very much a solution in search of a problem. The only “problem” the people who made them and pitched them had was “there’s still some money out there that’s not in my pocket”. They were made in an attempt to get ALL TEH MONEEZ!, not to solve an actual problem that was identified.
Every piece of justification for LLMs at this point is just bafflegab and wishful thinking.
I completely agree that LLMs are a solution in search of a problem. I’m just trying to explain why someone might look at them and think they’re worth something.
The biggest reason really is just that a bunch of money is involved. The first entity to find a way to make money with it is going to make a killing. The problem, of course, is that day may never come.
Exactly.
Are we about to witness a technological revolution on the scale of broadband access for the masses? Yes.
Are we in a financial bubble the size of the dotcom and subprime mortgages combined? Also yes.
At least with dotcom and mortgages we had an asset bubble that didn’t have a shelf life of 5 years. It’s not like the capacity we are building now will be useful after the end of the decade.
The answer to your first question is actually “no”.
I really don’t think AI is going to be anywhere near as influential as you think it will be.
We found a mathematical function which is good enough to be called a universal approximator. Even better, our current computing technology is enough to implement these ideas algorithmically and run them in real-enough time. This lets us take a “do first, figure out later” approach rather than a “hard work first, fruits later” one.
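To make the “universal approximator” point concrete, here’s a minimal toy sketch (assuming scikit-learn is available; the target function is made up): a small feed-forward network learns a nonlinear function purely from samples of its inputs and outputs, with no hand-written rules.

```python
# Toy illustration of the universal-approximation idea: a small feed-forward
# network imitates a nonlinear function it has only seen through samples.
# Hypothetical sketch, not production code.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))           # sampled inputs
y = np.sin(X).ravel() + 0.1 * X.ravel() ** 2     # the "unknown" function to approximate

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
true_values = np.sin(X_test).ravel() + 0.1 * X_test.ravel() ** 2
print(np.c_[true_values, model.predict(X_test)])  # true values vs. network's guesses
```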
It’s just not magic, so yeah, we have to find where it makes sense to deploy it and where it doesn’t.
Anecdote: I wasn’t really going for accuracy (we were looking at hidden layers more than the output layer) but the small model I was working with was able to predict cell state (sick with the thing I’m looking for?) from simple RNA assay data with 80-95% accuracy even with all the weird and bizarre regularization functions we were throwing at it.
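Something in the same spirit as that anecdote might look roughly like this (a hypothetical sketch with synthetic data standing in for the RNA assay; the real work obviously used actual measurements and different regularization):

```python
# Hypothetical sketch in the spirit of the anecdote above: a small classifier
# predicting a binary cell state from expression-style features.
# All data here is synthetic; only the general shape of the workflow is real.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_samples, n_genes = 500, 50
X = rng.normal(size=(n_samples, n_genes))   # fake expression levels
# "sick or not" driven by a handful of informative features plus noise
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), alpha=1e-3, max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))  # high accuracy on this easy synthetic data
```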
For some things, it makes sense. For others, we need more research. For the rest, it’s an apple where we need oranges.
I think a lot of the hype with AI comes from the sincere shock that throwing more compute at a really simple algorithm resulted in a program that kicked the Turing test’s ass. If we can’t recognize the significance of that, then we must truly have lost our sense of wonder and curiosity.
But the hype is focusing a little too much on the LLM side of things. The real gains are going to come from models that specialize on other kinds of data, especially data that humans are bad at working with.
We kicked the Turing Test’s ass? Ask an LLM for a joke and you’ll see it fail dismally.
Lowest-level technical/call support is about the only thing I see: that area where you’re just waiting/trying to get the customer to shut up and tell you what their actual issue is.