One clear sign is how, despite all the money and pressure, companies haven't been able to implement it in genuinely useful ways.
Samsung, Apple, Google, Canva, you name it: they pour billions into integrating AI into their products, and what do they get?
A chat box, an image object remover, bad image generation, a translator. Sure, all things users were impressed by… two years ago. It's always the same.
My banking app decided to update, adding "innovative AI features!", which meant… any guesses?
Instead of typing the value, pressing OK, and selecting a contact for a bank transfer - which is fast and easy - I now have to type "Transfer X amount to Person Y" into a chat box, and this obnoxiously bad AI replies with emojis and two wrong pieces of information, taking double the time to complete the same task.
You just crystallized something in my head that I couldn't quite put into words before: GenAI is like someone who knows a lot of trivia or can do a cool party trick - really impressive, but not really useful.
I'm genuinely amazed at many of the things generative AI can do. The fact that a computer can spit out coherent-sounding text on a subject is kind of amazing, and the fact that it can synthesize images based on a prompt is honestly mind-blowing to me. The quality of that generated content isn't that impressive compared to human-authored works, but the fact that a computer can do it at all is bonkers to me.
That said, it doesn't really make my life better in any way. It's barely helpful to me in the tasks it actually does well on, and it wastes my time on prevarications that take longer to double-check than if I'd just done the task myself in the first place. Even worse, it takes an enormous amount of energy and other resources, it's being used to devalue human labor, and we've blown through massive amounts of financial capital that could have been used to actually improve people's lives in a substantial way.
The only use case I can think of for an LLM that I'd really want, and that wouldn't cause more problems than it solves, is making a smart home voice assistant more helpful. Translating my plain-language commands into the specific syntax my smart home recognizes is useful because I wouldn't have to remember the exact verbal command and taxonomy of devices to accomplish a task, and if it screws up, the mistake is easy to notice and fix. And I can run that locally, without half a trillion dollars spent on data centers and spiking energy prices.
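For what it's worth, here's a minimal sketch of what I mean. It assumes a locally hosted model behind an Ollama-style endpoint and a Home Assistant install with its REST API enabled; the model name, entity ids, and token are placeholders I made up, not anyone's real config.

```python
# Sketch: turn a plain-language command into a structured smart home
# service call using a local model. Assumes Ollama is running on its
# default port and Home Assistant's REST API is reachable.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
HASS_URL = "http://homeassistant.local:8123"   # placeholder host
HASS_TOKEN = "YOUR_LONG_LIVED_TOKEN"           # placeholder token

PROMPT = """Convert the command into JSON with keys "domain",
"service", and "entity_id". Known entities: light.kitchen,
light.bedroom, switch.fan. Reply with JSON only.
Command: {command}"""

def command_to_call(command: str) -> dict:
    # Ask the local model to translate free-form text into the rigid
    # service-call syntax the smart home actually expects.
    resp = requests.post(OLLAMA_URL, json={
        "model": "llama3.2",   # any small local model would do
        "prompt": PROMPT.format(command=command),
        "stream": False,
    })
    resp.raise_for_status()
    return json.loads(resp.json()["response"])

def run(command: str) -> None:
    call = command_to_call(command)
    # A bad parse fails loudly here rather than doing something subtle,
    # which is exactly why this use case tolerates model mistakes.
    requests.post(
        f"{HASS_URL}/api/services/{call['domain']}/{call['service']}",
        headers={"Authorization": f"Bearer {HASS_TOKEN}"},
        json={"entity_id": call["entity_id"]},
    ).raise_for_status()

run("turn off the kitchen light")
```

The point being: the model only has to pick from a small, fixed vocabulary of devices and actions, so a wrong answer just means the light doesn't turn off, not a wrong bank transfer.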
Don’t forget the added greenhouse gases!