You just sparked in my head something that I couldn’t quite summarize previously: GenAI is like someone who knows a lot of trivia or can do a cool party trick - it’s really impressive, but not really useful.
I’m genuinely amazed at many of the things generative AI can do. The fact that a computer can spit out coherent text on a subject is kind of amazing, and the fact that it can synthesize images from a prompt is honestly mind-blowing to me. The quality of that generated content isn’t impressive next to human-authored work, but the fact that a computer can do it at all is bonkers to me.
That said, it doesn’t really make my life better in any way. It’s barely helpful on the tasks it actually does well, and it wastes my time with fabrications that take longer to double-check than doing the work myself would have. Even worse, it consumes an enormous amount of energy and other resources, it’s being used to devalue human labor, and we’ve burned through massive amounts of financial capital that could have gone toward actually improving people’s lives in a substantial way.
The only LLM use case I can think of that I would actually want, and that wouldn’t cause more problems than it solves, is making a smart home voice assistant more helpful. Translating my plain-language commands into the specific syntax my smart home recognizes would be genuinely useful, because I wouldn’t have to remember the exact verbal command and device taxonomy to accomplish a task, and if it screwed up, the mistake would be easy to notice and fix. And I could run that locally, without half a trillion dollars in data centers and spiking energy prices.
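A minimal sketch of that translation layer, assuming a local model that is prompted to emit JSON: the device names, actions, prompt template, and canned model response below are all hypothetical placeholders, not any real smart-home API. The point is that the LLM only *translates*, and a validation step rejects anything outside the known taxonomy, which is why a screw-up is easy to notice.

```python
import json

# Hypothetical device and action taxonomy the smart home actually understands.
VALID_DEVICES = {"living_room_lamp", "thermostat", "porch_light"}
VALID_ACTIONS = {"turn_on", "turn_off", "set_temperature"}

# Prompt constraining a local model to the known taxonomy (template is illustrative).
PROMPT_TEMPLATE = (
    'Translate the request into JSON with keys "device" and "action".\n'
    "Devices: {devices}. Actions: {actions}.\n"
    "Request: {request}"
)

def build_prompt(request: str) -> str:
    """Build the constrained prompt for the local model."""
    return PROMPT_TEMPLATE.format(
        devices=", ".join(sorted(VALID_DEVICES)),
        actions=", ".join(sorted(VALID_ACTIONS)),
        request=request,
    )

def parse_command(model_output: str) -> dict:
    """Validate the model's JSON so a bad guess fails loudly instead of silently."""
    cmd = json.loads(model_output)
    if cmd.get("device") not in VALID_DEVICES:
        raise ValueError(f"unknown device: {cmd.get('device')}")
    if cmd.get("action") not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {cmd.get('action')}")
    return cmd

# A canned response stands in for the actual local-LLM call here.
canned = '{"device": "living_room_lamp", "action": "turn_on"}'
print(parse_command(canned))  # → {'device': 'living_room_lamp', 'action': 'turn_on'}
```

The validation step is the part that makes this use case low-risk: the model can only ever select from commands the home already supports, so a mistranslation produces an obvious wrong action or an error, never a silent invented one.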