The technological struggles are in some ways beside the point. The financial bet on artificial general intelligence is so big that failure could cause a depression.
Leaving aside the question of whether it would benefit us, what makes you think LLMs won't bring about the technical singularity? Because, you know, the term "LLM" doesn't mean that much… It just means it's a model that is "large" (currently taken to mean having many parameters) and is capable of processing language.
Don't you think whatever brings about the singularity will, at the very least, understand human language?
So can you clarify: what is it that you think won't become AGI? Is it the transformer? Is it any model trained the way we train LLMs today?
It's because they are horrible at problem solving and creativity. They are based on word association, trained purely on text. The technical singularity will need to innovate on its own so that it can improve the hardware it runs on and its software.
Even though GitHub Copilot has impressed me by implementing a three-file Python script from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we built it up piece by piece. Even then, it would get parts I had failed to specify completely wrong, and it initially implemented things very inefficiently.
There are fundamental things that the technical singularity needs that today's LLMs lack entirely. I think the changes required to get there would also turn them from LLMs into something else. The training is part of it, but fundamentally, LLMs are massive word association engines. Words (or vectors translated to and from words) are their entire world, and they can only describe things with those words because they were trained on other people doing that.
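To make the "word association engine" point concrete, here's a minimal toy sketch in Python. Everything in it is illustrative, not how any real model works: the tiny vocabulary, the random embeddings, and the crude context-averaging stand in for a transformer's learned weights and attention. The point is just the shape of the loop being described above: tokens become vectors, the vectors are combined, and the "prediction" is whichever token's vector scores highest.

```python
import numpy as np

# Toy vocabulary and random embeddings. A real LLM learns these
# from trillions of tokens; here they are arbitrary placeholders.
vocab = ["the", "cat", "sat", "on", "mat"]
rng = np.random.default_rng(0)
dim = 8
embed = rng.normal(size=(len(vocab), dim))  # token id -> vector
unembed = embed.T                           # vector -> per-token scores

def next_token(context_ids):
    # "Understanding" in this sketch is pure vector arithmetic:
    # average the context embeddings, then score every vocab entry
    # by similarity and pick the highest.
    ctx = embed[context_ids].mean(axis=0)
    scores = ctx @ unembed
    return int(scores.argmax())

ids = [vocab.index("the"), vocab.index("cat")]
print(vocab[next_token(ids)])  # emits whichever word associates best
```

The model's whole world is that embedding table: it can only ever output a token it has a vector for, and "creativity" reduces to recombining associations already baked into the weights.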