You want to see someone using, say, VS Code to write something with, say, Claude Code?
There’s probably a thousand videos of that.
More interesting: I watched someone who was super cheap trying to use multiple AIs to code a project because he kept running out of free credits. Every now and again he’d switch accounts and use up those free credits.
That was an amazing dance, let me tell ya! Glorious!
I asked him which one he’d pay for if he had unlimited money and he said Claude Code. He has the $20/month plan but only uses it in special situations because he’ll run out of credits too fast. $20 really doesn’t get you much with Anthropic 🤷
That inspired me to try out all the code assist AIs and their respective plugins/CLI tools. He’s right: Claude Code was the best by a HUGE margin.
Gemini 3.0 is supposed to be nearly as good but I haven’t tried it yet so I dunno.
Now that I’ve said all that: I am severely disappointed in this article because it doesn’t say which AI models were used. In fact, the study authors don’t even know what AI models were used. So it’s 430 pull requests of random origin, made at some point in 2025.
For all we know, half of those could’ve been made with the Copilot gpt5-mini that everyone gets for free when they install the Copilot extension in VS Code.
It’s more that I want to see experienced coders walking through the mistakes typical AI coding makes. I have very little experience myself, so it’d be a good way to learn. You’re probably right about there being tons of videos like that.
The mistakes it makes depend on the model and the language. GPT5 models can make horrific mistakes, though, where they randomly remove huge swaths of code for no reason. Every time it happens I’m like, “what the actual fuck?” Undoing the last change and trying again usually fixes it though 🤷
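It bit me often enough that I sketched a tiny guard for it. A minimal toy, assuming the project is a git repo and the AI’s edit is still uncommitted (the thresholds are arbitrary, not anything official from a vendor):

```python
import subprocess

def looks_like_mass_deletion(ratio: float = 3.0, min_deleted: int = 50) -> bool:
    # Sum added/deleted line counts across the uncommitted diff.
    numstat = subprocess.run(
        ["git", "diff", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = deleted = 0
    for line in numstat.splitlines():
        a, d, _path = line.split("\t", 2)
        if a != "-":  # binary files report "-" instead of line counts
            added += int(a)
            deleted += int(d)
    # Suspicious: lots of deletions and far more deleted than added.
    return deleted >= min_deleted and deleted > ratio * added

if looks_like_mass_deletion():
    # Same as my manual "undo the last change": throw the edit away.
    subprocess.run(["git", "checkout", "--", "."], check=True)
    print("Reverted: the edit deleted a suspicious amount of code.")
```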
They all make horrific security mistakes quite often. Though, that’s probably because they’re trained on human code that is *also* chock full of security mistakes (former security consultant, so I’m super biased on that front haha).
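If you’ve never seen the classic one, it’s string-building SQL from user input. Models reproduce it constantly because the human code they trained on is full of it. A minimal Python sketch (my example, nothing to do with the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")

def find_user_unsafe(name: str):
    # The mistake: user input spliced straight into the query string.
    # name = "x' OR '1'='1" returns every row in the table.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # The fix: a parameterized query, so the input stays data, never SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```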
One of the first videos I watched about LLMs was of a journalist who didn’t know anything about programming using ChatGPT to build a JavaScript game in the browser. He’d just copy-paste code, then paste the errors back and ask for help debugging. It even had to walk him through setting up VS Code and a git repo.
He said it took him about 4 hours to get a playable platformer.
I think that’s an example of a unique capability of AI. It can let a non-programmer kinda program, let a non-Chinese speaker kinda speak Chinese, and let a non-artist kinda produce art.
I don’t doubt that it’ll get better, but even now it’s very useful in some cases (nowhere near enough to justify the trillions of dollars being spent though).
It is my hope to someday have AI create the assets for game concepts. I have ideas for making a Tetris clone with each piece or color having different properties, but actualizing it is far beyond my abilities.
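To make the idea concrete, here’s roughly what I mean by pieces having per-color properties (a hypothetical sketch; the shapes and effects are made up):

```python
from dataclasses import dataclass

@dataclass
class PieceType:
    shape: list[tuple[int, int]]  # block offsets from the pivot cell
    color: str
    effect: str                   # what this color does when it locks

# Made-up examples of per-color properties.
PIECES = {
    "I": PieceType([(0, -1), (0, 0), (0, 1), (0, 2)], "cyan", "clears its whole row"),
    "O": PieceType([(0, 0), (0, 1), (1, 0), (1, 1)], "yellow", "extra heavy, sinks one row"),
    "T": PieceType([(0, -1), (0, 0), (0, 1), (1, 0)], "purple", "repaints adjacent blocks"),
}

def on_lock(kind: str) -> None:
    # Dispatch on the piece's property instead of treating all pieces alike.
    piece = PIECES[kind]
    print(f"{piece.color} piece locked: {piece.effect}")
```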
Yeah, I’m not sure the way we allocate resources is justified either, in general. I guess ultimately the problem with AI is that it gives capital access to skills it would otherwise have to go through laborers to get.
I think people are too fixated on the current LLM-centered moment: the massive capital bubble and the secondary effects of datacenter expansion (power, water, etc.).
You’re right that they do allow for the disruption of labor markets in fields that weren’t expecting computers to be able to do their jobs (to be fair to them, humanity has spent hundreds of millions of dollars on hand-engineered language processing software without ever getting it to work this effectively).
I think that usually when people say ‘AI’ they mean ChatGPT, or LLMs in general. LLMs are big because neural networks require a huge amount of data to train, and the largest data repository we have (the Internet) is text, images and video… so it makes sense that the first impressive models were trained on exactly that.
The field of robotics hasn’t had access to a large public dataset to train big models on, so we don’t see large robotics models yet, but they’re coming. You can already see it: compare robotic motion from 4 years ago, driven by a human-engineered feedback control loop (the motions are accurate but jerky and mechanical), to the same company’s newer robot that uses a neural network trained on human kinematic data. That motion looks so natural it breaks through the uncanny valley for me.
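For anyone who hasn’t seen one, a ‘human-engineered feedback control loop’ means something like a PID controller: hand-tuned gains nudging a joint toward a target angle. A toy sketch, not any robot’s real code:

```python
def pid_step(target, current, integral, prev_error, dt,
             kp=2.0, ki=0.1, kd=0.5):
    """One PID step: command = Kp*error + Ki*integral + Kd*derivative."""
    error = target - current
    integral += error * dt                  # accumulate steady-state error
    derivative = (error - prev_error) / dt  # react to the rate of change
    command = kp * error + ki * integral + kd * derivative
    return command, integral, error         # state carried to the next step
```

Every gain there is tuned by hand, per joint, which is exactly where the stiff, mechanical motion comes from. The learned approach replaces all of it with a network trained on motion-capture data.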
This is just one company generating data using human models (which is very expensive), but it’s the kind of thing that will become ubiquitous and cheap given enough time.
This isn’t even to mention AlphaFold, the AI that learned to predict protein structures better than anything human-engineered. Then, using a diffusion model (the same kind used to make pictures of shrimp Jesus), another group was able to generate the RNA that would manufacture novel proteins fitting a specific receptor. Proteins matter because essentially every medication we use has to interact with a protein-based receptor. The ability to create, visualize and test custom proteins, combined with the ability to write arbitrary mRNA (see the mRNA COVID vaccines), is huge for computational protein design (the field behind current HIV vaccine candidates).
LLMs and the capital bubble surrounding them are certainly an important topic, but framing the critique as being ‘against AI’ creates the impression that AI technology has nothing positive to offer. That reduces the number of people who study the topic or major in it in college, so in 10 years we’ll have fewer machine learning specialists than countries that aren’t drowning in this ‘AI bad’ meme.
It would be really interesting to watch a video of this process. Though I’m certain it would be pretty difficult to pull off the editing.