Better yet, why don’t they just write the shit competently and correctly the first time?
And don’t tell me it’s too hard; that’s the way real software engineering used to be done when stuff shipped on physical media and couldn’t be patched, and still is done for stuff that actually matters (avionics, etc.). They just want to pretend PC-level half-assery is acceptable because it’s cheaper.
They can, but the point of OTA setups is that you don’t have to anymore, and you save a lot that way because exhaustive testing is very, very expensive. Old PC platforms had a standard of compatibility in how all the hardware worked, so you could test a few variations and be reasonably assured, or you had a specific version for a particular piece of hardware, like CNC machines.
So the new paradigm is to test your most common setup, then slowly roll out and wait for complaints. If you broke something, you get the details, fix it, and ship again. The problem here is that their release cycle takes too long. This is only viable if you can patch things in a day; if it takes you a month to fix a patch that is turning cars into driveway statues, and more than a handful of cars are affected, you need a new strategy.
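In rough sketch form, that loop is just "push to a small cohort, watch the failure reports, widen or halt." The Python below is only an illustration of that idea; the cohort fractions, the failure threshold, and the push_update / collect_failure_reports hooks are made-up placeholders, not any manufacturer's actual pipeline:

    # Minimal sketch of a staged OTA rollout with a halt-on-failure gate.
    # All names and numbers here are hypothetical.

    import random


    def push_update(cohort):
        """Stand-in for pushing the OTA update to a cohort of vehicles."""
        print(f"pushing update to {len(cohort)} vehicles")


    def collect_failure_reports(cohort):
        """Stand-in for gathering 'car is now a driveway statue' reports."""
        # Simulate a small random failure rate purely for illustration.
        return [vin for vin in cohort if random.random() < 0.01]


    def staged_rollout(fleet, stages=(0.01, 0.10, 0.50, 1.0), max_failure_rate=0.005):
        """Ship to progressively larger slices of the fleet, halting on trouble."""
        already_updated = 0
        for fraction in stages:
            target = int(len(fleet) * fraction)
            cohort = fleet[already_updated:target]
            if not cohort:
                continue
            push_update(cohort)
            failures = collect_failure_reports(cohort)
            if len(failures) / len(cohort) > max_failure_rate:
                # Too many bricked cars: stop the rollout and go fix the patch.
                print(f"halting rollout: {len(failures)} failures out of {len(cohort)}")
                return False
            already_updated = target
        return True


    if __name__ == "__main__":
        fleet = [f"VIN{i:06d}" for i in range(10_000)]
        staged_rollout(fleet)

The whole strategy hinges on how fast that "go fix the patch" step is, which is the point being made here: a one-day turnaround makes this workable, a one-month turnaround does not.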
The kind of quality assurance you’re talking about is astronomically expensive. Software has gotten a lot more complex over the past couple decades. And just because it came on physical media and could not easily be patched doesn’t mean that it didn’t have bugs. Far from it.
That might be a valid argument when talking about accounting software with backups in case of fuck ups. We’re talking about cars, on roads, with people sprinkled all around.
The penalty of doing it wrong needs to be higher than the cost of doing it right.
I would advise against self-driving cars for this very reason. The kind of thing in the article is not a hazard while driving.