I think it’s generally a brilliant solution but there are a couple of problems here:
The scanner seems to flag fucking everything and charge for minor damage where a human would probably write it off as normal wear.
No one is allowed to correct the scanner:
Perturbed by the apparent mistake, the user tried to speak to employees and managers at the Hertz counter, but none were able to help, and all “pointed fingers at the ‘AI scanner.’” They were told to contact customer support — but even that proved futile after representatives claimed they “can’t do anything.”
Sounds to me like they’re just trying to replace those employees. That’s why they won’t let them interfere.
You are spot on here. AI is great for sensitivity (noticing potential issues) but terrible for specificity (it produces many false positives).
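To put rough numbers on that distinction (made-up figures, purely for illustration — nothing from Hertz):

```python
# Sensitivity vs. specificity on an imaginary batch of 100 scanned cars:
# 10 cars actually have damage, 90 don't.
true_positives = 9    # damaged cars the scanner catches
false_negatives = 1   # damaged car it misses
false_positives = 30  # undamaged cars it wrongly flags
true_negatives = 60   # undamaged cars it correctly passes

# Sensitivity: of the cars that ARE damaged, how many get flagged?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of the cars that are NOT damaged, how many get passed?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # 90% - great at catching damage
print(f"specificity = {specificity:.0%}")  # 67% - 1 in 3 clean cars gets flagged
```

A scanner like that is genuinely useful as a first pass and genuinely awful as the final word on a customer's bill.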
The issue is how the AI is used, not the AI itself. They don’t have a human in the checking process. They should use the AI scanner to check the car. If it’s fine, then you have saved the employee from manually checking, which is a time-consuming process and prone to error.
If the AI spots something, then get an employee to look at the issues highlighted. If it’s just a water drop or other false positive, it should be a one-click ‘ignore’, and the customer goes on their way without charge. If it’s genuine, then show the evidence to the customer and discuss charges in person. The company still saves time over a manual check and gets much improved accuracy and evidence collection.
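The flow being proposed is simple enough to sketch (rough Python; the scanner and employee objects are hypothetical stand-ins, not anything Hertz actually exposes):

```python
def process_return(car, scanner, employee):
    """Sketch of the scan-then-human-review flow described above."""
    flags = scanner.scan(car)  # meticulous automated pass over the car

    if not flags:
        return "no charge"  # clean scan: nobody had to inspect anything

    # Something was flagged: a human looks only at the highlighted spots.
    confirmed = [f for f in flags if employee.verify(f)]  # one click per flag

    if not confirmed:
        return "no charge"  # water drops and glare get waved through

    # Genuine damage: show the customer the evidence and discuss in person.
    return f"discuss {len(confirmed)} damage item(s) with the customer"
```

The point is that the human only works the exceptions, so the time saving survives while the false positives die at the counter instead of on the customer’s card.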
They are being greedy by trying to eliminate the employee altogether. This probably doesn’t actually save any money; if anything, it costs more in dealing with complaints, not to mention the loss of sales from building a poor image.
Exactly. Not only that, but a human is more likely to overlook some things. It also creates a digital record of the car’s complete condition.
Have the AI go over the vehicle, being insanely meticulous, and then pass that info off to a human who verifies any flagged damage in a couple of seconds and decides what needs to be charged.
Combining the 2 improves efficiency and accuracy.
The US lacks even the most basic consumer protections it seems.
In Australia, companies still try to give you the runaround, but I am extremely confident this wouldn’t fly here - even though I’m not a lawyer.
If you literally can’t get hold of them, they’re breaking Australian Consumer Law. That’s a slam dunk: charge back the card and dare them to take you to your state’s relevant tribunal that hears cases like this. It costs something like $70 to file, you can represent yourself easily, and if you’re low-income, it’s literally free.
They don’t want to waste money on fighting you. If you’re confident you’re clearly in the right, it’s very easy to get a company to back down.
This is a great time to remind everyone to take photos before and after getting a rental car, because otherwise it’s your word against theirs.
Sounds like they want to lose those customers.
Companies have been fucking consumers since the beginning of time and consumers, time and time again, bend over and ask for more. Just look at all of the most successful companies in the world and ask yourself, are they constantly trying to deliver the most amazing service possible for their customers or are they trying to find new ways to fuck them at every available opportunity?
I feel like the go-to strategy is to offer incredible service at first; then, once you’re big enough to force out competitors and the like, you start fucking the consumer.
The word used for that strategy is usually “enshittification”. It happens a lot after digital tech is introduced in a new sector.
Not many people today remember when Google was actually useful. Once upon a time.
From “don’t be evil” to “be as evil as possible”
That’s why the matching strategy is mergers to combine all the competitors into one company.
But they know their competitors are going to adopt the same type of tech, so where are those customers going to go when they have no choice?
Sometimes there’s no competition. Many times there is. And still customers will ignore them.
Look where we all are right now. Was it hard leaving Reddit? Did it cost you anything? And yet millions of people return there every day. Reddit fucked them, they protested for 2 days, and then almost everyone went back to business as usual.
I use an app called GoMore in some places in Europe that lets you rent cars from other peers. The rental process is cheaper and faster - everything is done through the app - and you avoid these shady corpo practices.
For now, until what happened with Airbnb happens there, with corporations just renting out all the cars.
Turo is probably the closest equivalent in the US
In the US, Turo is basically that.
good, tbh
I’m not sure how you can make the points you make, and still call it a “generally brilliant solution”
The entire point of this system - like anything a giant company like Hertz does - is not to be fair to the customer. The point is to screw the customer over to make money.
Not allowing human employees to challenge the incorrect AI decision is very intentional, because it defers your complaint to a later time when you have to phone customer support.
This means you no longer have the persuasion power of being there in person at the time of the assessment, with the car still there too, and means you have to muster the time and effort to call customer services - which they are hoping you won’t bother doing. Even if you do call, CS hold all the cards at that point and can easily swerve you over the phone.
It’s all part of the business strategy.
That’s why you chargeback. Don’t waste time arguing with the machine, cut it off at the cashflow
Because the technology itself is not the problem, it’s the application. Not complicated.
The technology is literally the problem, as it’s not working.
There’s literally nothing wrong with the technology. The problem is the application.
The technology is NOT DOING WHAT IT’S MEANT TO DO - it is IDENTIFYING DAMAGE WHERE THERE IS NONE - the TECHNOLOGY is NOT working as it should
The technology isn’t there to accurately assess damage. It’s there to give Hertz an excuse to charge you extra money. It’s working exactly as the ghouls in the C-suite like.
It’s the guise of customer service sure, yeah
Do you hold everything to such a standard?
Stop lights are meant to direct traffic. If someone runs a red light, is the technology not working as it should?
The technology here, using computer vision to automatically flag potential damage, needed to be implemented alongside human supervision - an employee should be able to walk by the car, see that the flagged damage doesn’t actually exist, and override the algorithm.
The technology itself isn’t bad; it’s how Hertz is using it that is.
I believe the unfortunate miscommunication here is that when @[email protected] said the solution was brilliant, they were referring to the technology as the “solution”, and others are referring to the implementation as a whole as the “solution”
Stop light analogy is completely unequivocal
You’re admitting the technology is in fact flawed if you think it needed to be implemented with supervision. An uno reverse: every set of traffic lights needs a traffic controller to stop drivers from running red lights. Unequivocal, right?
Just stop because you’re wrong, lol
You’re absolutely right. The technology isn’t perfect if it needs to be implemented with supervision, but it can be good enough to have a role in everyday society.
Great examples are self checkout lanes, where there’s always an employee watching, and speed cameras, which always have an officer reviewing and signing off on tickets.
Traffic lights are meant to direct traffic. Yet you don’t expect them to prevent folks from running red lights. Folks don’t expect them to, because that’s not their role in their implementation - they are meant to be used alongside folks who will enforce traffic laws, and, maybe in fact, traffic controllers. This is arguably an example of an implementation done right.
This technology is meant to flag car damage. If there was a correct implementation, I would be able to say “folks don’t expect them to be perfect, because that’s not their role in their implementation - they are meant to be used alongside employees trained to verify damage exists, who can correct the algorithm if needed”, but the implementation in this case is sadly bad.
At the end of the day, you will never have a “perfect” computer vision algorithm. But you can have many “good enough” ones, depending on how they’re implemented.
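And “good enough” is often a tuning decision, not a fixed property of the model. A toy sketch of how the same imperfect classifier changes behaviour when you move the flagging threshold (invented confidence scores, obviously not Hertz’s model):

```python
# Toy confidence scores an imperfect damage model might emit.
# (is_real_damage, score) pairs for six scanned spots on a car.
predictions = [
    (True, 0.95), (True, 0.80), (True, 0.40),    # real dents and scratches
    (False, 0.70), (False, 0.30), (False, 0.10), # water drops, glare, dirt
]

def flag_rates(threshold):
    """Count real damage caught vs. clean spots wrongly flagged at a threshold."""
    caught = sum(1 for real, s in predictions if real and s >= threshold)
    wrong = sum(1 for real, s in predictions if not real and s >= threshold)
    return caught, wrong

print(flag_rates(0.25))  # (3, 2): catches everything, but flags 2 clean spots
print(flag_rates(0.75))  # (2, 0): misses one real dent, but no false charges
```

Neither setting is “perfect”; which one is acceptable depends entirely on whether a human sits downstream of the flags.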
The stop light analogy would require the stop light to be doing something wrong, not the human element.
There is no human element to this implementation; it is the technology itself malfunctioning. There was no damage, but the system thinks there is damage.
Let’s make sure we’re building up from the same foundation. My assumptions are:
1. Algorithms will make mistakes.
2. There’s an acceptable level of error for all algorithms.
3. If an algorithm is making too many mistakes, that can be mitigated with human supervision and overrides.
Let me know if you disagree with any of these assumptions.
In this case, the lack of human override discussed in assumption 3 is, itself, a human-made decision that I am claiming is an error in implementing this technology. That is the human element. As management, you can either go on a snipe hunt trying to find an algorithm that is perfect, or you can make sure that trained employees can verify and correct the algorithm when needed. Instead Hertz management chose option 3 - run an imperfect algorithm with absolutely zero employee oversight. THAT is where they fucked up. THAT is where the human element screwed a potentially useful technology.
I work with machine learning algorithms. You will not, ever, find a practical machine learning algorithm that gets something right 100% of the time and is never wrong. But we don’t say “the technology is malfunctioning” when it gets something wrong, otherwise there’s a ton of invisible technology that we all rely on in our day to day lives that is “malfunctioning”.
Yes, that’s exactly what I’m saying. That’s the problem.
I was pretty clear about what I was referring to. The internet is just full of pedants lurking and waiting for their chance to UM ACKSHUALLY their way into a conversation.
Just because THE TECHNOLOGY IS NOT PERFECT does not mean it is NOT DOING WHAT IT’S intended to do. Sorry I’m having trouble controlling THE VOLUME OF MY VOICE.
Pick a lane, troll
It’s the same lane, moron. It can be imperfect and still have nothing wrong with it.
Society typically understands “there’s nothing wrong with x” to mean it’s performing within acceptable boundaries, and not to mean that it has achieved perfection.
It’s really funny here. Software that does this stuff already exists, and it has for quite a while. I personally know a software engineer who works at a company that creates it; it’s sold to insurance companies. Hertz’s version must just totally suck.