And: Doctors Catch Cancer-Diagnosing AI Extracting Patients’ Race Data and Being Racist With It
Nice find. Yes, it’s notoriously difficult to avoid that with machine learning in the first place. The whole point of it is to find patterns in the training data, whether or not those patterns actually serve the goal of the application, so it ends up reproducing whatever biases and stereotypes the data contains. And from what I’ve heard, extrapolating beyond the training data doesn’t work well either. A doctor has context and can reason their way to a conclusion; medical imaging software, once it leaves the boundaries of what it can comfortably make predictions about, can do unpredictable things.
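To make the "it will find patterns whether or not they help" bit concrete, here's a toy sketch of my own (nothing to do with the actual software in the article): a classifier trained on data where an irrelevant proxy feature happens to correlate with the label will happily lean on it, and its accuracy drops once that correlation breaks at deployment time.

```python
# Toy illustration (my own assumption-laden example, not the article's system):
# a model picks up a spurious proxy feature and fails when the proxy stops
# correlating with the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Feature 0: genuinely predictive signal (think: actual pathology in the scan).
signal = rng.normal(size=n)
y = (signal + 0.5 * rng.normal(size=n) > 0).astype(int)

# Feature 1: a proxy with no causal link to the outcome, but which happens to
# track the label 90% of the time in the training set (think: something that
# encodes demographic group because of how the data was collected).
proxy_train = np.where(rng.random(n) < 0.9, y, 1 - y) + 0.1 * rng.normal(size=n)

X_train = np.column_stack([signal, proxy_train])
model = LogisticRegression().fit(X_train, y)
print("learned weights [signal, proxy]:", model.coef_[0])
print("train accuracy:", model.score(X_train, y))

# At "deployment" the proxy no longer correlates with the label,
# and the accuracy the model seemed to have evaporates.
proxy_test = rng.integers(0, 2, size=n) + 0.1 * rng.normal(size=n)
X_test = np.column_stack([signal, proxy_test])
print("accuracy once the proxy stops correlating:", model.score(X_test, y))
```

The model has no way of knowing the proxy is irrelevant; it just sees a pattern that predicts the label in training, which is exactly the failure mode the headline describes.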