![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
https://youtu.be/jN7mSXMruEo?feature=shared
Not OP, but I really liked this video, as it explains quite a bit. It is of course a biased video, but still…
It is not quite what I had in mind, but I’ll still poke around, because I see a lot of YAML in my future…
What I was after was a toggle switch, just like the one shown in the screenshot, as a card on the dashboard. All I can add is a button that changes color. I know I can YAML it, but it still feels weird that such an obvious solution isn't readily available.
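For what it's worth, if I remember right the stock entities card renders a real toggle for switch entities, no custom card needed. A minimal sketch (the entity name `switch.living_room_lamp` is just a placeholder, swap in your own):

```yaml
# Dashboard card config (add via "Manual" card or raw YAML editor)
type: entities
title: Living Room
entities:
  - entity: switch.living_room_lamp  # shows as a row with an actual toggle, not a button
    name: Lamp
```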
I may have oversimplified my statement. Of course an objective description of reality is impossible. A curse on all social sciences and statistics.
My post was more of a shower thought… Even if the data is incomplete, whatever THAT data implies will also be the stereotype the AI learns. Misrepresentation of minorities in sample data is nothing new. But even if the data WERE complete, it would probably still be very biased. I think we often don't notice structural discrimination, and an AI would simply reproduce those patterns and confront us with them. In that sense I think it is a very interesting way to get a sort of 'outside look' at our own society, and that is very useful.
I wonder if this is because AI is trained on data that 'is' and therefore has no concept of how things 'should be'. Maybe it is an effective mirror of society…
How do they know that you wrote it yourself and didn’t just steal it?
This is a rule to protect themselves. If there is ever a legal case around this, they can shift the blame onto the person who committed the code for breaking that rule.