My problem with LLMs is the positive feedback loop of low-quality and outright bad information.
Vetting datasets before feeding them into training is a form of bias / discrimination, but complex societies have historically always been somewhat biased - for better and for worse - never entirely unbiased.