• MangoCats@feddit.it · 5 hours ago

    My problem with LLMs is the positive feedback loop of low-quality and outright bad information: model output gets scraped back into the next round of training data, and each generation amplifies the junk. A toy simulation of that loop is sketched below.
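
    To make the loop concrete, here's a toy simulation, not a model of any real pipeline; the quality scale, synthetic share, and degradation factor are all made-up assumptions. Each generation replaces part of the corpus with synthetic text slightly worse than the average it was trained on, and mean quality drifts down:

    ```python
    import random

    # Toy sketch of the feedback loop: each "generation" of the corpus mixes in
    # model output whose quality is slightly below the corpus it was trained on.
    # All parameters (quality scale, synthetic share, degradation) are
    # illustrative assumptions, not measurements.

    def next_generation(corpus, synthetic_share=0.3, degradation=0.9):
        """Replace a share of the corpus with synthetic text of degraded quality."""
        avg_quality = sum(corpus) / len(corpus)
        n_synth = int(len(corpus) * synthetic_share)
        # Synthetic docs cluster around a slightly degraded copy of the average.
        synthetic = [avg_quality * degradation * random.uniform(0.8, 1.2)
                     for _ in range(n_synth)]
        survivors = random.sample(corpus, len(corpus) - n_synth)
        return survivors + synthetic

    random.seed(0)
    corpus = [random.uniform(0.5, 1.0) for _ in range(10_000)]  # human-written baseline
    for gen in range(10):
        print(f"gen {gen}: mean quality {sum(corpus) / len(corpus):.3f}")
        corpus = next_generation(corpus)
    ```

    Even with only 30% synthetic text per round, the mean drifts steadily downward, which is the whole problem.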

    Vetting datasets before they're fed into training is itself a form of bias / discrimination, but complex societies have historically always been somewhat biased, for better and for worse; never entirely unbiased. A minimal sketch of what that vetting step looks like is below.
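
    A minimal sketch of dataset vetting, assuming a hypothetical blocklist and a score from some upstream quality classifier. The point is that every blocklist entry and threshold is someone's judgment call, i.e. a bias baked into the corpus:

    ```python
    # Minimal vetting sketch. BLOCKLIST and MIN_QUALITY are hypothetical
    # choices; any real pipeline's versions of them encode someone's judgment
    # about what counts as "low quality".

    BLOCKLIST = {"buy now", "click here", "miracle cure"}  # hypothetical spam markers
    MIN_QUALITY = 0.6  # hypothetical threshold on an upstream classifier score

    def vet(document: dict) -> bool:
        """Keep a document only if it clears the (inherently biased) quality bar."""
        text = document["text"].lower()
        if any(phrase in text for phrase in BLOCKLIST):
            return False
        return document["quality_score"] >= MIN_QUALITY

    docs = [
        {"text": "A careful explanation of feedback loops.", "quality_score": 0.9},
        {"text": "Buy now! Miracle cure inside!", "quality_score": 0.8},
    ]
    training_set = [d for d in docs if vet(d)]
    print(len(training_set))  # 1 -- the filter's judgment shapes the corpus
    ```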