• Wolf@lemmy.today · 6 days ago

    It literally has no other way to judge; it’s not smart enough to cross reference itself or look up sources

    I think that is its biggest limitation.

    Like AI basically crowd-sourcing information isn’t really the worst thing; crowd-sourced knowledge tends to be fairly decent. People treating it as an authoritative source, though, like they looked it up in an encyclopedia or asked an expert, is a big problem.

    Ideally it would be more selective about the ‘crowds’ it gathers data from. Like science questions should be sourced from scientists. Preferably experts in the field that the question is about.

    Like Wikipedia (at least for now) is ‘crowd-sourced’, but individual pages are usually maintained by people who know a lot about the subject. That’s why it’s more accurate than a ‘normal’ encyclopedia. Though of course it’s not foolproof or tamper-proof by any definition.

    If we taught AI how to be ‘Media Literate’ and gave it the ability to double-check its data with reliable sources, it would be a lot more useful.

    most upvoted answer

    This is the other problem. You basically have four types of Redditors.

    • People who use the karma system correctly, that is to say they upvote things that contribute to the conversation. Even if you think it is ‘wrong’ or you disagree with it, if it’s something that adds to the discussion, you are supposed to upvote it.

    • People who treat it as “I agree/I disagree” buttons.

    • People who treat it as “I like this/I hate this” buttons.

    • I’d say the majority of people probably do some combination of the above.

    So more than half the time, people aren’t upvoting things because they think they are correct. If LLMs are treating ‘karma’ as a “this is correct” metric, that’s a big problem.

    The other big problem is people who really should know better: tech bros and CEOs going all in on AI when it’s WAY too early to do that. As you point out, it’s not even really intelligent yet; it just parrots ‘common’ knowledge.

    • Hemingways_Shotgun@lemmy.ca · 7 days ago

      AI should never be used to create anything on Wikipedia. But theoretically, an open-source LLM trained solely on Wikipedia would actually be kind of useful for asking quick questions.