• sp3ctr4l@lemmy.dbzer0.com

Here's the main problem:

    LLMs don’t forget things.

They do not disregard false data or false concepts.

That conversation, that dataset, that knowledge base gets too big?

Well, the LLM now gets slower and less efficient; it has to compare and contrast more and more contradictory data to build its heuristics from.
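A toy sketch of the slowdown half of this (assuming vanilla transformer self-attention, not any specific model's implementation): every token in the context attends to every other token, so per-layer work grows roughly quadratically with context length.

```python
# Toy illustration (assumed vanilla self-attention, not any specific
# model's code): every token attends to every other token, so per-layer
# work grows roughly as n^2 * d with context length n.

def attention_multiplies(context_len: int, d_model: int = 512) -> int:
    """Rough multiply count for one self-attention layer."""
    return context_len ** 2 * d_model

for n in (1_000, 10_000, 100_000):
    # 10x more context -> ~100x more work per layer
    print(f"{n:>7} tokens -> ~{attention_multiplies(n):.2e} multiplies/layer")
```

Ten times the context means roughly a hundred times the work per layer, and none of that extra context is ever pruned for being wrong.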

It has no ability to metacognize. It has no ability to discern and disregard bullshit, both as raw data points and as bullshit processes for evaluating and formulating concepts and systems.

The problem is not that they know too little, but that so much of what they "know" just isn't so: it is pointless, contradictory garbage.

When people learn, grow, change, and make breakthroughs, they do so by shifting to, or inventing, some kind of totally new mental framework for understanding themselves and/or the world.

    LLMs cannot do this.