wonderingwanderer@sopuli.xyz

In the early stages, it had the potential to develop into something useful. Legislators had a chance to regulate it so it wouldn’t become toxic and destructive of all things good, but they didn’t do that because it would “hinder growth,” once again falling for the fallacy that growth is always good and desirable.

But to be honest, some of the earlier LLMs were much better than the ones we have now. They could have been forked and developed into specialized models trained exclusively on technical documents relevant to their fields.

Instead, AI companies all wanted the biggest, most generalized models they could possibly build, so they scraped as much data as they could and trained their LLMs on enormous amounts of garbage, thinking “oh, just a few billion more data points and it will become sentient” or something stupid like that. And now you have Artificial Idiocy that hallucinates nonstop.

Like, an LLM trained exclusively on peer-reviewed journals could make a decent research assistant or a specialized literature search engine. It would help with things like literature reviews, collating data, and meta-analyses, saving researchers time so they could dedicate more of their effort to the specifically human activities of critical thinking, abstract analysis, and synthesizing novel ideas.
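The retrieval half of that doesn’t even need a giant model. Here’s a minimal sketch, assuming you have a local corpus of abstracts (the example texts and the plain TF-IDF approach are purely illustrative, not any particular product):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: abstracts drawn only from peer-reviewed papers.
abstracts = [
    "Mitochondrial dysfunction as a driver of neurodegeneration.",
    "A meta-analysis of exercise interventions for depression.",
    "Transformer architectures for protein structure prediction.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)

def search(query: str, top_k: int = 3) -> list[tuple[float, str]]:
    """Rank abstracts by cosine similarity to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    best = scores.argsort()[::-1][:top_k]
    return [(float(scores[i]), abstracts[i]) for i in best]

for score, text in search("depression treatment meta-analysis"):
    print(f"{score:.2f}  {text}")
```

Swap the TF-IDF step for embeddings from a small domain model and you’ve got the “expedited search engine” part without any of the hallucination problem, because it only ever returns real documents.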

An ML model trained exclusively on technical diagrams could render more accurate simulations than one trained on a digital fuckton of slop.
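And “trained exclusively on” isn’t exotic; the core of it is just aggressive corpus curation before training. A rough sketch, with a made-up allowlist of sources (the domains and record format here are purely illustrative):

```python
from urllib.parse import urlparse

# Purely illustrative allowlist: only curated technical sources
# make it into the training corpus; everything else is dropped.
TRUSTED_DOMAINS = {"arxiv.org", "ieee.org", "acm.org"}

def curate(records: list[dict]) -> list[dict]:
    """Keep only scraped records whose source URL is on the allowlist."""
    kept = []
    for record in records:
        domain = urlparse(record["url"]).netloc.removeprefix("www.")
        if domain in TRUSTED_DOMAINS:
            kept.append(record)
    return kept

sample = [
    {"url": "https://arxiv.org/abs/2301.00001", "text": "..."},
    {"url": "https://example-content-farm.com/post", "text": "..."},
]
print(len(curate(sample)))  # 1: the content-farm page is filtered out
```

The companies chose the opposite: keep everything, filter nothing, and hope scale fixes it. It didn’t.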