• Infrapink@thebrainbin.org · 120 points · 2 days ago

    I’m a line worker in a factory, and I recently managed to give a presentation on “AI” to a group of office workers (it went well!). One of the people there is in regular contact with the C_Os but fortunately is pretty reasonable. His attitude is “We have this problem; what tools do we have to fix it?”, so he isn’t impressed by “AI” yet. The C_Os, alas, insist it’s the future. They keep hammering away at him to get everybody to integrate “AI” into their workflows, but they have no idea how to actually do that (let alone what the factory actually does); they just say “We have this tool, use it somehow”.

    The reasonable manager asked me how I would respond if a C_O said we would get left behind if we don’t embrace “AI”. I quipped that it’s fine to be left behind when everybody else is running towards a cliff. I was pretty proud of that one.

    • ravelin@lemmy.ml · 15 points · 1 day ago

      As usual, I fear for the reasonable manager’s job.

      Reasonable managers usually get plowed out of the way by unreasonable C-levels who just see their reasonable concerns as obstructions.

    • Strider@lemmy.world · 9 points · 1 day ago

      Everyone bought in on it so hard that they need to make you/us use it. Otherwise it will be a financial disaster. It’s shit leaking down all the way.

      (of course it has uses. But it’s not AGI!)

      • wonderingwanderer@sopuli.xyz · 6 points · 1 day ago · edited

        In the early stages, it had potential to develop into something useful. Legislators had a chance to regulate it so it wouldn’t become toxic and destructive of all things good, but they didn’t do that because it would “hinder growth,” again falling for the fallacy that growth is always good and desirable.

        But to be honest, some of the earlier LLMs were much better than the ones we have now. They could have been forked and developed into specialized models trained exclusively on technical documents relevant to their field.

        Instead, AI companies all wanted to have the biggest, most generalized models they could possibly develop, so they scraped as much data as they possibly could and trained their LLMs on enormous amounts of garbage, thinking “oh just a few billion more data points and it will become sentient” or something stupid like that. And now you have Artificial Idiocy that hallucinates nonstop.

        Like, an LLM trained exclusively on peer-reviewed journals could make a decent research assistant or expedited search engine. It would help with things like literature reviews, collating data, and meta-analyses, saving time for researchers so they could dedicate more of their effort towards the specifically human activities of critical thinking, abstract analysis, and synthesizing novel ideas.

        An ML model trained exclusively on technical diagrams could render more accurate simulations than one trained on a digital fuckton of slop.
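
        To make the “fork and specialize” idea concrete, here’s a minimal sketch: take an existing open checkpoint and continue training it on nothing but domain text. Everything specific here is hypothetical (gpt2 as a stand-in base model, the papers/*.txt corpus path, the hyperparameters); it illustrates the approach, not a recipe anyone actually ran.

        ```python
        # Hedged sketch: continue training an earlier open model on a narrow corpus.
        # "gpt2" stands in for any earlier open checkpoint; "papers/*.txt" is a
        # hypothetical directory of plain-text peer-reviewed papers.
        from datasets import load_dataset
        from transformers import (
            AutoModelForCausalLM,
            AutoTokenizer,
            DataCollatorForLanguageModeling,
            Trainer,
            TrainingArguments,
        )

        base = "gpt2"
        tokenizer = AutoTokenizer.from_pretrained(base)
        tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
        model = AutoModelForCausalLM.from_pretrained(base)

        # Load the specialized corpus: nothing but domain documents.
        corpus = load_dataset("text", data_files={"train": "papers/*.txt"})

        def tokenize(batch):
            return tokenizer(batch["text"], truncation=True, max_length=512)

        train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

        # Standard causal-LM objective (mlm=False): the model keeps its general
        # language ability while its training signal narrows to the corpus.
        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="specialized-lm", num_train_epochs=1),
            train_dataset=train_set,
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        )
        trainer.train()
        ```

        Whether a model like that would actually hallucinate less is an open question, but it points in the direction the comment describes: narrower data, narrower claims.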

      • RaoulDook@lemmy.world · 3 points · 1 day ago

        That’s what I suspect: all the corporate bosses pushing AI to keep the bubble inflated so that their investments don’t get drowned in the crash.

        I gotta work on my 401k stuff to find ways to divest from AI + tech too.