• kromem@lemmy.world · 2 days ago

    That’s not…

    sigh

    Ok, so just real quick top level…

    Transformers (what LLMs are) build world models from the training data (Google “Othello-GPT” for associated research).

    This happens because the model has to combine many different pieces of information into one coherent internal representation (what’s called the “latent space”).

    This process is medium-agnostic. Given text it will do it with text, given photos it will do it with photos, and given both it will do it with both, specifically fitting the intersection of the two together.

    The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model for the answer to a text question carved into an apple and get a picture of it.
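
    Purely as a toy sketch of that shared-space idea (all dimensions and names here are made up for illustration): text tokens and image patches get projected into the same latent space, and every transformer layer attends across the combined sequence, so each modality shapes the other.

    ```python
    # Toy, illustrative-only sketch of a shared multimodal latent space.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model = 64                                # shared latent dimension

    # Text path: token ids -> embedding vectors.
    vocab_size = 1000
    text_embed = rng.normal(size=(vocab_size, d_model))
    text_tokens = np.array([5, 42, 7])          # stand-in for a tokenized question
    text_vecs = text_embed[text_tokens]         # (3, d_model)

    # Image path: flattened pixel patches -> linear projection into the SAME space.
    patch_dim = 16 * 16 * 3                     # a 16x16 RGB patch
    patch_proj = rng.normal(size=(patch_dim, d_model))
    patches = rng.normal(size=(4, patch_dim))   # 4 patches of some image
    image_vecs = patches @ patch_proj           # (4, d_model)

    # One interleaved sequence: attention runs over both modalities at once.
    sequence = np.concatenate([text_vecs, image_vecs], axis=0)
    print(sequence.shape)                       # (7, 64)
    ```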

    There’s a pretty big difference in the UI/UX of code written by multimodal models vs. text-only models, for example, or in the usefulness of sharing a photo and saying what needs to be changed.

    The idea that an old school NN would be better at any slightly generalized situation over modern multimodal transformers is… certainly a position. Just not one that seems particularly in touch with reality.

    • i_love_FFT@jlai.lu · 2 days ago

      The main breakthrough of LLMs happened when they figured out how to tokenize words… The transformer architecture itself was already being tested on various data types and struggled compared to similarly advanced CNNs.

      When they figured out word encoding, it created a buzz because transformers could work well with words. They never quite worked as well on images. For that, Stable Diffusion (a diffusion model built around a CNN-style U-Net) has always been better.
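
      For the curious, here is a toy sketch of the kind of subword tokenization being referred to. Real tokenizers (BPE and friends) learn their merge table from data; this hand-rolled version just shows the mechanic with a made-up merge table.

      ```python
      # Toy BPE-style tokenizer: start from characters, apply learned merges.
      merges = [("l", "o"), ("lo", "w")]   # hypothetical learned merge rules

      def bpe(word: str) -> list[str]:
          tokens = list(word)              # begin with individual characters
          for a, b in merges:              # apply merges in learned order
              i = 0
              while i < len(tokens) - 1:
                  if tokens[i] == a and tokens[i + 1] == b:
                      tokens[i:i + 2] = [a + b]   # merge the adjacent pair
                  else:
                      i += 1
          return tokens

      print(bpe("lower"))   # ['low', 'e', 'r'] -- subwords, not whole words
      ```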

      It’s only because of the buzz around LLMs that they tried applying them to other data types, mostly because that’s how they could get funding. By throwing in a disproportionate amount of resources, it works… but it would have been so much more efficient to use different architectures.

      • kromem@lemmy.world · 2 days ago

        What year are you from? Have you not seen Gemini Flash, ChatGPT 4o, Sora 2, Genie 3, etc?

        Stable Diffusion hasn’t been SotA for over a year now in a field where every few months a new benchmark is set.

        Are you also going to tell me about how we’d be better off using ships for international travel because the Wright brothers seem to be really struggling with their air machine?

        • i_love_FFT@jlai.lu · 2 days ago

          Hehe, true! I left the field about 4 years ago when it became obvious that “more GPUs!” was better than any architectural design changes…

          Most of the image generation in the products you mention is based on a mix of LLMs (for processing user inputs) and some other architecture for other media types. Last time I checked, ChatGPT could handle images only because it offloaded the image processing to a branch of the architecture that was not a transformer, or at least not a classical transformer. They did have to graft CNN parts onto the LLM to make progress.

          Maybe in the last 4 years they reorganised it to completely remove CNN blocks, but I think people call these models “LLMs” only as a shorthand for the core of the architecture.

          Again, you said that a new benchmark is set every few months, but considering they’re just consuming more power and water, it’s quite boring, and I’d argue it’s not really progress in the academic/theoretical sense. That attitude is exactly why I don’t work with NNs anymore.

          • kromem@lemmy.world · 17 hours ago

            Definitely check again. That was how it worked with GPT-4, which handed image generation off to DALL-E.

            4o (the ‘o’ stands for ‘omni’) and Gemini Flash have native multimodal output. Completely just transformers.
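
            Roughly, the difference being claimed looks like this (dummy stand-in functions, not any real API): a hand-off pipeline squeezes everything through a text prompt, while a native omnimodel emits image tokens from the same decoder that read the whole conversation.

            ```python
            # Hedged sketch of the claimed hand-off vs. native-multimodal difference.

            def llm_next_token(context):        # dummy stand-in for one decoder step
                return "IMG_TOKEN_17" if context else "hello"

            # (a) Hand-off pipeline (the old GPT-4 -> DALL-E pattern): the LLM writes
            # a text prompt and a SEPARATE image model renders it. The image model
            # never sees the conversation, so context is lost at the boundary.
            def separate_image_model(prompt):
                return f"<image rendered from: {prompt!r}>"

            def pipeline(conversation):
                prompt = "a unicorn in a room"  # everything funnels through text
                return separate_image_model(prompt)

            # (b) Native multimodal decoding: ONE transformer emits tokens from a
            # mixed vocabulary (text tokens and image tokens alike), so the full
            # conversation conditions every image token directly.
            def omnimodel(conversation, n_image_tokens=3):
                out = []
                for _ in range(n_image_tokens):
                    tok = llm_next_token(conversation + out)
                    out.append(tok)             # image tokens, decoded to pixels later
                return out

            print(pipeline(["...chat history..."]))
            print(omnimodel(["...chat history..."]))
            ```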

            It’s why those models can do things like complex analysis in the process of generating things.

            For example, just today in a group chat where earlier on one model had “turned into” a unicorn and the other models were then pretending to be unicorns to fit in, dozens of messages later the only direct prompt to an instance of 4o imagegen was “create a photorealistic picture of the room and everyone in it.”

            The end result had exactly one actual unicorn and everyone else had horns taped to their heads. That kind of situational awareness and nuanced tracking across a 100+ message context isn’t possible in a CNN.

            Also, if you really want your mind blown, check out Genie 3 and its several minutes of state-change persistence. That one is really nuts, and the kind of thing that should have everyone who sees it questioning the empirical findings that our universe is fundamentally superimposed probabilities that only collapse under attention. Eerily similar to what we’re just starting to build independently.

            As for the consumption: eating a single hamburger has a larger water/energy impact than a year of average use of these tools. And even those inference costs will probably drop to effective insignificance within the decade. There have been very promising advances in light-based neural networks, and those run at something like 1,000-10,000x lower energy cost, parameter for parameter.