We cover the recent NVIDIA x Palantir partnership, wherein NVIDIA CEO Jensen Huang stated that NVIDIA would “accelerate everything that Palantir” does. In this video, we mostly define WTF Palantir is, but we also expand into related topics: AI facial recognition technology, so-called “pre-crime” arrests facilitated by other technologies and agencies, and how NVIDIA feels like it’s effectively becoming a defense contractor. Instead of ‘normal’ weapons, though, NVIDIA is selling AI technology to be turned into weapons.

  • Pelicanen@sopuli.xyz · 2 days ago

    To be fair, “hallucinations” are just an LLM doing exactly what it is designed to do. All it does is use a black-box statistical model to estimate the most likely next word or set of words. The only difference between “correct” outputs and “hallucinations” is the human interpretation of them; in terms of what the model does, there is nothing separating the two.
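
    To make that concrete, here is a minimal sketch of next-token sampling, the mechanism described above. The vocabulary, prompt, and logit values are invented for illustration; a real LLM produces logits over an enormous vocabulary from billions of learned weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical tiny vocabulary and the logits a model might assign
    # after a prompt like "The capital of France is"
    vocab = ["Paris", "London", "Rome", "blue"]
    logits = np.array([5.0, 2.0, 1.5, -1.0])

    def sample_next_token(logits, temperature=1.0):
        """Convert logits to probabilities (softmax) and sample one token."""
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs), probs

    idx, probs = sample_next_token(logits)
    print(vocab[idx], dict(zip(vocab, probs.round(3))))
    # Whether the sampled token reads as "correct" or as a "hallucination",
    # the model ran exactly the same computation: it only ever picks from a
    # probability distribution over next tokens.
    ```

    The point the sketch illustrates: there is no separate code path for hallucinating. The model always samples from the same distribution; the correct/incorrect label is applied afterwards by the human reader.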

    • C1pher@lemmy.world · 2 days ago

      Nah. Hallucinations are the LLM making shit up unprompted, or talking about irrelevant things, especially in longer convos. GPT does this all the time now unless you are very specific and repeat the constraints or clarifications. All of this is unprompted and not just “repeating” the statistical or “working” pattern that has the highest score.

      • Pelicanen@sopuli.xyz · 2 days ago

        Again, this is a feature of the fundamental structure of how LLMs work. What is determined to be the most statistically likely output is influenced not only by the training data itself but also by the weights learned during training.

        LLMs can’t make anything up because they do not know anything. Unexpected outputs will likely become more common as training data increasingly consists of previous LLM outputs and intentionally poisoned data, and as more limitations are placed on the models during training.