We cover the recent NVIDIA x Palantir partnership, wherein NVIDIA CEO Jensen Huang stated that NVIDIA would “accelerate everything that Palantir” does. In this video, we mostly define WTF Palantir is, but we also expand on other topics: AI facial recognition technology, so-called “pre-crime” arrests facilitated by other technologies and agencies, and how NVIDIA feels like it’s becoming, effectively, a defense contractor. Instead of ‘normal’ weapons, though, NVIDIA is selling AI technology to be turned into weapons.


AI is already starting to hallucinate and get more and more lobotomized. I am hoping it will all crash before they get that circular-economy fraud stable (like the housing market). Even the most sanitized and carefully looked-after ChatGPT is worsening. GPT-4 was excellent, but the newest updates are just making it more evident that it will collapse and succumb to entropy like everything in this reality.
To be fair, a “hallucination” is just an LLM doing exactly what it is designed to do. All it does is use a black-box statistical model to estimate the most likely next word or set of words. The only difference between “correct” outputs and “hallucinations” is humans’ interpretation of them; in terms of what the model does, there is nothing separating the two.
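Roughly, the mechanism looks like this (a toy Python sketch with a made-up vocabulary and made-up scores, not any real model): the model assigns scores to candidate next tokens and picks from that distribution. Whether the result is “correct” or a “hallucination” is decided by the human reading it; the sampling step is identical either way.

    # Toy sketch of next-token prediction (not a real LLM).
    # Vocabulary, scores, and prompt are invented for illustration.
    import numpy as np

    vocab = ["Paris", "London", "cheese", "1889"]

    def next_token(logits, temperature=1.0):
        """Turn raw scores into probabilities and sample the next token."""
        probs = np.exp(np.array(logits) / temperature)
        probs /= probs.sum()
        return np.random.choice(vocab, p=probs)

    # Hypothetical scores the model might assign after "The capital of France is".
    logits = [4.0, 2.5, 0.1, 1.0]
    print(next_token(logits))  # usually "Paris", occasionally something else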
Nah. Hallucinations are the LLM making shit up unprompted, or talking about irrelevant things, especially in longer convos. GPT does this all the time now unless you are very specific and repeat the constraints or clarifications. All of this is unprompted, and not just “repeating” the statistical or “working” pattern that has the highest score.
Again, this is a feature of the fundamental structure of how LLMs work. What is determined to be the most statistically likely output is influenced not only by the training data itself but by the weights assigned during training.
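To make that concrete (a toy sketch with invented numbers, not a real architecture): the same input can yield a different “most likely” output purely because the trained weights differ.

    # Same input, two hypothetical sets of trained weights, different top token.
    # All numbers here are invented for illustration.
    import numpy as np

    vocab = ["dog", "cat", "car"]
    x = np.array([1.0, 0.5])              # a fixed input representation

    W_a = np.array([[2.0, 0.1],            # weights from one hypothetical training run
                    [0.5, 0.2],
                    [0.1, 0.1]])
    W_b = np.array([[0.2, 0.1],            # weights from a differently-trained run
                    [1.5, 2.0],
                    [0.1, 0.1]])

    for name, W in [("run A", W_a), ("run B", W_b)]:
        logits = W @ x
        print(name, "->", vocab[int(np.argmax(logits))])
    # run A -> dog, run B -> cat: identical input, different "most likely" output.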
LLMs can’t make anything up because they do not know anything. Unexpected outputs likely become more common as the training data is increasingly the result of previous LLM outputs and intentionally poisoned data, and as an increasing number of limitations are placed upon the models during training.