[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
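The "statistically informed guesses about which lexical item is likely to follow another" can be sketched with a toy bigram counter. This is an illustration of the statistical principle only, not how any real LLM is built (real models use neural networks over subword tokens and vastly larger corpora); the corpus and function name here are invented for the example:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then return the statistically most likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: follower frequencies for each word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word`, or None."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "cat" follows "the" most often here
```

Nothing in this loop models meaning; it only tallies co-occurrence, which is the article's point about the gap between prediction and understanding.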

OP: https://slashdot.org/story/25/06/09/062257/ai-is-not-intelligent-the-atlantic-criticizes-scam-underlying-the-ai-industry

Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

Secondary source: https://bookshop.org/a/12476/9780063418561

  • BlameTheAntifa@lemmy.world · 10 months ago

    The “Artificial” part isn’t clue enough?

    But I get it. The executives constantly hype up these madlib machines as things they are not. Emotional intelligence? It has neither emotion nor intelligence. “Artificial Intelligence” literally means it has the appearance of intelligence, but not actual intelligence.

    I used to be excited at the prospect of this technology, but at the time I naively expected people to be able to create and run their own. Instead, we got this proprietary, capital-chasing, klepto-corporate dystopia.

      • Tamo240@programming.dev · 10 months ago

        This is actually not true. You are referring to Artificial General Intelligence (AGI), an artificially intelligent system that is able to function in any context.

        Artificial Intelligence as a field of Computer Science goes back to the 1950s, and is defined in terms of systems that appear intelligent, not systems that actually exhibit thinking capabilities. The entire purpose of the Turing test is to assess whether a system appears intelligent, with no requirement that it actually is.

        Rule-based systems and statistical models are examples of AI in the scientific sense, but the public perception of what AI should mean is warped by science-fiction portrayals of what it could mean.
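        In this classical sense, even a handful of hand-written if/then rules counts as AI. A minimal sketch (the scenario and thresholds are invented for illustration; no learning or understanding is involved):

```python
# A toy rule-based "expert system": AI in the classical sense.
# Fixed, hand-authored rules -- it appears to make judgments,
# but it is just a lookup through conditionals.
def triage(temp_c, has_rash):
    """Classify a case with fixed rules (illustrative only)."""
    if temp_c >= 39.5:
        return "urgent"
    if temp_c >= 38.0 and has_rash:
        return "see doctor"
    return "rest at home"

print(triage(40.0, False))  # prints "urgent"
```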

        • wewbull@feddit.uk · 10 months ago

          The Turing test was a thought experiment stating that if something seemed intelligent, then it was intelligent. We have utterly disproved that now. IMHO we should teach it only as an example of an incomplete definition of intelligence.