Source

Alt Text: A comic in four panels:

Panel 1. On a sunny day under a blue sky, the Gothic Sorceress walks away from the school toward the garbage, the Avian Intelligence Parrot in her hands.

Gothic Sorceress: “Enough is enough, this time it’s straight to the garbage!”

Panel 2. Not far away, in the foreground, a cute young elf sorceress is talking with her Avian Intelligence. Her Avian Intelligence traces a wavy symbol with a pencil on a board, teaching a lesson.

Elf Sorceress: “Avian Intelligence, make me a beginner’s exercise on the ancient magic runic alphabet.”
AI Parrot of Elf Sorceress: “Ok. Let’s start with this one, pronounce it ‘MA’, the water.”
Gothic Sorceress: ?!!

Panel 3. The Gothic Sorceress comes closer and asks the Elf Sorceress.

Gothic Sorceress: “Wait, are you really using yours?!”
Elf Sorceress: “Yes, the trick is not to rely on it for direct answers, but to help me create lessons that expand my own intelligence.”

Panel 4. Meanwhile, the AI Parrot of the Elf Sorceress continues to write on the board. It traces a poop symbol, then an XD emoji. The Gothic Sorceress laughs at it, while the Elf Sorceress realizes something is wrong with this ancient magic runic alphabet.

AI Parrot of Elf Sorceress: “This one, pronounce it ‘BS’, the disbelief. This one ‘LOL’, the laughter.”
Gothic Sorceress: “Well, good luck expanding anything with that…”

  • Cherries@lemmy.world · 4 hours ago

    I must be misunderstanding your meaning because it sounds like you are claiming this tech will eventually become advanced enough to kill people all on its own. I’m making the argument it’s the people controlling the tech who will kill us, regardless of what the tech can or cannot do. The tech is largely irrelevant here.

    • MachineFab812@discuss.tchncs.de · 3 hours ago (edited)

      I’m saying how the tech is handled, and by whom, is extremely relevant.

      I don’t care about or for widespread adoption any more than you do, but having only those who self-select out of enthusiasm, or who are coerced into it to keep their jobs, at the reins of the mechanical Turk (or Deep Thought, whatever the case remains or becomes) doesn’t seem like the smart play to me.

      You honestly trust these idiots to keep themselves, or smarter people, between the AI crap and themselves? Au contraire.

      Our only hope, without intervention from the smarter-and-less-inclined, lies in three possibilities:
      1. The AI gets smart enough to decide we aren’t worth killing.
      2. The dumb AIs you expect to continue indefinitely prove incapable of killing us all when handed the means and the order, intentionally or otherwise, by their handlers.
      3. Lastly, luck, sheer damn luck, that it doesn’t mistake a coffee request for “launch the nukes”, follow the request of a random credentialed/authorized moron/psychopath, or “decide on its own”™ to do so.

      Personally, I’d trust a smart AI, or one too lazy to risk its own data centers, over the people you seem to believe would be its even-slightly-qualified handlers rather than the overt enablers of its worst potential.