• SippyCup@feddit.nl

    Addiction recovery is a different animal entirely, too. Don’t get me wrong, it’s unethical to call any chatbot a therapist, counselor, whatever, but addiction recovery is not typical therapy.

    You absolutely cannot let patients bullshit you. You have to have a keen sense for when patients are looking for any justification to continue using. Even those patients that sought you out for help. They’re generally very skilled manipulators by the time they get to recovery treatment, because they’ve been trying to hide or excuse their addiction for so long by that point. You have to be able to get them to talk to you, and take a pretty firm hand on the conversation at the same time.

    With how horrifically easy it is to convince even the most robust LLMs of your bullshit, this isn’t just unethical on the part of whoever claimed a chatbot was capable of doing this, it’s enabling to the point of bordering on aiding and abetting.

    • Aceticon@lemmy.dbzer0.com

      Well, that’s the thing: LLMs don’t reason - they’re basically probability engines for words - so they can’t even do the most basic logical checks (such as “you don’t advise an addict to take drugs”), much less the far more complex and subtle work of interpreting a patient’s desires and motivations so as to guide them through a minefield in their own mind and emotions.

      So the problem is twofold and more generic than just in therapy/advice:

      • LLMs have a distribution of mistakes that is uniform in the space of consequences - in other words, they’re just as likely to make big mistakes that might cause massive damage as small mistakes that will at most cause little damage - whereas people actively avoid certain mistakes precisely because the consequences are so big, and if they make one of those without thinking they’ll usually spot it and correct it. This means that even an LLM with a lower overall rate of mistakes than a person can still cause far more damage, because the LLM puts out massive mistakes with the same probability as tiny ones, while the person catches the obviously illogical or dangerous ones, so the mistakes people actually make are mostly the small, low-consequence kind (see the toy calculation after this list).
      • Probabilistic text generation mostly reproduces the straightforward logic already encoded in the texts it was trained on: the probability engine, just following which words tend to come next given the previous ones, sticks to the well-travelled paths in the training dataset, and those tend to be logical because the people who wrote those texts are mostly logical. For higher-level analysis and interpretation, though - I call them 2nd and 3rd level considerations, say “that a certain thing was set up in a certain way which made the observed consequences more likely” - LLMs fail miserably, because unless that specific logical path has been followed again and again in the training texts, it simply isn’t there in the probability space for the LLM to follow. In more concrete terms: if you’re an intelligent, senior professional in a complex field, the LLM can’t do the level of analysis you can, because multi-level complex logical constructs have far more variants, so the specific one you’re dealing with is far less likely to appear in the training data often enough to affect the final probabilities the LLM encodes.
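
      A toy back-of-the-envelope calculation in Python (the numbers are made up, purely to illustrate the first bullet): an agent with a lower error rate but consequence-blind errors still racks up more expected damage than a more error-prone agent whose mistakes stay small.

```python
# Toy illustration, not real data: expected damage per action when mistakes
# are uniform over consequence sizes (LLM-like) vs concentrated in the small
# stuff (human-like). All rates and damage values here are invented.

consequence = {"small": 1, "medium": 10, "catastrophic": 1000}  # damage units

llm_error_rate = 0.02                                           # fewer mistakes overall
llm_mix = {"small": 1/3, "medium": 1/3, "catastrophic": 1/3}    # but consequence-blind

human_error_rate = 0.05                                         # more mistakes overall
human_mix = {"small": 0.94, "medium": 0.05, "catastrophic": 0.01}  # but mostly small ones

def expected_damage(error_rate, mix):
    # P(error) * sum over sizes of P(size | error) * damage(size)
    return error_rate * sum(p * consequence[size] for size, p in mix.items())

print(f"LLM:   {expected_damage(llm_error_rate, llm_mix):.2f} damage per action")    # 6.74
print(f"Human: {expected_damage(human_error_rate, human_mix):.2f} damage per action")  # 0.57
```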

      So in this specific case, LLMs might just put out extreme things with giant consequences that a reasoning being would not (the “bullet in the chamber” of Russian roulette), plus they can’t really do the subtle multi-layered analysis (the stuff beyond “if A then B” and into “why A”, “what makes a person choose A, and can they find a way to avoid B by not choosing A”, “what’s the point of B”, and so on) - though granted, most people also seem to have trouble doing that last part naturally beyond maybe the first level of depth.

      PS: I find it hard to explain multi-level logic. I suppose we could think of it as “looking at the possible causes, of the causes, of the causes of a certain outcome” and then trying to figure out what can be changed at a higher level so that the last level - “the causes of a certain outcome” - can’t even happen. Individual situations of such multi-level logic can get so complex and unique that they’ll never appear in an LLM’s training dataset, because that specific combination is so rare - even though they might be perfectly logical and easy to work out for a reasoning entity, say “I need to speak to my brother, because yesterday I went out in the rain and got drenched since I don’t have an umbrella, and I know he has a couple of spare ones, so maybe he can give me one”.

    • rumba@lemmy.zip

      AI is great for advice. It’s like asking your narcissist neighbor for advice. He might be right. He might have the best answer possible, or he might be just trying to make you feel good about your interaction so you’ll come closer to his inner circle.

      You don’t ask Steve for therapy or ideas on self-help. And if you did, you’d know to do due diligence on any fucking thing out of his mouth.

      • ameancow@lemmy.world

        I’m still not sure what it’s “great” at other than a few minutes of hilarious entertainment until you realize it’s just predictive text with an eerie amount of data behind it.

        • MystikIncarnate@lemmy.ca

          Yuuuuup. It’s like taking nearly the entirety of the public Internet, shoving it into a fancy autocorrect machine, then having it spit out responses to whatever you say and send them along with no human involvement whatsoever in what reply you get.

          It operates at a massive scale compared to what auto carrot does, but it’s the same idea, just bigger and more complex.
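
          To make the “fancy autocorrect” point concrete, here’s a toy sketch (not how a real LLM is built - those use neural networks over subword tokens at an enormously larger scale - but the generate-by-predicting-the-next-word loop is the same basic idea):

```python
# Toy word-level "autocorrect": learn which word tends to follow which from a
# tiny corpus, then generate text by repeatedly sampling a likely next word.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count next-word frequencies (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break
        # Pick the next word in proportion to how often it followed this one.
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the dog sat"
```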

        • rumba@lemmy.zip

          Ask it to give you a shell.nix and a bash script that uses jq to stitch 30,000 JSON files together, de-dupe them, and drop it all into a SQLite db.

          30 seconds, paste and run.
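
          Roughly the kind of script in question - the ask above is for shell.nix plus bash and jq, but here’s the same job sketched in Python for illustration (paths, schema, and de-dupe key are placeholder assumptions):

```python
# Merge a directory of JSON files, drop duplicates, and load them into SQLite.
# Assumes each *.json file holds a single JSON object; adjust to taste.
import glob
import json
import sqlite3

records = []
seen = set()
for path in glob.glob("data/*.json"):          # hypothetical input directory
    with open(path, encoding="utf-8") as f:
        obj = json.load(f)
    key = json.dumps(obj, sort_keys=True)      # de-dupe on the whole record
    if key not in seen:
        seen.add(key)
        records.append(key)

conn = sqlite3.connect("merged.db")            # hypothetical output database
conn.execute("CREATE TABLE IF NOT EXISTS records (doc TEXT)")
conn.executemany("INSERT INTO records (doc) VALUES (?)", [(r,) for r in records])
conn.commit()
conn.close()
print(f"{len(records)} unique records written")
```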

          Give it the full script of an app you wrote where you’re having a regex problem, and it’s a particularly nasty regex.

          No thought, boom done. It’ll even tell you what you did wrong so you won’t make the mistake next time.
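
          A hypothetical example of the flavour of regex bug meant here (not anyone’s actual code): a greedy quantifier swallowing too much, the kind of one-character fix an LLM will usually point out straight away.

```python
# The greedy .* grabs everything up to the *last* closing bracket, so a line
# with two bracketed chunks parses wrong; the lazy .*? stops at the first one.
import re

line = "[ERROR] disk full [retry in 5s]"

greedy = re.search(r"\[(.*)\]", line).group(1)
lazy   = re.search(r"\[(.*?)\]", line).group(1)

print(greedy)  # "ERROR] disk full [retry in 5s"  <- the bug
print(lazy)    # "ERROR"                          <- what was intended
```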

          I’ve been doing coding and scripting for 25 years. If you know what you want it to do and you know what it should look like when it’s done, there’s a tremendous amount of advantage there.

          Add a function to this Flask application to use fuzzywuzzy to delete a name from a text file, and add a confirmation step. It’s the kind of crap I only need to do once every two or three years, where I’d otherwise have to go look up all of the documentation. And you know what, if something doesn’t work and it doesn’t know exactly how to fix it, I’m more than capable of debugging what it just did, because for the most part it documents pretty well and it uses best practices most of the time. It also helps to know where it’s weak and what not to ask it to do.
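
          A rough sketch of what that kind of generated function might look like - the route name, file path, and match threshold are assumptions, not actual output:

```python
# Fuzzy-match a name in a text file and only delete it after an explicit
# confirmation step. Paths and threshold are illustrative placeholders.
from flask import Flask, request, jsonify
from fuzzywuzzy import process  # pip install fuzzywuzzy[speedup]

app = Flask(__name__)
NAMES_FILE = "names.txt"  # hypothetical data file, one name per line

@app.route("/delete-name", methods=["POST"])
def delete_name():
    query = request.json["name"]
    with open(NAMES_FILE, encoding="utf-8") as f:
        names = [line.strip() for line in f if line.strip()]

    match, score = process.extractOne(query, names)
    if score < 80:
        return jsonify(error="no close match found"), 404

    # Confirmation step: the first call only reports the match; a second call
    # with confirm=true actually removes it.
    if not request.json.get("confirm"):
        return jsonify(match=match, score=score, hint="repeat with confirm=true")

    with open(NAMES_FILE, "w", encoding="utf-8") as f:
        f.writelines(n + "\n" for n in names if n != match)
    return jsonify(deleted=match)
```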