• FartMaster69@lemmy.dbzer0.com · 2 days ago

    Yeah, because someone in a manic state can definitely be trusted to carefully analyze their prompts to minimize bias.

    “What do you use AI for?”

    I don’t.

    • womjunru@lemmy.cafe · 2 days ago

      So… you have no clue at all what any of this is. Got it. I’ll bet you blame video games for violence.

        • womjunru@lemmy.cafe · 2 days ago

          Tell me more about how you’ve never ever used it and how everything you’re saying is influenced by the media and other anti-AI user comments.

          Let’s see what happens when I google for UFOs, chemtrails, the deep state, anti-vaccine claims, etc. How much user-created delusional content will I find?

          • FartMaster69@lemmy.dbzer0.com · 2 days ago

            Lmao, I never said I’ve never touched AI; I just don’t use it for anything because it doesn’t do anything useful for me.

            Yes, delusional content on Google is also a problem. But you have to understand that delusional content tailored uniquely to the individual and fed to them by a sycophantic chatbot is far more dangerous.

            • womjunru@lemmy.cafe · 1 day ago

              Oh, well that explains everything: you’re using it wrong.

              A lot of people think you’re supposed to argue with it and talk about things that are subjective or opinion-based. But that’s ridiculous, because it’s a computer program.

              ChatGPT and others like it are calculators. They help you brainstorm problems. Ultimately, you are responsible for the outcome.

              There’s a phrase I like to use at work when junior developers complain that the outcome isn’t what they wanted: shit in, shit out.

              So next time you use AI, perhaps consider: are you feeding it shit? Because if you’re getting shit out, you are.

              • FartMaster69@lemmy.dbzer0.com · 17 hours ago

                Again, I’m not fucking using it.

                I played with it when it was new, but it doesn’t do anything useful; I’m perfectly capable of brainstorming on my own.

                Back to the topic at hand, do you not see how helping someone brainstorm their delusions with a sycophantic chatbot could be dangerous?

                • womjunru@lemmy.cafe · 13 hours ago

                  So today I had it do a bunch of fractional math on dimensional lumber at the hardware store. While it was doing this math for me, it asked if it was for the guitar project I was working on in another chat, where I was mostly asking about magnetic polarity and various electronics, and yes, it was. It then made a different suggestion, which made a big impact on what I bought. I know that’s vague, but it was a long conversation.

                  Then, when I got home, my neighbor had left a half-dead plant on my stoop, because apparently I’m the neighborhood green thumb. I had never seen this plant before. I took a photo, sent it to the AI, and it told me what it was (yes, with sources).

                  Then, while I was 3D modeling some shelf brackets, it helped my design by pointing out a possible weight-distribution issue. While correcting that, I was able to reduce material usage by like 30%.

                  I don’t see any of that as “delusional”.

                  But to the topic at hand, I think the conversations groups and pairs of humans have, both online and in real life, will always be more damaging than what a single person can trick a computer into saying.

                  And by tricking it… you are abusing a tool designed for a different purpose. So, kitchen knives. Not meant to be murder weapons, certainly can be used for that purpose. Should we blame the knife?

                  I also had it make you this image:

                  • FartMaster69@lemmy.dbzer0.com · 6 hours ago

                    I’m not saying you’re delusional; you seem to have completely lost the thread of this conversation in your defense of chatbots.

                    My point is that someone who is already prone to delusional thinking will be sent down a feedback loop that affirms their delusions, making things much worse.