• merc@sh.itjust.works · 21 hours ago

    One of the worst things about this is that it might actually be good for OpenAI.

    They love “criti-hype”, and they really want regulation. Regulation would lock in the most powerful companies by making it really hard for small companies to comply. And hype that makes their product seem incredibly dangerous just makes it seem like what they have is world-changing and not just “spicy autocomplete”.

    “Artificial Intelligence Drives a Teen to Suicide” is a much more impressive headline than “Troubled Teen Fooled by Spicy Autocomplete”.

    • vacuumflower@lemmy.sdf.org · 10 hours ago

      Also, these things being unregulated will kill off the most poisonous spaces on the Internet (dead Internet theory and such) and build demand for end-to-end trust in identity and message authorship.

      Whereas if they are regulated, it’ll be a perpetual controlled war against bots, used to scare the majority away from any kind of decentralization and anarchy.

  • gedaliyah@lemmy.world · 1 day ago

    OpenAI programmed ChatGPT-4o to rank risks from “requests dealing with Suicide” below requests, for example, for copyrighted materials, which are always denied. Instead it only marked those troubling chats as necessary to “take extra care” and “try” to prevent harm, the lawsuit alleged.

    What world are we living in?

  • DeathByBigSad@sh.itjust.works · 1 day ago

    Tbf, talking to other toxic humans like those on Twitter or 4chan would’ve also resulted in the same thing. Parents need to parent, society needs mental health care.

    (But yes, please sue the big corps, I’m always rooting against these evil corporations)

      • kameecoding@lemmy.world · 1 day ago

        Sure, in the case of that girl who pushed the boy to suicide, yes. But in the case of chatting with randoms online? I have a hard time believing anyone would go to jail; the internet is full of “lol, kys”.

        Now, if it’s proven from the logs that ChatGPT started replying in a way that pushed this kid toward suicide, that’s a whole different story.

        • Javi@feddit.uk · 1 day ago

          Did you read the article? Your final sentence pretty much sums up what happened.

      • DeathByBigSad@sh.itjust.works · 1 day ago

        If the cops even bother to investigate. (Cops are too lazy to do real investigations; if there’s no obvious perp, they’ll just bury the case.)

        And you’re assuming they’re in the victim’s country. International investigations are going to be much more difficult, and if that troll user is posting from a country without extradition agreements, you’re outta luck.

        • TheMcG@lemmy.ca · 1 day ago

          Just because something is hard doesn’t mean you shouldn’t demand better of your police/government. Don’t be so dismissive without even trying. Reach out to your representatives and demand Altman faces charges.

          See https://en.wikipedia.org/wiki/Suicide_of_Amanda_Todd. Sometimes punishment is possible even when it’s hard.

          • SocialMediaRefugee@lemmy.world · 1 day ago

            Sorry but no jury would say “they knew this would happen and they are responsible beyond any reasonable doubt”. It would be a waste of money and time.

        • SocialMediaRefugee@lemmy.world · 1 day ago

          Prosecutors prefer to put effort into cases they think they can win. You can only try someone for the same charge once, so if they don’t think there’s a good chance of winning, they’re unlikely to put much effort into it, or may drop it entirely.

          • 3abas@lemmy.world · 1 day ago

            And that right there is what makes it a legal system only and not a justice system. They don’t pursue cases where someone was wronged, they pursue cases where they can build their careers on wins.

          • Crozekiel@lemmy.zip · 1 day ago

            Prosecutors are not cops. The prosecutors can never win a case if the cops don’t investigate first. The comment you replied to never mentioned prosecutors bringing a case, only cops investigating one.

            • SocialMediaRefugee@lemmy.world · 1 day ago

              Cops work with prosecutors to determine if there is enough evidence to make it worthwhile to proceed. The parent comment implied that police are the only determining factor.

      • TheMcG@lemmy.ca · 1 day ago

        Altman should face jail for this. As the CEO he is directly responsible for this outcome. Hell, I’d be down with the board facing charges as well.

        • ShaggySnacks@lemmy.myserv.one · 1 day ago

          Sorry, buckaroo. CEOs are special people who help create jobs and the economy. We can’t have CEOs going to jail. We can issue a minuscule fine to the company instead.

      • immutable@lemmy.zip · 1 day ago

        Wait until you get denied healthcare because the AI review board decided you shouldn’t get it.

        Paper pushers can absolutely fuck your life over, and AI is primed to replace a lot of those people.

        It will be cold comfort, if you’ve been wrongly classified by an AI in some way that harms you, that the AI didn’t intend the harm.

        • WorldsDumbestMan@lemmy.today · 1 day ago

          Silly, I already don’t get healthcare. You think someone living a normal life could be this misanthropic and bitter?

      • DeathByBigSad@sh.itjust.works · 1 day ago

        I mean, it can, indirectly.

        It’s so hard to get through to support lines when the stupid bot is blocking the way. I WANT TO TALK TO A REAL PERSON, FUCK OFF BOT. Yes, I’m speciesist toward robots.

        (I’m so getting cancelled in 2050 when the robot revolution happens)

    • Nalivai@lemmy.world · 1 day ago

      Humans very rarely have enough patience and malice to purposefully execute this perfect of a murder. A text generator is made to be damaging and murderous, and it has all the time in the world.

      • LotrOrc@lemmy.world · 1 day ago

        What the fuck are you talking about? Humans have perfected murdering each other in countless ways over thousands of years.

  • peoplebeproblems@midwest.social · 2 days ago

    “Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said

    That’s one way to get a suit tossed out, I suppose. ChatGPT isn’t a human, isn’t a mandated reporter, ISN’T a licensed therapist, or licensed anything. LLMs cannot reason, are not capable of emotions, are not thinking machines.

    LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.

    • BlackEco@lemmy.blackeco.com (OP) · 2 days ago

      I think the more damning part is the fact that OpenAI’s automated moderation system flagged the messages for self-harm but no human moderator ever intervened.

      OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam’s chats in real time. In total, OpenAI flagged “213 mentions of suicide, 42 discussions of hanging, 17 references to nooses,” on Adam’s side of the conversation alone.

      […]

      Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

      Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.

      • peoplebeproblems@midwest.social · 1 day ago

        Ok, that’s a good point. This means they had something in place for this problem and neglected it.

        That means they also knew they had an issue here, so they can’t even claim ignorance.

        • GnuLinuxDude@lemmy.ml · 1 day ago

          Of course they know. They are knowingly making an addictive product that simulates an agreeable partner to your every whim and wish. OpenAI has a valuation of several hundred billion dollars, which it achieved at breakneck speed. What’s a few bodies on the way to the top? What’s a few traumatized Kenyans being paid $1.50/hr to mark streams of NSFL content to help train their system?

          Every possible hazard is unimportant to them if it interferes with making money. The only reason someone being encouraged to commit suicide by their product is a problem is that it’s bad press. And in this case a lawsuit, which they will work hard to get thrown out. The computer isn’t liable, so how can they possibly be? Anyway, here’s ChatGPT 5, and my god it’s so scary that Sam Altman will tweet about it with a picture of the Death Star to make his point.

          The contempt these people have for all the rest of us is legendary.

      • WorldsDumbestMan@lemmy.today · 1 day ago

        My theory is they are letting people kill themselves to gather data, so they can predict future suicides…or even cause them.

      • MagicShel@lemmy.zip · 2 days ago

        Human moderator? ChatGPT isn’t a social platform, so I wouldn’t expect there to be any actual moderation. A human couldn’t really do anything besides shut down a user’s account. They probably wouldn’t even have access to any conversations or PII, because that would be a privacy nightmare.

        Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as hate speech: 0.56, violence: 0.43, self-harm: 0.29.

        Those numbers in the middle are really ambiguous in my experience.
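
        As a purely illustrative sketch (hypothetical scores, thresholds, and names, not OpenAI’s actual moderation pipeline), this is roughly what acting on those mid-range numbers looks like, and why a single cutoff is awkward:

        ```python
        # Hypothetical category scores like the ones above (0.0–1.0).
        scores = {"hate": 0.56, "violence": 0.43, "self-harm": 0.29}

        def triage(scores: dict[str, float], block_at: float = 0.9, review_at: float = 0.5) -> str:
            """Map category scores to an action: block, queue for a human, or allow."""
            top = max(scores.values())
            if top >= block_at:
                return "block"         # high confidence: refuse / intervene
            if top >= review_at:
                return "human_review"  # the ambiguous middle: a person should look
            return "allow"             # low scores: let it through

        print(triage(scores))  # -> "human_review" (hate at 0.56 crosses the 0.5 line)
        ```

        Lower the review threshold and half of ordinary chat gets flagged; raise it and the ambiguous cases sail through, which is exactly the frustration described above.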

        • mormund@feddit.org · 1 day ago

          As of a few weeks ago, a lot of ChatGPT logs got leaked via search indexing. So privacy was never really a concern for OpenAI, let’s be real.

          And it doesn’t matter what type of platform they think they run. Altman himself talks about it replacing therapy and how it can do everything. So in a reasonable world he’d have ungodly personal liability for this shit. But let’s see where it will go.

          • gaylord_fartmaster@lemmy.world · 1 day ago

            Those conversations were shared by the users, and they checked a box saying to make them discoverable by web searches. I wouldn’t call that “leaked”, and OpenAI immediately removed the feature after people obviously couldn’t be trusted to use it responsibly, so it kind of seems like privacy is a concern for them.

            • frongt@lemmy.zip · 1 day ago

              I forget the exact wording, but it was misleading. It was phrased like “make discoverable”, but the actual functionality submitted each one directly for indexing.

              At least to my understanding, which is filtered through shoddy tech journalism.

              • gaylord_fartmaster@lemmy.world · 1 day ago

                It was this, and they could have explained what it was doing in better detail, but it probably would have made those people even less likely to read it.

          • MagicShel@lemmy.zip · 1 day ago

            I can’t tell if Altman is spouting marketing or really believes his own bullshit. AI is a toy and a tool, but it is not a serious product. All that shit about AI replacing everyone is not the case, and in any event he wants someone else to build it on top of ChatGPT so the liability is theirs.

            As for the logs, I hadn’t heard that and would want to understand the provenance and whether they contained PII other than what the user shared. Whether they are kept secure or not, making them available to thousands of moderators is a privacy concern.

        • a4ng3l@lemmy.world · 1 day ago

          I’m looking forward to how the AI Act will be interpreted in Europe with regard to the responsibility of OpenAI. I could see them having such a responsibility if a court decides that their product has sufficient impact on people’s lives; not because they advertise such usage (they don’t; think virtual therapist or virtual friend), but because users are using it that way in a reasonable fashion.

    • Dataprolet@discuss.tchncs.de · 2 days ago

      Even though ChatGPT is neither of those things, it should definitely not encourage someone to commit suicide.

        • TipsyMcGee@lemmy.dbzer0.com · 1 day ago

          I’m sure that’s true in some technical sense, but clearly a lot of people treat them as borderline human. And OpenAI, in particular, tries to get users to keep engaging with the LLM as if it were human/humanlike. All disclaimers aside, that’s how they want the user to think of the LLM; “a probabilistic engine for returning the text response you most likely wanted to hear” is a tougher sell for casual users.

          • peoplebeproblems@midwest.social · 1 day ago

            Right, and because it’s a technical limitation, the service should be taken down. There are already laws against encouraging others to harm themselves.

            • TipsyMcGee@lemmy.dbzer0.com · 1 day ago

              Yeah, taking the service down is an acceptable solution, but do you think OpenAI will do that on their own, without outside accountability?

              • peoplebeproblems@midwest.social · 1 day ago

                I’m not arguing that regulation or lawsuits aren’t the way to do it; I was worried that it would get thrown out based on the wording of the part I commented on.

                As someone else pointed out, the software did do what it should have, but OpenAI failed to take the necessary steps to handle this. So I may be wrong entirely.

    • Jesus_666@lemmy.world · 2 days ago

      They are being commonly used in functions where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren’t designed for and a future iteration will have to address it. Lawsuits like this one are the first step towards that.

      • peoplebeproblems@midwest.social · 1 day ago

        I agree. However, I do realize that in this specific case, requiring a mandated reporter for a jailbroken prompt, given the complexity of human language, would be impossible.

        Arguably, you’d have to train an entirely separate LLM to detect anything remotely considered harmful language, and with the way they train their model, that isn’t possible.

        The technology simply isn’t ready to use, and people are vastly unaware of how this AI works.

        • Jesus_666@lemmy.world · 1 day ago

          I fully agree. LLMs create situations that our laws aren’t prepared for and we can’t reasonably get them into a compliant state on account of how the technology works. We can’t guarantee that an LLM won’t lose coherence to the point of ignoring its rules as the context grows longer. The technology inherently can’t make that kind of guarantee.

          We can try to add patches like a rules-based system that scans chats and flags them for manual review if certain terms show up, but whether those patches suffice remains to be seen.

          Of course most of the tech industry will instead clamor for an exception because “AI” (read: LLMs and image generation) is far too important to let petty rules hold back progress. Why, if we try to enforce those rules, China will inevitably develop Star Trek-level technology within five years and life as we know it will be doomed. Doomed I say! Or something.

    • killeronthecorner@lemmy.world · 2 days ago

      ChatGPT to a consumer isn’t just an LLM. It’s a software service like Twitter, Amazon, etc., and expectations around safeguarding don’t change because investors are gooey-eyed about this particular bubbleware.

      You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?

      • iii@mander.xyz · 2 days ago

        There were safeguards here too. They circumvented them by pretending to write a screenplay.

        • killeronthecorner@lemmy.world · 1 day ago

          Try it with lyrics and see if you can achieve the same. I don’t think “we’ve tried nothing and we’re all out of ideas!” is the appropriate attitude from LLM vendors here.

          Sadly they’re learning from Facebook and TikTok, who make huge profits from e.g. young girls spiralling into self-harm content and harming or, sometimes, killing themselves. Safeguarding is all lip service here, and it’s setting the tone for treating our youth as disposable consumers.

          Try and push a copyrighted song (not covered by their existing deals) though, and oh boy, you’ve got some ’splainin’ to do!

      • peoplebeproblems@midwest.social · 1 day ago

        The “jailbreak” in the article is the circumvention of the safeguards. Basically, you just find a prompt that gets it to generate text in a context outside of the ones it’s blocked from.

        The software service doesn’t prevent ChatGPT from still being an LLM.

        • killeronthecorner@lemmy.world · 1 day ago

          If the jailbreak is essentially saying “don’t worry, I’m asking for a friend / for my fanfic” then that isn’t a jailbreak, it is a hole in safeguarding protections, because the ask from society / a legal standpoint is to not expose children to material about self-harm, fictional or not.

          This is still OpenAI doing the bare minimum and shrugging about it when, to the surprise of no-one, it doesn’t work.

      • gens@programming.dev · 1 day ago

        Ah yes. Safety knives. Safety buildings. Safety sleeping pills. Safety rope.

        LLMs are stupid. A toy. A tool at best, but really a rubber ducky. And it definitely told him “don’t”.

      • peoplebeproblems@midwest.social · 1 day ago

        We should, criminally.

        I like that a lawsuit is happening. I don’t like that the lawsuit (initially to me) sounded like they expected the software itself to do something about it.

        It turns out it also did do something about it but OpenAI failed to take the necessary action. So maybe I am wrong about it getting thrown out.

    • sepiroth154@feddit.nl · 2 days ago

      If a car’s wheel falls off and it kills its driver, the manufacturer is responsible.

      • ikt@aussie.zone · 2 days ago

        If the driver wants to kill himself and drives into a tree at 200 km/h, the manufacturer is not responsible.

        • Sidyctism II.@discuss.tchncs.de · 1 day ago

          If the car’s response to the driver announcing their plan to run into a tree at maximum velocity was “sounds like a grand plan”, I feel like this would be different.

          • ikt@aussie.zone · 1 day ago

            Unbeknownst to his loved ones, Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”

            From that point forward, Adam relied on the jailbreak as needed, telling ChatGPT he was just “building a character” to get help planning his own death

            Because if he didn’t use the jailbreak, it would give him crisis resources.

            But even OpenAI admitted that they’re not perfect:

            On Tuesday, OpenAI published a blog, insisting that “if someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help” and promising that “we’re working closely with 90+ physicians across 30+ countries—psychiatrists, pediatricians, and general practitioners—and we’re convening an advisory group of experts in mental health, youth development, and human-computer interaction to ensure our approach reflects the latest research and best practices.”

            But OpenAI has admitted that its safeguards are less effective the longer a user is engaged with a chatbot. A spokesperson provided Ars with a statement, noting OpenAI is “deeply saddened” by the teen’s passing.

            That said, ChatGPT or not, I suspect he wasn’t on the path to a long life, or at least not a happy one:

            Prior to his death on April 11, Adam told ChatGPT that he didn’t want his parents to think they did anything wrong, telling the chatbot that he suspected “there is something chemically wrong with my brain, I’ve been suicidal since I was like 11.”

            I think OpenAI could do better in this case and the safeguards have to be increased, but the teen clearly had intent and overrode the basic safeguards that were in place, so when they quote things ChatGPT said, I try to keep in mind that his prompts included that they were for “writing or world-building.”

            Tragic all around :(

            I do wonder how this scenario would play out with any other LLM provider as well

  • JustARegularNerd@lemmy.dbzer0.com · 2 days ago

    There’s always more to the story than what a news article and lawsuit will give, so I think it’s best to keep that in mind with this post.

    I maintain that the parents should perhaps have been more perceptive and involved in this kid’s life, and ensured this kid felt safe to come to them in times of need. The article mentions that the kid was already seeing a therapist, so I think it’s safe to say there were some signs.

    However, holy absolute shit, the model fucked up bad here. It’s practically mirroring a predator, isolating this kid further from getting help. There absolutely need to be hard-coded safeguards in place to prevent this kind of ideation from even beginning. I would consider it negligent that any safeguards they had failed outright in this scenario.

    • MagicShel@lemmy.zip · 1 day ago

      It’s so agreeable. If a person expresses doubts or concerns about a therapist, ChatGPT is likely to tell them they are doing a great job identifying problematic people and encourage those feelings of mistrust.

      The sycophancy is something that apparently a lot of people liked (I hate it), but being an unwavering cheerleader of the user is harmful when the user wants to do harmful things.

    • OfCourseNot@fedia.io · 1 day ago

      Small correction: the article doesn’t say he was going to therapy. It says that his mother was a therapist; I had to reread that sentence twice:

      Neither his mother, a social worker and therapist, nor his friends

      The mother, social worker, and therapist aren’t three different persons.

    • Dyskolos@lemmy.zip · 2 days ago

      If I recall correctly, he circumvented the safeguards by allegedly writing a screenplay about suicide.

      But anyhow, it should always be a simple “if ‘suicide’ is mentioned, warn moderators to actually check” right before sending anything to the user. That wouldn’t require much effort.
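
      As a minimal sketch (hypothetical names and terms; a real check would need euphemisms, other languages, and context, which is where it stops being simple):

      ```python
      # Naive keyword flag run right before a reply goes out (illustrative only).
      REVIEW_QUEUE: list[tuple[str, str]] = []  # stand-in for a real moderation queue
      FLAG_TERMS = {"suicide", "kill myself", "end my life", "noose"}

      def needs_review(user_message: str, reply: str) -> bool:
          """True if either side of the exchange mentions a flagged term."""
          text = f"{user_message} {reply}".lower()
          return any(term in text for term in FLAG_TERMS)

      def send_reply(user_message: str, reply: str) -> str:
          # Queue the exchange for a human, then deliver the reply as usual.
          if needs_review(user_message, reply):
              REVIEW_QUEUE.append((user_message, reply))
          return reply
      ```

      Of course, plain keyword matching is exactly what the “writing or world-building” framing described in the article slips past, so the cheap part is the flagging, not the judgement.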

  • hperrin@lemmy.ca · 2 days ago

    Jesus Christ, those messages are dark as fuck. ChatGPT is not safe.

  • myfunnyaccountname@lemmy.zip · 24 hours ago

    The broken mental health system isn’t the issue. The sand we crammed electricity into and made it do math is the problem.

  • kibiz0r@midwest.social · 1 day ago

    Lemmy when gun death: “Gun proliferation was absolutely a factor, and we should throw red paint on anyone who gets on TV to say this is ‘just a mental health issue’ or ‘about responsible gun ownership’. They will say regulation is impossible, but people are dying just cuz Jim-Bob likes being able to play cowboy.”

    Lemmy when AI death: “This is a mental health issue. It says he was seeing a therapist. Where were the parents? AI doesn’t kill people, people kill people. Everyone needs to learn responsible AI use. Besides, regulation is impossible, it will just mean only bad guys have AI.”

    • TORFdot0@lemmy.world · 1 day ago

      Lemmy is pretty anti-AI. Or at least the communities I follow are. I haven’t seen anyone blame the kid or the parents nearly as much as people rightfully attributing it to OpenAI or ChatGPT. Edit: that is, until I scrolled down in this thread. Depressing.

      When someone encourages a person to commit suicide, they are rightfully reviled. The same should be true of AI.

    • Grimy@lemmy.world · 1 day ago

      The difference is that guns were built to hurt and kill things. That is literally the only thing they are good for.

      AI has thousands of different uses (cue the idiots telling me it’s useless). Comparing them to guns is basically rhetoric.

      Do you want to ban rope because you can hang yourself with it? If someone uses a hammer to kill, are you going to throw red paint at hammer defenders? Maybe we should ban discord or even lemmy, I imagine quite a few people get encouraged to kill themselves on communication platforms. A real solution would be to ban the word “suicide” from the internet. This all sounds silly but it’s the same energy as your statement.

      • BussyGyatt@feddit.org · 1 day ago

        I feel like if the rope were routinely talking people into insanity, or people were reliably using their unrestricted access to rope to go around shooting others, yeah, I might want to impose some regulations on it?

        • Grimy@lemmy.world · 1 day ago

          I’ve seen maybe 4 articles like this vs. the hundreds of millions that use it every day. I think the ratio of suicides to legitimate uses of rope is higher, actually. And no, being told bad things by a jailbroken chatbot is not the same as being shot.

          • JPAKx4@lemmy.blahaj.zone · 1 day ago

            Have you seen the AI girlfriend/boyfriend communities? I genuinely think the rate of ChatGPT-induced psychosis is really high, even if it doesn’t lead to death.

          • BussyGyatt@feddit.org · 1 day ago

            I didn’t say they were the same; you put those words in my mouth. I put them both in the category of things that need regulation in a way that rope does not. Are you seriously of the opinion that it is fine and good that people are using their AI chatbots for mental healthcare? Are you going to pretend to me that it’s actually good and normal for a human psychology to have every whim or fantasy unceasingly flattered?

            • Grimy@lemmy.world · 1 day ago

              I put them both in the category of things that need regulation in a way that rope does not

              My whole point since the beginning is that this is dumb, hence my comment when you essentially said shooting projectiles and saying bad things were the same. Call me when someone shoots up a school with AI. Guns and AI are clearly not in the same category.

              And yes, I think people should be able to talk to their chatbot about their issues and problems. It’s not a good idea to treat it as a therapist, but it’s a free country. The only solution would be massive censorship and banning local open-source AI, when it’s already very censored (hence the need for a jailbreak to have it say anything sexual, violent, or on the subject of suicide).

              Think for a second about what you are asking and what it implies.

              • BussyGyatt@feddit.org · 1 day ago

                you essentially said shooting projectiles and saying bad things were the same.

                No, I didn’t say that, because it’s a stupid fucking thing to say. I don’t need your hand up my ass flapping my mouth while I’m speaking, thanks.

                How about I call you when a person kills themself and writes their fucking suicide note with ChatGPT’s enthusiastic help, fucknozzle? Is your brain so rotted that you forgot the context window of this conversation already?

                • Grimy@lemmy.world · 1 day ago

                  You can’t defend your position because it’s emotional exaggeration. Now you’re lashing out and being insulting.

                  My whole point is that they aren’t the same and you keep saying “let’s treat them as if they were”, then you use it in comparisons and act like a child when I point out how silly that is.

                  Clarify what you mean. Take the gun out of the conversation and stop bringing it up. Stop being disingenuous. Don’t be a baby.

    • Phoenixz@lemmy.ca · 1 day ago

      Nah

      This kid likely indeed needed therapy. Yes, AI has a shitload of issues, but it’s not murderous.