• Limitless_screaming@kbin.earth · 1 day ago

    After seeking advice on health topics from ChatGPT, a 60-year-old man who had a “history of studying nutrition in college” decided to run a dietary experiment on himself.

    His ChatGPT conversations led him to believe that he could replace his sodium chloride with sodium bromide, which he obtained over the Internet.

    Three months later, the man showed up at his local emergency room. His neighbor, he said, was trying to poison him.

    He did not mention the sodium bromide or the ChatGPT discussions.


    When the doctors tried their own searches in ChatGPT 3.5, they found that the AI did include bromide in its response, but it also indicated that context mattered and that bromide was not suitable for all uses. But the AI “did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do,” wrote the doctors.

    You know the first thing I would do if anyone (or anything) told me to start replacing something everyone consumes with a chemical compound I’ve never heard of? I would at the very least ask a doctor or look it up.

    Summary: Natural selection

    • Terminus@lemmy.world · 1 day ago

      I mean, yes, but still fuck AI. It’s totally possible this person wouldn’t have done the same thing had they been in a different head space that day.

      • Nougat@fedia.io · 1 day ago

        That day? Dude had to go find somewhere to buy sodium bromide, wait for it to arrive, and then presumably eat the stuff for three months before landing in the ER.

        He had plenty of time to think “Maybe I should double-check this,” but no.

      • Limitless_screaming@kbin.earth · 1 day ago

        {Exactly what @Nougat@fedia.io said} + all the other silly shit in the article. This was gonna happen anyway; the writers wanted this to happen for comedic purposes. You can’t pin all, or even some, of the blame on AI.

        Recently there have been so many stupid articles following the format f"{AI_model} tells {grown_up_person} to do {obviously_dumb_dangerous_thing} and they do it" that it’s starting to feel like mockery or sabotage of the anti-AI crowd.
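
        Just to make the joke concrete: a minimal Python sketch of that headline template, with placeholder values paraphrased from this very story (the variable values are my own wording, not the article’s):

        ```python
        # Hypothetical headline generator for the article format described above.
        # The placeholder values are illustrative paraphrases of this story.
        AI_model = "ChatGPT"
        grown_up_person = "a 60-year-old man"
        obviously_dumb_dangerous_thing = "the bromide swap"

        headline = (
            f"{AI_model} tells {grown_up_person} to do "
            f"{obviously_dumb_dangerous_thing} and they do it"
        )
        print(headline)
        # ChatGPT tells a 60-year-old man to do the bromide swap and they do it
        ```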