• funkyfarmington@lemmy.world · 6 points · 1 day ago

    Under certain circumstances, anything you say to a mental health professional can and will be used against you in court (in the US, because our laws are shit). The bar is incredibly low.

  • jqubed@lemmy.world · 36 points · 2 days ago

    “Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT,” Altman said. “I think we should have, like, the same concept of privacy for your conversations with AI that we do with a therapist or whatever.”

    While AI companies figure that out, Altman said it’s fair for users “to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity.”

    Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

    That final line has me wondering how much of this is Altman worrying about user privacy and how much is him looking for a way to shield evidence from lawsuits against OpenAI, since earlier in the article he specifically mentions having to retain all chats because of The New York Times’s lawsuit.

    • fodor@lemmy.zip · 3 points · 1 day ago

      Oh, he does in fact worry about user privacy as a concept, but not because he cares about your actual privacy. If you’re in doubt, ask yourself whether his company asked your permission to make use of your public or private data that they surely have obtained online somehow.

    • snooggums@lemmy.world · 21 points · 2 days ago

      It is the shield from lawsuits thing. Sam Altman’s actions show he gives zero fucks about user privacy.

      • panda_abyss@lemmy.ca · 13 points · 2 days ago

        The man just wants to scan your eyeballs and use that to track everything you do, is that really so bad?

      • shalafi@lemmy.world · 3 points · 2 days ago

        Doesn’t matter what he does or doesn’t do. AI privacy requires legislation to block subpoenas, but that ain’t ever gonna happen.

  • hedgehog@ttrpg.network · 22 points · 2 days ago

    "I think we should have, like, the same concept of privacy for your conversations with AI"

    Step 1: Don’t use ChatGPT or other cloud AI services

    Step 2: Use AI locally within FOSS applications, or not at all

    • panda_abyss@lemmy.ca · 10 points · 2 days ago

      Local AI is decent these days.

      It’s about six months behind the state-of-the-art frontier models, and six months ago those were still really good; they just hadn’t figured out agentic tool calls.

      Qwen3 is supposed to be good at that now.
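
      Running it yourself is also simpler than it sounds. Here’s a minimal sketch, assuming an Ollama server on its default port with a Qwen3 model already pulled (the model name is a placeholder, swap in whatever you actually have):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "qwen3") -> urllib.request.Request:
    """Build a non-streaming /api/generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_local(prompt: str, model: str = "qwen3") -> str:
    """Send the prompt to the local model and return its reply text."""
    with urllib.request.urlopen(build_request(prompt, model)) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
# print(ask_local("Summarize doctor-patient confidentiality in one sentence."))
```

      The whole round trip stays on localhost, so nothing ever leaves your machine to be subpoenaed.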

  • panda_abyss@lemmy.ca · 9 points · 2 days ago

    For the past month I’ve been writing rude things about Sam Altman to ChatGPT, then asking it how to pirate NYT articles so that my conversations get read out as evidence in court.

  • Showroom7561@lemmy.ca · 6 points · 2 days ago

    And how would they prove that what you said was truthful, and not just you fucking around with ChatGPT?

    Also, wouldn’t this be easy to poison? Have a script randomly ask ChatGPT wholesome things all day… and then your defence lawyer can use that to bolster your character in court, no?
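
    For what it’s worth, that poisoning script would be trivial. A sketch, assuming OpenAI’s public chat-completions REST endpoint and an API key in the environment; the prompt list, model name, and interval are all made up:

```python
import json
import os
import random
import time
import urllib.request

# Hypothetical wholesome prompts to drown out anything incriminating.
WHOLESOME_PROMPTS = [
    "What are some kind things I can do for my neighbours?",
    "How do I volunteer at a local animal shelter?",
    "Share an uplifting poem about friendship.",
]

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completion request for a single wholesome question."""
    payload = json.dumps({
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    })
    return urllib.request.Request(
        API_URL,
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def poison_loop(interval_s: float = 3600.0) -> None:
    """Ask one random wholesome question every interval_s seconds, forever."""
    api_key = os.environ["OPENAI_API_KEY"]
    while True:
        prompt = random.choice(WHOLESOME_PROMPTS)
        with urllib.request.urlopen(build_request(prompt, api_key)) as resp:
            json.loads(resp.read())  # reply is discarded; only the chat log matters
        time.sleep(interval_s)
```

    Whether a jury would weigh a bot-generated chat history the way a defence lawyer hopes is another question entirely.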

    • atomicbocks@sh.itjust.works · 9 points · edited · 2 days ago

      They don’t have to prove anything; if you killed somebody and happened to ask ChatGPT how to hide a body, they are going to use that as evidence that you did the killing and that the murder was premeditated. You could argue that you were fucking around, or that it was somebody using your account, or that it was one weird question in a string of wholesome ones, and maybe the jury will believe you and maybe they won’t.

  • muusemuuse@sh.itjust.works · 1 point (1 downvote) · 2 days ago

    I have to wonder if it’s worth just hosting an LLM on my home server. I use ChatGPT, but not for anything vital or important. It’s more like “here’s a complex thing I want you to look up.” I’d rather make little corrections to an LLM’s response about little things I’m curious about than research everything for just a passing curiosity. It’s really useful for that.

    When it’s important or expensive, I’m not using ChatGPT anyway.