So now Proton is completely blocking account creation through their onion address? I'm on the Standard security level with JavaScript enabled. Time to switch, for those who use this service, since they're ditching Tor and Switzerland?

  • irmadlad@lemmy.world · 2 days ago

    Just because I find an inaccurate search result does not mean DDG is useless. Never trust, always verify.

    • BluescreenOfDeath@lemmy.world · 1 day ago

      There it is. The bald-faced lie.

      “I don’t blindly trust AI, I just ask it to summarize something, read the output, then read the source article too. Just to be sure the AI summarized it properly.”

      Nobody is doing double the work. If you ask AI a question, it only gets a vibe check at best.

      • irmadlad@lemmy.world · 1 day ago

        Nobody is doing double the work. If you ask AI a question, it only gets a vibe check at best.

        Hey there BluescreenOfDeath, sup. Good to meet you. My name is ‘Nobody’.

        • BluescreenOfDeath@lemmy.world · edited · 1 day ago

          It’s easy to post on a forum and say so.

          Maybe you actually are asking AI questions and then researching whether or not the answers are accurate.

          Perhaps you really are the world’s most perfect person.

          But even if that’s true, which I very seriously doubt, then you’re going to be the extreme minority. People will ask AI a question, and if they like the answers given, they’ll look no further. If they don’t like the answers given, they’ll ask the AI with different wording until they get the answer they want.

    • WoodScientist@lemmy.world · 2 days ago

      You can’t practically “trust but verify” with LLMs. Say I task an LLM with summarizing an article. If I want to check its work, I have to go and read that whole article myself, and the checking takes as much time as writing the summary myself would. It’s even worse with code: you have to deconstruct the AI’s code and figure out its internal logic, and by the time you’ve done that, it’s easier to just write the code yourself.

      It’s not that you can’t verify the work of AI. It’s that if you do, you might as well just create the thing yourself.