so now Proton is completely blocking account creation through their onion address? I have standard protection, JavaScript enabled. Time to switch, for those who use this service, since they are ditching Tor and Switzerland?

    • not_IO@lemmy.blahaj.zone · +2 / −2 · 16 hours ago

      it should be possible to criticize a mail provider without being flooded by its evangelists. Especially Proton; their image of privacy does not reflect reality at all

    • lemmyknow@lemmy.today · +10 / −3 · 2 days ago

      Not saying it’s some sort of conspiracy theory, but it do be kinda sus. People are just quick to hate Proton over anything. It’s like confirmation bias: they seem to be justifying their hatred, or looking for reasons to do so. I mean, “leaving Switzerland”? Really?! I thought that was because Switzerland was considering a privacy-unfriendly law. That’s bad now?

      • ReversalHatchery@beehaw.org · +14 / −1 · 2 days ago

        OP said they are blocking Tor users. I say the error message might just be legit, and someone is spamming username-existence checks through Tor.

      • Ilandar@lemmy.today · +6 / −1 · 1 day ago

        I am deeply sorry to whichever moderator I offended so much that they needed to delete my comment. Thanks to your guidance I have now learned to hate Proton like a good lemming and will boycott them for the rest of my life as penance for making you cry.

    • BluescreenOfDeath@lemmy.world · +39 / −58 · 2 days ago

      There’s been evidence in their GitHub repo that they’re using LLMs to code their tools now.

      It’s making me reconsider using them.

      • Zetta@mander.xyz · +43 / −7 · 2 days ago

        There’s evidence they use Cursor, a very popular tool that many devs and large companies use.

        • limer@lemmy.ml · +32 / −9 · 2 days ago

          LLMs are avoided by many experienced developers and by competent medium and small companies.

          Tools like Cursor are sometimes OK for small things, like people learning, or for generating boilerplate.

          But some see it as a warning flag when it shows up in the source code of larger projects.

          • 3abas@lemmy.world · +11 / −5 · edited · 2 days ago

            This comment is meaningless.

            What red flags? Why is it a red flag if an experienced developer used Cursor on a larger project? Put it into words.

            • irotsoma@lemmy.blahaj.zone · +6 / −5 · 2 days ago

              It’s very time-consuming to detect and correct the small mistakes that LLMs make. Beyond one or two lines of code, correcting the multitude of subtle mistakes becomes much more time-consuming than coding it myself.

              I use the code completion that comes with my IDE, but that is programmatic completion, not LLM, and it is much, much more accurate, in smaller chunks that are easy to verify at a glance. I’ve never known any experienced developer who has had a different experience.

              LLMs can be good for getting a general idea of how to code something in a new language or framework I’ve never touched before, and more to help find actual examples than to use the code directly in the IDE. But if I were to use LLM code directly, that would be in a test project, never, ever in production code. And I would never write production code in a language I’ve never used before, with or without an LLM’s “help”.

            • limer@lemmy.ml · +3 / −5 · edited · 2 days ago

              When adding code this way, one needs to look it over and read it to fix bugs or things that are not quite correct; studies show experienced developers are often faster not using this approach, because debugging existing code takes longer than writing it fresh.

              But speed is not the issue.

              What matters is that subtle bugs are sometimes introduced that take several people to catch, if they are caught at all. These issues might be unique to the LLM.

              Large sections of generated code open up the possibility of hard-to-find problems, and some codebases are more sensitive to such issues than others.

              The details of how the code was added, and what it does, may render this harmless or very much a problem to be avoided.

              This is why it’s a flag and not a condemnation.

        • lerky@lemmy.blahaj.zone · +28 / −6 · 2 days ago

          No wAy something popular and megacorp-embraced could be bad. Asbestos, lead pipes, 2-digit dates, NFTs, opiates, sub-prime lending, algorithmic content, pervasive surveillance, etc must have just been flukes.

          • irmadlad@lemmy.world · +11 / −2 · 2 days ago

            No wAy something popular and megacorp-embraced could be bad. Asbestos, lead pipes, 2-digit dates, NFTs, opiates, sub-prime lending, algorithmic content, pervasive surveillance, etc must have just been flukes.

            All technology wields a double-edged sword.

            • BluescreenOfDeath@lemmy.world · +9 / −2 · 2 days ago

              Sure, but with all the mistakes I see LLMs making in places where professionals should be quality-checking their work (lawyers, judges, internal company email summaries, etc.), it gives me pause, considering this is a privacy- and security-focused company.

              It’s one thing for an AI to hallucinate court cases, and another entirely to forget there’s a difference between = and == when the AI bulk-generates code. One slip-up and my security and privacy could be compromised.

              You’re welcome to buy in to the AI hype. I remember the dot com bubble.

              • irmadlad@lemmy.world · +5 · edited · 2 days ago

                You’re welcome to buy in to the AI hype.

                We’ve been using ‘AI’ for quite some time now, well before the advent of AI Rice Cookers. It’s really not that new.

                I use AI when I master my audio tracks. I am clinically deaf and there are some frequency ranges that I can’t hear well enough to master. So I lean heavily on AI. I use AI for explaining unfamiliar code to me. Now, I don’t run and implement such code in a production environment. You have to do your due diligence. If you searched for the same info in a search engine, you still have to do your due diligence. Search engine results aren’t always authoritative. It’s just that Grok is much faster at searching and in fact, lists the sources it pulled the info from. Again, much faster than engaging a search engine and slogging through site after site.

                • BluescreenOfDeath@lemmy.world · +4 / −1 · 2 days ago

                  If you want to trade accuracy for speed, that’s your prerogative.

                  AI has its uses: transcribing subtitles, searching images by description, things like that. But too many times I’ve seen AI summaries that, if you actually read the article the AI cited, turn out to be flatly wrong.

                  What’s the point of a summary that doesn’t actually summarize the facts accurately?

                  • irmadlad@lemmy.world · +3 / −1 · 2 days ago

                    Just because I find an inaccurate search result does not mean DDG is useless. Never trust, always verify.

          • Zetta@mander.xyz · +4 · 2 days ago

            As the other guy said, double-edged sword. Asbestos was fucking great, and it’s still used for certain things because it’s great. The poor interaction with human biology was the other edge of the sword.

            As an aside, I pulled a fuckload of vinyl asbestos tile out of a house a year ago, and while it wasn’t actually all that dangerous, because I took proper precautions, it’s sorta scary anyway, cause of the poor-interaction thing.

      • fluffy@feddit.org · +5 / −2 · 2 days ago

        I think you don’t know what “evidence” means. It’s barely a clue.

        • BluescreenOfDeath@lemmy.world · +2 / −2 · 1 day ago

          It’s a single data point, nothing more, nothing less. But that single data point is evidence that they’re using LLMs to generate code.

          Time will tell if this is a molehill or a mountain. When it comes to data privacy, given that it just takes one mistake and my data can be compromised, I’m going to be picky about who I park my data with.

          I’m not necessarily looking to jump ship immediately, but I consider it a red flag that they’re using developer tools centered around AI code generation.