• 1 Post
  • 295 Comments
Joined 3 years ago
Cake day: June 16, 2023

  • Yes, I’m serious. I’m personally not much of a Lego collector and/or builder, but two close friends of mine are. They were big Lego fans and collectors for most of our lives (decades). I’d say 10-15 years ago they started to complain about declining QC and generally lower quality: molds were clearly being used for much longer, parts had worse tolerances and either didn’t fit well or were loose. Then the creative side got worse too, with kits just not meeting previous standards either, clearly being cranked out for the sake of releasing something, often under license (Star Wars, Marvel or whatever).

    Then when the patent ran out, some (select few) of the alternatives started to gain favor. Unfortunately I don’t remember which ones, but I can ask next time I see either of them. Not saying everything they make is great, but there are actually fewer problems with bad parts, and some kits are apparently just like old Lego.


  • Self-hosting BitWarden still means it’s accessible to them and/or from them. You also have no way to audit their security, from what I understand. VaultWarden is FOSS; if you want to, you can go check, and it does get checked every now and then by people with the competence to do so. [Edit: I forgot that BitWarden is actually source-available as well; while not FOSS, that’s still better than most solutions.] I just prefer full FOSS whenever possible. I prefer it not be a black box I just happen to run on my own server.

    If you self-host VaultWarden, the instance can simply not be accessible from the internet at all, only from behind a VPN (rough sketch at the end of this comment). Obviously this is inherently much safer. Whether that’s possible with BitWarden’s self-host option I don’t know, but even just for licensing the local instance will have to be able to reach their servers (and possibly be reachable from their servers, too). I did see they have an “offline deployment” option for air-gapped servers, but I haven’t looked into what limitations that entails.

    Additionally, you’re still within their licensing model, so for certain features you need a paid account (even just having more than two users).

    And like others said, VaultWarden is much lighter on resources in general, and you aren’t limited in what you can and can’t do (users, collections, auth options, …).
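    To illustrate the VPN-only setup mentioned above, here’s a minimal compose sketch. The bind address (a WireGuard-style 10.8.0.1) and the settings shown are examples/assumptions, not a complete or authoritative config; check the VaultWarden wiki for the full options.

    ```yaml
    # docker-compose.yml - minimal VaultWarden sketch, reachable only via VPN/LAN
    services:
      vaultwarden:
        image: vaultwarden/server:latest
        restart: unless-stopped
        volumes:
          - ./vw-data:/data                # all vault data lives here
        ports:
          # bind only to a VPN/LAN interface address (example WireGuard IP),
          # so the instance is never exposed to the public internet
          - "10.8.0.1:8080:80"
        environment:
          SIGNUPS_ALLOWED: "false"         # lock down registration once your accounts exist
    ```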


  • Your first point is debatable. You still have to trust them to be that secure, and you can’t verify that. If they are ever breached, it’s literally the worst-case scenario. You can self-host their solution, but only in the enterprise tier ($6 per user per month). Also, BitWarden is a target worth attacking; I am not. BitWarden hosts thousands of instances worth attacking individually, while a personal VaultWarden instance belonging to “Mike and Molly Peterson” isn’t exactly an attractive target. I do think they are pretty secure, but a single mistake with these stakes can have immense consequences. LastPass was also breached repeatedly, with a similar business model.

    The second point about electricity wouldn’t be true in my particular case, as the server for self-hosting it is running anyway. Running VaultWarden or not doesn’t change the power usage noticeably. Obviously this is different for someone who doesn’t already have a server running at home.

    Side note: I’m not actually running a personal VaultWarden instance, as my personal requirements are being met just fine with KeePass files. We do run an instance at work, but it isn’t world-accessible (internal access only).




  • the form factor is easy to get around

    Why did you ignore everything I wrote but still reply to me? No, it isn’t easy to get around. You can use a server to game, but server mainboards and CPUs expect and work with differently configured memory (registered DIMMs), which is what all the AI infrastructure uses. You can’t use that memory in a normal PC. There’s a Wikipedia reference if you’d like to read about it, but a relevant quote:

    […] the motherboard must match the memory type; as a result, registered memory will not work in a motherboard not designed for it, and vice versa.

    You would have to desolder all the chips and remanufacture new memory modules, and nobody is doing that, especially not at scale. It might be an actual business model once the bubble pops, but it isn’t a problem that’s “easy to get around”.





  • Are you saying your image search on DDG with noai still finds AI-generated images? That isn’t what “noai” is meant for, unless I’m misunderstanding you.

    It’s meant to tell DDG not to run your question or search through an AI that attempts to answer it at the top of the page, like Google and all the others now do by default. It can’t magically tell whether an image that matches your search was made by AI or not, especially not without using AI for that (ironically).



  • First, my context: I’m also running multiple Proxmox hosts (personal and professional), and have a paperless-ngx instance (personal/family). I tried Firefly, but the effort required to get it to a point where it would be of use to me was too high, so I dropped it. I haven’t used n8n.

    For the setup I’d just use the Proxmox community scripts, in case you haven’t heard of them. They make updates trivial and lower the bar for just trying something out to basically zero.

    Paperless-ngx I actually use, because it means I can find things when I need them. Everything is automatically OCR’d and all you have to do is categorize documents; with time, it’ll learn and do this for you. You can (manually) set up your scanner to upload files directly to the “consume” folder and it just works (see the sketch below). PC/server power is nearly irrelevant, it just means OCR takes slightly longer; otherwise it’s a web server. You can run this just fine on a Raspberry Pi.
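    For reference, a rough compose sketch of that kind of setup (the consume folder is just a bind-mounted directory the scanner drops files into). Ports, paths and the time zone are example values, not my exact setup; the official paperless-ngx docs have a fuller compose file.

    ```yaml
    # docker-compose.yml - rough paperless-ngx sketch with a watched "consume" folder
    services:
      broker:
        image: redis:7                              # paperless-ngx needs a Redis task broker
        restart: unless-stopped

      paperless:
        image: ghcr.io/paperless-ngx/paperless-ngx:latest
        restart: unless-stopped
        depends_on:
          - broker
        ports:
          - "8000:8000"                             # web UI
        environment:
          PAPERLESS_REDIS: redis://broker:6379
          PAPERLESS_TIME_ZONE: Europe/Berlin        # example value
        volumes:
          - data:/usr/src/paperless/data
          - media:/usr/src/paperless/media
          - ./consume:/usr/src/paperless/consume    # scanner uploads land here
          - ./export:/usr/src/paperless/export

    volumes:
      data:
      media:
    ```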

    I don’t have any real automation set up, so I can’t really comment on that. My advice is to just install it, see what it does and how it feels, and try to anticipate if and how much automation you need. Many aspects of all this are of the “set up once” variety, where once it’s working, you don’t have to touch it again. Try to gauge whether the one-time effort is worth it for you, then go from there. As I said, it was for paperless for me, but not for Firefly (though I might need to revisit that).


  • Also a great example of a service “mired in controversy”. Some (including me) consider those controversies bad enough to avoid them completely. The CEO openly agreeing with and actually supporting Trump is just one of the more egregious ones.

    For EU citizens it’s also noteworthy that it isn’t actually hosted in the EU, but in Switzerland.

    Edit: just to be clear, it’s obviously a lot better than Google. But which service these days isn’t? That’s a pretty low bar, so why settle for it instead of picking something without all the baggage?


  • On Linux, running Jellyfin through Docker with GPU acceleration works fine, yes. But you need some options/flags to pass GPU access into the container (see the sketch below). Guides and/or Docker tutorials exist and should cover that, as it’s basically the default setup these days.
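    As a rough idea of what that looks like with compose, assuming an Intel/AMD GPU using VAAPI (NVIDIA needs the separate NVIDIA container toolkit instead); the paths and the group GID are examples you’d adjust, not a definitive config:

    ```yaml
    # docker-compose.yml - Jellyfin sketch with the GPU passed into the container
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        restart: unless-stopped
        devices:
          - /dev/dri:/dev/dri          # exposes the Intel/AMD render node for VAAPI/QSV
        group_add:
          - "122"                      # GID of the host's "render" group (check: getent group render)
        ports:
          - "8096:8096"                # web UI
        volumes:
          - ./config:/config
          - ./cache:/cache
          - /path/to/media:/media:ro   # adjust to your library location
    ```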

    As for Bazzite and Docker (I just checked): no, it isn’t part of the base image and you can’t easily install it. That’s the downside of an immutable distro. I think Podman is available, which is compatible and FOSS, but there may be caveats to using it. There is a Bazzite variant called bazzite-dx intended for developers, so that one would probably work fine for you out of the box. There shouldn’t be any real downside to using it compared to the mainline image, apart from it being slightly larger because all the dev tools are installed, but do check that. My practical experience with Bazzite is limited.

    My real recommendation is: just try it. Slap in a small/cheap SSD (~20 bucks) instead of whatever you’ve got in there now, install CachyOS and try it out. Then install Bazzite and try it out. By “try it out” I mean setting up a copy or test install of your required services (arr stack, Jellyfin, …) to see if everything works as you’d expect. Possibly install more distros to try them out, then make up your mind and actually fully migrate, or if it doesn’t work out, go back to your currently installed drive. Installing a Linux distro takes like 10 minutes these days; then play around with it for however long you need. Since you already have it narrowed down to only two options anyway, that is most likely the best approach.


  • Creat@discuss.tchncs.de to Linux@lemmy.world · “Help me ditch windows?”

    There’s a lot of well-meaning but not too well-informed advice in here. Since one of your goals is gaming, stay away from Mint. It can be made to work (well), but you have to get there. It’s basically the recommendation people gave for decades, but there have been massive improvements across many distros while Mint just kind of stood still. There are still some things they do rather well, though.

    CachyOS will do what you want it to, and it is what I switched to about 8 months ago. It isn’t maintenance-heavy at all if you don’t want it to be. I think I’ve had to intervene once since I started using it, and that intervention was necessary or it wouldn’t have booted after the update. The official updater will tell you when that’s the case, as it lists critical news like that. Otherwise it just works, and it’s pre-configured and optimized for gaming. Under the hood it’s basically Arch, just without the fiddling of getting it to a usable state. Because of that there’s also an enormous amount of information out there (Arch wiki) on how to do stuff.

    Bazzite is a stark contrast in many ways as it’s an immutable distro, but it’s also pre-configured and optimized (maybe not quite as much as CachyOS). It will also do what you want just fine. It is relatively “safe” due to the immutability, and updates are much rarer (and by definition always whole-system updates). I don’t know exactly how you’d run your services, but assuming they’re dockerized or similar, that should be just fine; please do check beforehand whether the base image contains what you need (presumably Docker and Docker Compose).




  • Dual booting is perfectly fine. Just try not to use the Windows boot partition for both OSes, or Windows will occasionally “lose” the Linux entry… “Oops”, I guess.

    If Linux is on its own drive, or at least has its own UEFI partition, it’s just fine and dandy. Just chainload Windows from it (example below) and there’s basically nothing that can break.
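    For illustration, assuming GRUB as the bootloader (os-prober usually adds this entry automatically): a custom menu entry that chainloads the Windows boot manager from its own EFI partition. The UUID is a placeholder you’d fill in from blkid, and you’d regenerate the config afterwards (grub-mkconfig -o /boot/grub/grub.cfg).

    ```
    # /etc/grub.d/40_custom - example entry that chainloads the Windows boot manager
    menuentry "Windows" {
        insmod part_gpt
        insmod fat
        insmod chain
        # replace XXXX-XXXX with the UUID of the Windows EFI system partition (see blkid)
        search --no-floppy --fs-uuid --set=root XXXX-XXXX
        chainloader /EFI/Microsoft/Boot/bootmgfw.efi
    }
    ```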