

I stream music via my own server running Navidrome: the web UI on PC and Symfonium on mobile.
But yeah, you’re right, it’s cheaper for me to buy a few CDs every now and again than to pay a subscription.


I self-host Navidrome for the music streaming (plus the Symfonium app for mobile). It has multi-user and multi-library support.
For the music tagging itself I’ve used beets, Picard, and Kid3 (KDE). Currently I like Picard the most; it took a little bit of learning, but less than beets.


Expired domains can first be bought by registrars, which may then sell or auction them off. For instance, GoDaddy will scoop up a lot of domains and auction them off even if they were originally registered somewhere else. And unfortunately a surname.tld will probably invite domain squatters to try to grab it and then charge much more for it.
You can look into something like DropCatch, which will try to get the domain for you before another registrar does. Look into their backorder service and check the timing to make sure they can still try to catch it.
Regularly check the WHOIS info (via ICANN lookup) to see which registrar currently has it, which can help you determine whether it has gone to auction.
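If you want to automate that check, here’s a rough, untested sketch (my own assumption of how you might script it, not anything DropCatch offers) that queries RDAP, the protocol behind the ICANN lookup page, via the public rdap.org redirector:

```ts
// Sketch: look up the current registrar of a domain via RDAP.
// rdap.org is a public bootstrap service that redirects to the authoritative
// registry. The domain below is a placeholder; run with Deno/Bun or Node 18+ (ESM).
const domain = "example.com";

const res = await fetch(`https://rdap.org/domain/${domain}`);
if (!res.ok) throw new Error(`RDAP lookup failed: ${res.status}`);

const data = await res.json();
// The registrar is the entity whose roles include "registrar"; its display
// name lives in the jCard ("vcardArray") under the "fn" field.
const registrar = (data.entities ?? []).find((e: any) => e.roles?.includes("registrar"));
const fn = registrar?.vcardArray?.[1]?.find((f: any) => f[0] === "fn");
console.log(`${domain} registrar:`, fn?.[3] ?? "unknown");
```

If the registrar name suddenly changes to one of the big auction houses, that’s usually your sign it’s headed to auction.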


Giscus and utteranc.es use GitHub Discussions/Issues for comments, so that might be an option.
For one of my projects I set up Remark42, which does allow anonymous comments. You can also set it up to allow logging in via a few different platforms. Seven months in and no issues.
Isso is another one I had looked at which can do comments without an account.
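Going back to Giscus for a second: the embed is just a script tag you drop on the page. Here’s a minimal sketch of injecting it from TypeScript; all the data-* values are placeholders you’d copy from giscus.app once the app is installed on your repo.

```ts
// Minimal sketch of embedding Giscus by injecting its client script at runtime.
// Repo, repo-id, category, and category-id are placeholders generated by giscus.app.
const giscus = document.createElement("script");
giscus.src = "https://giscus.app/client.js";
giscus.async = true;
giscus.crossOrigin = "anonymous";
giscus.setAttribute("data-repo", "your-user/your-repo");
giscus.setAttribute("data-repo-id", "R_xxxxxxxx");
giscus.setAttribute("data-category", "Comments");
giscus.setAttribute("data-category-id", "DIC_xxxxxxxx");
giscus.setAttribute("data-mapping", "pathname"); // map comment threads to page paths
giscus.setAttribute("data-theme", "preferred_color_scheme");
document.getElementById("comments")?.appendChild(giscus);
```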


Ha! Well, I agree this would’ve been better early on, but I’m glad to see them dig into the details beyond the marketing language.


TFLOPS is a generic measurement, not actual utilization, and not specific to a given type of workload. Not all workloads saturate a GPU equally, and AI models depend on CUDA/Tensor cores: the generation and count of those cores determine how well the card is optimized for AI workloads and how much of those theoretical TFLOPS you can actually use for your task. And yes, AMD uses ROCm, which I didn’t feel I needed to specify since it’s a given (and it’s years behind CUDA in capability). The point is that these cards are not equal, and there are major differences here alone.
I mentioned memory type since the cards you listed use different kinds (HBM vs GDDR), so you can’t just compare capacity alone and expect equal performance.
And again, for your specific use case of a large MoE model you’d need to solve the GPU-to-GPU communication issue (ensuring both the physical connections and sufficient bandwidth so you don’t get bottlenecked).
I think you’re going to need to do an actual analysis of the specific setup you’re proposing. Good luck.


The table you’re referencing leaves out CUDA/Tensor cores (count and generation), which are a big part of these GPUs, and it also doesn’t factor in the type of memory. From the comments it looks like you want to run a large MoE model. You aren’t going to be able to just stack raw compute and expect to run it without major performance deterioration, if it runs at all.
Don’t forget your MoE model needs all-to-all communication for expert routing.


These are just off the top of my head. Best case scenario, the blocking works and the teen never tries to bypass it; they’ll still just move on to “wasting” time on something else. This is treating the symptom, not the root cause.


Pi-hole can set up “groups” with different blocklists. You specify a client by IP or MAC address, so it doesn’t matter what the DHCP server is, as long as the client has a static IP or a static MAC address. My Pi-hole server doesn’t have DHCP set up and I’m able to do this fine.
Though from personal experience this just becomes a game of cat and mouse, and a motivated teenager will find a way to circumvent it. For example, Android can randomize MAC addresses, and IP addresses are trivial to change as well.


Haven’t used all of those, but my recommendation would be to just start trying them. Start small, get a feel for it, and either expand usage or try a different backup solution. You should be able to do automatic backups with any of them, either directly or by setting up your own timer/cron jobs (which is how I do it with rsync).
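If a sketch helps, this is roughly the shape of the cron approach. Most people just put the rsync command straight into a crontab; the wrapper below is only an illustration, and the paths, host, and schedule are all placeholders.

```ts
// backup.ts - rough sketch of a scripted rsync backup you could run from cron,
// e.g. "0 3 * * * node /opt/backup/backup.js" for a nightly 3am run.
// Paths and destination host are placeholders; assumes rsync is on PATH.
import { execFileSync } from "node:child_process";

const src = "/home/me/important-data/";              // trailing slash = sync contents
const dest = "backup-box:/srv/backups/important-data/";

// -a archive mode (permissions, times, symlinks), -v verbose, --delete mirrors deletions
execFileSync("rsync", ["-av", "--delete", src, dest], { stdio: "inherit" });
console.log(`Backup of ${src} to ${dest} finished at ${new Date().toISOString()}`);
```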


I submitted a response, but if I may give some feedback, the second portion brings up:
“I am willing to pay a substantial amount for hardware required for self-hosting.”
This seemed out of place because there were no other value-related questions (iirc). Such as:
I’m sure you could also think of more. But I think it’s pretty important, because between cloud service providers and any non-free apps you want to use, it can be quite costly compared to the cost of some hardware and the time it takes to set things up.
The rest of my responses don’t change, but if you’re wanting to understand the impact of money in all of this, I think some more questions are needed.
Best of luck!


You’re not connected to Wi-Fi or a VPN from the looks of it. Jellyfin is hosted on your local network, so any device you want to access it from needs to be connected to that network. The most direct way is to connect via Wi-Fi. If you want access from outside your house, you’ll need to look into opening a remote connection via something like a Cloudflare Tunnel.


Logseq, to some extent, but it’s set up to be a journal/meeting-notes tool where you tag pages, add documents, etc., so it would come down to how you’ve tagged things. It does have a graph view of your pages and a whiteboard feature.
Personally it wasn’t exactly what I wanted out of a PKM, but it is really powerful. It’s intended to handle taking notes efficiently from meetings and then somewhat self-organizing them, as long as you tag stuff.


Foundry was the 2nd thing I started self-hosting (the first being Pi-hole). I’ve had it running for 5 years now.
Other than that I only recently started expanding my self-hosting:


Without knowing what Reddit is doing, I’m not sure. A JS redirect could be detected, but if OP’s paid shortener service is working, then Reddit is probably working off a simple domain blocklist. In that case you could use throwaway domains.
But JS redirects, proxy responses, etc. could all just become a game of cat and mouse; it depends how motivated either side is. Given how big Reddit is, though, I think you’d have the advantage, at least in the beginning. It just gets expensive, since each time your domain gets blocked you’ll be paying to register a new one.
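To clarify what I mean by a JS redirect: the shortener serves a normal 200 OK HTML page, and a tiny script does the hop client-side, so nothing shows up at the HTTP-redirect level. A minimal sketch (the destination is a placeholder; a real shortener would look it up per slug):

```ts
// Client-side redirect served from a 200 OK page on the shortener domain,
// so a checker that only follows HTTP 3xx redirects never sees the destination.
const destination = "https://lemmy.world/post/12345"; // placeholder target

// replace() keeps the shortener page out of the browser history
window.location.replace(destination);
```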


I’m not familiar with Reddit’s filtering, but have you tried using Cloudflare page rules? You could capture everything after the .tld and forward it to a Lemmy server, so for instance somedomain.tld/12345 could forward to lemmy.world/post/12345. If Reddit is checking links for 301 redirects to Lemmy, though, that wouldn’t work.
A more advanced approach would be to use a Cloudflare Worker to proxy the response so the status code returned is 200 OK instead of a 301 redirect. I haven’t tried that, but I think it would be much harder for them to block, and you could always make the URLs more elaborate so there’s no obvious Lemmy-like structure.
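Something like this is what I’m picturing for the worker. It’s an untested sketch; the lemmy.world target and the bare-numeric path mapping are just assumptions based on the example above.

```ts
// Cloudflare Worker sketch: instead of redirecting, fetch the Lemmy post
// server-side and return its body, so somedomain.tld/12345 answers 200 OK.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const postId = url.pathname.replace(/^\//, ""); // e.g. "12345"

    if (!/^\d+$/.test(postId)) {
      return new Response("Not found", { status: 404 });
    }

    // Proxy the Lemmy post; the visitor's browser only ever talks to our domain.
    const upstream = await fetch(`https://lemmy.world/post/${postId}`);
    return new Response(upstream.body, {
      status: 200,
      headers: { "content-type": upstream.headers.get("content-type") ?? "text/html" },
    });
  },
};
```

One catch: relative links and assets in the proxied HTML would still point at lemmy.world paths, so you’d probably need to rewrite URLs or serve a simplified landing page instead of the raw post.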


I would use Cloudflare Pages (or any forge’s “pages” feature) before using tunnels for a static website.


From a user-experience standpoint, it’s a social media site, like Reddit.
And an ELI5 for the technical parts:


Depends on the programs, but likely statistics if it is a halfway decent program.


I struggled with that, but for me I treated it as the one I’ve been most hyped about this past year.