• 0 Posts
  • 26 Comments
Joined 10 months ago
Cake day: December 14th, 2023




  • My main complaint is that it randomly stops casting to Chromecast in the middle of episodes - then I have to open the app, reconnect, and resume.

    Also, the Chromecast controls frequently stop responding, so I can’t pause what I’m watching - it’s like it disconnects from the Chromecast but keeps playing.

    My partner also complains about lots of bugs on the iOS app.




  • I’m glad to clear it up! It’s a super powerful tool, and I still occasionally skip the automation and just use it for manual searches, since it reduces that process to a single click to search all configured torrent sites and a single click to download, with the rest handled automatically.

    Before, when I was visiting friends and wanted to quickly add something to plex, I needed remote access to my torrent client and separate remote access to my NAS filesystem to move/rename files when downloads finished, which was a really manual process. Now all I need is the reverse-proxied sonarr/radarr UI, since it handles moving/copying/renaming on download completion - and while the UI isn’t mobile-first, it’s very usable and feels less error-prone than moving/renaming files remotely using a file explorer app.


  • I mean, yeah, there’s a lot of stuff it does, but you can pick and choose what you want to use it for, so it depends on what you would find useful - you don’t have to use the full automation. I started just by using it as a read-only way to see what movies I had and in what qualities, and to keep things organized. You can use it as a manual interface for one-off downloads - basically just an interface to search 5 torrent sites in 1 place, where you still pick exactly what you want it to download. You can use it only to rename files to a consistent format. So there are a lot of ways to use the various features of sonarr/radarr besides automatic downloads. You’re not forced to go all-in, and out of the box it doesn’t start automatically downloading anything until you enable that.

    I think it’s a common misconception that if you use sonarr/radarr you have to use download automation and set up trackers, but that’s not the case. It’s a useful library organization tool even if you never have it download anything.


  • Man, that sucks. I must have gotten lucky or something with my setup. I also have trackers go unavailable all the time, but I enabled 8 different ones, and multiple will usually have the same torrent, so it has no problem finding something even if 1 or 2 are down. I also don’t VPN my tracker searches, just my BitTorrent client, so FlareSolverr seems to work fine for me (I only have it enabled for 2 of my trackers, since most of the ones I use don’t seem to require it).

    If you end up trying it out again, I would look into the quality settings and make sure you’re not using the remux quality profile (edit: apparently the default 1080p quality profile has the 1080p remux quality enabled, so this might have been the problem). By default, most of the quality profiles seem to cap out around 100 MB/min, so a 2 hr movie shouldn’t allow anything over about 12 GB. Whenever I tweak quality or custom formats I refer to the TRaSH Guides, which have a lot of battle-tested rules you can copy. I have my main quality profile set to only download qualities between HDTV-720p and Bluray-1080p (which is just below remux), with custom formats copied from the TRaSH Guides set to prefer HEVC with surround sound since I have a 5.1 setup.
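
    Quick sanity check on that math (a throwaway sketch, not actual Radarr config - the per-minute cap and runtime are just the rough numbers mentioned above):

    ```ts
    // Radarr-style quality caps are set in MB per minute of runtime,
    // so the allowed file size scales with movie length.
    const maxMbPerMinute = 100; // rough default cap mentioned above
    const runtimeMinutes = 120; // a 2 hr movie

    const maxSizeGb = (maxMbPerMinute * runtimeMinutes) / 1000; // MB -> GB
    console.log(`Max allowed size: ~${maxSizeGb} GB`); // ~12 GB
    ```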



  • Edit: of course, the below only applies to Chrome and possibly Chrome derivatives - Firefox is keeping MV2

    It’ll make it a lot more likely that YouTube ads will get through, because MV3 limits the block list size to a fraction of what uBO normally uses and also disallows external/live updates to the block list, instead forcing the rules to be baked into the extension. That means an update to the blocking rules could take a week of extension review time to go through. I’ve heard that the YouTube ad blocking rules can update multiple times a day, so this would easily allow Google to update their ad code before updates to ad blockers are approved, letting them always stay ahead.

    So it might not outright break it, but some rules will have to be left off, and it seems like it’ll be a dice roll whether you get an ad because the blocking rule had to be dropped to fit Google’s block list limit, or because the rule you have is stale since the extension update took a couple weeks to be approved on the extension store.

    The change in MV3 that enables all of this is that the extension now hands the complete blocklist over to Chrome, which does the blocking itself and gets to put limits on the blocklist. In MV2, the extension is given a direct hook to do the blocking itself, so it can have an unlimited block list size and can source the blocklist from anywhere. Think of it kind of like the difference between letting a graduation speaker speak off the cuff vs. the school reviewing the speech beforehand and keeping a finger on the mic switch in case you wander off script.

    So the new system technically can be more secure and performant, because the blocklist is reviewed as part of the extension and because poorly written blocker code can’t slow you down (only Google’s optimized matching logic runs) - but that only works if they don’t impose limits lower than what effective ad blockers need (i.e. frequent updates, like daily, and a large blocklist). Plus uBO is really well written for resource usage, so it’s getting crippled even though it’s a shining example of an efficient ad blocker.
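
    To make that concrete, here’s a rough sketch of the two models using the Chrome extension APIs involved (heavily simplified - uBO’s real code does far more, and the hostname/filter here are made up):

    ```ts
    // MV2: the extension gets a blocking hook and decides per request.
    // Arbitrary logic, unlimited list size, updatable from anywhere at runtime.
    const blocked = new Set(["ads.example.com"]); // hypothetical entry
    chrome.webRequest.onBeforeRequest.addListener(
      (details) => ({ cancel: blocked.has(new URL(details.url).hostname) }),
      { urls: ["<all_urls>"] },
      ["blocking"]
    );

    // MV3: the extension hands Chrome declarative rules; Chrome does the
    // blocking and enforces rule-count limits. Most rules must ship as
    // static JSON baked into the extension - only a small dynamic quota
    // like this exists.
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [1],
      addRules: [{
        id: 1,
        priority: 1,
        action: { type: "block" },
        condition: { urlFilter: "||ads.example.com^" }, // hypothetical filter
      }],
    });
    ```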

    Plus there are even more limitations, like certain types of advanced rules that (as far as I understand) are needed for certain tricky sites, but aren’t supported in MV3. The uBO GitHub wiki has some information about this: https://github.com/uBlockOrigin/uBOL-home/wiki/Frequently-asked-questions-(FAQ)#filtering-capabilities-which-cant-be-ported-to-mv3





  • I just discovered how easy Ollama and Open WebUI are to set up, so I’ve been using llama3 locally too - it was like 20 lines in docker compose. And although I’ve been using GPT-3.5 on and off for a long time, I’m much more comfortable using models run locally, so I’ve been playing with it a lot more. It’s also cool being able to easily switch models at any point during a conversation. I have like 15 models downloaded, mostly 7B and a few 13B models, and they all run fast enough on CPU: they generate slightly slower than reading speed and only take ~15-30 seconds to start spitting out a response.
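
    (Side note: Open WebUI is basically a frontend over Ollama’s HTTP API, so you can also hit Ollama directly. A minimal sketch, assuming the default port and that `ollama pull llama3` has been run:)

    ```ts
    // Ask a local Ollama instance for a one-shot completion.
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3",
        prompt: "Explain docker compose in one sentence.",
        stream: false, // one JSON object instead of a token stream
      }),
    });
    const { response } = await res.json();
    console.log(response);
    ```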

    Next I want to set up a VS Code plugin so I can use my own locally run codegen models from within the editor.





  • I only do web development, but my networking knowledge mostly comes from being the designated person to call the ISP for tech support and being in charge of setting up the WiFi in every place I’ve lived, in addition to participating in and running community-scale mesh WiFi tech meetups for many years (think NYCMesh, except just 4 guys who never accomplished much aside from buying and flashing lots of routers with OpenWrt lmao)

    I also ran 12U of homelab for a few years in my basement, powered by an overkill fiber-to-the-home setup (courtesy of tricking Comcast into undercharging me for Gigabit Pro) that necessitated a 10G switch and firewall.


  • Your ISP knows the MAC address of your router, since it requests a public IP from them using DHCP. That’s why, if you contact support, they can usually confirm the brand of your router by doing an OUI lookup.
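
    For anyone curious, an OUI lookup just matches the first three octets of the MAC against the IEEE’s vendor registry. A toy sketch (the two table entries are illustrative - the real registry has tens of thousands):

    ```ts
    // The first three octets (24 bits) of a MAC address are the OUI,
    // which identifies the hardware vendor via the IEEE registry.
    const OUI_VENDORS: Record<string, string> = {
      "3C:37:86": "Netgear",  // illustrative entries; consult the real
      "B4:FB:E4": "Ubiquiti", // IEEE OUI registry for actual assignments
    };

    function vendorFromMac(mac: string): string {
      const oui = mac.toUpperCase().split(":").slice(0, 3).join(":");
      return OUI_VENDORS[oui] ?? "unknown vendor";
    }

    console.log(vendorFromMac("3c:37:86:12:34:56")); // -> "Netgear"
    ```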

    In theory, the FBI could have collected a list of MACs (optionally using an ASN lookup on each public IP to figure out the ISP) and then handed each ISP its list of MACs, which the ISP could associate back to customers to contact. It would only fail for customers who spoof their router’s WAN Ethernet MAC.

    But just patching it is a normal and fine solution imo.