If what you want is an alternative for the SMS on Desktop feature, I’m surprised no one has mentioned https://messages.google.com/web
then the easier method is to install Caddy in Docker and use the containername:containerport method? Did I understand correctly?
Yes, if the only port exposed to the host (or outside) is 443 from the Caddy container, then the only way to access any of those services is HTTPS through Caddy.
I’ve installed Caddy directly on my Ubuntu server, but I admin my Jellyfin (and eventually Nextcloud) with Docker via the CasaOS interface… is this a problem? Do I need to run Caddy in Docker too?
The difference when running Caddy (or any other reverse proxy) in Docker alongside your other apps/services is port exposure. Instead of exposing a port for every container to the host and linking each service to Caddy as localhost:<host-port>, you can put them all on the same Docker network and use <container-name>:<container-port>, exposing only 80 and 443 to the host. That means the only way to reach the apps/services is through Caddy, and if you disable port 80 after configuring SSL certificates, they can only be accessed over HTTPS.
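As a sketch, a compose file along these lines keeps everything on one shared Docker network with only Caddy publishing ports (the service names and image tags are illustrative, not your exact setup):

```yaml
# docker-compose.yml — only Caddy publishes ports to the host
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile

  jellyfin:
    image: jellyfin/jellyfin:latest
    # no "ports:" section — reachable only on the internal Docker network,
    # so the Caddyfile can point at it with: reverse_proxy jellyfin:8096
```

With that layout, nothing can reach Jellyfin directly from the host or LAN; every request has to go through Caddy.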
A friendly reminder that it’s best to wait a bit before updating, in case there are still bugs. It happened a few days ago with Forgejo: a major bug was detected right after the initial release of v13.0.


The TL;DR version of sharing with no license is that, technically speaking, you are not explicitly permitting others to use your code in any way, just allowing them to look. A license is a formal way to give others permission to copy, modify, or use your code.
You don’t need an extra file for the license; you can embed it in a section at the top of your file, as you did with the description. Just add a # License section at the very top. If you want the most permissive one, you can use MIT: you only need to fill in the year of publication of the code, and you can use a pseudonym/username like ‘[email protected]’ if you don’t want to use something that could identify you (an email, a username from another site, or your real name), if that’s a concern.
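As an illustration, an embedded license section at the top of a script could look like this (the filename, year, and name are placeholders you would fill in):

```
# my_script — short description of what the script does
#
# # License
# MIT License
# Copyright (c) 2024 <your-pseudonym>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction...
# (the rest of the standard MIT text continues here, unmodified)
```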


Just wondering, as this is the second post of yours I’ve seen like this: why not use git and a forge (Codeberg, GitLab, GitHub) to publish these projects, with proper file separation, a nice README with descriptions and instructions, and a proper OSS license?


You don’t need to back up all 24TB of your data; you can keep a copy of a subset of your important data on another device. If possible, the best would be a 3-2-1 approach.
“RAID is not a backup” is something that gets mentioned a lot, as you can still lose data on a RAID setup.


Secondary/Failover DNS or any other service that would be nice to have running when the main server is down for any reason.


On your first part, clarifying your intent: I think you are overcomplicating things by expecting traffic to reach the server via domain name (passed through the proxy) from Router A’s network but via IP:port from Router B’s network. You can access everything, from anywhere, through domains and subdomains, and avoid using numbers.
If you can’t set up a DNS directly on Router A, you can set it per device that should reach the server through Router B’s port forwarding: set the laptop to use itself as primary DNS with an external one as secondary, and have any other device on that LAN do the same (laptop as primary). It is a bit tedious to do per device, but still possible.
Wouldn’t this link to the 192.168.0.y address of Router B pass through Router A and loop back to Router B, routing through the slower cable? Or is the router smart enough to realize it’s just talking to itself and cut Router A out of the traffic?
No, the request would stop at Router B and keep all traffic on the 10.0.0.* network; it would not change subnets or anything.
In other words, any device on 10.0.0.* will make a DNS request, asking the router where the DNS server is; the DNS query itself is then sent directly to the server on port 53. Once the DNS response is received, the device queries the server again via the domain, but on port 80 or 443, and receives the HTTP/HTTPS response.
Remember that all my advice so far is so you don’t use any IP or port anywhere, and your experience is seamless on any device using domains and subdomains. The only place where you need to put IPs or ports is on the reverse proxy itself, to tell anything reaching it where each specific app/service is; those services run on different ports but are reached through the reverse proxy on the defaults 80 or 443, so you don’t have to put numbers anywhere else.


If you decide to run the secondary local DNS on the server on Router B’s network, there is no need to loop back: that DNS will handle domain lookups and keep the requests on 10.0.0.x internal to Router B’s network.
On Router B you would then set the server IP as primary DNS, and an external one like Cloudflare or Google as secondary.
You can still add rules on the reverse proxy based on whether the origin IP is from 192.168.0.* or 10.0.0.*, if you see a need to differentiate traffic, but I don’t think that is necessary.


Do yourself a favor and use the default ports for HTTP (80), HTTPS (443), and DNS (53); you are not port forwarding to the internet, so there should be no issues.
That way, you can do URLs like https://app1.home.internal/ and https://app2.home.internal/ without having to add ports on anything outside the reverse proxy.
From what you have described your hardware is connected something like this:
Internet -> Router A (192.168.0.1) -> Laptop (192.168.0.x), Router B (192.168.0.y) -> Server (10.0.0.114)
You could run a single DNS on the laptop (or another device) connected to Router A and point the domain to Router B: redirect, for example, the domain home.internal (I recommend <something>.internal, as that is the TLD intended for this by convention) to the 192.168.0.y IP, and all devices will reach the server via port forwarding.
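As a sketch, assuming dnsmasq as the DNS on the laptop, that redirect is a single rewrite rule (the domain and the 192.168.0.y placeholder are the examples from above):

```
# /etc/dnsmasq.conf on the laptop (192.168.0.x)
# Answer every home.internal / *.home.internal lookup with Router B's address
address=/home.internal/192.168.0.y
# Forward everything else to an external resolver
server=1.1.1.1
```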
If Router B has port forwarding of ports 80 and 443 to the server 10.0.0.114, all requests are going to reach it, no matter which LAN they come from. The devices connected to Router A will reach the server thanks to port forwarding, and the devices on Router B can reach anything connected to Router A’s 192.168.0.* network; they will make an extra hop but still get there.
Both routers would have to point their primary DNS to the laptop’s IP 192.168.0.x (it should be a static IP), and the secondary to either Cloudflare (1.1.1.1) or Google (8.8.8.8).
That setup depends on the laptop (or another device) always being ON and connected to Router A’s network for that DNS to work.
You could run a second DNS on the server for the 10.0.0.* LAN only, but it would not be reachable from Router A, the laptop, or any device on that outer LAN, only from devices directly connected to Router B. The only change would be setting the primary DNS on Router B to the server IP 10.0.0.114 to use that secondary local DNS as primary.
Lots of information, so be sure to read slowly and break it into steps to handle one by one, but this should be the final setup, considering the information you have given.
You should be able to set up the certificates and the reverse proxy using subdomains without much trouble, only using IP:PORT on the reverse proxy itself.


Most routers or devices let you set at least a primary and a secondary DNS resolver (some let you add more), so you could have your local one as primary and an external one like Google or Cloudflare as secondary. That way, if your local DNS resolver is down, queries go directly to the external one and still resolve.
Still. Thanks for the tips. I’ll update the post with the solution once I figure it out.
You are welcome.


It should not be an issue to have everything internal: you can set up a local DNS resolver and configure the device that handles your DHCP (router or other) to hand it out as the default/primary DNS for every device on your network.
To give you some options to investigate, there are: dnsmasq, Technitium, Pi-hole, AdGuard Home. They can resolve external DNS queries and also do domain rewrites/redirection to handle your internal-only domain, pointing it at the device with your reverse proxy.
That way, you can have a local domain like domain.lan or domain.internal that only works and is managed on your Internal network. And can use subdomains as well.
I’m sorry if I’m not making sense. It’s the first time I’m working with webservers. And I genuinely have no idea of what I’m doing. Hell. The whole project has basically been a baptism by fire, since it’s my first proper server.
Don’t worry, we all started out much the same and gradually learned more and more. If you have any questions, a place like this is exactly for that, just ask.


Not all services/apps work well with subdirectories through a reverse proxy.
Some services/apps have a config option to add a prefix to all paths on their side to help with this; others don’t have any such option and always expect the paths after the domain to be unchanged.
But if you need to do path rewriting only on the reverse proxy side, adding or changing a segment of the path, there can be issues when not all path changes go through the proxy.
In your case, Transmission internally doesn’t know about the subdirectory, so even if you can get to the index/login on your first page load, when the app itself changes paths it redirects you to a path without the subdirectory.
Another example of this is with PWAs: when you click a link that should change the path, they don’t reload the page (the action that would force a load through the reverse proxy and thus trigger the rewrite), but instead use JavaScript to rewrite the path locally and manipulate the DOM without triggering a page load.
To be honest, the best way out of this headache is to use subdomains instead of subdirectories. It is the standard these days, precisely to avoid path-rewrite magic that doesn’t work in a bunch of situations.
Yes, it can be annoying to handle SSL certificates if you can’t or don’t want to issue wildcard certificates, but if you can get a cert covering both maindomain.tld and *.maindomain.tld, then you never need to touch it again and can use the same certificate for any service/app you want to host behind the reverse proxy.
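For example, assuming Caddy as the reverse proxy (the domain and upstream ports here are placeholders), each app simply gets its own subdomain block, with no path rewriting anywhere:

```
# Caddyfile — one site block per subdomain
transmission.maindomain.tld {
    reverse_proxy 127.0.0.1:9091
}

app2.maindomain.tld {
    reverse_proxy 127.0.0.1:8080
}
```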


If your concern is IoT devices, TVs, and the like sniffing your local traffic, there are alternatives, and some of them are:


The simplest (really the simplest) would be to do a git init --bare in a directory on one machine; then you can clone, push, or pull from it, using the directory path as the URL on the same machine and SSH from the other. (You could put this bare repo inside a container, but that would really be overcomplicating it.) You would have to init a new bare repo per project, each in its own directory.
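A minimal sketch of that flow (the repo path and the user@host are placeholders):

```shell
# On the "server" machine: one bare repo per project
mkdir -p ~/repos
git init --bare ~/repos/myproject.git

# On the same machine, clone by path
git clone ~/repos/myproject.git

# From another machine, clone over SSH (hypothetical user/host)
# git clone user@host:repos/myproject.git
```

Pushes and pulls then work against that bare repo exactly as they would against a remote forge.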
If by a self-hosted server you mean something with a web UI to handle multiple repositories, with pull requests, issues, etc., like your own local GitHub/GitLab, the answer is Forgejo (this link has the instructions to deploy with Docker). And if you want to see what that looks like, there is an online public instance called Codeberg, where the Forgejo code itself is hosted alongside other projects.


I don’t know if SoftEther has an option to avoid tunneling everything and just use the virtual LAN IPs for games, file transfers, etc.
And I don’t know your actual technical level or that of the people you play with, but for people who can go as far as opening ports, installing a server on their own machine, and getting others to connect to it, I would suggest Headscale (a free, self-hosted implementation of the Tailscale control server) as a next step, or, if you’re inclined to learn something more hands-on, WireGuard.
With those you can configure it so only the desired traffic goes through the tunnel (like games or file sharing using the virtual LAN IPs) while the rest goes out normally, or configure exit nodes so that, if/when desired, all traffic is tunneled like what you have now.
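With plain WireGuard, for instance, split tunneling comes down to what you put in AllowedIPs (all keys, IPs, and the endpoint below are placeholders, not a working config):

```
# /etc/wireguard/wg0.conf on a client — hypothetical values
[Interface]
PrivateKey = <client-private-key>
Address = 10.10.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route only the virtual LAN through the tunnel: games and file sharing
# use the 10.10.0.* IPs, everything else leaves via the normal connection.
AllowedIPs = 10.10.0.0/24
# Use AllowedIPs = 0.0.0.0/0 instead to tunnel all traffic (full tunnel).
```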
If you have any question about Headscale you could ask in [email protected]


This would be my choice as well; I went with Dockge precisely because it works with your existing docker-compose files, and there are no issues whether you manage them through Dockge or the terminal.
If you add Ntfy or Gotify then you should be set.


But I think I’m understanding a bit! I need to literally create a file named “/etc/radicale/config”.
Yes, you will need to create that config file in one of those paths so you can then continue with the configuration steps in the documentation; you can do the Addresses step first.
A second file is needed for the users as well; I would guess the best location is /etc/radicale/users.
For the Authentication part, you will need to install the apache2-utils package with sudo apt-get install apache2-utils to use the htpasswd command to add users
So the command to add the first user would be htpasswd -5 -c /etc/radicale/users user1, replacing user1 with your username (the -c flag creates the file, so omit it when adding further users later).
And what you need to add to the config file so it reads your users file would be:
[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = autodetect
Replacing the path with the one where you created your users file.
Maybe this one -> https://github.com/stan-smith/FossFLOW