

I think Sonarr uses qBittorrent’s API (or that of whichever supported torrent client you’re running) to check download progress, instead of doing periodic scans. Everything else is solid.
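For example (a rough sketch; the host, port, and credentials are placeholders for a default-ish qBittorrent install), you can see the same progress data yourself through the Web API:
# log in and keep the session cookie
curl -s -c cookies.txt --data "username=admin&password=adminadmin" http://localhost:8080/api/v2/auth/login
# list downloading torrents; each entry includes a "progress" field (0.0 to 1.0)
curl -s -b cookies.txt "http://localhost:8080/api/v2/torrents/info?filter=downloading"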
I fuck numbers.
The best way would be to use a VPS to proxy your traffic to you. You can do this pretty cheaply: just set up a WireGuard tunnel to a cheap VPS. That’s exactly how I access all my services from outside my home. As long as the VPS has a publicly accessible IP (most of them do), you being behind CGNAT should not be an issue.
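If it helps, generating a WireGuard key pair on each machine is a one-liner (the private key goes in that machine’s own config, the public key goes in the other side’s peer entry):
wg genkey | tee privatekey | wg pubkey > publickey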
I personally use https://desec.io
For me, the most essential ones are definitely:
I took a look at it. From what I understand, some of the lines in your setup are redundant. The final product seems to do basically the same job as mine. In any case, if it works, it works.
Hey, great post. I have one request: can you maybe add some description of what the iptables entries do? I have a similar setup with a lot fewer iptables rules that works well for me, but I’m not an expert in networking and am now worried that I might be missing something that could leak my home IP.
For media, I host some of the *arr apps, qBittorrent, Jellyfin, gpodder2go, and Navidrome. For personal photos, I host PhotoPrism. I host FileShelter, a file-sharing service, and chhoto-url, a link-shortening service. I host Wiki.js, mostly for recipes and some notes. I’ve recently started hosting Forgejo for my git repos. I also host SageMath for computation; it’s especially useful when I only have my phone with me and need to use it. I use Caddy as a reverse proxy and serve these through a VPS using a WireGuard tunnel.
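The Caddy side is simple; a minimal sketch of a Caddyfile on the home server could look like this (the hostnames are placeholders, and the ports are the Jellyfin and Navidrome defaults; add one block per service):
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}
music.example.com {
    reverse_proxy 127.0.0.1:4533
}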
My setup looks like the following:
/etc/wireguard/wg-vps.conf on the VPS
-----------------------------------------------------
[Interface]
Address = 10.8.0.2/24
ListenPort = 51820
PrivateKey = ********************************************
# packet forwarding
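# (lets the VPS kernel route packets between eth0 and the tunnel interface)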
PreUp = sysctl -w net.ipv4.ip_forward=1
# port forwarding 80 and 443
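# (DNAT rewrites the destination of packets arriving on the VPS's public interface, eth0,
#  so traffic to ports 80/443 gets routed down the tunnel to the home server at 10.8.0.1)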
PreUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.8.0.1:80
PreUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.1:443
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.8.0.1:80
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.1:443
# packet masquerading
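# (rewrites the source of packets forwarded into the tunnel to the VPS's tunnel IP, 10.8.0.2,
#  so the home server's replies come back through the tunnel; the home server then sees the
#  VPS as the client IP rather than the original visitor)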
PreUp = iptables -t nat -A POSTROUTING -o wg-vps -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o wg-vps -j MASQUERADE
[Peer]
PublicKey = ********************************************
AllowedIPs = 10.8.0.1
/etc/wireguard/wg-vps.conf on my home-server
---------------------------------------------------------------
[Interface]
Address = 10.8.0.1/24
PrivateKey = ********************************************
[Peer]
PublicKey = ********************************************
AllowedIPs = 10.8.0.2
Endpoint = <VPS-DDNS>:51820
PersistentKeepalive = 25
Now, just enable the tunnel using sudo systemctl enable --now wg-quick@wg-vps. Make sure that ports 51820, 80, and 443 are open on the VPS. Then allow 80 and 443 through the firewall on the home-server (not on the router, just allow them locally), and it should work.
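For illustration, assuming both firewalls are ufw (adjust the commands if you use firewalld, nftables, or something else):
# on the VPS
sudo ufw allow 51820/udp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# on the home-server
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp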
I’m afraid I don’t have any guides. But you’re halfway there anyway. Which of these methods do you prefer? I can maybe give you some pointers.
I have a WireGuard tunnel set up between my home server and the VPS, with persistent keepalive. The public domain name points to the VPS, and I have it set up (simply using iptables) so that any traffic arriving there on ports 80 and 443 is sent back to my home server, where it’s handled by Caddy and routed to the actual service.
The only ports I need to open are 80 and 443 on my VPS to make this setup work, so no open ports on my local machine. This does, however, require you to pay for a VPS. Since you aren’t doing much on it, you can get away with a cheap one; I have a $12/year VPS from RackNerd that I use for this job.
For completely free options, you can do one of three things (that I can think of; there are probably more ways).
P.S. If you need help setting any of these up, lmk.
I currently use Wiki.js, but it’s a bit too much; the image size is around 500 MB. I don’t see why I need such a huge program to host what are essentially text files and some images.
From the comments, DokuWiki with a modern theme, Fossil-SCM, and MkDocs seem nice. I’ll probably try some of these during the weekend.
Hadn’t heard of it before. Looks promising, thank you.
Try out FileShelter. It’s super lightweight and works pretty reliably.
I don’t, because I’d just switch it out for something better if something like that happened.
Nowadays, I build them locally and upload stable releases to the registry. In the past I used GitHub runners to do it, but building locally is just easier and faster for testing.
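As a sketch (the registry, image name, and tag are placeholders), the local workflow is basically just:
docker build -t registry.example.com/myuser/myapp:1.2.3 .
docker push registry.example.com/myuser/myapp:1.2.3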
A great option that I personally use is FileShelter. It’s super light and seems to perform very well.
No, unfortunately.
I already have an external HDD. Maybe I can try the setup out with that.
Caddy is the way.