
  • A bit of redundancy is key.

    I have my primary DNS, Pi-hole, running on an RPi that’s dedicated to it, plus a second backup instance running in a Docker container on my main server machine.

    Nebula-Sync keeps the two synchronized with each other, so if a change is made on one (things like local DNS records or changes to blocklists), it automatically syncs to the other.

    If either one goes down (dead SD cards, me playing with things, power surges, whatever), the other picks up the slack until I fix the broken one, which is usually little more than a re-install, then manually syncing them using Pi-hole’s ‘Teleporter’ settings. Worst case, restore a backup (that you’re definitely taking. Regularly. Right?).

    Both Pi-holes use cloudflared (here’s their guide *edit: I see I’ll have to find a new method for this… Just going to pin the containers to tag ‘2025.11.1’ for now) to translate ALL DNS traffic into DoH traffic, encrypting it and using the provider of my choice instead of my ISP or any other plain DNS. The router hands out both local DNS IPs with DHCP, and port 53 outbound (plain DNS) is blocked at the router, so all LAN devices MUST use the local DNS or their own DoH config. Plain DNS won’t make it out.

    DNS ad-blocking isn’t perfect, but it’s a really nice tool to have. Having an internal DNS to resolve names for local-only services is also super handy. Most of my subdomains are only used internally, so Pi-hole handles those DNS records, while external DNS only has the records for publicly accessible things.
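
    For anyone wanting to try the same pairing, the container side looks roughly like this. This is a minimal sketch, not my exact files: the tags, timezone, ports and upstream are placeholders you’d adjust to your own setup.

    ```yaml
    services:
      cloudflared:
        image: cloudflare/cloudflared:latest   # pin a fixed tag (e.g. 2025.11.1, per the edit above) if an update breaks things
        # proxy-dns turns plain DNS queries into DoH requests to the chosen upstream
        command: proxy-dns --address 0.0.0.0 --port 5053 --upstream https://1.1.1.1/dns-query
        restart: unless-stopped

      pihole:
        image: pihole/pihole:latest
        environment:
          TZ: "America/Toronto"                # placeholder
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "8080:80/tcp"                      # admin UI
        depends_on:
          - cloudflared
        restart: unless-stopped
        # In the Pi-hole admin settings, point the sole custom upstream DNS at the
        # cloudflared container on port 5053 (e.g. 172.x.x.x#5053, or cloudflared#5053
        # if your Pi-hole version accepts hostnames) so every forwarded query leaves
        # the box as DoH.
    ```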


  • I have the same issue with Immich on Android. It pretty much never uploads files until I manually open the app; then the app refuses to acknowledge it has uploaded those new files until it’s closed and re-opened :( (power saving is set to unrestricted in Android, and background data usage is allowed. I’ve been through troubleshooting very thoroughly; it just doesn’t work.)

    FolderSync has been the only reliable (non-root) backup solution I’ve used. It’s set to monitor my image folders for changes and upload any new files as soon as they’re created; this works ~85% of the time. It’s also set with a few schedules to check for changes every 3 hrs, backing up everything on the phone the app can access; this catches anything the on-change/on-creation file detection misses, while also backing up more data than just my images. I have yet to see that fail after ~3 years.




  • I only bring it up because you explicitly said you have no idea why it doesn’t work.

    Take things at a comfortable pace; there’s no sense overwhelming yourself, or you just forget what you’ve done and end up lost in your own maze.

    I started with Plex myself, almost 10 years ago. Moved to Emby, where I learned about buying a domain, setting up SSL through a reverse proxy, and just continued to explore from there. Today I run ~26 containers/projects across three systems, and I’m always keeping an eye out for interesting new things.

    Best of luck with your journey m8.


  • Sounds like you’re behind CGNAT, which essentially means there’s another router, owned by your ISP, sitting between yours and the open internet. That router would also need port forwarding configured, and your ISP will never do that for you.

    It complicates things, but the solutions are tools like Tailscale or Cloudflare Tunnels, or renting a VPS just to host a proxy/VPN.

    Plex solves this by using their own public servers as a proxy for you, but that’s also part of how they keep control over your users/server/data (such as blocking remote streaming)… which makes more than a few people uncomfortable.
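
    If you go the Cloudflare Tunnel route, the connector is just one more container that dials out to Cloudflare, so CGNAT stops mattering. A minimal sketch, assuming you’ve already created a tunnel in the Zero Trust dashboard (the token below is a placeholder):

    ```yaml
    services:
      cloudflared:
        image: cloudflare/cloudflared:latest
        # Outbound-only connection to Cloudflare's edge, so no port forwarding
        # (and no cooperation from the ISP about CGNAT) is needed. Which public
        # hostnames map to which internal services is configured in the dashboard.
        command: tunnel --no-autoupdate run
        environment:
          TUNNEL_TOKEN: "<token from the Zero Trust dashboard>"
        restart: unless-stopped
    ```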



  • Plex has an automatic proxy service hosted on their public servers. If you haven’t, or can’t, configure port forwarding correctly, Plex will route the connection through their own servers.

    The problem is, that also means Plex (the company) has total control over your server and the data sent between it and clients, if they so choose. Anything from quietly logging the data sent back and forth, to controlling who can connect and what they can do while they’re connected.

    Jellyfin has to be correctly exposed to the internet via port forwarding or tools like Tailscale/a VPN, but it’s entirely your server under your control. You have ultimate control over how your server can be accessed, but that also means you’re responsible for actually setting that up.
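
    For contrast, a bare-bones Jellyfin service is just this (paths and ports are placeholders). Publishing the port here only makes it reachable on your LAN; reaching it from outside still depends on you forwarding/proxying that port at the router or putting a VPN in front, because nothing relays traffic for you the way Plex’s servers do.

    ```yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        ports:
          - "8096:8096"            # reachable from outside only if you forward/proxy this yourself
        volumes:
          - ./config:/config
          - /mnt/media:/media:ro   # placeholder media path
        restart: unless-stopped
    ```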







  • This comment prompted me to dig a little deeper. I looked at the history for each show where I’ve had failed downloads from those groups.

    For SuccessfulCrab: any time a release has come from a torrent tracker (I only have free public torrent trackers), it’s been garbage. I have, however, had a number of perfectly fine downloads with that group label whenever they were retrieved from NZBgeek. I’ve narrowed that filter to block the string ‘SuccessfulCrab’ on all torrent trackers but allow NZBs. Perhaps there’s an impersonator trying to smear them or something, idk.

    ELiTE, on the other hand: I’ve only got history of grabbing their torrents, and every one of them was trash. That’s going to stay blocked everywhere.


    The ‘block potentially dangerous’ setting is interesting, but what exactly is it looking for? The torrent client is already set not to download file types I don’t want, so will it recognize and remove torrents that are left empty (everything marked ‘do not download’)? I’m having a hard time finding documentation for that.






  • To be perfectly honest, auto updates aren’t really necessary; I’m just lazy and like automation. One less thing I’ve gotta remember to do regularly.

    I find it kind of fun to discover and explore new features on my own as they appear. If I need documentation, it’s (usually…) there, but I’d rather just explore. There are a few projects where I’m avidly following the forums/git pages so I’m at least aware of certain upcoming features; others update whenever they feel like it, and I’ll see what’s new next time I happen to be messing with them.

    Watchtower notifies me whenever it updates something so I’ve at least got a history log.
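
    The notification side is just environment variables on the Watchtower container. A sketch with placeholder SMTP details (the env var names come from the containrrr/watchtower docs; your mail server values will obviously differ):

    ```yaml
    services:
      watchtower:
        image: containrrr/watchtower:latest
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          WATCHTOWER_SCHEDULE: "0 0 5 * * *"               # 6-field cron: update daily at 05:00
          WATCHTOWER_CLEANUP: "true"                       # remove old images after updating
          WATCHTOWER_NOTIFICATIONS: "email"                # email a log of what changed
          WATCHTOWER_NOTIFICATION_EMAIL_FROM: "watchtower@example.com"
          WATCHTOWER_NOTIFICATION_EMAIL_TO: "me@example.com"
          WATCHTOWER_NOTIFICATION_EMAIL_SERVER: "smtp.example.com"
          WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: "587"
        restart: unless-stopped
    ```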


  • I’ve had Immich auto-updating alongside around 36 other Docker containers for at least a year now. I’ve very, very rarely had issues, and I just attach specific version tags to the things that have caused problems. Redis and Postgres, for example, have fixed version tags in both Immich and Paperless-ngx, because upgrading the old databases takes manual work. The main projects, though, have always auto-updated just fine for me.

    The reason I don’t really worry about it: Solid backups.

    BorgBackup runs in the early AM, shortly before Watchtower updates almost all of my containers, so a backup of the entire system (not including bulk storage) is taken first.

    If I get up in the morning and find a service isn’t responding (Uptime-Kuma notifies me via email if it can’t reach any container or service), I’ll mess with it and try to get the update working (I’ve only actually had to do this once so far; the rest has updated smoothly). Failing that, I can just extract yesterday’s data from the most recent backup and restore a previous version.

    Because of Borg’s compression and de-duplication, successive backups of the same system can be stored in an absurdly small amount of space. I currently have 22 backups of ~532 GB each, going back a full year, stored in 474 GB of disk space. Raw, that’d be ~11.8 TB.
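
    The “pin the databases, let everything else float” part is just fixed image tags plus a label telling Watchtower to skip those containers. A rough sketch with example service names and tags, not my exact compose files:

    ```yaml
    services:
      paperless-db:
        image: postgres:16        # pinned major version; upgrade manually with a dump/restore
        labels:
          com.centurylinklabs.watchtower.enable: "false"   # tell Watchtower to skip it entirely
      paperless-redis:
        image: redis:7            # pinned, same reasoning
        labels:
          com.centurylinklabs.watchtower.enable: "false"
    ```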