

… or just publish the source on Github and let someone else continue the legacy.
Not with this setup, no. I specifically didn’t want The Algorithm™ involved.
It’s much more lightweight, handles Plex integration much better and automatically cuts out ads, promotions, etc.
I moved from TubeArchivist to Pinchflat. Very good.
Containers are just processes with flags. Those flags isolate the process’s filesystem, memory [1], etc.
The advantage of containers is that software dependencies can be unique per container and not conflict with others. There are no significant disadvantages.
Without containers, if software A has the same dependency as software B but needs a different version of it, you’ll have issues.
[1] These all depend on how the containers are configured. These are not hard isolation but better than just running on the bare OS.
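The “processes with flags” point is visible directly on a Linux host: every process records its namespace memberships under /proc, and a container runtime is essentially starting a process in fresh namespaces. A minimal sketch (Linux only):

```shell
# Each Linux process lists its namespace memberships under /proc.
# A container is just a process started in *new* namespaces, so
# these links would point at different IDs for a containerized
# process than for your login shell.
readlink /proc/self/ns/pid /proc/self/ns/net /proc/self/ns/mnt
```

You can reproduce part of the isolation by hand with util-linux, e.g. `unshare --fork --pid --mount-proc sh` (usually needs root) gives you a shell whose `ps` only sees its own PID namespace, no container runtime involved.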
It sounds like your port forwarding settings weren’t saved and the reboot has gone back to a previous configuration.
I use Rallly.
Ahhh… very good. I avoided all this by running Pihole on its own IP on the LAN using a bridged interface from the host.
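For reference, the “own IP on the LAN via a bridged interface” setup can be done with Docker’s macvlan driver. A hedged sketch in compose form, where the interface name, subnet, and addresses are all placeholders you would swap for your own LAN’s values:

```yaml
# Hypothetical example: Pihole with its own LAN address via macvlan.
services:
  pihole:
    image: pihole/pihole
    networks:
      lan:
        ipv4_address: 192.168.1.53   # placeholder LAN IP
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                   # placeholder host NIC
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```

One caveat of macvlan: by default the host itself can’t reach the container’s IP, which is fine for a DNS server that only the rest of the LAN talks to.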
This post from Stack Exchange might help you, switching 80 for 53, of course.
You don’t need UDP on port 80 forwarded through. HTTP is TCP only.
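In compose terms, that means the HTTP mapping only needs a TCP rule, whereas a DNS service like Pihole needs both transports. A hypothetical port section:

```yaml
# HTTP is TCP only; DNS needs TCP and UDP.
ports:
  - "80:80/tcp"
  - "53:53/tcp"
  - "53:53/udp"
```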
Nextcloud does all of this.
Wrong type of POST.
Yes, you can.
Without going into specifics, you need to share the network of your DB stack with the stack of the client containers.
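One common way to do this with compose (all names here are hypothetical) is to give the DB stack’s network a fixed name and declare it `external` in the client stack, so both stacks attach to the same network:

```yaml
# db-stack/docker-compose.yml
services:
  db:
    image: postgres:16
    networks: [shared-db]
networks:
  shared-db:
    name: shared-db      # fixed name so other stacks can find it
```

```yaml
# client-stack/docker-compose.yml
services:
  app:
    image: my-app        # placeholder image
    networks: [shared-db]
networks:
  shared-db:
    external: true       # join the network the DB stack created
```

The client can then reach the database by service name, e.g. `db:5432`.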
Rather than NFS, perhaps iSCSI would be a better fit.
Use a single reverse proxy on that one port… it can then route the requests to the various back ends.
You probably want something that’s Docker-native like Traefik or Caddy.
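Since Caddy was mentioned, a minimal sketch of host-based routing in a Caddyfile (hostnames, container names, and ports are all placeholders):

```caddyfile
# One proxy on ports 80/443 routes each hostname to its backend.
jellyfin.example.com {
    reverse_proxy jellyfin:8096
}
nextcloud.example.com {
    reverse_proxy nextcloud:80
}
```

For public hostnames, Caddy also obtains and renews TLS certificates automatically, so the backends only ever see plain HTTP on the internal network.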
This is true, but if you’re self-hosting it’s not much bother to add additional copies of a bridge for other users (granted, it’s not ideal).
Haha. Oops. I should have checked first. Well done.