

It means you published port 8080. Just stop doing that. nginx can reach the container over the internal Docker network (assuming they are on the same network). Posting your docker-compose file would help.
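A minimal sketch of what that looks like in a compose file — service names (`proxy`, `app`), the network name, and the image are all illustrative placeholders, not from the original thread:

```yaml
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"        # only the proxy is published to the host
    networks:
      - web
  app:
    image: my-app      # placeholder image name
    # no "ports:" section -- nginx reaches it as http://app:8080
    # over the shared network, and 8080 stays unreachable from outside
    networks:
      - web

networks:
  web:
```

In the nginx config you would then `proxy_pass http://app:8080;` and drop the published port entirely.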
Just an HDD in a USB caddy? IMHO good enough for a fourth backup tier.
just a normal PC? Streaming should work in a browser.
Just a folder + Syncthing. No extra infra necessary, and it's easy to back up.
Get yourself an old Cisco 3600, re-flash it with the standalone (autonomous) firmware, and you get an enterprise-class WAP for cheap.
Yes, that is what I got from reading about that server backplane. The NVMe part will need another "controller".
What is wrong with good old LSI SAS2008-based HBAs with a SAS-to-OCuLink cable? And for the NVMe part, the adapter is just a port format converter, so pick any.
With RAID 10 I would not risk it. With RAID 6 (obviously not on BTRFS) it is fair game, if you have a solid return policy for drives which are DOA. Go for SAS drives, they are cheaper (but generally hotter and noisier). And look for new-old-stock on specialized sites; no one in enterprise needs, say, 8 TB drives, so they sell them cheap at times.
Get the drive, connect it, and run a long SMART self-test (for 18 TB it will probably take a day). If it passes, you can be reasonably sure it will not die soon. And keep running these tests regularly; as soon as they start failing, replace the drive.
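With smartmontools that routine looks roughly like this — `/dev/sdX` is a placeholder for your actual device:

```shell
# Kick off the long (extended) self-test; the command returns
# immediately, the test runs inside the drive's firmware
smartctl -t long /dev/sdX

# Poll progress later (also shown in the full -a output)
smartctl -a /dev/sdX | grep -A1 'Self-test execution'

# Inspect the self-test log; look for "Completed without error"
smartctl -l selftest /dev/sdX
```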
I do not see why it would cause any problems, apart from stacking another mapping layer. I wonder whether LVM can do it natively without adding an intermediate block device of 2 x 2G?
Stay with TP-Link. Ubiquiti has done some strange things recently.
Many sell them; some just wipe them, some still contain encrypted data. If you are happy with a plain used drive, eBay is full of surprises.
Why create yourself a headache and still get a substandard, no-warranty drive? If you want cheaper drives, go for reconditioned/refurbished/used ones. Same risks, better product. Old enterprise SAS drives are cheap and many still have plenty of health left in them.
Syncthing syncs files; that is all it does.
Look at other orchestration solutions too, like Salt. If you need to manage a lot of servers, it is a lifesaver. Setting up is only the first step.
Usually just plugging/unplugging it a couple of times is enough. No fancy chemicals.
Run a long SMART test on the disk and check the SMART data after it finishes. The other possibility is that the ZFS pool is nearly full.
Depends on what you are doing. Something like keeping the base OS patched is pretty much zero effort. Some apps are more problematic than others: Home Assistant is always a pain to upgrade, while something like Postfix requires nearly zero maintenance.
A circular dependency seems to be the case. I guess adding a second, external resolver to /etc/resolv.conf will help; the second entry will not be used unless the first one (Pi-hole) stops responding. But it needs to be tested.
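A sketch of such an /etc/resolv.conf — the addresses are placeholders, and note that the glibc resolver only falls back to the second entry after the first times out:

```
# /etc/resolv.conf -- addresses are examples only
nameserver 192.168.1.2    # Pi-hole (assumed LAN address)
nameserver 1.1.1.1        # external fallback, queried only on timeout
options timeout:2 attempts:2   # optional: fail over faster than the 5s default
```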
BTW, why do you want to send the host's DNS via Pi-hole?
What exactly do you mean by subdomains?
Any DNS provider will support adding NS entries for subdomains if you want to host your sub-zone somewhere else. And any should allow you to use names with "." in them for a "fake" sub-zone, like
a.subzone1 IN A x.x.x.x
a.subzone2 IN A y.y.y.y
If you need something that can withstand some bitrot on a single drive, just use par2. As long as the filesystem is readable, you can recover files even if bits of data get corrupted.
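With par2cmdline that workflow is roughly this — `important.tar` is a placeholder filename, and 10% redundancy is just an example:

```shell
# Create recovery blocks with 10% redundancy alongside the file
par2 create -r10 important.tar

# Later: verify integrity, and repair if damage is found
par2 verify important.tar.par2
par2 repair important.tar.par2
```

The .par2 files can live next to the data on the same drive; as long as the corruption stays below the redundancy level, repair reconstructs the original bytes.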