

Is this for hardware RAID controllers, or have you seen software RAID like LVM or ZFS exhibit the same drop-out behavior? I personally haven’t, but it would be nice to know what to watch for with future drives.
Does Backblaze work for what you are doing? It’s been a bit since I’ve price-compared them, but I think it was around $5 a month per TB?
There’s also Bitmagnet, if you’d like a local tracker for the Arr stack.
If you run it in podman, podman can export the container into a Kubernetes YAML file, but it’s been a long time since I’ve tried it. podman kube generate $CONTAINERNAME
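If it helps, a rough sketch of the round trip (the container name is a placeholder, and on older podman versions the second command is spelled podman play kube):

# Export the container's definition as Kubernetes YAML
podman kube generate mycontainer > mycontainer.yaml
# Recreate it (e.g. on another host) from that YAML
podman kube play mycontainer.yaml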
Is podman-compose really dead? Their GitHub page looks active at a glance. The tooling is so similar that I use podman for local testing and deploy to docker, but I’ve also done the reverse. As long as you’re not using really exotic parameters, it’s basically a drop-in replacement; I’ve even used GPU passthrough for AI projects with no problem in both docker and podman. At the end of the day, they’re just slightly different frontends for the same backend.
As far as docker support goes, it’s often as simple as just providing a Dockerfile, which is basically the same thing as your build scripts. These days I often use the Dockerfile INSTEAD of the readme to figure out how to compile some projects.
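As a rough illustration, a minimal Dockerfile for a hypothetical CMake project (image, package, and target names are made up):

# The dependency list doubles as "how to compile" documentation
FROM debian:bookworm
RUN apt-get update && apt-get install -y build-essential cmake
COPY . /src
WORKDIR /src
# The exact build steps a readme would otherwise have to spell out
RUN cmake -B build && cmake --build build
CMD ["./build/myapp"]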
Totally reasonable. Something like LVM can at least get you to a raid1 setup pretty easily.
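As a sketch, assuming two hypothetical partitions /dev/sda1 and /dev/sdb1:

# Register both disks with LVM and pool them into one volume group
pvcreate /dev/sda1 /dev/sdb1
vgcreate vg0 /dev/sda1 /dev/sdb1
# raid1 with one mirror: the LV survives a single-disk failure
lvcreate --type raid1 -m 1 -L 100G -n data vg0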
Raid0 (combining both drives’ capacities) is not really tiered storage. You would want Raid1 (each drive is a copy of the other drive), but doing this still isn’t a backup. How will you be monitoring the drives so that you know if one of them actually fails?
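To be concrete, monitoring can be as simple as a periodic SMART health check plus the array status; a hedged sketch assuming smartmontools and an mdadm array at /dev/md0 (device names are hypothetical):

# Overall SMART health verdict for one drive
smartctl -H /dev/sda
# Shows degraded/failed members of the array
mdadm --detail /dev/md0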
I don’t think the RPi has a new enough kernel, but with bcachefs you can do tiered storage: combine the SSD + hard drives into a single block device, make the SSD the read/write cache, and give the whole pool replicas=2, so that if one drive dies you still have the other drive to fail over to. Do be aware this setup is still not a backup, however.
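A rough sketch of that layout (device names and the mount point are placeholders, and bcachefs flags have shifted between versions, so check your man page):

bcachefs format --replicas=2 \
    --label=ssd.ssd1 /dev/nvme0n1 \
    --label=hdd.hdd1 /dev/sda \
    --label=hdd.hdd2 /dev/sdb \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd
# All member devices mount together as a single filesystem
mount -t bcachefs /dev/nvme0n1:/dev/sda:/dev/sdb /mnt/pool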
It does make sense. Thank you. I appreciate the link!
However, my cloud usage is purely as a proxy/load balancer; none of my cloud providers hold any actual data. They’re just routing traffic, and all data/processing is on premises. What I’m interested in is how to set up something like what you describe, but on premises as well. From a design standpoint, if I wanted to protect myself from a ransomware attack, my cloud backups would obviously be lost too, since the backup target is a mounted filesystem at some point during every backup. So I don’t know how to wrap my head around handling this, purely storage-design-wise; the specific tools I can figure out. How does one create a recovery point and keep it safe from something like this? Just image the entire filesystem from a live-booted offline environment? Feels like a chicken-and-egg problem to me.
I’ve thought about how I could handle disaster recovery for my homelab environment, but I haven’t come to any good solutions. For example, say my main concern is being hit by crypto. I can’t just recover from a regular backup, since I’m not sure how to make a backup without that backup just being encrypted alongside everything else. I mainly back everything up to my file server, which is then synced to the cloud, so in that setup my cloud backups would be lost as well.
Would you have some starting points on how others handle disaster recovery? I’d like to avoid manually making an offline backup, because inevitably I’d forget to do it, which would make it useless anyway.
I don’t know if this is what you are looking for, but I used :z with podman volume mounts and it Just Works*.
podman run -d -v /dir:/var/lib/dir:z image
From the documentation: :z relabels the volume so it can be shared between containers, while :Z relabels it for exclusive use by a single container.
I use a Pi4 to run one of my HAproxy nodes. It does die once in a while from not getting enough power, because my power brick is pretty old at this point. Other than that it’s great. I used to have a cluster of Pi3s, but I’m transitioning cluster management systems, so they aren’t doing anything right now. I recently got a Lichee Pi, and that will most likely replace them once I get it all working.
It is primarily an HTTP server; its ability to act as an HTTP reverse proxy is a product of that. Apache can do the same thing, it’s just less common to see it used that way.
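For illustration, the Apache equivalent is a few lines of mod_proxy config (hostname and backend address are made up):

# Requires mod_proxy and mod_proxy_http to be enabled
<VirtualHost *:80>
    ServerName service.example.com
    ProxyPass        "/" "http://127.0.0.1:8080/"
    ProxyPassReverse "/" "http://127.0.0.1:8080/"
</VirtualHost>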
Probably not the ‘recommended’ way, but I use a self-signed cert for each service I’m running, generated dynamically on each run, with nginx as a reverse proxy. Then I use HAproxy and DNS SRV records to connect to each of those services. HAproxy uses a wildcard cert (*.domain.tld) for the real domain and host mapping for each subdomain (service1.domain.tld).
This way every service has its traffic encrypted between HAproxy and the actual service, and the frontend traffic is encrypted with a browser-valid cert. I only need to actually manage one cert: the HAproxy one. It’s worked great for me for a couple of years now.
Edit: I’ve been running this setup for about 50 services, mostly accessed over LAN/VPN.
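Not my exact config, but a rough sketch of the HAproxy side under those assumptions (names, address, and cert path are made up):

frontend https_in
    # One wildcard cert (*.domain.tld) terminates all frontend TLS
    bind *:443 ssl crt /etc/haproxy/certs/wildcard.domain.tld.pem
    use_backend svc1 if { req.hdr(host) -i service1.domain.tld }

backend svc1
    # Backend speaks TLS with its self-signed cert, so skip verification
    server s1 10.0.0.11:443 ssl verify none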
The main thing for this is cost. I don’t really know what performance specs I would need from a VPS to get reasonably good network performance with ~10 devices, though I’m guessing I’ll need something <=10 Gbps. So maybe $25-$30/month, depending on who I buy a VPS through?
Would EACH of your devices have their own dedicated gigabit connection to your server? Even so, are you the only user, or is this for some family members also? If it’s just you, nine times out of ten you can just get a basic $5-or-less gigabit VPS. You’d much more often be limited by your outbound connection than by your VPS networking, by a considerable margin. Most things you are connecting to won’t saturate even a gigabit connection, so you’d be well under your bandwidth requirements.
Totally, you can easily do *.test.yourdomain.com and that works just fine with certbot. I’ve never used Cloudflare, but I’d assume the same setup should work.
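For reference, a hedged sketch using certbot’s manual DNS challenge (the domain is a placeholder; wildcard issuance requires DNS-01, and a DNS plugin can automate the TXT record):

certbot certonly --manual --preferred-challenges dns -d "*.test.yourdomain.com"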
openSUSE and its commercial sibling have defaulted to btrfs for almost a decade. The “btrfs is beta” meme is a dead horse. It’s a great filesystem for what it was designed to do.
Last I checked, a wildcard cert for *.yourdomain.com is NOT valid for test.local.yourdomain.com, but IS valid for test.yourdomain.com. Wildcards only match a single label; they are not recursive as far as I know.
Arduinos can’t really handle video encoding and presence detection on board. A laptop is extreme overkill; as I said in my post, I don’t want a battery, screen, keyboard, or hinges, and fans are a deal breaker. Old laptops are bulky and heavy, with proprietary power bricks that are never cross-compatible with each other. A laptop and an SBC are just totally different markets, used for totally different things.
Friends! Mixed architectures are always exciting, I cannot wait for something as standardized as the Pi for RISC-V.
Right, I did hear about that lawsuit way back when, I just didn’t know of these types of consequences. Very appreciated, especially the sources.