

Obligatory “RAID is not backup” PSA statement
Ho… Ly… Shit… This is great! The UI is a bit confusing at first, but it doesn’t take long to get what’s going on. I might even be disappointed with a UI revamp 😁 I can’t believe how much functionality this has. It’s already replacing some processes I have for mounting drives and backing up files. Maybe I missed something, but my only complaint would be the lack of an automatic one-way folder sync in the Party UP! app.
I’m blown away, great job!
I use Forgejo mostly for code archiving, but for anything that requires CI/CD, like personal code projects, I use OneDev. No extra setup for pipelines, batteries included.
You probably don’t want your server maxing out all day; your electricity bill will thank you.
Not sure if this will work for you, but I keep my homelab documentation in markdown, mainly edited with Obsidian. I wanted an easy way to access it via the web and found Perlite. I have it pointed at a notes folder on my server, which is auto-updated with Syncthing. No fuss, just works.
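For anyone curious, it’s basically just one compose service with the synced folder mounted in. A minimal sketch — the Perlite image name and its internal notes path are assumptions from memory, so verify them against the Perlite repo:

services:
  perlite:
    image: v4len/perlite   # assumed image name, check the Perlite docs
    ports:
      - 8080:80
    volumes:
      # the same folder Syncthing keeps up to date, mounted read-only
      - /srv/notes:/var/www/perlite/Notes:ro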
I’m using Kopia with AWS S3 for about 400GB and it runs a bit less than $4/mo. If you set up a .storageconfig file, it lets you pick a storage class based on file name prefixes. Kopia conveniently makes the less frequently accessed files begin with “p”, so you can drop those to the infrequent-access class while files that are accessed more often stay in standard storage:
{
  "blobOptions": [
    {
      "prefix": "p",
      "storageClass": "STANDARD_IA"
    },
    {
      "storageClass": "STANDARD"
    }
  ]
}
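If it helps: IIRC the .storageconfig file goes in the root of the bucket itself, and the repository is otherwise a normal S3 one. Roughly like this, with the bucket name and credentials as placeholders:

kopia repository create s3 \
  --bucket my-kopia-backups \
  --access-key AKIA... \
  --secret-access-key ...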
I’ve been using OneDev. It’s really easy to set up, kinda just works out of the box
I use it in a homelab, I don’t need to apply prod/team/high-availability solutions to my Audiobookshelf or Mealie servers. If an upgrade goes wrong, I’ll restore from backup. Honestly, in the handful of years I’ve been doing this, only one upgrade of an Immich container caused me trouble, and even then I just needed to change something in the compose file and that was it.
I get using these strategies if you’re hosting something important or just want to play with new shiny stuff, but, in my humble opinion, any extra effort or innovation in a homelab should be spent on backups. It’s all fun and games until your data goes poof!
Komodo is a big topic, so I’ll leave this here: komo.do.
In a nutshell, though, all of Komodo is backed by a TOML-based config. You can get the config for your entire setup from a button on the dashboard. If you have all of your compose files inline (using the editor in the UI) and you version control this file, you can basically spin up your entire environment from config (thus my Terraform/Cloudformation comparison). You can then either edit the file and commit, which allows a “Resource Sync” to pick it up and apply the changes to the system, or you can enable “managed mode” and commit changes from the UI to the repo.
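To make that concrete, here’s roughly what one inline-compose stack looks like in that TOML — the names are made up and the exact schema is in the komo.do docs:

[[stack]]
name = "mealie"

[stack.config]
server = "homelab-1"   # whatever you named the server in Komodo
file_contents = """
services:
  mealie:
    image: ghcr.io/mealie-recipes/mealie:latest
    ports:
      - 9925:9000
"""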
EDIT: I’m not really sure how necessary the inline compose is, that’s just how I do it. I would assume, if you keep the compose files in another repo, the Resource Sync wouldn’t be able to detect the changes in the repo and react ¯\_(ツ)_/¯
I guess I don’t get that granular. It will respect the current docker compose image path, so if you have the latest tag, that’s what it will use. Komodo is a big topic: https://komo.do
Not sure why Renovate is necessary when Komodo has built-in functionality to update Docker images/containers. I wish there were an option to check less often (like once a day); hourly is the longest interval it offers.
Also, if you’re using Komodo and have one big repo of compose files, consider just saving your entire config TOML to a repo instead. You end up with something akin to Terraform or CloudFormation for your Docker hosts.
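For reference, the repo-backed version of that is a resource sync pointed at wherever you keep the TOML — the field names here are from memory, so treat them as assumptions and check komo.do:

[[resource_sync]]
name = "homelab"

[resource_sync.config]
repo = "you/komodo-config"          # hypothetical repo
branch = "main"
resource_path = ["resources.toml"]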
My home network is called The IT Clowd with these devices:
Give me the angry downvote all you want, but you are administering a system; you are literally a sysadmin.
Kopia doesn’t get enough love, it’s awesome
I’m not looking to become a sysadmin
“I want to be an F1 racer but I don’t wanna learn to drive”
That’s what I heard you say
Yep, I think you’re spot on! Glad I asked, I don’t think I would’ve ever thought of that. Thanks!
Why does Cloudflare get a pass on the “if it’s free, you’re the product” mantra of the self-hosting community? Honest question. They seem to provide a lot for free, so…
When I turn off Wi-Fi, I’m not on the same network as my server, it’s my carrier network so all the internet hops are expected.
The way it’s working now is I have a domain (example.com) that is set up on cloudflare DNS. I added a tunnel in cloudflare zero trust, which generates origin certificates you add to your server to encrypt traffic between your server and cloudflare. I have added these to traefik so they’re served with my service URL (service.example.com). Then, I added a route in cloudflare for service.example.com.
This works fine. But what I’ve also done is add a local DNS entry for service.example.com, so when I’m on my LAN I access it without going out to the internet and back (seems like a waste). However, this serves the origin server certs from cloudflare, which causes trust issues.
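For context, serving those certs from traefik is just a TLS entry in the dynamic configuration (e.g. via the file provider) — the paths are placeholders for wherever the cloudflare origin cert and key actually live:

tls:
  certificates:
    - certFile: /certs/cf-origin.pem
      keyFile: /certs/cf-origin-key.pem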
I’m using docker for everything: traefik, cloudflared tunnel, and my services on the same hardware. The tunnel just runs, and it’s configured on cloudflare zero trust to talk directly to the container:port over the docker network.
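A minimal sketch of that container, assuming a token-based tunnel and a shared docker network named proxy (both placeholders):

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
    restart: unless-stopped
    networks:
      - proxy

networks:
  proxy:
    external: true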
That’s what I’m settling on. However, it’s not just about trust; some of the services I’m exposing deal with moving files, and I’m mostly interested in the higher speeds of local transfers, as well as not using up my internet data cap.
The Enrichment Center reminds you that the Weighted Companion Cube cannot speak.