

One more question: how did you manage to get the reverse proxy to proxy your pods? I just added two containers to one pod, and I can no longer access the containers by their names. Do I need to expose their ports in the pod configuration?
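For reference, this is roughly what I did (image names are placeholders):

```sh
# create a pod and publish its ports, then add the two containers to it
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod --name web docker.io/library/nginx:alpine
podman run -d --pod mypod --name app localhost/my-app:latest

# after this, the containers no longer resolve each other by name
# (e.g. http://web from inside "app")
```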
Personally, I would avoid host network mode as you expose those containers to the world (good if you want that, bad if you don’t)… possibly the same with using the public IP address of your instance.
My instance is only exposing the HTTP/HTTPS ports, those are the only ports enabled in the firewall.
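In case it’s useful, with a host firewall like ufw that roughly looks like this (assuming SSH is handled separately, e.g. over a VPN or the provider’s firewall):

```sh
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
sudo ufw enable
```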
It seems simple. Does it use pasta as the default networking backend? Also, I guess separating each app into its own network adds security, right? If anything happens to one app, it can’t move laterally to the other apps unless it manages to gain access to the reverse proxy, and at that point it would be a huge problem anyway.
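If I understood the setup correctly, it would look roughly like this, with the reverse proxy being the only container that joins every network and publishes ports (names and images are placeholders):

```sh
# one network per app
podman network create app1-net
podman network create app2-net

podman run -d --name app1 --network app1-net localhost/app1:latest
podman run -d --name app2 --network app2-net localhost/app2:latest

# the reverse proxy joins every app network and is the only
# container publishing ports to the host
podman run -d --name caddy --network app1-net -p 80:80 -p 443:443 \
  docker.io/library/caddy:2
podman network connect app2-net caddy
```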
I found Tailscale/Headscale way more difficult to set up than WireGuard.
I tried 5 different credit cards to set up my account and none of them worked for the free tier. Contacted customer support; they simply said “well, we can’t do anything about it, it’s clearly a problem on your end and not ours”, even though I had tried 5 different credit cards to pay for the service.
My issue with Samsung nowadays is that they offer a very low TBW warranty rating compared to other brands like Kingston.
I wanted to buy a 1TB drive for my games and I couldn’t decide between Samsung and Kingston. Samsung’s 1TB model was warrantied for 600 TBW, Kingston’s for 800 TBW. I ended up choosing the KC3000 from Kingston.
It’s still not an excuse to ignore a security update just because you think you won’t be a target for hackers.
Just check your logs: there are probably a dozen or more requests trying to access WordPress pages on your server or log in via SSH. They want to take over your server so it can become part of a botnet.
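A quick way to see it (log paths and service names will differ per distro/setup):

```sh
# failed SSH logins today (unit may be "sshd" on some distros)
sudo journalctl -u ssh --since today | grep -c "Failed password"

# common probe paths in the web server's access log
sudo grep -Eic "wp-login|wp-admin|xmlrpc|\.env" /var/log/caddy/access.log
```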
I think so, but if you check the official image you can definitely find out how to include custom plugins in it. I think the documentation might mention a thing or two about it too.
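If I remember right, the pattern on the official image’s page uses the builder tag with xcaddy, roughly like this (the plugin path is just an example):

```Dockerfile
FROM docker.io/library/caddy:2-builder AS builder
RUN xcaddy build --with github.com/caddyserver/transform-encoder

FROM docker.io/library/caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```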
You can install the log transformer plugin for Caddy and have it produce a readable log format for fail2ban: https://github.com/caddyserver/transform-encoder
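With that plugin built in, the log block looks something like this (site and log path are placeholders):

```
example.com {
	root * /usr/share/caddy
	file_server

	log {
		output file /var/log/caddy/access.log
		format transform "{common_log}"
	}
}
```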
I had this setup on my VPS before I moved to a k3s setup. I will take a look at how to migrate my fail2ban setup to the new server.
Thanks for the suggestions!
I ended up configuring my CI pipeline to build a Caddy Docker image that ships with my website files. The pipeline also publishes the container image to the Codeberg registry, and I apply the new image repo and tag to the Caddy Helm chart I found on ArtifactHub.
The only thing left is to set up the CI to automatically restart the pod when a new image is pushed, so it always runs the latest version.
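The image itself is simple, something along these lines (the `public/` directory name is a placeholder for the generated site output):

```Dockerfile
FROM docker.io/library/caddy:2
COPY Caddyfile /etc/caddy/Caddyfile
COPY public/ /usr/share/caddy/
```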
It was easier than expected. I had a few issues, like my stylesheets not being applied and image files not rendering, but that was solved by changing the `pathType` field in the ingress configuration to `Prefix`.
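For anyone hitting the same thing, the relevant bit of the ingress looks roughly like this (names and host are placeholders; k3s ships Traefik as the default ingress controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix   # match everything under /, not only the exact path
            backend:
              service:
                name: website
                port:
                  number: 80
```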
I don’t like Cloudflare and I try to steer away from them.
Using Codeberg/GitHub/GitLab pages was an option as well, but I wanted to have it self-hosted so I have more flexibility and I get to use and customize Caddy to my liking.
That’s a nice suggestion. I guess I can make the CI build a Docker image containing my website’s files and then have a CI step restart the pod that serves the website so it fetches the latest image.
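A plain way to do that restart step from CI would be something like this (deployment name and namespace are placeholders, and it assumes the pod pulls the image again on restart):

```sh
kubectl rollout restart deployment/website -n web
kubectl rollout status deployment/website -n web --timeout=120s
```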
How is this different from mounting the folder with the static website using `hostPath`?
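To be clear about what I mean, something like this in the pod spec (paths are placeholders):

```yaml
# fragment of the Deployment's pod template
containers:
  - name: caddy
    image: docker.io/library/caddy:2
    volumeMounts:
      - name: site
        mountPath: /usr/share/caddy
volumes:
  - name: site
    hostPath:
      path: /srv/website   # directory on the node itself
      type: Directory
```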
I was looking for it as well. I want to host the website using Caddy because I have a lot of config options available and I can fine tune it for my use cases.
I read a tutorial about using a Hugo Docker image, but then the hosting would be done by Hugo and not Caddy itself.
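The kind of fine-tuning I mean is stuff like this in the Caddyfile (domain and header values are just examples):

```
example.com {
	root * /usr/share/caddy
	encode zstd gzip
	file_server

	header /assets/* Cache-Control "public, max-age=31536000, immutable"
}
```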
I’m not using k8s just to host my website, I have other services on it as well.
I know it’s overkill for small stuff, but I’m running k3s and not full k8s (so it’s a lightweight distribution). I’m doing this for learning purposes: I want to learn more about k8s and thought I could run an experiment with it on a VPS.
I plan on renting another VPS and adding another node to the cluster, since it’s pretty cheap (a Hetzner ARM server with 2 vCPUs and 4 GB RAM costs around 3.80 EUR/month without VAT). For comparison, that’s much cheaper than the VPS I have on Vultr, which only has 1 vCPU and 1 GB RAM.
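Joining the extra node should just be the standard k3s agent install (server IP and token are placeholders):

```sh
# on the new VPS; the token lives in /var/lib/rancher/k3s/server/node-token
# on the existing server
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -
```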
I had the same considerations when I self-hosted Headscale as the controller for accessing my VPS. I figured it shouldn’t be a big deal, though: there’s no real chance of someone registering rogue devices on your mesh, because even though any device can request enrollment via Tailscale, you ultimately have to execute a command on your Headscale server to confirm the enrollment/account creation. So leaving the web server exposed shouldn’t be much of a problem.
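i.e. enrollment only completes after something like this on the server (exact flags differ between Headscale versions):

```sh
headscale nodes register --user myuser --key <machine-key-shown-on-the-client>
```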