

Luckily we’re not relying on emails for security-relevant and/or private information, right?
The emails themselves are unencrypted; only the transport between the e-mail servers and relays uses secure TLS channels.
They are encrypted from your phone/notebook/browser to the first server, and when sent onwards they are encrypted only until the next server.
Every server/relay decrypts everything sent to it, because TLS terminates at each server.
See also your source:
Transport Encryption: This form of encryption is used to secure your emails while they are transmitted over the internet. Most of today’s email services, including Gmail, employ transport layer security (TLS) to protect emails in transit. While it encrypts emails between servers, it doesn’t protect the content once it reaches the recipient’s inbox.1
In practical terms: your e-mail server, your e-mail server’s relay (if it has any) and your recipient’s relay/server can all read your email unless you use
End-to-End Encryption (E2EE): E2EE takes encryption a step further. It ensures that only the sender and the recipient can decrypt and read the emails. Even the email service provider cannot access the contents of the email. E2EE is typically achieved through third-party encryption tools or services.1
Which takes active effort from both the sender and the recipient to make work; in practice it’s only feasible with people you know, and little else.
1 https://umatechnology.org/gmails-new-encryption-can-make-email-safer-heres-why-you-should-use-it/
You can use caddy-l4 to redirect some traffic before (or after) TLS termination to different ports and hosts depending on the FQDN.
Though that module is still experimental.
The only thing I can comment on is that 99% of all e-mails you will get are unencrypted and can be read by your relay. (There are very few E2E-encrypted emails being sent.)
So either trust them or don’t use a relay.
Step 1: Get write access to the project you dislike.
I recommend switching to NixOS only after you have a basic but broad understanding of Linux. Many things in NixOS are more complicated than in “normal” Linux, which is necessary to achieve what it does, but is overwhelming for someone who doesn’t yet know the what, why and where that using Linux brings.
You triggered the independent thought alarm
Check DNS and MTU, and do a full Wireshark capture from the client using both curl and the browser.
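For example (the host and interface names here are only placeholders), something along these lines narrows down an MTU problem and gives you a capture to open in Wireshark:

# largest payload that still fits a 1500-byte MTU with the do-not-fragment bit set
ping -M do -s 1472 example.com
# full capture from the client, filtered to the host in question, for Wireshark
sudo tcpdump -i any -w client-capture.pcap host example.com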
I didn’t consider it valid: one copy on (phone and internal nvme1), the second one on nvme2 and the third one in the cloud.
Though I only have two copies of normal data myself; I consider live plus cloud to be enough for most data. Everything very important has more backups in other ways (Bitwarden has an exportable local version on every logged-in device; images are stored in Immich on my server, making it 3 devices).
You have 3 copies: one on your phone and NVMe, one on the backup NVMe and one in the cloud. You have 2 media: internal SSD and cloud (your phone would count as a third if it wasn’t auto-synced). You have 1 off-site copy, in the cloud.
Find a new service you like, add it using rootless podman. That way you can test it without affecting your running system.
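For example (the image and port are just placeholders), a throwaway rootless test can look like this:

# run a disposable instance as your normal user; nothing touches the host config
podman run --rm -d --name test-app -p 8080:80 docker.io/library/nginx:latest
curl http://127.0.0.1:8080   # poke at it
podman stop test-app         # the container is removed again thanks to --rm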
Try sysctl -w net.ipv4.conf.all.rp_filter=2
on the PC (not the VPS), or =0 if that doesn’t work.
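=2 is “loose” reverse-path filtering and =0 disables the check entirely. If it fixes things you can persist it roughly like this (the filename is arbitrary):

sudo sysctl -w net.ipv4.conf.all.rp_filter=2           # immediate, lost on reboot
echo 'net.ipv4.conf.all.rp_filter = 2' | sudo tee /etc/sysctl.d/99-rpfilter.conf
sudo sysctl --system                                   # reload all sysctl config files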
Do a ping of 8.8.8.8 from your user, then open a new console and run tcpdump -i <interface>, first with your uplink interface and then with wg0. If the packets are routed correctly you should see them on wg0, and the problem is then on the VPS side; otherwise it’s a problem in your local config.
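Roughly like this, assuming eth0 is the uplink (adjust the interface names to your setup):

# terminal 1: generate traffic
ping 8.8.8.8
# terminal 2: watch the uplink first, then the tunnel
sudo tcpdump -ni eth0 icmp
sudo tcpdump -ni wg0 icmp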
Did you add the VPS IP to the routing table of your user, i.e. ip r add 10.0.0.2/32 dev wg0 table 1070?
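You can check what actually ended up in that table and which rule points your user’s traffic at it (the uidrange rule is just what I’d expect such a per-user setup to look like):

ip route show table 1070   # should contain the 10.0.0.2/32 dev wg0 route
ip rule show               # look for something like "uidrange <range> lookup 1070"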
I have RSS feeds for my main services’ updates so I know what new features I get; the services mostly run in podman containers and update automatically every Monday. I also have daily backups (timed to run just before the update on Monday) in case anything does break.
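For reference, with plain podman (outside of my NixOS setup) a similar weekly schedule can be had from podman’s own auto-update units, assuming the containers carry the auto-update label:

# containers need the label io.containers.autoupdate=registry, then:
systemctl --user edit podman-auto-update.timer    # override OnCalendar, e.g. "Mon *-*-* 04:00:00"
systemctl --user enable --now podman-auto-update.timer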
If it breaks I fix it, depending on how much I want/need it; mostly it’s a matter of half an hour. With my current NixOS/Podman system I haven’t needed to fix anything this year, so it breaks infrequently.
Also why are you using Kubernetes on a single host if you want minimal maintenance? XD
My recommendation is to switch to just managing containers. You should be able to export the volumes out of Kubernetes and import them as normal volumes; as long as they’re mounted in the right place you keep your data, and if it doesn’t work, just try again. It’s not like you need to destroy the current system to slowly replace it.
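As a rough sketch (namespace, pod name and paths are placeholders, and this assumes the app keeps its state in a single directory):

# copy the data out of the running pod
kubectl cp my-namespace/my-app-0:/var/lib/app ./app-data
# mount the same directory into a plain podman container at the same path
podman run -d --name app -v ./app-data:/var/lib/app docker.io/vendor/app:latest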
Edit: I also recommend updating and rebooting frequently; this stops updates and unstable configurations from piling up.
One thing that makes a project good is knowing what it does. I’ve seen quite a few projects where they talk about all the features and technology and how to configure it, but not a word about what it actually does, what problems it solves and so on.
I won’t self-host your program if you don’t even tell me what it does; don’t make me search through and piece together large parts of the documentation just to find out whether I want it. A simple explanation is enough, but somehow I’ve seen quite a few programs that don’t have one.
Yeah, it works great and is very secure, but every time I create a new service it’s a lot of copy-paste boilerplate. Maybe I’ll put most of that into a Nix function at some point, but until then here’s an example n8n config, as loaded from the main NixOS file.
I wrote this last night for testing purposes and just added comments. The config works, but n8n uses SQLite and probably needs some other stuff that I haven’t had a chance to use yet, so keep that in mind.
Podman support in home-manager is also really new and doesn’t support pods (multiple containers, one loopback) and some other things yet; most of it can be compensated for with the extra-argument options (like extraPodmanArgs below). Before this existed I used pure file definitions to write quadlet/systemd configs, which was even more boilerplate, but also mostly copy-paste.
{ config, pkgs, lib, ... }:
{
  users.users.n8n = {
    # calculate the sub{u,g}id ranges from the uid
    subUidRanges = [{
      startUid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
      count = 65536;
    }];
    subGidRanges = [{
      startGid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
      count = 65536;
    }];
    isNormalUser = true;
    linger = true; # start user services on system start; the first start after `nixos-rebuild switch` still has to be done manually for some reason though
    openssh.authorizedKeys.keys = config.users.users.root.openssh.authorizedKeys.keys; # allows the ssh keys that can log in as root to log in as this user too
  };
  home-manager.users.n8n = { pkgs, ... }:
    let
      dir = config.users.users.n8n.home;
      data-dir = "${dir}/${config.users.users.n8n.name}-data"; # defines the path "/home/n8n/n8n-data" using evaluated home paths, could probably remove a lot of redundant n8n definitions...
    in
    {
      home.stateVersion = "24.11";
      systemd.user.tmpfiles.rules =
        let
          folders = [
            "${data-dir}"
            #"${data-dir}/data-volume-name-one"
          ];
          formated_folders = map (folder: "d ${folder} - - - -") folders; # takes each path string and formats it as a systemd-tmpfiles rule so the path gets created as a folder
        in formated_folders;
      services.podman = {
        enable = true;
        containers = {
          n8n-app = { # define a container; the service name is "podman-n8n-app.service" in case you need to make multiple containers depend on and run after each other
            image = "docker.n8n.io/n8nio/n8n";
            ports = [
              "${config.local.users.users.n8n.listenIp}:${toString config.local.users.users.n8n.listenPort}:5678" # I'm using a self-defined option to keep track of all ports and uids in a separate file; these values just map to "127.0.0.1:30023:5678", and a caddy reverse proxy points there using the same option as the port.
            ];
            volumes = [
              "${data-dir}:/home/node/.n8n" # the folder we created above
            ];
            userNS = "keep-id:uid=1000,gid=1000"; # n8n stores files as non-root inside the container, so they end up as some high uid outside and the user that runs these containers can't read them. This maps user 1000 inside the container to the uid of the user that's running podman. Generating the podman image takes a lot of time on the first run though, so make sure systemd doesn't time out
            environment = {
              # MYHORSE = "amazing";
            };
            # there's also an environmentfile option for secret management, which works with sops if you set the owner of the secret/secret template
            extraPodmanArgs = [
              "--pull=newer" # always pull newer images when starting; I could make this declarative but I haven't found a good way to automagically update the container hashes in my nix config at the push of a button.
            ];
            # a few more options exist that I didn't need here
          };
        };
      };
    };
}
I use podman via home-manager configs. I could run the services natively, but currently I have a user for each service that runs its podman containers; that way each service is isolated from the others and from the rest of the system. Maybe if/when NixOS supports good SELinux rules I’ll switch back to running things natively.
Hey now, you can also spend 20 pages of documentation and 10 pages of blogs/forums/GitHub1 and implement a whole Nix module, such that you only need to write a further 3 lines to activate the service.
1 Your brain can have a little source code, as a threat.
Something I haven’t seen mentioned yet is clevis and tang: basically, if you have more than one server they can unlock each other, and if they’re spatially separated it is very unlikely they get stolen at the same time.
Though you have to make sure it stops working when a server gets stolen: a mesh VPN works just as well after the server is stolen, so either use public IPs and a VPN, or use a hidden Raspberry Pi that is unlikely to be stolen, or make the other server stop tang after the first one is stolen.
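For reference, the basic setup is something like this (the device path and URL are placeholders, and you also need clevis in your initrd so the unlock happens at boot):

# on the helper server: run tang
sudo systemctl enable --now tangd.socket
# on the encrypted server: bind the LUKS volume to that tang server
sudo clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.internal"}'
# test the unlock path before relying on it
sudo clevis luks unlock -d /dev/sda2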