He/Him They/Them

Working in IT for about 15 years. Been online in one way or another since the late ’90s.

I like games and anime, but I’m very picky with them.

Cats are the best people.

  • 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: June 14th, 2023


  • Mail is the one thing I refuse to self-host, for the simple reason that while it isn’t particularly hard to get up and running initially, when it doesn’t work for whatever reason it can be, and often is, a gigantic pain in the ass to deal with, especially when the cause is out of your control. For personal use there are very good free options; for enterprise, those same providers have paid tiers.

    Whether it’s Gmail having a bad day and blocking you, or a cloud provider or on-prem infrastructure crapping out for long enough that you’re cut off from email for a while and potentially lose incoming mail permanently once the senders’ retries time out, or anything in between, it’s one of those things where I’m glad it isn’t my problem to deal with.

    My only involvement with email is making sure I have a local copy of my inbox synced every week, so if my provider ever dies I still have all my content.
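
    For what it’s worth, a minimal sketch of that kind of weekly sync, assuming a plain IMAP provider and nothing but Python’s standard library (the host, account and paths are made-up placeholders, and dedicated tools like imapsync or mbsync handle this better):

    # Rough sketch: copy every message from INBOX into a local mbox file.
    # Host, credentials and paths are placeholders; run it weekly from cron.
    # Note: this naive version appends duplicates on every run; a real sync
    # would track UIDs.
    import imaplib
    import mailbox

    HOST = "imap.example.com"       # hypothetical provider
    USER = "me@example.com"
    PASSWORD = "app-password-here"

    local = mailbox.mbox("/backups/mail/inbox.mbox")  # the local copy

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX", readonly=True)
        _, data = imap.search(None, "ALL")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            local.add(msg_data[0][1])  # raw RFC822 bytes straight into the mbox
    local.flush()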


  • Buy the domain itself wherever you want; I like Cloudflare, and a lot of people also suggest porkbun.com. You then point your domain’s nameservers at whatever DNS service you want. If you stick with Cloudflare, that part is already done for you.

    For dynamic DNS I use Cloudflare’s, with my router keeping the record updated, and it’s easy to set up. Depending on your router you may need to run a small service on a machine to do this instead; things like pfSense/OPNsense have it built in.
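
    If your router can’t do it for you, here’s a rough sketch of the same idea in Python, talking to Cloudflare’s DNS records API directly. The token, zone ID, record ID and hostname are placeholders you’d pull from your own account; this is just to show the shape of it:

    # Rough dynamic-DNS sketch: look up the current public IP and push it to an
    # existing A record through Cloudflare's API. IDs below are placeholders.
    import json
    import urllib.request

    API_TOKEN = "cloudflare-api-token"   # scoped to DNS edit for the zone
    ZONE_ID = "your-zone-id"
    RECORD_ID = "your-record-id"         # the A record to keep updated
    HOSTNAME = "home.example.com"

    # Any "what is my IP" service works; ipify returns plain text.
    public_ip = urllib.request.urlopen("https://api.ipify.org").read().decode()

    req = urllib.request.Request(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
        data=json.dumps({"type": "A", "name": HOSTNAME, "content": public_ip,
                         "ttl": 300, "proxied": False}).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["success"])

    Stick that in cron every few minutes and the record follows your public IP around.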







  • Possible, yes. Cost-effective with a valid business case, probably not. Every extra 9 is diminishing returns: it costs exponentially more than the previous 9 while the money saved from avoided downtime shrinks. Like you said, 32 seconds of downtime; how much is that actually worth to the business? (Quick math below.)

    You’re pretty much looking at multiple geographically diverse Tier 4 datacenters with N+2 or even N+3 redundancy all the way up and down the stack, while also implementing vendor diversity wherever possible so that no single supplier of anything can take you offline.

    Even with all that though, you’ll eventually get wrecked by DNS somewhere somehow, because it’s always DNS.
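
    For reference, the quick math on what each nine actually allows per year:

    # Allowed downtime per year at each availability level.
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

    for nines in range(2, 7):
        availability = 1 - 10 ** -nines
        downtime = SECONDS_PER_YEAR * (1 - availability)
        print(f"{availability:.4%} -> {downtime:,.1f} s/year")

    # Roughly: 99% is ~3.7 days, 99.9% is ~8.8 hours, 99.99% is ~53 minutes,
    # 99.999% is ~5 minutes, and 99.9999% is the ~32 seconds in question.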



  • I run Linux for everything. The nice thing is that everything is a file, so I use rsync to back up all the configs on my physical servers. I can do a clean install, run my setup script, rsync the config files back over, reboot, and everyone’s happy.

    For the actual data I also rsync from my main server to the others. Each server has its own schedule for when it gets rsynced to, so I end up with about three weeks of history (rough sketch at the end of this comment).

    For virtual servers I just use Proxmox’s built-in backup system, which works great.

    Very important files get encrypted and sent to the cloud as well, but out of dozens of TB this only accounts for a few gigs.

    I’ve also never thrown out a disk or USB stick in my life; I use them for archiving. Even if a drive is half dead, as long as it’ll accept data I shove a copy of something on it, then label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half these drives end up not working. I keep most of them off-site. At some point I’ll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just aren’t worth bothering with.
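
    A rough sketch of the rotation idea from above, as a Python wrapper around rsync. The paths, host name and three-week retention are placeholders, and hardlinking unchanged files against the previous snapshot with --link-dest keeps the history cheap:

    # Rough sketch: dated rsync snapshots with ~3 weeks of history.
    # Source, destination and retention below are placeholders.
    import datetime
    import pathlib
    import shutil
    import subprocess

    SOURCE = "mainserver:/srv/data/"        # hypothetical main server
    DEST = pathlib.Path("/backups/data")    # where the dated snapshots live
    KEEP_DAYS = 21                          # about three weeks of history

    today = datetime.date.today()
    snapshot = DEST / today.isoformat()
    existing = sorted(p for p in DEST.iterdir() if p.is_dir())

    cmd = ["rsync", "-a", "--delete"]
    if existing:
        # Hardlink unchanged files against the newest existing snapshot.
        cmd.append(f"--link-dest={existing[-1]}")
    cmd += [SOURCE, str(snapshot)]
    subprocess.run(cmd, check=True)

    # Drop snapshots that fall outside the retention window.
    cutoff = today - datetime.timedelta(days=KEEP_DAYS)
    for old in existing:
        if datetime.date.fromisoformat(old.name) < cutoff:
            shutil.rmtree(old)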


  • If you’re using memory for storage operations, especially something like ZFS’s cache, then as a best practice you ideally want ECC so errors are caught and corrected before they can corrupt your data.

    In the real world, unless you’re buying old servers off eBay that already have it installed, the economics don’t make sense for self-hosting. The issues are rare enough, and you should have good backups anyway. I’ve never run into a problem from not using ECC; I’ve been self-hosting since 2010 and have some ZFS pools nearly that old. I run exclusively on consumer hardware, with the exception of HBAs and networking, and have never had ECC.





  • You’ll have to use nginx’s access control (allow/deny) directives. Like you’ve discovered, once someone has access to the proxy itself, it is by its very nature acting on behalf of the client making the request, so firewall rules won’t help much there. You’d have to get into packet inspection and deal with HTTPS man-in-the-middle and all that; likely not worth it.

    I’m not familiar with Nginx Proxy Manager, which you seem to be using, but in plain nginx config files it would look like this:

    nginx.conf

    location / {
        # allow rules come from the ACL file below
        include /etc/nginx/acl_file_name.acl;
        # anything not matched by an allow rule is denied
        deny all;

        # rest of location config
    }
    

    The deny all; at the end means everything that wasn’t allowed by the ACL file gets denied. You could alternatively put the deny all at the end of the ACL file itself.

    The ACL file itself is just a list of allow rules. I keep it in its own file for easy re-use across locations, and made up the “.acl” extension because it makes sense.

    acl_file_name.acl

    allow 192.168.1.0/24;
    allow 192.168.2.101;
    

  • Very possible and done all the time.

    At home you run a local DNS server; there are plenty of options out there, especially as a self-hoster: BIND, PowerDNS, or Microsoft DNS if you’re into Windows. You can also combine it with something like Pi-hole to block ads and junk at the DNS level.

    You create the DNS zone on your local DNS server, with internal IPs, and your internal devices use that server. You create the same zone on a public DNS provider like Cloudflare or whoever (or host your own on your VPS if you feel like it), with public IPs.

    Any of your devices coming and going from your home should be using DHCP to obtain an IP. At home your DHCP settings hand out the local DNS server; anywhere else you go you’ll be using other DNS servers that resolve the public IP. It should all be pretty seamless and transparent once set up.
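
    Once it’s running, a quick way to sanity-check the split view, sketched with the dnspython library (the resolver addresses and hostname are placeholders for your own setup); the same name should come back with an internal IP from your local server and the public IP from an outside resolver:

    # Rough check that split-horizon DNS answers differently inside vs outside.
    # Addresses and hostname are placeholders.
    import dns.resolver  # pip install dnspython

    HOSTNAME = "nas.example.com"

    def resolve_with(nameserver):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        return [rr.to_text() for rr in resolver.resolve(HOSTNAME, "A")]

    print("local view :", resolve_with("192.168.1.2"))  # your internal DNS server
    print("public view:", resolve_with("1.1.1.1"))      # any outside resolver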