• 0 Posts
  • 78 Comments
Joined 2 years ago
Cake day: June 30th, 2023



  • my router and my reverse proxy (traefik) is able to receive the necessary SSL/TLS certificates however

    From something like LetsEncrypt?
    As an HTTP-01 challenge? Not a DNS-01 challenge?
    An HTTP-01 challenge means that port 80 is accessible from the public internet (because that’s how LE confirms it can reach your server via the public DNS records - proof of server ownership).
    DNS-01 is about proof of DNS record ownership, and doesn’t prove public internet access.
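
    For example, with certbot (just a sketch - example.com stands in for your actual domain):

        # HTTP-01: LE has to reach port 80 on your public IP
        certbot certonly --standalone -d example.com

        # DNS-01: LE only looks for a TXT record, no inbound access needed
        certbot certonly --manual --preferred-challenges dns -d example.com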

    Also, what are you self hosting?
    Does it really need to be publicly accessible? Or just accessible by you and people you trust?


  • You need to control a domain so LE can verify you are the controller of it; LE will then issue you a certificate saying you are the controller of that domain.

    For a wildcard LE cert, you need to use the DNS challenge method.
    Essentially the ACME client (or certbot or whatever) will talk to LE and say “I want a DNS challenge for *.example.com”.
    LE will reply “ok, your order is number 69, and your challenge code is DEADBEEF”.
    The ACME client then interacts with your public nameserver (or you have to do this manually) and adds the challenge code as a TXT record at _acme-challenge.example.com. (I’ve been caught out by the fact LE uses Google DNS for resolution, and Google will only follow 1 level of NS records from the root authoritative nameserver).
    All the while, LE is checking for that record. When it finds the record, it mints a wildcard certificate.
    The ACME client then periodically checks in with LE asking about order 69. Once LE has minted the cert, it returns it to the client.
    And now you have a wildcard cert.
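
    As a rough sketch with acme.sh (assuming Cloudflare DNS via the dns_cf plugin - swap it for your provider and export its API credentials first), plus a dig to see the record LE is waiting for:

        # issue a wildcard cert over a DNS-01 challenge
        acme.sh --issue --dns dns_cf -d '*.example.com'

        # what LE queries while it waits for the challenge record
        dig +short TXT _acme-challenge.example.com @8.8.8.8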

    So, how to use it on a local domain?
    Use a split horizon DNS method.
    Ensure your DHCP is handing out your local DNS server as the resolver.
    Configure that local DNS to then use 8.8.8.8 or whatever as its upstream.
    Then load in static/override records to the local DNS.
    Pihole can do this. OPNSense/pfSense can do this. Unifi can do some of this.
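
    In dnsmasq terms (which is what Pi-hole uses under the hood), the override plus upstream boils down to a couple of lines - hostname and IP here are hypothetical:

        # answer locally for your domain
        address=/example.example.com/10.10.3.3
        # everything else goes upstream
        server=8.8.8.8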

    How does this work?
    Any device on your network that wants to know the IP of example.example.com will ask its configured DNS - the local DNS that you have configured.
    The local DNS will check its static assignments and go “yeh, example.example.com is 10.10.3.3”.
    If you ask your local DNS for google.com, it won’t have a static assignment for it, so it will ask its upstream DNS and return that result.
    And it means you aren’t putting private IP spaces on public NS records.

    Then you can load in your wildcard cert to 10.10.3.3, and you will have a trusted HTTPS connection.

    Here is a list of LE clients that will automate LE certs.
    https://letsencrypt.org/docs/client-options/

    Have a read through and pick your desired flavour.
    Dig into the docs of that flavour, and start playing around.

    If it’s all HTTPS, consider using something like Nginx Proxy Manager (https://nginxproxymanager.com/) as a reverse proxy in front of your services and for managing the LE cert.
    It’s super easy to use, has a decent GUI, and then it’s only 1 IP to point all DNS records to.


  • DNS and domains are just human-friendly IP addresses.

    You only have 1 public IP address.
    So, to access different services you need to use different ports.
    Or run a service on a single port in front of the other services that can understand the connections and forward the connections to the actual services - known as a reverse proxy. In the case of http/https, there are plenty of reverse proxies that can direct requests based on all sorts of parameters, subdomains being one of them.
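
    To give a feel for it, this is roughly what one subdomain-to-service route looks like in raw nginx (names, port and cert paths are made up):

        server {
            listen 443 ssl;
            server_name app.example.com;             # the subdomain this block answers for
            ssl_certificate     /etc/ssl/example.crt;
            ssl_certificate_key /etc/ssl/example.key;

            location / {
                proxy_pass http://127.0.0.1:3000;    # the actual service behind the proxy
            }
        }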

    If you are just starting out, I’d recommend a docker compose stack and Nginx Proxy Manager.
    Learning containers & docker makes everything easier.
    NPM is a very easy to use reverse proxy with a nice GUI, so you don’t have to configure CertBot/ACME or learn the specific config language of Nginx.
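
    A minimal compose file for NPM is roughly this (from memory of their quick-start, so double check against their docs):

        services:
          npm:
            image: 'jc21/nginx-proxy-manager:latest'
            restart: unless-stopped
            ports:
              - '80:80'      # proxied HTTP
              - '443:443'    # proxied HTTPS
              - '81:81'      # admin GUI
            volumes:
              - ./data:/data
              - ./letsencrypt:/etc/letsencrypt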

    If you are unsure of domains and all that, you can try it out for free.
    Your computer has a hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows). This allows you to tell the computer “for the domain example.com use the IP 10.0.0.200” or whatever you want. You need a hosts file entry for each subdomain.
    What this means is that you can run up a docker compose stack on your computer and point a bunch of subdomains to 127.0.0.1, use self-signed certs, and play around with nginx proxy manager and docker.
    No money spent, no records published, no traffic leaving your computer.
    Zero risk.
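
    E.g. a few throwaway entries (made-up names):

        # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
        127.0.0.1   app.example.com
        127.0.0.1   media.example.com
        127.0.0.1   npm.example.com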

    There are loads of tutorials out there on NPM and docker compose stacks. Probably some close to your specific requirements.




  • accessed from the internet

    Accessed only by you and close family/friends who you are also hosting services for?
    Or accessed by anyone?

    “Accessed by anyone” carries more risk.

    “Accessed by users you host for”, the risks can be eliminated (well, other than risks from those users) by using a VPN. As in, only the people authorised to be on the VPN can access the services.
    Wireguard is the go-to these days.
    Tailscale is much easier and free for 3 users and 100 nodes.
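
    For a sense of scale, a bare WireGuard server config is only a handful of lines (keys and IPs are placeholders):

        [Interface]
        Address    = 10.8.0.1/24
        ListenPort = 51820
        PrivateKey = <server private key>

        [Peer]
        # one [Peer] block per person you host for
        PublicKey  = <their public key>
        AllowedIPs = 10.8.0.2/32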

    If it absolutely has to be “accessed by anyone” I would look into a “reverse proxy over VPN/tunnel” or just straight tunnel style approach like chisel (or crowbar, or corkscrew), rathole, frp, or cloudflare tunnels.
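
    The Cloudflare Tunnel flavour goes roughly like this (tunnel and hostname are placeholders - check their docs for the current commands):

        cloudflared tunnel login
        cloudflared tunnel create myservices
        cloudflared tunnel route dns myservices app.example.com
        cloudflared tunnel run myservices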

    Basically, don’t point a domain at your home public IP and don’t forward ports on your home router/firewall.



  • So you have local DNS set up?
    If you ping (or dig) speed.mydomain.local, does it resolve the same address as local_ip?
    Considering you are accessing local_ip:3000 and the domain on port 443, there is clearly a firewall somewhere redirecting packets, or a reverse proxy on the domain but not on local_ip:3000.

    Follow the port chain, forwarding, proxying etc. One of those will be bottlenecking. Then figure out why
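
    Something like this shows what the name resolves to and which IP you actually connect to:

        dig +short speed.mydomain.local     # should print the same address as local_ip
        curl -skv https://speed.mydomain.local -o /dev/null 2>&1 | grep -i 'connected to'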

    Edit:
    Just because your ISP speed is 100 Mbps and you are seeing 500 Mbps doesn’t mean the connection isn’t hairpinning through your router via its public IP (as in, the traffic never leaves your router, but still goes through it)


  • I’m currently reconsidering using a couple of MikroTiks for some layer 3 hardware offloading.
    Not really homelab, but close.

    I have a project that gets integrated with another network for an event. I’m thinking of using 2x CRS504 (because I’m using MLAG for the servers, and something like VRRP for the “public” - it’s all internal - IP) and seeing if I can get L3HW working as a router.
    While I could sit on a subnet of the “host” network, having a gateway that traffic goes through allows me to test and prove everything for my system in my homelab, with just the final integration being a do-in-a-time-crunch problem.
    I’m already using the crs504s for networking (I bought them ages ago, thinking 25gbps was going to be as easy as 10gbps. It’s all running at 10gbps), and this saves having to use something as a router, cuts down on rack space, all sorts of benefits. I think.
    Anyone have any experience with mikrotik l3hw offloading?

    My actual homelab is just a NAS and some networking. It’s a small flat, it’s just me. Not complicated, no need to give me more headaches!



  • Default config is defined in the firmware. It can’t be deleted or changed (well, not easily - I think there is a reseller option to have a custom default config).
    The “no default config” means the default config will not be applied after the reset.
    If you reset it again without checking “no default config”, then the default config will be applied.

    “No default config” is very useful for applying your own config script. It gives you a blank canvas, making scripting a lot easier!

    I have my “config.rsc” file that has the required configuration. And I have a “reset.auto.rsc” file that only has the command to reset the mikrotik with no defaults and to run the “config.rsc” script after reset.
    “filename.auto.rsc” will be executed as soon as it gets FTPd (it’s a feature of mikrotik).
    I use a bash script that FTPs the config.rsc file to the mikrotik, then the reset.auto.rsc file.
    Makes it trivial to tweak the config then apply it, and I get all the config for the devices in easy to edit/diff script files.
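
    A sketch of how that hangs together (IP, password and path are placeholders; on devices with flash storage the path may need a flash/ prefix):

        # reset.auto.rsc - executed automatically once it lands via FTP
        /system reset-configuration no-defaults=yes skip-backup=yes run-after-reset=config.rsc

        # push the config, then the trigger
        curl -T config.rsc     ftp://admin:PASSWORD@192.168.88.1/
        curl -T reset.auto.rsc ftp://admin:PASSWORD@192.168.88.1/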





  • It’s not a workaround.
    In the old days, if you had 2 services that were hard-coded to use the same network port, you would need virtualization or a different server, and you’d have to make sure the networking for those was correct.

    Network ports allow multiple services to share the same network adapter, because a port is like a “sub” address.
    Docker being able to remap host network ports to containers ports is a huge feature.
    If a container doesn’t need to be accessed outside of the docker network, you don’t need to expose the port.
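
    E.g. two containers that both listen on port 80 internally, remapped to different host ports (nginx is just a stand-in image):

        docker run -d -p 8081:80 nginx    # host 8081 -> container 80
        docker run -d -p 8082:80 nginx    # host 8082 -> container 80, no port clash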

    The only way to have multiple services on the same port is to use either a load balancer (for multiple instances of the same service) or an application-aware reverse proxy (like nginx, haproxy, caddy etc for web things, I’m sure there are other application-aware reverse proxies).