Just an Aussie tech guy - home automation, ESP gadgets, networking. Also love my camping and 4WDing.

Be a good motherfucker. Peace.

  • 4 Posts
  • 109 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • Hmmm - interesting. I hadn’t bothered to check before now, but I’m seeing something similar on one of the two PBS CTs I run.

    Comparing the output of netstat -lantop on both CTs, I can see that the one with more outbound traffic has more connections sitting in TIME_WAIT from localhost to port 82, the port Proxmox Backup Server provides its API over:

    tcp        0      0 127.0.0.1:51562         127.0.0.1:82            TIME_WAIT   -                    timewait (40.38/0/0)
    tcp        0      0 127.0.0.1:56342         127.0.0.1:82            TIME_WAIT   -                    timewait (29.92/0/0)
    tcp        0      0 127.0.0.1:44864         127.0.0.1:82            TIME_WAIT   -                    timewait (58.94/0/0)
    tcp        0      0 127.0.0.1:45028         127.0.0.1:82            TIME_WAIT   -                    timewait (11.88/0/0)
    tcp        0      0 127.0.0.1:44026         127.0.0.1:82            TIME_WAIT   -                    timewait (48.66/0/0)
    tcp        0      0 127.0.0.1:44852         127.0.0.1:82            TIME_WAIT   -                    timewait (58.80/0/0)
    tcp        0      0 127.0.0.1:59620         127.0.0.1:82            TIME_WAIT   -                    timewait (0.00/0/0)
    tcp        0      0 127.0.0.1:56374         127.0.0.1:82            TIME_WAIT   -                    timewait (30.98/0/0)
    tcp        0      0 127.0.0.1:51544         127.0.0.1:82            TIME_WAIT   -                    timewait (39.98/0/0)
    tcp        0      0 127.0.0.1:59642         127.0.0.1:82            TIME_WAIT   -                    timewait (0.00/0/0)
    tcp        0      0 127.0.0.1:45008         127.0.0.1:82            TIME_WAIT   -                    timewait (10.92/0/0)
    tcp        0      0 127.0.0.1:45016         127.0.0.1:82            TIME_WAIT   -                    timewait (11.76/0/0)
    

    I’m wondering if the graph is pulling aggregated network data, including the loopback interface. If so, and it’s all just port 82 stuff on 127.0.0.1, then it’s probably nothing to worry about.

    Edit: found this forum post that seems to indicate it’s aggregating all the byte values from /proc/net/dev, so this is probably nothing to worry about if your netstat output, like mine, only shows API connections to/from 127.0.0.1 on port 82.
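
    If you want to check the same thing on your end, something like the below gets you there quickly (a rough sketch - the port 82 filter matches my output above, and ss/awk are just doing the same job netstat did):

    # Count API connections by TCP state - if they're all loopback, the
    # graph's "traffic" is just local API polling that never leaves the host
    ss -tan 'sport = :82 or dport = :82' | awk 'NR>1 {print $1}' | sort | uniq -c

    # Compare per-interface byte counters: the lo row vs your physical NIC
    # shows how much of the aggregate figure is loopback-only
    cat /proc/net/dev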

  • Not heaps, although I should probably do more than I do. Generally speaking, on Saturday mornings:

    • Between 2am and 4am, Watchtower on all my docker hosts pulls updated images for my containers and notifies me via Slack (see the sketch after this list). Then, over coffee when I get up:
      • For containers I don’t care about, Watchtower auto-updates them as well, at which point I simply check the service is running and purge the old images
      • For mission-critical containers (Pi-hole, Home Assistant, etc), I manually update the containers and verify functionality, before purging old images
    • I then check for updates on my OPNsense firewall, and do a controlled update if required (needs me to jump onto a specific wireless SSID to be able to do so)
    • Finally, my two internet-facing hosts (Nginx reverse proxy and Wireguard VPN server) auto-update their OS and packages using unattended-upgrades, so I test inbound functionality on those
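
    For reference, the Watchtower piece is roughly the below on each host (a sketch, not my exact config - the schedule/cleanup/Slack settings are standard Watchtower env vars, and the webhook URL is a placeholder):

    # Check for updates at 2am Saturdays (6-field cron, seconds first),
    # clean up superseded images, and notify a Slack webhook
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e WATCHTOWER_SCHEDULE="0 0 2 * * 6" \
      -e WATCHTOWER_CLEANUP=true \
      -e WATCHTOWER_NOTIFICATIONS=slack \
      -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://hooks.slack.com/services/..." \
      containrrr/watchtower

    Watchtower’s com.centurylinklabs.watchtower.monitor-only=true label is what lets the mission-critical containers be checked and reported on without being touched.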

    What I still want to do is develop some Ansible playbooks to deploy unattended-upgrades across my fleet (~40 Debian/docker LXCs) - the loop below is roughly what they’d automate. I fear I have some tech debt growing on those hosts, but have fallen into the convenient trap of knowing my internet-facing gear is always up to date, so I can be lazy about the rest.
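
    Until those playbooks exist, the stand-in is a dumb loop over SSH - something like this (hosts.txt is a hypothetical list of the LXCs; the debconf preseed is the standard non-interactive way to enable unattended-upgrades on Debian):

    # Install and enable unattended-upgrades on each host
    # (ssh -n stops ssh from eating the rest of hosts.txt on stdin)
    while read -r host; do
      ssh -n "root@${host}" '
        apt-get update &&
        apt-get install -y unattended-upgrades &&
        echo "unattended-upgrades unattended-upgrades/enable_auto_updates boolean true" | debconf-set-selections &&
        dpkg-reconfigure -f noninteractive unattended-upgrades
      '
    done < hosts.txt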

  • It all comes down to what you trust each type of device to do and how you want to handle their traffic.

    I have seven VLANs, with each one’s traffic being treated very specifically. The subnets for each VLAN route to specific interfaces on a virtualised OPNsense firewall, which is where my traffic handling and policy enforcement takes place.

    Also remember VLANs are just plain useful for segregating traffic, particularly broadcast traffic, without having to invest in separate switching/routing for each subnet. Having a single managed switch that limits the broadcast domains for you is a really efficient way to (physically) set up your network.
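
    To give a sense of how lightweight this is on the host side, joining a tagged VLAN on a Linux box is just the below (eth0, VLAN ID 20 and the subnet are illustrative):

    # Create a sub-interface for VLAN 20 and address it in that VLAN's
    # subnet - broadcast traffic stays inside the tag
    ip link add link eth0 name eth0.20 type vlan id 20
    ip addr add 192.168.20.10/24 dev eth0.20
    ip link set eth0.20 up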

  • I run Proxmox with a few nodes, and my services are (usually) dockerized, each running in its own Proxmox Linux container.

    As I like to keep things segregated as much as possible, I really only have one shared Postgres, for the stuff I don’t really care about (i.e. if it goes down, I honestly don’t care about the services it takes with it, or the time it’ll take me to get them back).

    My main Postgres instances are below - there are probably others, but these are the ones I back up religiously and test frequently (see the sketch after the list).

    1. RADIUS database: for wireless auth
    2. paperless-ngx: document management indexing & data
    3. Immich: because Immich has a very specific set of Postgres requirements
    4. Shared: 2 x Sonarr, 3 x Radarr, 1 x Lidarr, a few others
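
    For the curious, “test the backups” just means a dump-and-restore into a throwaway container, roughly like this (container, user and database names are made up for the example):

    # Dump one instance's database in custom format
    docker exec paperless-db pg_dump -U paperless -Fc paperless > paperless.dump

    # Restore into a scratch Postgres and confirm it comes up clean
    docker run -d --name pg-verify -e POSTGRES_PASSWORD=scratch postgres:16
    sleep 10   # give the scratch instance time to initialise
    docker exec -i pg-verify pg_restore -U postgres --create -d postgres < paperless.dump
    docker rm -f pg-verify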

  • If you’re starved for RAM, there’s nothing wrong with a shared instance, as long as you’re aware of the risk of that single instance bringing down multiple services (there’s a sketch of the shared setup after the list below).

    I run a three-node Proxmox cluster where two nodes have 80GB RAM each, so my situation is very different to yours. I have four Postgres instances:

    1. Mission critical: pretty much my RADIUS database, for wireless auth and not much else (yet)
    2. Important: paperless-ngx, and other similarly important services
    3. Immich: because Immich has a very specific set of Postgres requirements
    4. Meh: 2 x Sonarr, 3 x Radarr, 1 x Lidarr (not fussed if this instance goes down and takes all of those services with it)
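
    And “shared” really is just one container with a database per service, e.g. (names and version illustrative):

    # One Postgres container, multiple logical databases
    docker run -d --name shared-pg \
      -e POSTGRES_PASSWORD=changeme \
      -v shared-pgdata:/var/lib/postgresql/data \
      postgres:16
    sleep 10   # wait for first-run initialisation
    docker exec shared-pg createdb -U postgres sonarr
    docker exec shared-pg createdb -U postgres radarr
    docker exec shared-pg createdb -U postgres lidarr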

  • OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.

    But, for my self-hosted needs, Proxmox has been an absolute boon for me (I moved to it from a pure RasPi/Docker setup about a year ago).

    I’m interested in having a play with LXD/Incus, but that’ll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it. The former requires investment, and the latter is pretty much a one-way decision (at least, not an easy one to roll back from).

    Something I need to ponder…