Lead admin for https://lemmy.tf, tech enthusiast

  • 0 Posts
  • 19 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • I’ve got a baremetal server with OVH running VMware, so it’s just a VM that I manage. I’m paying more for it than I’d like, but it’s running far more than just Lemmy. If I wind up ditching it in the future, it’s just a quick vMotion off to another machine + DNS updates.

    Here’s a current output of my storage about a week into hosting the instance. It’s growing slower than I expected, and I do have plans to move volumes/pictrs up to an s3 bucket whenever I start running low on local storage.

    [jon@lemmy lemmy]# du -sh volumes/*
    2.5G    volumes/pictrs
    2.2G    volumes/postgres
    
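    When I do move pictrs up to S3, the initial copy would look something like this (a sketch; the bucket name is a placeholder, and pict-rs itself still needs to be reconfigured for object storage afterwards — this only moves the files):

    ```shell
    # Stop the stack so the pictrs files aren't changing mid-copy
    docker compose stop

    # One-time sync of the local volume to the bucket (placeholder bucket name)
    aws s3 sync volumes/pictrs s3://example-lemmy-media/pictrs

    docker compose up -d
    ```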

    I would recommend locking down SSH on your Lemmy server; I have mine restricted to allow logins from the VPN only. Otherwise you'll get probed 24/7 with a public server.
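    Something like this is what I mean (a sketch; the VPN subnet CIDR is a placeholder, and it assumes firewalld, which ships with RHEL-family distros):

    ```shell
    # Drop the open SSH rule, then only accept SSH from the VPN subnet
    # (10.147.0.0/16 is a placeholder CIDR; use your VPN's range)
    firewall-cmd --permanent --remove-service=ssh
    firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.147.0.0/16" service name="ssh" accept'
    firewall-cmd --reload

    # And in /etc/ssh/sshd_config, as a second layer:
    #   PasswordAuthentication no
    #   PermitRootLogin no
    ```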


  • Do you need access control? If not, a simple Apache/Nginx directory listing is nice and easy: just drop your files in your webroot and you’re set. h5ai is a nice addition if you go that route.

    If you need access control (or at least some sort of obfuscated URLs), Nextcloud is a good option. Pretty easy to get up and running, and there’s a ton of plugins available.
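    If you go the directory-listing route, the whole Nginx config is a few lines (a sketch; the hostname and paths are placeholders):

    ```nginx
    # Minimal directory-listing vhost; adjust server_name and root
    server {
        listen 80;
        server_name files.example.com;
        root /var/www/files;

        location / {
            autoindex on;             # enable the directory listing
            autoindex_exact_size off; # human-readable file sizes
        }
    }
    ```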


  • Got a Veeam community instance running on each of my VMware nodes, backing up 9-10 VMs each.

    Using Cloudberry for my desktop, laptop and a couple Windows VMs.

    Borg for non-VMware Linux servers/VMs, including my WSL instances, game/AI baremetal rig, and some Proxmox VMs I’ve got hosted with a friend.

    Each backup agent dumps its backups into a share on my NAS, which then has a cron task to do weekly uploads to GDrive. I also manually do a monthly copy to an HDD and store it off-site with a friend.
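    For the GDrive leg, a weekly rclone job does the trick. This is a sketch (it assumes a remote named "gdrive" has already been set up with `rclone config`, and the paths are placeholders):

    ```shell
    # /etc/cron.d/nas-offsite: every Sunday at 03:00,
    # push the NAS backup share up to Google Drive
    0 3 * * 0  root  rclone copy /mnt/nas/backups gdrive:backups --transfers 4 --log-file /var/log/rclone-weekly.log
    ```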



  • From what I’ve seen and read, server-to-server traffic is less taxing on instances than client-to-server. So even if your instance is JUST you, it would be your instance talking to everything else, which takes some load off the rest of the federation. But it would take a lot of users self-hosting solo instances for this to help in any noticeable way, I’d think.

    There is certainly no downside to running a solo instance, if you’re even slightly interested I would say go for it!




  • Yes, I’ve got separate subnets & VLANs for a few things: my PCs/phones/tablets/etc., homelab, IoT devices (e.g. loads of Govee bulbs/ropes, gaming consoles, oven, etc.), Guest (isolated from everything else internal) and one for my roommate. I’m on a UniFi Dream Machine Pro, so setting up traffic rules to allow certain traffic from the PC VLAN to homelab (and the other way) was pretty straightforward.

    As for the VPN, yes a full tunnel would force all traffic over the VPN, but for all but my *arr stuff that’s overkill. I just join all my VMs to Zerotier and force traffic from the public LB in via their VPN IP, but the VMs can still pull yum updates and anything else they want over my WAN link.
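    Joining a VM is just a couple of commands (the network ID below is a placeholder; yours comes from the ZeroTier console):

    ```shell
    # Join the VM to the ZeroTier network (placeholder 16-char network ID)
    zerotier-cli join 1234567890abcdef

    # Verify the network came up and got a managed IP
    zerotier-cli listnetworks
    ```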



  • I run all my lab servers/services/etc. in their own /16 on my home net. Nothing is publicly routed in over my WAN IP; if I want to expose a service, it goes through Nginx Proxy Manager to the local service via a ZeroTier tunnel.

    I would strongly encourage you to not expose any of the *arr services (particularly your download node) to your WAN IP. PIA’s desktop app does a pretty good job of forcing a full tunnel with a VPN kill switch, so you never have to worry about your ISP catching onto what you’re doing.


  • jon@lemmy.tf to Selfhosted@lemmy.world: What are YOU self-hosting?

    All the things! I’ve got a hybrid VMware cluster (two nodes at home and one in a DC) with a bunch of VMs for stuff like Plex, Plesk, GitLab, Lemmy, Stable Diffusion, etc. Also running a 5-node Rancher k8s cluster.

    Some of my public services do actually run from home but are routed through ZeroTier to my Nginx Proxy Manager appliance.

    Pretty much everything is running RHEL8 or CoreOS after a recent migration. Veeam for backups (two community instances since I’m too cheap to pay for licensing for personal stuff).

    Edit:

    Unraid (114TB usable, 2TB NVMe cache)
    • Nextcloud
    • *arr UI’s
    • Pi-hole
    • YT-DL
    • Satisfactory server
    • FiveM server
    Game/AI Rig (i7-13900k, 128gb ddr5, 6700xt, 12tb ssd/nvme)
    • Plex
    • Minecraft server (big ass custom pack)
    • Stable Diffusion
    Home servers (PowerEdge R410, old but powerful)
    • *arr downloader (routed through PIA with kill switch)
    • Ansible Tower
    • Splunk
    • Domain controller
    • Veeam backups
    • Handful of Red Hat dev/test VMs
    • 1x Rancher controller
    • 2x Rancher workers
    Mac Minis
    • 1x Rancher controller
    • 2x Rancher workers
    Desktop (Ryzen 7, 3090ti, 128gb ddr4, too many SSDs/NVMes)
    • Another Plex server, same content
    • Stable Diffusion
    • oobabooga text-generation-webui for LLMs

    Not sure this one counts but…

    OVH Game server (Ryzen 7, 64gb ddr4, 2tb nvme) [not self hosted]
    • Lemmy.tf
    • Plesk (web/DNS/DBs mostly)
    • Teleport (SSH/RDP tunnel)
    • Nginx Proxy Manager
    • Gitlab