• 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • Ah ok. I’ve run both OPNsense and pfSense, virtualized in Proxmox and on bare metal, at two workplaces now and at home, and I vastly prefer bare metal. Managing the firewall in a VM is a pain. NIC pass-through works fine, but it complicates configuration and troubleshooting: if you’re not getting the speeds you want, there are now two systems to troubleshoot instead of one. You also have to keep the hypervisor up and running in addition to the firewall, which makes updates and other maintenance more difficult. Hypervisors do provide snapshots, but OPNsense is easy enough to back up that snapshots aren’t really a compelling argument.

    My two cents: get the right equipment for the firewall and run it bare metal. Having more CPU is great if you want to do intrusion detection, DNS filtering, VPNs, etc. on the firewall. Don’t feel like you need to virtualize everything.
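    To back up the “easy to back up” point: OPNsense keeps its entire configuration in a single XML file, so a backup can be one command. A sketch, assuming SSH access is enabled on the firewall (the address and destination filename here are examples, not from the post):

```
# Copy the full OPNsense configuration off the box
# (host and output path are examples; substitute your own)
scp root@192.168.1.1:/conf/config.xml ./opnsense-config-$(date +%F).xml
```

    Restoring is the reverse: import that XML through the web UI (or drop it back in place) and the firewall is back to its old state.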

  • Proxmox has a built-in console in its web interface, so you can access the desktop of a virtual machine that way. It’s a little clunky, but it works fine for quick configuration. Alternatively, you could remote desktop into the virtual machine.

    Quick Sync is a little more tricky. GPU pass-through is a pain, and I’m not sure about it off the top of my head. You can Google “proxmox quicksync passthrough” and see if any solutions work for you. There’s a chance that all you need to do is set the processor type correctly in the virtual machine settings, but I’m not sure.
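    To expand on both ideas: the processor-type fix means setting the VM’s CPU type to host so the guest sees the real CPU’s feature flags, and full iGPU pass-through means handing the PCI device to the VM. A rough sketch with Proxmox’s qm CLI (VM ID 100 and PCI address 00:02.0 are examples; check yours with lspci):

```
# Expose the host CPU's real feature flags to the guest
qm set 100 --cpu host

# Or pass the Intel iGPU through entirely (requires IOMMU/VT-d enabled
# in BIOS and kernel; the guest then drives the GPU directly)
qm set 100 --hostpci0 0000:00:02.0
```

    Pass-through is all-or-nothing per device: once the iGPU belongs to the VM, the host can’t use it anymore.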



  • What no one else has touched on is that the protocol used for network drives interferes with databases. Protocols like SMB lock files during reads and writes so other clients on the network can’t corrupt the file by interacting with it at the same time.

    It’s bad practice to put the Docker files on a NAS because it’s slower, and the protocol used can and will lead to Docker issues.

    That’s not to say that no files can be remote; Jellyfin’s media library obviously supports network drives. But the Docker volumes and other config files need to be on the local machine.
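    To make the locking point concrete, here’s a sketch you can run locally with flock(1), which takes the same kind of advisory exclusive lock a database engine relies on: while one process holds the lock, a second non-blocking attempt fails. (The file name is made up for the demo; over a network filesystem this behavior is exactly what gets flaky.)

```shell
#!/bin/sh
# Stand-in for a database file a container would keep open
lockfile=/tmp/demo-db.lock

# First "client": open the file on fd 9 and take an exclusive lock
exec 9>"$lockfile"
flock -n 9 && first=held

# Second "client" (a separate process): its non-blocking attempt fails
second=$(flock -n "$lockfile" -c 'echo ok' 2>/dev/null || echo busy)

echo "first=$first second=$second"   # prints: first=held second=busy
```

    A database only stays consistent if that “busy” answer is reliable; network filesystems don’t always deliver it, which is where the corruption stories come from.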

    Data centers get around this by:

    • running actual separate databases with load balancing
    • using clustered storage like Ceph, with VMs that can be moved across hypervisors
    • a lot more stuff that’s very complicated

    My advice is to buy a new SSD and clone the existing one over. They’re dirt cheap and you’re going to save yourself a lot of headache.
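    For what it’s worth, the clone itself is straightforward. Here’s a sketch using file-backed images so it’s safe to run; on real hardware you’d point if=/of= at the actual devices (e.g. /dev/sda → /dev/sdb, which are only examples; double-check with lsblk, because dd will happily overwrite the wrong disk):

```shell
#!/bin/sh
# Safe demo: image files standing in for the old and new SSD
src=/tmp/old-ssd.img
dst=/tmp/new-ssd.img

# Fake some disk contents (4 MiB of random data)
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null

# The actual clone: a byte-for-byte copy, flushed to disk at the end
dd if="$src" of="$dst" bs=1M conv=fsync 2>/dev/null

# Verify the copy matches the original
cmp -s "$src" "$dst" && echo "clone verified"   # prints: clone verified
```

    Clone with the new drive bigger than or equal to the old one, then grow the partition afterward if you have space left over.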



  • Security through obscurity is never a good idea. Best practices for exposing SSH (iirc):

    • disable root login (or at least over SSH)
    • disable password login over SSH, use key pairs instead
    • use fail2ban to prevent brute-forcing
    • install security updates frequently

    All of those are pretty easy to do, and after that you’re in a really good place.
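    Concretely, the first two items are a couple of lines in sshd_config. A sketch (these are standard OpenSSH directives; run sshd -t to check syntax, then restart the service):

```text
# /etc/ssh/sshd_config
# No root logins over SSH at all
PermitRootLogin no
# Keys only; removes the password-guessing surface fail2ban watches for
PasswordAuthentication no
PubkeyAuthentication yes
```

    Make sure your key works before you close your current session, or you can lock yourself out.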

    I don’t see a problem with SSH tunneling to access services, as long as the SSH server is secured correctly.
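    As an example of that kind of tunnel (host name is made up; 8096 is Jellyfin’s default HTTP port, swap in whatever service you run):

```
# Forward local port 8096 to the service on the server; -N = no remote shell
ssh -N -L 8096:localhost:8096 user@myserver.example
```

    While that runs, the remote service is reachable at http://localhost:8096 on your machine, and nothing extra is exposed to the internet.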


  • There have been many posts about people running TrueNAS as a VM in Proxmox. There are a few things to consider that I’m not well versed in, so I suggest doing some more in-depth research, but it’s definitely possible (I did it myself up until the end of last year).

    One of the easiest ways to get the hard drives into TrueNAS is to connect them to a RAID card flashed to IT mode, which lets the OS control the drives directly (don’t build a RAID array on the card; TrueNAS wants the raw disks), and then pass the card through to the TrueNAS VM.
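    The pass-through step itself is one command once the card is in IT mode. A sketch with Proxmox’s qm CLI (the VM ID and PCI address are examples; find the HBA’s address with lspci):

```
# Hand the whole HBA, and therefore all its disks, to the TrueNAS VM.
# Requires IOMMU/VT-d enabled; the host must not keep a driver bound to it.
qm set 101 --hostpci0 0000:01:00.0
```

    Because TrueNAS then talks to the disks directly, SMART data and ZFS’s error handling work the way they would on bare metal.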