

This is how I learn and half the reason my home lab exists. I need projects to get/stay motivated.
Mastodon: @SeeJayEmm@noc.social
I feel this post so hard. I’m always about 5 seconds from going Office Space on my printer.
I’m fond of Beekeeper Studio and a sqlite DB.
You know b2 has multi region replication now, right?
Then you didn’t understand how the system uses swap.
https://www.wireguard.com/protocol/
Looks like wireguard encrypts traffic to me.
Thanks I may give it a try if I’m feeling daring.
Media should exist in its own dataset with a tuned recordsize of 1M.
Should the vm storage block size also be set to 1MB or just the ZFS record size?
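To make the dataset suggestion concrete, here is a minimal sketch of the tuning being discussed. The pool and dataset names (`tank/media`) are hypothetical; substitute your own. Note that `recordsize` only applies to files written after the change, so existing media would need to be rewritten (e.g. copied) to pick it up.

```shell
# Create a dedicated dataset for media with a 1M recordsize:
zfs create -o recordsize=1M tank/media

# Or tune an existing dataset (affects newly written files only):
zfs set recordsize=1M tank/media

# Verify the setting:
zfs get recordsize tank/media
```

For VM disks on a zvol, the analogous knob is `volblocksize`, but it is fixed at zvol creation time and a large value like 1M mainly suits sequential media workloads, not general-purpose VM storage.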
That cheat sheet is getting bookmarked. Thanks.
I’m referring to this.
… using grub to directly boot from ZFS - such setups are in general not safe to run zpool upgrade on!
$ sudo proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
8357-FBD5 is configured with: grub (versions: 6.5.11-7-pve, 6.5.13-5-pve, 6.8.4-2-pve)
Unless I’m misunderstanding the guidance.
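One way to check this before committing is to see what `zpool upgrade` would actually enable, and whether the pool is pinned to a GRUB-compatible feature set. These commands are a sketch; `rpool` matches the pool shown in the `zpool status` output below, and the `compatibility=grub2` feature file is an assumption about what your OpenZFS version ships in `/usr/share/zfs/compatibility.d`.

```shell
# With no arguments, lists pools that have features disabled (i.e. what an
# upgrade would touch), without changing anything:
zpool upgrade

# Check whether a compatibility feature set is configured on the pool:
zpool get compatibility rpool

# If supported by your OpenZFS build, this would restrict future feature
# activation to ones GRUB can read (does not undo already-enabled features):
# zpool set compatibility=grub2 rpool
```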
Proxmox is using ZFS. Opnsense is using UFS. Regarding the record size, I assume you’re referring to the same thing this comment is?
You can always find some settings in your opnsense vm to migrate log files to tmpfs which places them in memory.
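For reference, a generic tmpfs mount for logs looks like the fragment below. This is a sketch of the general mechanism, not OPNsense-specific guidance; OPNsense exposes its own RAM-disk option in the GUI, so a manual fstab edit like this may not be needed there, and the size/mode values are placeholders.

```shell
# /etc/fstab fragment: keep /var/log in memory (lost on reboot).
tmpfs  /var/log  tmpfs  defaults,size=128m,mode=0755  0  0
```

The trade-off is the usual one: less flash wear from log writes, at the cost of losing logs on every reboot or power cut.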
I’ll look into this.
I’ve done a bit of research on that and I believe upgrading the zpool would make my system unbootable.
I didn’t pass any phy disks through, if that’s what you mean. I’m using that system for more than OMV. I created disks for the VM like I would any other VM.
This was really interesting, thanks for the info.
Thanks for all the info. I’ll keep this in mind if I replace the drive. I am using refurb enterprise HDDs in my main server. Didn’t think I’d need to go enterprise grade for this box but you make a lot of sense.
I’ve been happily running Open Media Vault in a Proxmox VM for some time now.
I may end up having to go that route. I’m no expert but aren’t you supposed to use different parameters when using SSDs on ZFS vs an HDD?
I thought cheap SSDs and ZFS didn’t play well together?
I’m starting to lean towards this being an I/O issue but I haven’t figured out what or why yet. I don’t often make changes to this environment since it’s running my Opnsense router.
root@proxmox-02:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:56:10 with 0 errors on Sun Apr 28 17:24:59 2024
config:

        NAME                                    STATE     READ WRITE CKSUM
        rpool                                   ONLINE       0     0     0
          ata-ST500LM021-1KJ152_W62HRJ1A-part3  ONLINE       0     0     0

errors: No known data errors
The constant argument in this space that you must know the arcane workings of everything you use, is exhausting.