

This is encouraging. Thank you.
I use nginx for static websites and TLS passthrough servers.
I use Traefik as a reverse proxy for sites with many services and SSO.
Nginx is definitely easier to configure for simple things, but I prefer Traefik for more complex setups.
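For the simple static-site case, the whole nginx config can be a single server block. A minimal sketch (the hostname and certificate paths are placeholders, not my actual setup):

```nginx
# /etc/nginx/conf.d/static-site.conf -- hypothetical example
server {
    listen 443 ssl;
    server_name example.com;

    # assumes certs issued via Let's Encrypt / certbot
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root  /var/www/example.com;
    index index.html;
}
```

That simplicity is basically why I keep nginx around for the static sites even though Traefik handles the bigger stacks.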
Compressed air can spin the fans fast enough to cause damage unfortunately.
Did you use compressed air to clean out the fans?
It’s possible to fry circuitry if you artificially spin the fans too fast: the motor then acts as a generator, producing a voltage higher than the fan and its attached components are rated for.
Probably rare to cause damage with modern computers but an old PC might be more susceptible to this type of damage.
Am I understanding correctly that if users had 2FA, an attacker exploiting this vulnerability would be prevented from gaining access?
I have a workstation I use for video editing/vfx as well as gaming. Because of my work, I’m fortunate to have the latest high end GPUs and a 160" projector screen. I also have a few TVs in various rooms around the house.
Traditionally, if I want to watch something or play a video game, I have to go to the room with the Jellyfin/Plex/Roku box to watch, and I’m limited to the work/gaming rig to play games. I can’t run renders and game at the same time, and buying an entirely new PC just so I can do both is a massive waste of money. If I want to do a test screening of a video I’m working on to see how it displays on various devices, I have to transfer the file around to each of them. This is limiting and inefficient for me.
I want to be able to go to any screen in my house: my living room TV, my large projector in my studio room, my tablet, or even my phone and switch between:
I’m a massive Nextcloud fan and have had a server up and running for many years now.
But I understand all of the downvoted commenters. It is clunky and buggy as hell at times. Maybe it’s less noticeable when you’re running a single-user instance, but once you have non-tech-literate users on it you begin to notice how inferior it is to the big players like Google Drive in some respects.
That said, I personally have a decent tolerance for fiddling and slight frustration as a trade-off for avoiding privacy-disrespecting and arguably evil corporations.
I would recommend that everybody looking for a Google Drive, Dropbox, or OneDrive alternative at least give Nextcloud a go.
Thanks so much for the detailed reply. I have about 20TB of data on the disks otherwise I would take your advice to set up a different scheme. Luckily, as it’s a backup server I don’t need maximum speed. I set it up with mergerfs and snapraid because I’m essentially recycling old drives into this machine and that setup works pretty well for my situation.
The Proxmox host is the default (ext4/LVM, I believe). The drives are also all ext4. I very recently did a data-drive upgrade and, besides some timestamp discrepancies likely due to rsync, the semi-virtualized SCSI passthrough wasn’t an issue. I replaced the old drive with a larger one, hooked the old one up to a USB dongle, passed it through to OMV, and was able to transfer everything and get my new data drive hooked back into the mergerfs pool and SnapRAID. I’ll do a test and see if I can still access the files directly on the Proxmox host, just for educational purposes.
I’ll try to re-mount the NFS share and see where that gets me. I’m also considering switching to a CIFS/SMB share, as another commenter suggested, unless that is susceptible to the same ESTALE issue. I won’t be back at that location for about a week, so I might not have an update for a little while.
Third time posting this reply due to the Lemmy server upgrade.
Proxmox on bare metal. A VM with OMV and a VM with Proxmox Backup Server. Multiple drives are passed through to OMV, and mergerfs pools them together. That pool has two main shared folders: one for a remote Duplicati server that connects via SFTP, the other an NFS share for PBS. The PBS VM uses the NFS shared folder as storage. Everything worked until recently, when I started getting ESTALE errors. Duplicati still works fine.
Looks like my reply got purged in the server update.
Running Proxmox bare metal. Two VMs: Proxmox Backup Server and OMV. Multiple HDDs passed through directly as SCSI to OMV. In OMV they’re combined into a mergerfs pool. Two shared folders on the pool: one dedicated to Proxmox backups and the other for data backups. The Proxmox backup shared folder is an NFS share, and the other shared folder is accessed by a remote Duplicati server via SSH (SFTP?). Within the Proxmox Backup Server VM, the aforementioned NFS share is set up as a storage location.
I have no problems with the Duplicati backups at all. The Proxmox Backup Server was operating fine as well initially, but began throwing the ESTALE error after about a month or two.
Is there a way to fix the estale error and also to prevent it from reoccurring?
Underlying system is running Proxmox. From there I have the relevant two VMs: OMV and Proxmox Backup Server. The hard drives are passed into OMV as SCSI drives; I had to add them from the shell, as the GUI doesn’t give the option. Within OMV I have the drives in a mergerfs pool, with a shared folder exported via NFS that is then selected as the storage from within the Proxmox Backup Server VM. OMV has another shared folder that is used by a remote Duplicati server via SSH (SFTP?), but otherwise OMV has no other shared folders or services. Duplicati/OMV show no errors. PBS/OMV worked for a couple of months before the aforementioned ESTALE error cropped up.
Also possibly relevant: no other processes or services are set up to access the shared folder used by PBS.
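In case it helps anyone searching later: ESTALE over NFS on top of a mergerfs pool is a known combination, because mergerfs inode numbers aren’t stable by default and NFS file handles depend on them. What the mergerfs docs suggest for NFS exports is roughly the following; the paths, subnet, and fsid below are examples, not my exact config:

```
# /etc/fstab entry for the pool (OMV side): noforget plus a stable
# inode calculation keep NFS file handles valid across remounts
/srv/dev-disk-*  /srv/mergerfs/pool  fuse.mergerfs  noforget,inodecalc=path-hash  0 0

# /etc/exports: a fixed fsid pins the export's identity so client
# handles survive export-table reloads
/srv/mergerfs/pool 192.168.1.0/24(rw,fsid=100,no_subtree_check)
```

After changing these, the client (the PBS VM here) still needs a clean unmount/remount once to drop the already-stale handles.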
I’ve tried Nebula before but couldn’t get it running properly on all devices. How is Tailscale in terms of compatibility, and can you also use WireGuard simultaneously? Mesh networks are great for connecting my own devices and servers, but I still need a WireGuard interface for certain servers to provide public access through a public router. I also ran into a major issue setting up Nebula on my laptop, where it couldn’t be used without disabling my VPN. Is any of that a problem with Tailscale? Also, is Tailscale’s coordination server self-hostable, or do you have to use theirs? That seems like a dealbreaker if you’re forced to use a third-party coordinator.
I would suggest trying WireGuard first, as it’s much less complex to set up. Once you have a handle on that, you might consider moving to a mesh network. I personally would love to use a mesh network, but I haven’t been able to get one configured correctly the few times I’ve tried.
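To give a sense of how little there is to it, a bare-bones point-to-point WireGuard setup is one short file per side. The keys, hostname, and addresses below are placeholders:

```
# /etc/wireguard/wg0.conf on the client -- placeholder values
[Interface]
PrivateKey = <client-private-key>
Address    = 10.8.0.2/24

[Peer]
PublicKey  = <server-public-key>
Endpoint   = vpn.example.com:51820
AllowedIPs = 10.8.0.0/24        # only tunnel traffic for the VPN subnet
PersistentKeepalive = 25        # keeps NAT mappings open from behind a router
```

Generate keys with `wg genkey | tee privatekey | wg pubkey > publickey`, then bring it up with `wg-quick up wg0`. The server side is the mirror image with its own keys and no Endpoint line needed if only the client initiates.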
I ended up going with migadu. Seems great so far. Already up and running with 3 domains and dozens of aliases.
Forwarded mail, but it may become two-way in the future, so it would probably be smart to just go that route from the beginning.
Problem solved. The firewall was attempting to pass traffic through the default gateway. The fix was to create a firewall rule allowing the desired traffic and, under the rule’s advanced settings, select the WireGuard gateway instead of the default.
My network is currently set up with WireGuard. I have a VPS operating as the hub in a hub-and-spoke configuration. This has worked great, with the exception that all traffic passes through the VPS. The benefit of a mesh network is that clients connect to each other directly, and data does not have to flow through an intermediary VPS.
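Concretely, in hub-and-spoke the hub’s config is what does the relaying: each spoke lists only the hub as a peer, and the hub forwards between them. A sketch of the hub side on a Linux VPS, with placeholder keys and an example subnet:

```
# Hub (VPS) /etc/wireguard/wg0.conf -- placeholder keys and addresses
[Interface]
Address    = 10.8.0.1/24
PrivateKey = <hub-private-key>
ListenPort = 51820
# relay packets between spokes
PostUp   = sysctl -w net.ipv4.ip_forward=1
PostUp   = iptables -A FORWARD -i wg0 -o wg0 -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -o wg0 -j ACCEPT

[Peer]  # spoke A
PublicKey  = <spoke-a-public-key>
AllowedIPs = 10.8.0.2/32

[Peer]  # spoke B
PublicKey  = <spoke-b-public-key>
AllowedIPs = 10.8.0.3/32
```

A mesh removes that relay hop by giving every node a peer entry (and a direct tunnel) to every other node, which is exactly what tools like Tailscale and Nebula automate.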
Ideally I would be able to split-tunnel around the VPN, but I don’t have the option on macOS.
I tried to set up a Nebula network, but it seems to have trouble when your hosts are behind a VPN service. The VPN must be blocking the port or protocol the lighthouse connects with, and I can’t figure out a way to bypass the VPN (at least on macOS, where split tunneling isn’t supported). I’m assuming you’re familiar with mesh networks: do you have any good YouTube videos or resources you would recommend? The nice thing about a plain VPN is that it’s crazy simple to set up and seems to work with all types of system configurations. Nebula was pretty simple, but it seems like a pain to troubleshoot so far.
Backblaze deleted my project drive for a multimillion-dollar project I was archiving through their desktop sync. It’s largely my fault for not noticing the drive had failed, given their upfront policy of deleting your backups after a month of inactivity. Luckily it didn’t have too big an impact, because the most important files were backed up elsewhere. I do wish their desktop app had better warnings about imminent deletions, though.