

I believe that functionality exists already if you’re using Plex, via RSS sync.
Really depends on what you want out of the system, what you can spend, and how much time you want to put into it.
My old Z390 ITX system has a PCIe x16 to 4x M.2 card; an M.2 to 5-port SATA adaptor on top of the board’s built-in SATA ports has given it plenty of space.
Considering I can grab M.2 to 6-port SATA adaptors and fill the remaining slots, that’s a decent chunk of drives from a single PCIe x16 slot (5 + 3×6 = 23 ports before counting the board’s own).
Software is another kettle of fish and an easy place to sink time; I’d rather not push my own setup too hard, as there are so many ways to skin that cat.
Works great and has been for some time on my P7P.
Ensure you’ve allowed background usage and turned off the ‘manage app if unused’ setting.
Keep the app’s notification on and allow notifications.
It’s not so much how old, but how long the OEM (Google) keeps bringing out updates for that platform (from memory).
I’ve not tested the linked method, but yeah, it does seem possible that way.
My lone VM doesn’t need a connection to those drives, so I’ve not had a reason to.
You could probably run OMV in an LXC and skip the overhead of a VM entirely. LXCs are containers; you can just edit the container config files on the Proxmox host and pass drives straight through.
Your containers will need to be privileged; you can also clone a container and make it privileged if you’ve already got something set up as unprivileged!
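For reference, one way to end up with a privileged copy is a backup / restore cycle; a rough sketch below, where the container IDs, storage and archive path are hypothetical, not from my setup:

```
# back up the unprivileged container (ID 101 is a placeholder)
vzdump 101 --storage local --mode stop

# restore the archive as a new container, this time privileged (--unprivileged 0)
pct restore 102 /var/lib/vz/dump/vzdump-lxc-101-*.tar.zst --unprivileged 0
```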
Yeah there is a workaround for using bind-mounts in Proxmox VMs: https://gist.github.com/Drallas/7e4a6f6f36610eeb0bbb5d011c8ca0be
If you wanted, and your drives are mounted to the Proxmox host (and not inside a VM), try an LXC for the services you’re running; if you do require a VM, the workaround above would be the way to go, after backing up your data.
I’ve got my drives mounted in a container as shown here:
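(A minimal sketch of that kind of config; the container ID and paths below are hypothetical, not my actual layout.)

```
# /etc/pve/lxc/101.conf (placeholder ID and paths)
arch: amd64
hostname: omv
memory: 2048
rootfs: local-lvm:vm-101-disk-0,size=8G
# bind mounts: host path first, mp= is the path inside the container
mp0: /mnt/tank/media,mp=/mnt/media
mp1: /mnt/tank/backups,mp=/mnt/backups
```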
Basicboi config, but it’s quick and gets the job done.
I’d originally gone down the same route as you with VMs and shares, but it was all too much after a while.
I’m almost rid of all my VMs; Home Assistant is the last package I’ve yet to migrate. I’ve migrated Frigate to a Docker container under NixOS, and my Tailscale exit node runs under NixOS too, while the vast majority of other packages are already in LXCs.
Ahh, the shouting from the rooftops wasn’t aimed at you, but at the general group of people in similar threads. Lots of people shill Tailscale because it’s a great service for nothing, but there needs to be a level of caution with it too.
I’m quite new to the self-hosting game myself, but services like Tailscale, which have so much insight into and reach within our networks, are the kind of thing that should, in the end, be self-hosted.
If you’re using SMB locally between VMs, maybe try Proxmox; https://clan.lol/ is something I’m looking into to replace Proxmox down the line. I currently share bind-mounts between multiple LXCs from the host Proxmox OS; configuration is pretty easy (see the sketch below), and there are lots of tutorials online for getting started.
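What the sharing looks like, roughly; the IDs and paths here are made up, but the idea is just the same host directory bind-mounted into each container’s config:

```
# /etc/pve/lxc/101.conf (hypothetical ID)
mp0: /mnt/tank/shared,mp=/mnt/shared

# /etc/pve/lxc/102.conf (hypothetical ID)
mp0: /mnt/tank/shared,mp=/mnt/shared
```

Both containers then see the same files at /mnt/shared, with no SMB in between.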
I still use it; the service is very handy (and passes the wife test for ease of use).
Probably some tinfoil hat level of paranoia, but it’s one of those situations where you aren’t in control of a major component of your network.
Tailscale is great, but it’s not something that should be shouted from the rooftops.
I use Tailscale with nginx / Pi-hole for my home services, BUT there will come a point where the “free” tier of their service gets gutted / monetised, and your once so free, private service won’t be so free.
Tailscale is SaaS (software as a service); once their venture capital funds look like they’re running dry, the money will come from your data, from limiting the service with a push to subscription models, or some combination.
Nebula is one such alternative; Headscale is another. WireGuard (which Tailscale is built on) is yet another.
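If you go the plain WireGuard route, a minimal point-to-point config is enough for most home setups; everything below (keys, addresses, port) is a placeholder, not a real config:

```
# /etc/wireguard/wg0.conf on the home server (placeholders throughout)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# the laptop / phone that dials in
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with wg-quick up wg0, mirror the config on the client (with the server as the peer and an Endpoint line), and you’ve got your own tunnel with no third party in the loop.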
I just installed XPipe and found I was habitually double-clicking; once I had a good 3+ terminal sessions running, I figured I’d best find out why.
Gallagher were great at that: a rubbish solution for “teaching” staff about phishing, which would infuriate everyone caught in the net. The tests would come from internal email addresses too, which misses the point; if one person’s email / credentials are compromised, the company has bigger fish to fry.
I mean, it’s not the worst. Is it still HTTPS, or are they serving plain ol’ HTTP? My internal services (at home) are mostly HTTPS, but the certs are self-signed, so browsers will flag them as “insecure”.
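For anyone wanting to do the same, a self-signed cert is a one-liner with openssl (needs a reasonably recent version for -addext); the hostname here is made up:

```
# self-signed cert valid for a year; nas.home.lan is a placeholder hostname
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout nas.key -out nas.crt \
  -subj "/CN=nas.home.lan" \
  -addext "subjectAltName=DNS:nas.home.lan"
```

Browsers will still call it insecure unless you import the cert as trusted, but the traffic itself is encrypted.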
I’ve recently (in the last week) added my contacts and started managing my calendar via Nextcloud locally. DAVx⁵ syncs to my Android devices, and Nextcloud is synced to my HAOS VM to help me remember bin nights / other appointments. For someone with ASD + ADHD it’s a godsend.
But man, I’ll be able to work through all those TODO items that have been accumulating over the last 12 months and fix all those issues while rebuilding my RAID.
I mean, that’s only if my Git repos aren’t hijacked during the ransomware attack.
And I mean, I’ll probably just push the same config to my server and send it on its merry way again.
I’ve got Frigate running on a HAOS VM as an add-on: 2 cameras, both running detection, with only 2 cores dedicated to Frigate.
Using Proxmox on an old Intel 5960X with very minimal usage; I’m sure you’d see reasonable results on an Orange Pi. I guess make a backup of your SD / NVMe before wiping and testing.
Been meaning to tweak detection as it’s a bit slow right now; I want some automations built around person / presence detection utilising zone detection, but it’s too slow as it stands (see the sketch below for the kind of zone config I mean).
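For context, the zone side of Frigate config looks roughly like this; the camera name, stream URL and coordinates are made up for illustration, not my actual setup:

```yaml
# fragment of a Frigate config.yml (all values are placeholders)
cameras:
  driveway:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.0.2.10:554/stream  # placeholder URL
          roles:
            - detect
    detect:
      width: 1280
      height: 720
      fps: 5
    zones:
      front_path:
        # x,y pixel pairs outlining the zone within the detect resolution
        coordinates: 0,450,640,450,640,720,0,720
        objects:
          - person
```

Automations then key off the zone (e.g. person entered front_path) rather than raw motion anywhere in frame.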
Previous models are Android tablets, so I’m assuming the same here.