

qsnc is a gentleperson and a scholar
I think you can keep doing the SMB shares and use an overlay filesystem on top of those to basically stack them on top of each other, so that server1/dir1/file1.txt
and server2/dir1/file2.txt
and server3/dir1/file3.txt
all show up in the same folder. I’m not sure how gracefully that copes when one of the servers just isn’t there, though.
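Something like this, assuming the three shares are already mounted under /mnt and that overlayfs is actually willing to take CIFS mounts as lower layers (which I haven’t verified); it’s just the mount CLI wrapped in Python, and every path is a placeholder:

```python
# Sketch only: merge three already-mounted SMB shares into one view
# with overlayfs. All paths are placeholders.
import subprocess

lower = ":".join(["/mnt/server1", "/mnt/server2", "/mnt/server3"])

# With only lowerdirs (no upperdir/workdir) the merged view is read-only,
# which is probably what you want when the backing stores are remote shares.
subprocess.run(
    ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lower}", "/mnt/merged"],
    check=True,
)
```

The first directory in the lowerdir list wins if the same path exists on more than one server.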
Other than that you probably need some kind of fancy FUSE application to fake a filesystem that works the way you want. Maybe some kind of FUSE-over-Git-Annex system exists that could do it already?
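If you did end up writing the FUSE thing yourself, the read-only core is honestly pretty small. Here’s a rough sketch using the fusepy library that just unions a few directories into one tree; the class name, paths, and read-only behavior are all made up for illustration, and there’s no caching or conflict handling:

```python
# Rough sketch with the fusepy library: a read-only FUSE filesystem that
# presents several directories (e.g. mounted SMB shares) as one tree.
# Roots and mountpoint are placeholders.
import errno
import os

from fuse import FUSE, FuseOSError, Operations


class MergedFS(Operations):
    def __init__(self, roots):
        self.roots = roots

    def _find(self, path):
        # Return the first root that actually contains this path.
        for root in self.roots:
            full = os.path.join(root, path.lstrip("/"))
            if os.path.lexists(full):
                return full
        raise FuseOSError(errno.ENOENT)

    def getattr(self, path, fh=None):
        st = os.lstat(self._find(path))
        return {key: getattr(st, key) for key in (
            "st_mode", "st_nlink", "st_size", "st_uid", "st_gid",
            "st_atime", "st_mtime", "st_ctime")}

    def readdir(self, path, fh):
        # Union the directory listings from every root that has this directory.
        names = {".", ".."}
        for root in self.roots:
            full = os.path.join(root, path.lstrip("/"))
            if os.path.isdir(full):
                names.update(os.listdir(full))
        return sorted(names)

    def open(self, path, flags):
        return os.open(self._find(path), flags)

    def read(self, path, size, offset, fh):
        return os.pread(fh, size, offset)

    def release(self, path, fh):
        return os.close(fh)


if __name__ == "__main__":
    FUSE(MergedFS(["/mnt/server1", "/mnt/server2", "/mnt/server3"]),
         "/mnt/merged", foreground=True, ro=True, nothreads=True)
```

And the “server just isn’t there” problem bites here too: any stat against a hung CIFS mount will stall the whole filesystem.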
I wouldn’t really recommend IPFS for this. It’s tough to get it to actually fetch the blocks promptly for files unless you manually convince it to connect to the machine that has them. It doesn’t really solve the shared-drive problem as far as I know (you’d have several IPNS paths to juggle for the different libraries, and you’d need a way to update them when new files were added). Also it won’t do any encryption or privacy: anyone who knows the IPFS hash of a file you’re storing, i.e. anyone who has seen the same file, can ask your node for it, and it will happily distribute it to them (whether you have a license to do so or not).
You might want to try OpenStack. It’s designed for running a multi-tenant cloud.
Seems to not be paying off though; having whole communities and instances close is pretty inconvenient.
Why does Lemmy even ship its own image host? There are plenty of places to upload images you want to post that are already good at hosting images, arguably better than pictrs is for some applications. Running your own opens up whole categories of new problems like this that are inessential to running a federated link aggregator. People selfhost Lemmy and turn around and dump the images for “their” image host in S3 anyway.
We should all get out of the image hosting business unless we really want to be there.
That could work fine, probably? Or you could use it on the same machine as other stuff.
ZFS RAID-Z is pretty good for this I think. You hook up the drives from one “pool” to a new machine, and ZFS can detect them, see that they constitute a pool, and import it.
I think it still stores some internal references to which drives are in the pool, but if you add the drives by their /dev/disk/by-id paths when making the pool it ought to be using stable IDs, at least across Linux machines.
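For concreteness, something like this; the pool name and device IDs are placeholders, and it’s just the zpool CLI wrapped in Python:

```python
# Sketch: create a raidz pool from stable /dev/disk/by-id paths, then import
# it later on a different machine. Pool name and device IDs are placeholders.
import subprocess

DISKS = [
    "/dev/disk/by-id/ata-EXAMPLE_DISK_1",
    "/dev/disk/by-id/ata-EXAMPLE_DISK_2",
    "/dev/disk/by-id/ata-EXAMPLE_DISK_3",
]

# Creating the pool with by-id paths keeps the vdev labels independent of
# whatever sda/sdb ordering the machine happens to pick on boot.
subprocess.run(["zpool", "create", "tank", "raidz1", *DISKS], check=True)

# On the new machine: scan the by-id directory for importable pools, then
# import this one by name.
subprocess.run(["zpool", "import", "-d", "/dev/disk/by-id", "tank"], check=True)
```

Running zpool import with no pool name just lists whatever pools it found on the scanned disks, which is handy for checking before you actually import.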
There’s also always Git Annex for managing redundancy at the file level instead of inside the filesystem.
Usually for Windows VM gaming you want to pass through a GPU and a USB controller and plug your peripherals in directly. You might be able to use something like Steam streaming, but I wouldn’t recommend a normal desktop-app-oriented thin-client setup for this (though I haven’t tried it).
You may run into weird problems with latency spikes: mostly it will work great and everything runs at 90 FPS or whatever, but then inexplicably 1 frame every few minutes takes 100ms and nobody can tell you why.
There can also be problems with storage access speed. What ought to be very fast storage on the host is substantially slower once the image-file and host-FS overhead, or the block-device passthrough overhead, come into play. Or maybe you just need to pass an NVMe device straight through.
Another recommendation for LocalSend. But it’s cool that Snapdrop manages to do almost the same thing and does it in the browser. I can definitely see where it would be useful.
What’s the point of self-hosting Snapdrop though? Does it need a discovery server in there for WebRTC? Or does this just end up serving the same static files but now from a local server?
It works on some devices; they do sign the builds as far as I can tell. But the bootloader itself needs to be convinceable to trust the LOS signatures, and it needs to understand the secure-boot implementation used in the Android release the current LOS is built from (since Android has redone all of that a few times). The problem is that hardly anybody knows enough about the bootloaders to figure out which of them can do this, or how to get them to do it.