

It supports wired charging?!
I saw sharrr the other day, which can apparently be self-hosted. It uses cryptography, expiry, single-download links, and multi-part uploads to make it hard to recover a complete file even if an attacker has host access. It also encrypts the file client-side before uploading, so only you hold the encryption key.
That said, this is all according to the architecture of the service; I'm not sure about its security in practice.
If you are using OpenVPN, this env var may be worth trying. It may need adjusting though.
OPENVPN_MSSFIX='1350'
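For context, a compose fragment showing where that variable would go. This is a sketch: it assumes your OpenVPN container image actually reads `OPENVPN_MSSFIX` (the image name is a placeholder; check your image's docs first).

```yaml
# Hypothetical compose fragment -- assumes the container image reads
# OPENVPN_MSSFIX and passes it through to OpenVPN.
services:
  vpn:
    image: your-openvpn-image   # placeholder, not a real image name
    environment:
      # Clamp TCP MSS so packets fit inside the tunnel MTU;
      # equivalent to "mssfix 1350" in an OpenVPN config file.
      - OPENVPN_MSSFIX=1350
```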
Yeah, that makes sense. I didn't mean it in any aggressive way. I guess the Codeberg repo is archived, so that answers the question of which one to use for issues and such. I'll put up a feature request on it. I do appreciate the work and have been watching it progress!
What's up with changing to GitHub?
I've been watching this one since it can support high availability, but the biggest thing I see missing is support for indexing / searching documents.
I like the direction this has gone so far and am excited to see how it continues!
A VPN would still work for iPhone, I imagine. A small DNS whitelist would do 90%+ of the job.
Using the iGPU might be problematic if you need transcoding. I'd recommend older Intel / ASUS NUCs if you want a mini PC: 3-year warranty, built for enterprise, and the tall version has room for a 7mm SATA SSD or HDD alongside an NVMe M.2 SSD.
I think the ASUS 12th-gen+ models have another M.2 slot, though it's the smaller 2242 size. Doing all this, you can upgrade to 64GB RAM, an 8TB M.2 2280, an 8TB SATA SSD, and a 1TB M.2 2242. In homelab, especially with mini PCs, the limit is usually RAM / storage rather than CPU.
I got four 11th-gen units with 64GB RAM each and 32TB of SSD storage total. I recommend avoiding QLC SSDs as much as possible; aim for TLC, MLC, or SLC. Higher-capacity drives tend to be QLC or TLC, and QLC has the shortest endurance and slowest speeds.
It’s also very easy to make it highly available and to scale horizontally.
Glad you found the issue! I fell asleep hard last night sorry I couldn’t be your rubber duck haha
If it is due to a single asset, I imagine an error would log to the console.
My advice is to start over, at least temporarily. Use the Immich base compose with one mount for the uploads and test it before deviating from the basic setup.
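Something like the fragment below, trimmed down from Immich's published docker-compose example. Image tag and variable names can drift between releases, so verify against the current release before relying on it.

```yaml
# Minimal sketch based on Immich's example compose file (not complete:
# the real setup also needs machine-learning, redis, and postgres services).
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    volumes:
      # Single bind mount for all uploads -- get this working
      # before splitting anything into subpath mounts.
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    ports:
      - "2283:2283"
```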
Were you able to fix it? Mounting like that should work, but it looks like Docker isn't mounting the subpaths right.
What do the logs say? I'd check those first. The more I think about it, the more unknowns there are, since there are a few ways it could be running.
Nah, not a requirement. I think it was around 3 months or so after the Reddit API shutdown. Big instances got local AI models to detect it, and the Lemmy server now supports disabling caching from other instances, so I'd probably disable that if I ever enable it again haha
I've got all my YAML files source-controlled privately right now, but I can share them if you want. I disabled pict-rs around the time of the CSAM attacks and have yet to bother enabling it again haha
Yeah, I want to switch when other implementations catch up. Unfortunately I think that will take some more time, especially since you can't migrate from Synapse and have to start fresh. One day though!
I did the same for Lemmy at one point, then found out all the configs are mapped to environment variables by convention. My Lemmy setup is the most advanced: it has HA Postgres, and all of its modules are separated and HA. The proxy setup for it in k8s was rough, but I eventually got it working directly on ingress-nginx too.
Yeah, it's a bit of work sometimes. Synapse (Matrix) kinda sucks too with its philosophy of no environment variables for secrets. I ended up making an init container that hijacks my ConfigMap and injects the environment variables into the config.
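A rough sketch of that init-container trick. All names, paths, and keys here are illustrative (this is not an official Synapse mechanism): the ConfigMap holds a `homeserver.yaml` template with `${VAR}` placeholders, and an init container renders it with `envsubst` before the main container starts.

```yaml
# Hypothetical pod-spec fragment: render secrets from env vars into
# the config before the main container starts.
initContainers:
  - name: render-config
    image: alpine:3.20
    command: ["sh", "-c"]
    args:
      # gettext provides envsubst, which substitutes ${...} placeholders
      # from the container's environment into the template.
      - |
        apk add --no-cache gettext &&
        envsubst < /template/homeserver.yaml > /rendered/homeserver.yaml
    env:
      - name: REGISTRATION_SHARED_SECRET   # illustrative secret name
        valueFrom:
          secretKeyRef:
            name: synapse-secrets
            key: registration-shared-secret
    volumeMounts:
      - { name: config-template, mountPath: /template }   # ConfigMap
      - { name: rendered-config, mountPath: /rendered }   # emptyDir
```

The main container then mounts `rendered-config` at Synapse's config path, so secrets never live in the ConfigMap itself.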
It uses a different federation protocol, but Matrix servers would be the other big one.
Edit: you also mentioned trouble creating them. I suggest looking into OperatorHub and using operators for Postgres, Redis, and auth (Keycloak?). This can also send you down the rabbit hole of making everything highly available.
Server CPUs are built for the workload (hosting / background services) rather than the desktop applications of consumer PCs. That being said, you're generally going to be more limited by disk / RAM than CPU unless you have some specific needs.
In my setup, my server resources average 10% CPU, 54% memory, and ~70% storage. I'm running 4 PCs with 8 cores each, so 32 cores; for memory I've got 2x64GB and 2x16GB, so 160GB of RAM. Between CPU and RAM I'm utilizing basically 3.2 cores' worth of processing and ~86GB of RAM. Most of my RAM is going to Postgres databases for speed, which also takes load off the CPU.
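The utilization numbers above check out; a quick back-of-the-envelope in Python:

```python
# Sanity check of the utilization figures quoted above.
cores_total = 4 * 8             # 4 PCs, 8 cores each
ram_total_gb = 2 * 64 + 2 * 16  # two 64GB nodes + two 16GB nodes

cores_used = 0.10 * cores_total    # 10% average CPU
ram_used_gb = 0.54 * ram_total_gb  # 54% average memory

print(cores_total, ram_total_gb)           # 32 160
print(cores_used, round(ram_used_gb, 1))   # 3.2 86.4
```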
How much thinner can we get before the USB-C port is removed? That's looking tight, since a foldable splits the thickness in half.