

Hey 👋 I’m Lemann
I like tech, bicycles, and nature.
Ohh I was referring to the pic posted behind the “No problem” link in that user’s comment lol
I can hear the case vibration in that picture lol
This would be viable for users who don’t use smart/programmable/dynamic playlists, or the various features backed by a track-analysis ML model in Plexamp.
Backblaze recently revised B2 storage pricing; it’s gone up by $1, and they now offer free egress equal to (I believe) 2x or 3x the data you have stored with them. Let me try to find a link to this…
Outside of Plexamp, I’m not sure of any self-hosted music solutions that can generate random, context-aware playlists; most of the ones I’m aware of (Navidrome and the various apps that support it) appear to be mostly classic media players, as you mentioned. There’s also Funkwhale, but that’s more of a self-hosted, federated SoundCloud alternative I think.
Music discoverability is something I kind of struggle with when looking to expand my library, so I tend to rely on apps for it. I personally use NewPipe, as it works for YouTube, SoundCloud and Bandcamp, allowing me to jump between the platforms looking for remixes and the like.
For a more Spotify-like discovery experience, the Spotube app combines Spotify’s API with YouTube Music to play Spotify-suggested playlists, and another app, SimpMusic, does the same for YouTube Music.
I’m worried about if/how Wayland is going to affect X11 forwarding TBH… it’s a really useful Linux feature that I’m surprised gets so little use.
Someone else here mentioned not running “pictrs” at all
You don’t; those are the collateral damage.
IMO it’s better to just nuke every image from the last 24 hours than to subject yourself to that kind of heinous, disgusting content
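For anyone in the same boat, here’s a minimal sketch of that nuke in Python, assuming pict-rs keeps its media as plain files on disk; the directory path and 24-hour window are assumptions, so adapt them to your deployment (and stop pict-rs first):

```python
# Rough sketch: delete every file modified in the last 24 hours from an
# assumed pict-rs media directory. Path and window are placeholders; take a
# backup and stop pict-rs before running anything like this.
import time
from pathlib import Path

MEDIA_DIR = Path("/var/lib/pictrs/files")  # assumed location, check your setup
CUTOFF = time.time() - 24 * 60 * 60        # 24 hours ago, epoch seconds

for f in MEDIA_DIR.rglob("*"):
    if f.is_file() and f.stat().st_mtime >= CUTOFF:
        f.unlink()
```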
Ugh .dev domain costs are ridiculous
(30€ every six months, when I call to complain that it’s too expensive)
Sounds like a Liberty Global owned telecom company… they love their annual price increases ugh, but they are usually the fastest option in most areas
I have a Mid-2012 model running Linux, and absolutely love the build quality and design despite the age. Performance is perfectly usable; it just gets a bit loud during some tasks.
Part of my decision was also based on it being the last repairable model (swappable RAM, full size SSD) and Louis Rossmann having a wealth of repair videos on said model
The instances that have been developing custom features (IIRC lemm.ee and a couple others) have been contributing them back into the main Lemmy project, so everyone else ends up eventually getting them.
Edit: I’ve seen a community for instance owners and devs somewhere but don’t remember the name, hopefully someone else remembers
Maybe not such a great idea, for the same reasons everyone else suggested.
If you still want to proceed, you could use the Fediseer API to get a list of instances that have a guarantor/“trusted” and work from there?
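Something like this rough Python sketch could fetch that list; the endpoint path and query parameter are my best reading of the Fediseer API, so double-check them against the docs at https://fediseer.com/api before relying on it:

```python
# Rough sketch: ask Fediseer for the list of guaranteed ("whitelisted")
# instances. Endpoint path and the "domains" parameter are assumptions.
import requests

resp = requests.get(
    "https://fediseer.com/api/v1/whitelist",
    params={"domains": "true"},  # assumed flag to return bare domain names
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # inspect the response shape before building on it
```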
None IIRC, but are you not able to spin up a VM on the same VLAN as the cameras to set them up, then nuke the VM once setup is complete?
You may have better luck setting up an airgapped NVR, using USB webcams instead, or just DIYing an IP camera using an SBC
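For the USB webcam route, the capture side can be as simple as this rough Python/OpenCV loop; the device index, snapshot interval, and output path are all placeholders:

```python
# Rough sketch: grab a snapshot from the first USB webcam every few seconds.
# Device index, interval, and output path are placeholders.
import time
import cv2

cap = cv2.VideoCapture(0)  # first detected webcam
try:
    while True:
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"/tmp/snap-{int(time.time())}.jpg", frame)
        time.sleep(5)  # one snapshot every 5 seconds
finally:
    cap.release()
```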
+1 to this
Some models also have programmable LEDs in the front that you can use to display the status of the system or app/service
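On Linux those LEDs typically show up under the kernel’s sysfs LED class, so driving one can be as simple as this sketch; the LED name below is a placeholder, so list /sys/class/leds/ to see what your model actually exposes:

```python
# Rough sketch: turn a front-panel LED on via the sysfs LED class interface.
# The LED name is a placeholder; needs root, and max brightness varies per LED.
from pathlib import Path

led = Path("/sys/class/leds/status:green/brightness")  # placeholder name
led.write_text("1")  # "1" = on, "0" = off
```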
https://www.youtube.com/watch?v=uJvCVw1yONQ
It’s a shame because the NUCs had a really clean but feature-rich BIOS, a modern design aesthetic, and a generally high-quality feel to them.
Eagerly waiting to see which manufacturer fills that spot - we’ve got the likes of Beelink, Minisforum and Gigabyte et al. but at the moment most feel like knock-offs IMO. There are some really appealing ones demoed on ServeTheHome now and again though…
At the moment most LLM libraries use CUDA for acceleration, which is a hardware feature exclusive to NVIDIA GPUs
I believe llama.cpp can make use of AMD GPUs, but double-check the project’s GitHub discussions first to confirm this, and to see how people set it up
I personally use llama.cpp in a VM; however, if you have an NVIDIA GPU with lots of VRAM you’ve got more options available, as well as much faster inference (text generation) speed.
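For reference, generating text through the llama-cpp-python bindings looks roughly like this; the model path is a placeholder, and on an AMD card you’d need a ROCm/hipBLAS build of llama.cpp, which is what those GitHub discussions cover:

```python
# Rough sketch: minimal completion with the llama-cpp-python bindings.
# The model path is a placeholder for any local GGUF model file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.gguf",  # placeholder path
    n_gpu_layers=-1,  # try to offload all layers to the GPU
)
out = llm("Q: Name a self-hosted music server. A:", max_tokens=32)
print(out["choices"][0]["text"])
```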
Check out the community at !localllama@sh.itjust.works, they’re pretty experienced with running LLMs locally
Flash drive hidden under the carpet and connected via a USB extension, holding the decryption keys - threat model is a robber making off with the hard drives and gear, where the data just needs to be useless or inaccessible to others.
There’s a script in the initramfs which looks for the flash drive and passes the decryption key on it to cryptsetup, which then kicks off the rest of the boot, mounting the filesystems that sit on top of the LUKS volumes.
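The real thing is a shell script inside the initramfs, but the logic is roughly this (sketched in Python for readability); the drive label, mount point, device, and keyfile name are all placeholders:

```python
# Rough sketch of the boot-time flow: find the flash drive, mount it
# read-only, hand its keyfile to cryptsetup, then unmount. All paths and
# names are placeholders; the actual initramfs hook is shell, not Python.
import subprocess
from pathlib import Path

KEY_DEV = "/dev/disk/by-label/KEYS"  # placeholder label for the flash drive
MOUNT = Path("/tmp/keys")
DATA_DEV = "/dev/sda2"               # placeholder encrypted partition

MOUNT.mkdir(parents=True, exist_ok=True)
subprocess.run(["mount", "-o", "ro", KEY_DEV, str(MOUNT)], check=True)
try:
    # cryptsetup unlocks the LUKS volume using the keyfile as the passphrase
    subprocess.run(
        ["cryptsetup", "open", DATA_DEV, "data", "--key-file", str(MOUNT / "luks.key")],
        check=True,
    )
finally:
    subprocess.run(["umount", str(MOUNT)], check=True)
```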
I could technically remove the flash drive after boot as the system is on a UPS, but I like the ability to reboot remotely without too much hassle.
What I’d like to do in the future is implement something more robust, with a hardware device requiring 2FA. I’m not familiar with low-level hardware security at all though, so the current setup will do fine for the time being!