I’m using Nextcloud as well, but I’ll admit that it’s probably a bit heavy if all one needs is a calendar.
As much as I agree, I think we’re past the point of preventing normalization.
You still have 63% RAM available in that screenshot; there’s zero problem with Java using 13% RAM. It’s the same tired old “ChRoMe Is EaTiNg My MeMoRy” trope. Unused memory is wasted memory if it could be used for caching instead, so unless you’re actually running out of available memory, there is no problem.
Also, the JVM has a lot of options for configuring its various caches as well as when it allocates or releases memory. Maybe take a look at that first.
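A rough sketch of the kind of flags I mean (the option names are real HotSpot flags, but the values here are arbitrary examples, not recommendations, and app.jar is a placeholder):

```shell
# Cap the heap at 25% of available RAM instead of a fixed -Xmx,
# and let G1 return unused heap to the OS more aggressively.
java -XX:MaxRAMPercentage=25.0 \
     -XX:MinHeapFreeRatio=10 \
     -XX:MaxHeapFreeRatio=20 \
     -XX:+UseG1GC \
     -jar app.jar
```

`java -XX:+PrintFlagsFinal -version` lists everything that’s tunable on your JVM.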
Edit: Apparently people don’t want to hear this but don’t have any actual arguments to reply with. Sorry to ruin your “JaVa BaD” party.
I use Backblaze B2 for one offsite backup in “the cloud” and have two local HDDs. Using restic with rclone as storage interface, the whole thing is pretty easy.
A cronjob makes daily backups to B2, and once per month I copy the most current snapshot from B2 to my two local HDDs.
I have one planned improvement: Since my server needs programmatic access to B2, malware on it could wipe both the server and B2, leaving me with the potentially one-month-old local backups. Therefore I want to run a Raspberry Pi at my parents’ place that mirrors the B2 repository daily but is basically air-gapped from the server. Should the B2 repository be wiped, the Raspberry Pi would still retain its snapshots.
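For anyone curious, the daily/monthly rotation above boils down to something like this (repo paths, remote name, retention and schedule are placeholders, not my exact setup; the copy flags assume restic ≥ 0.14):

```shell
# crontab entry: daily backup at 03:00
# 0 3 * * * /usr/local/bin/backup.sh

# backup.sh -- back up /srv to B2 via restic's rclone backend
export RESTIC_REPOSITORY="rclone:b2:my-bucket/restic"
export RESTIC_PASSWORD_FILE="$HOME/.restic-password"
restic backup /srv
restic forget --keep-daily 7 --keep-monthly 12 --prune

# once a month, mirror the latest snapshot to a local HDD repository:
restic -r /mnt/hdd1/restic copy \
       --from-repo "$RESTIC_REPOSITORY" \
       --from-password-file "$HOME/.restic-password" \
       latest
```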
Minio now describes itself as “S3 & Kubernetes Native Object Storage for AI” - lol
Guess it’s time to look for alternatives if you’re not doing ML stuff
I wouldn’t call criticism of their strategic focus “shitting on” Nextcloud. It obviously still does a lot of things right, or at least right enough to be useful and relevant to many people, or else we wouldn’t be discussing it. But it has its issues, and many of them have gone unaddressed for a long time, so why shouldn’t people voice their displeasure with that?
There are quite a few mature projects in 0.x that would cause a LOT of pain if they actually applied semver
Depending on how one defines the “initial development” phase, those projects are actually conforming to semver spec:
Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
After looking at the site and trying to determine what to download to get Debian with non-free (I’m unfortunately working with an NVIDIA card)
FWIW, Debian 12 now includes non-free firmware in the installation media by default and will install whatever is necessary.
I agree that the Debian website has its weaknesses, but beyond finding the right installer (usually the netinst ISO, a.k.a. the small installation image, on https://www.debian.org/distrib/) there isn’t much of a learning curve. I started out with Ubuntu too, but finally decided that enough was enough when snap started breaking my stuff on desktop.
Thanks, didn’t know about those deals!
+1 for own domain and some email hosting service. That also makes it pretty easy to switch providers because you can simply point your MX records etc. somewhere else - no need to change the actual email address.
I can also recommend mailbox.org as an alternative to mxroute, they’re even a little cheaper at $3/month (mxroute is $49/year at minimum).
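To illustrate the MX part: switching providers just means replacing the MX set at your DNS host. For example (the hostnames are mailbox.org’s published MX servers; the TTL and priorities are examples):

```shell
# zone entries at your DNS provider
# example.com.  3600  IN  MX  10 mxext1.mailbox.org.
# example.com.  3600  IN  MX  20 mxext2.mailbox.org.

# check what's currently published:
dig +short MX example.com
```

Your address at example.com stays the same no matter which servers the MX records point to.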
Funny how you claim to know so much about security but can’t even seem to comprehend my comment. I know root shell exploits exist, that’s why I wrote that it takes additional time to get root access, not that it’s impossible. And that’s still a security improvement because it’s an additional hurdle for the adversary.
I think you’re interpreting too much. Security is about layers and making it harder for attackers, and that’s exactly what using a non-root user does.
In that scenario, the attacker needs to find and exploit another vulnerability to gain root access, which takes time - time which the attacker might not be willing to spend and time which you can use to respond.
No problem. One more tip though: If you ever censor your public IP, don’t just censor the last two digits. That leaves at most 100 candidates, which are trivially brute-forced.
My preferred option is to have the VPS inside a VPC that blocks all external traffic by default. Then I can open up specific ports for specific IP ranges.
The reason I prefer this over a firewall configuration on the VPS itself is that the latter seems far more error-prone to me. For example, I’ve had problems in the past with ufw and Docker where container ports were still reachable even though access was denied via ufw.
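The underlying issue is that Docker inserts its own iptables NAT rules, so published container ports bypass ufw’s INPUT chain entirely. Docker’s documented escape hatch is the DOCKER-USER chain; a minimal example (the interface name and trusted range are placeholders):

```shell
# drop all forwarded traffic arriving on the external interface
# unless it comes from a trusted range
iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP
```

With a VPC in front, you don’t have to remember rules like this on every host.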
If an attacker already has access to a system, they can use hitherto closed ports to communicate with C2 servers or attack other devices. In that case, a firewall that only allows known-good traffic will prevent further damage.
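With ufw, a default-deny egress policy can look like this (the allowed ports are just an illustrative baseline, not a complete ruleset):

```shell
ufw default deny outgoing
ufw allow out 53          # DNS
ufw allow out 123/udp     # NTP
ufw allow out 443/tcp     # HTTPS for updates and APIs
ufw enable
```

Anything a compromised service tries beyond that gets dropped and, if you log denials, leaves a trace you can alert on.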
Tbf, a lot of applications and tools provide installation scripts in lieu of more elaborate manual setup. Doesn’t make it safer, but if you want to install something, you have to trust the source with shell access at some point anyway.
I’d suggest adding high availability for HA
You could try using a VPN or some other kind of proxy which wraps your SSH traffic to prevent packet inspection. Then it should look like normal UDP traffic ;)
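WireGuard is one easy way to do that, since everything it carries, SSH included, becomes indistinguishable UDP on the wire. A minimal client-side sketch (keys, addresses and endpoint are placeholders):

```shell
# /etc/wireguard/wg0.conf
# [Interface]
# PrivateKey = <client-private-key>
# Address    = 10.0.0.2/24
#
# [Peer]
# PublicKey  = <server-public-key>
# Endpoint   = vpn.example.com:51820
# AllowedIPs = 10.0.0.1/32

wg-quick up wg0
ssh user@10.0.0.1   # tunneled; an observer only sees WireGuard UDP packets
```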
Imagine a world where we’re all using 30 year old software because it “still kinda works”.
restic is living proof to the contrary: it’s neither 30 years old nor does it just “kinda work”. It also doesn’t suffer from typical memory access problems because it’s not written in C.
Given that this whole post is about restic, that felt relevant to point out. You’re apparently not talking about rewrites in Rust in general, but rather about Rust rewrites of software the likes of GNU and the Linux kernel.
Definitely, that’s what I’m doing as well. I’ve found some to be lacking for my needs (e.g. music), but most of them are good enough for most use cases.