Multiple backups may be kept.
Nice work, but if I may suggest - it lacks hardlink support, so it's quite wasteful in terms of disk space, and the number of 'tags' (snapshots) you can keep will be extremely limited.
At least two robust solutions that use rsync+hardlinks already exist: rsnapshot.org and dirvish.org (both written in perl). There’s definitely room for backup tools that produce plain copies, instead of packed chunk data like restic and Duplicacy, and a python or even bash-based tool might be nice, so keep at it.
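For anyone curious, the core trick those tools rely on is rsync's --link-dest option: unchanged files in a new snapshot become hardlinks to the previous one, so every snapshot looks like a full plain copy but only changed files cost disk space. A minimal bash sketch of the idea (paths and the snapshot naming scheme are just placeholders, not how rsnapshot/dirvish actually lay things out):

    #!/usr/bin/env bash
    # Minimal hardlinked-snapshot sketch, rsnapshot/dirvish style.
    # SRC and DEST are hypothetical - adjust to taste.
    SRC="/home/user/"
    DEST="/mnt/backup"
    NEW="$DEST/$(date +%Y-%m-%d_%H%M%S)"

    # Find the newest existing snapshot (date-based names sort chronologically).
    LATEST=$(ls -1d "$DEST"/20* 2>/dev/null | tail -n 1)

    if [ -n "$LATEST" ]; then
        # Unchanged files are hardlinked against the previous snapshot,
        # so they consume (almost) no extra space.
        rsync -a --delete --link-dest="$LATEST" "$SRC" "$NEW"
    else
        rsync -a "$SRC" "$NEW"    # first run: plain full copy
    fi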
However, I liken backup software to encryption - extreme care must be taken when rolling and using your own. Whatever tool you use, test, test, test the backups. :)
There’s no point doing anything fancy like that - WireGuard over Tailscale is pretty pointless, as Tailscale is literally WireGuard with NAT traversal and authentication bolted on. Unless you enable subnet routing, it can’t get more secure than that.
And even if you do enable subnet routing (which you might wanna do if you need access to absolutely everything), you can use Tailscale ACLs to keep tighter control - say, only allowing access from specific (tagged) devices.
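Something like this in the tailnet policy file, for example (the tag name and subnet are made up - adjust to your own setup):

    {
        // Hypothetical tag; here only admins may apply it to devices.
        "tagOwners": {
            "tag:trusted": ["autogroup:admin"],
        },
        "acls": [
            // Only devices tagged 'trusted' may reach the routed LAN subnet.
            // Tailscale ACLs are default-deny, so nothing else gets through.
            {"action": "accept", "src": ["tag:trusted"], "dst": ["192.168.1.0/24:*"]},
        ],
    }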
Won’t take that long before the enshittification is complete.
100% this. OP, whatever solution you come up with, strongly consider disentangling your backup ‘storage’ from the platform or software, so you’re not ‘locked in’.
IMO, you want to have something universal that works with both local and ‘cloud’ storage (ideally off-site on your own/a family member’s/a friend’s NAS; far less expensive in the long run). Trust me, as someone who came from CrashPlan and moved to Duplicacy 8 years ago, I no longer worry about how robust my backups are, as I can practice 3-2-1 on my own terms.
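As a rough sketch of what that looks like with the Duplicacy CLI (the snapshot id, storage names, and URLs here are made up - any mix of local, sftp, and cloud backends works the same way):

    # Initialise the repo against a local storage, then add an off-site one.
    # -copy default makes the new storage copy-compatible with the first.
    duplicacy init pc-backup /mnt/backup
    duplicacy add -copy default offsite pc-backup sftp://user@family-nas.example.com/backup

    # Back up locally, then replicate the snapshots off-site:
    # two copies on different media plus one off-site, on your own terms.
    duplicacy backup
    duplicacy copy -from default -to offsite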
While you can do command-line stuff with Clonezilla, I think what they’re referring to is the text-based guided user interface, which doesn’t seem to differ much at all from the Rescuezilla GUI - the latter only looks marginally prettier. However, there are a few other useful tools in there, and a desktop environment, so it’s still a bit nicer to use.
Yep, I guess it depends on how much data of interest is on the drive. You can hook it up to dmde with a ddrescue/OpenSuperClone-mounted drive, which can let you index the filesystem while it streams content to the backup image. It reads and remembers sectors already copied, and you can target specific files/folders so you don’t have to touch most of the drive.
You should take it to a data recovery specialist if the data is really, really important, but for lightly-damaged sectors you want ddrescue (oldie but goodie), HDDSuperClone (no longer developed), or OpenSuperClone (a more actively developed fork of HDDSuperClone).
You can combine some of these tools with commercial programs like dmde, UFS Explorer, or R-Studio - to target specific files for a quick result - but basically it’s best to get a full disk image off the bad drive onto another drive/image.
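If it helps, a typical ddrescue run looks something like this (device and paths are placeholders; the image and mapfile should live on a separate, healthy drive):

    # Pass 1: grab everything that reads easily, skipping the slow/bad areas
    # (-n = no scraping). The mapfile records what's been recovered, so the
    # run can be interrupted and resumed safely.
    ddrescue -n /dev/sdX /mnt/good-drive/disk.img /mnt/good-drive/disk.map

    # Pass 2: go back and retry the difficult sectors a few times
    # (-d = direct disc access, -r3 = up to three retry passes).
    ddrescue -d -r3 /dev/sdX /mnt/good-drive/disk.img /mnt/good-drive/disk.map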
No, that’s either HDD Regenerator or SpinRite. Clonezilla is a sector-by-sector disk imaging program. (SpinRite et al. are good for keeping old drives running longer, but if you want to do data recovery and really value your data, ddrescue or HDDSuperClone is what you want.)
Don’t forget, you can also use SRV records to point a domain at another target, which also lets you omit the port number. So connecting to, say, server.org can point to mc.server.org:25565 under the hood.
This prolly isn’t what Hypixel are doing, as everything’s likely on the same network and their router/firewall just forwards traffic to different machines, but SRV is one way to redirect a Minecraft connection (and you could combine the technique with subdomains).
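For reference, the records for that example would look something like this in a zone file (names and IP are hypothetical; Minecraft clients look up _minecraft._tcp on the bare domain):

    ; SRV fields: priority, weight, port, target
    _minecraft._tcp.server.org.  3600  IN  SRV  0  5  25565  mc.server.org.
    mc.server.org.               3600  IN  A    203.0.113.10

    ; quick check from a shell:
    ;   dig SRV _minecraft._tcp.server.org +short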
Actually, ufw has its own separate issue you may need to deal with. (Or bind ports to localhost/127.0.0.1 as others have stated.)
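Whichever route you take, it’s worth sanity-checking what’s actually exposed with standard iproute2 tooling:

    # List listening TCP sockets and the addresses they're bound to.
    # 127.0.0.1:PORT is only reachable from the machine itself;
    # 0.0.0.0:PORT (or [::]:PORT) is reachable from the network.
    ss -tlnp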
Thank you for posting this, hadn’t heard of it before.
Yes, I also work in IT.
The paid GUI version is extremely cautious about auto-updates (it’s basically a wrapper for the CLI) - perhaps a bit too cautious. The free CLI version is also very cautious about making sure your backup storage doesn’t break.
For example, they recently added zstd compression, yet existing storages stay on lz4 unless you force it - and even then, the two compression methods can coexist in the same backup destination. It’s extremely robust in that regard (to the point that if you started forcing zstd compression, or created a new zstd backup destination, you can use the newest CLI to copy the data back to the older lz4 format and revert - just as an example). And of course you can compile it yourself years from now.
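Per that example, the revert path is just another copy (storage names here are hypothetical):

    # Per the above, 'copy' writes chunks according to the destination
    # storage's settings, so snapshots written with zstd can be copied
    # back into the older lz4 storage if you ever need to roll back.
    duplicacy copy -from zstd_storage -to lz4_storage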
The licence is pretty clear - the CLI version is entirely free for personal use (commercial use requires a licence, and the GUI is optional). If you don’t like the licence, that’s fine, but it’s hardly ‘disingenuous’ when it is free for personal use, and has been for many years.
IMHO, Duplicacy is better than all of them at all those things - multi-machine, cross-platform, zstd compression, encryption, incrementals, de-duplication.
Well, your account is on lemmy.world, so how d’ya know the issue isn’t with your own access to the front end?
Many of us don’t interact with lemmy.world directly, so we might only see delays in post propagation (if there even is such an issue on the backend - I don’t see any, but could be wrong).
I agree picking the biggest instances isn’t great from a scaling perspective, but it’s gonna be hard to move any community once it’s established.
+1 for Duplicacy. Been using it solidly for nearly 6 years - with local storage, sftp, and cloud. Rclone for chonky media. Veeam Agent for local PC backups as a secondary method.
Do ignore me then - I assumed you might know the reference, and I only meant it in good humour. :) (Without spoiling anything - in the unlikely event you might someday watch it - Mr Milchick is a character who uses ‘big words’. Your choice of words struck a chord.) I will say, though, you’re seriously missing out. The cinematography alone is brilliant, and the acting exceptional.