How do you back up important things you store in self-hosted clouds?

I’m currently thinking about hosting Ente myself to sync all my pictures, and maybe also spinning up Nextcloud for various other shared files. However, for me one main benefit of using services like iCloud is the reduced risk of losing everything if the hardware fails (or fire, theft, water damage, …).

Do you take regular backups of your hosted services? Do you keep really important stuff with other providers? Do you have other failsafes?

  • Appoxo@lemmy.dbzer0.com · 37 minutes ago

    PC: Veeam
    Phone/general pics: Immich (both automatic and manual)
    Some general phone files: Syncthing
    The remaining stuff is on my NAS at home.
    Off-site copies of the most important VMs and some infrastructure: Veeam Backup Copy to an encrypted external HDD I keep at my workplace

  • irmadlad@lemmy.world · 4 hours ago

    I use Backblaze Personal/unlimited, and have for quite a while. A lot of the other storage options price per GB, which is fine, but I have a ton of stuff that is irreplaceable, such as my music collection of around 80k songs I converted to FLAC, pictures, business docs, etc. I realize it’s not really in the selfhosted arena, but Backblaze works out for me. If you are backing up a lot of data, re-initializing multi-TB backups can be a chore. Backblaze has a program where you buy a 10 TB drive from them, they ship it to you with your data on it, and once you’ve transferred everything off you can send the drive back for a full refund.

  • CoyoteFacts@lemmy.ca · 10 hours ago

    I use a 48TB ZFS RAIDZ2 pool to maintain data integrity locally and keep rolling ZFS snapshots with sanoid so that data recovery to any point within the last month is possible. Then I use borgmatic (borg) to sync the important data (~1TB) to a Servarica VPS (Polar Bear plan, which works out to be cheaper than Backblaze B2 costs for my purposes). The Servarica server really sucks in terms of CPU, and it’s quite sluggish, but it’s enough for daily backups. I also self-host healthchecks.io on a free Fly.io VPS thing (not sure if they offer this anymore) to make sure the backups are actually happening successfully, and hosting that on a third-party VPS means that it’s not going to fail at the same time my server does. Then I use Uptime Kuma to make sure everything is consistently alive (especially the healthchecks.io server, which in turn verifies that Uptime Kuma stays alive). I also run the same borg configuration to back up to a plain non-redundant disk locally.
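
    A borgmatic setup along these lines boils down to a small YAML file. This is a minimal sketch, not the commenter’s actual config — the repository URL, paths, and Healthchecks ping URL are placeholders, and the flat layout assumes borgmatic 1.8+:

```yaml
# /etc/borgmatic/config.yaml -- sketch; all names/URLs are placeholders
source_directories:
    - /srv/important        # the ~1TB of data worth paying to store off-site

repositories:
    - path: ssh://user@vps.example.com/./backups/main   # placeholder VPS
      label: offsite

keep_daily: 7
keep_weekly: 4
keep_monthly: 6

# ping a self-hosted Healthchecks instance so missed runs get noticed
healthchecks:
    ping_url: https://hc.example.com/ping/your-uuid
```

    Run `borgmatic` from cron or a systemd timer; if backups stop completing, the Healthchecks ping goes stale and the alert fires from the third-party VPS, independent of the home server.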

    The downside of this setup is that I’m only truly backing up a fraction of my pool, but most of my pool is stuff that I can redownload and set up again in the event of e.g. a house fire. I also run a daily script to dump a lot of metadata about my systems and pool, like directory listings of my media folders and installed programs/etc, which means that even if the data might be lost, I have a map of what I need to grab again.
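
    A daily metadata dump like that can be sketched as a short shell script (all paths here are placeholder assumptions, and the package/ZFS lines only run where those tools exist):

```shell
#!/bin/sh
# Sketch of a daily "map of my data" dump -- paths are placeholders.
OUT="${METADATA_DIR:-$HOME/metadata-dumps}/$(date +%F)"
mkdir -p "$OUT"

# Directory listing of the media folders: type, size, path
find "${MEDIA_DIR:-$HOME}" -maxdepth 2 -printf '%y %10s %p\n' > "$OUT/media-listing.txt" 2>/dev/null

# Installed packages (Debian-style; adjust for your distro)
command -v dpkg >/dev/null && dpkg --get-selections > "$OUT/packages.txt" || true

# Pool layout and snapshot inventory, if this box runs ZFS
command -v zpool >/dev/null && zpool status > "$OUT/zpool-status.txt" || true
command -v zfs >/dev/null && zfs list -t snapshot > "$OUT/snapshots.txt" || true
```

    Even if the bulk data is lost, these few text files are tiny enough to live inside the borg backup and tell you exactly what to re-download.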

      • CoyoteFacts@lemmy.ca · 4 hours ago

        Snapshots basically put a pin in all the data at that moment and say “these blocks are not going to be physically deleted as long as I exist”, so the “additional” data use of the snapshots is equal to the data contained within the snapshot that doesn’t exist at the current moment. I.e., if I have two 50GB files, take a snapshot, and delete one, I will still have 100GB physical disk usage. I can also take 400 more snapshots and disk usage will remain at 100GB, as the snapshots are just virtual. Then I can either bring that deleted file back from the snapshot, or I can delete the snapshot itself and my disk usage will adjust to the “true” 50GB as the snapshot releases its hold on the blocks.

        What sanoid and other snapshot managers do is they repeatedly take snapshots at specified intervals and delete old snapshots past a certain date, which in practice results in a “rolling” system where you can access snapshots from e.g. every hour in the past month. Then once a snapshot becomes older than a month, sanoid will auto-delete it and free up the disk space that it’s holding onto. The exact settings are all configurable to what you’re comfortable with in terms of trading additional physical disk usage of potential “dead” data for the convenience of being able to resurrect that data for a certain amount of time.
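
        For reference, that rolling policy is just a few lines of /etc/sanoid/sanoid.conf. A hedged sketch (“tank/data” is a placeholder dataset name; tune the counts to taste):

```ini
# sketch of /etc/sanoid/sanoid.conf -- dataset name is a placeholder
[tank/data]
        use_template = production
        recursive = yes

[template_production]
        # ~a month of hourlies plus dailies; prune anything older
        hourly = 720
        daily = 30
        monthly = 0
        yearly = 0
        autosnap = yes
        autoprune = yes
```

        sanoid runs from cron/systemd, takes the snapshots, and auto-prunes ones that age past the policy, releasing their held blocks.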

        I really like the “data comet” visual from this Ars Technica article.

  • non_burglar@lemmy.world · 8 hours ago

    I rsync nightly to an old Synology box. It’s in an outbuilding, so if there’s a fire, it comes with me.

  • zorflieg@lemmy.world · 10 hours ago

    1. Entropy is a law of our universe. All data wants to be lost, given a long enough timeline and no attention.

    2. Divide your data into what you can’t do without and what you may not care about losing.

    3. Take backups out of your hands; make them as automatic as possible.

    4. I sync to encrypted folders on Google Drive, then use MSP360 to automatically copy everything in that drive to another cheap cloud storage service that is client-side encrypted.

    For the protection it gives me, it’s cheap.

  • dieTasse@feddit.org · 9 hours ago

    I use Storj. It’s well integrated with TrueNAS, and the encrypted files are fragmented, duplicated, and scattered around the world. (Many people host a Storj node on their TrueNAS as well.)

  • wise_pancake@lemmy.ca · 9 hours ago

    I’m new to self-hosting and planning to get more into it.

    I have a crappy NAS in the basement I archive to and copy my borg repos to.

    Then I pay for a Dropbox style cloud service and I copy my borg archives there. It’s kind of janky but it’s cheap and works.

  • iii@mander.xyz · 14 hours ago

    I have 5 copies of all my files on 5 devices, synced using syncthing with staggered file versioning. 2 of those are with friends and family who let me put a thin client at their place.

    To protect against me misconfiguring Syncthing, or some bug deleting all copies, every 3 months I manually make a copy and put it on a hard drive in a fire-resistant safe.
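
    Staggered file versioning is configured per folder in Syncthing (in the web UI or config.xml). A fragment sketch — the folder id, path, and retention values here are placeholders:

```xml
<!-- config.xml fragment (sketch) -- folder id and path are placeholders -->
<folder id="docs" path="/home/user/docs">
    <versioning type="staggered">
        <!-- keep old versions up to a year; values are in seconds -->
        <param key="maxAge" val="31536000"/>
        <param key="cleanInterval" val="3600"/>
    </versioning>
</folder>
```

    With staggered versioning, replaced or deleted files go into a .stversions folder on the receiving device, thinned out over time, so a bad sync on one node doesn’t instantly propagate data loss everywhere.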

  • lankydryness@lemmy.world · 10 hours ago

    I don’t follow the full 3-2-1 rule, but I did want some sort of off-site backup for my Nextcloud, so I use Duplicity to back up my user data from Nextcloud, plus all the Docker Compose files that run my server, to an S3 bucket. Costs me like $2/mo. Way cheaper than Google Drive.

  • robber@lemmy.ml · 10 hours ago

    It really depends on how much you enjoy setting things up yourself and how much it hurts to give up control over your data to managed solutions.

    If you want to do it yourself, I recommend taking a look at ZFS and its RAIDZ configurations, snapshots and replication capabilities. It’s probably the most solid setup you will achieve, but possibly also a bit complicated to wrap your head around at first.

    But there are a ton of options as beautifully represented by all the comments.

  • sznowicki@lemmy.world · 15 hours ago

    Borg + Hetzner storage box (it supports both Borg and rsync, but I use Borg so my backups are encrypted).

  • sj_zero@lotide.fbxl.net · 13 hours ago

    I use Nextcloud and I love it.

    You want to follow the 3-2-1 strategy: 3 copies of your data, on at least 2 different types of media, with 1 copy off-site.

  • Vinstaal0@feddit.nl · 10 hours ago

    I have Resilio Sync set up, which syncs between different family members. Both my uncle and I make offline backups of the dataset.

    My pictures on my phone are backed up by iCloud, OneDrive, Resilio Sync and Immich … The exports are all posted in the Resilio Sync dataset and in Immich.

    My most important files are on Proton Drive. I used to make backups to my NAS using Perfectbackup, but since I ditched Windows I need something else.