• 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: November 23rd, 2023

  • I can attest to Projectivy and SmartTube; they are great. I went with the internet’s recommendation of the $20 Walmart/onn Google TV 4K box, with Projectivy as the launcher instead of the default.

    My only gripe so far is that the remote doesn’t consistently turn the box on; I have to unplug the box every so often to reset it. It’s probably some misconfiguration that keeps it from waking from sleep correctly.

    Despite that issue, it’s a 10/10 experience: ad-free YouTube, fast Jellyfin in 4K, fully customizable UI…


  • No. Symlinks and hardlinks are two approaches to creating a “pointer to a file.” They are quite different in implementation, but at a high level:

    • Symlinks can point across filesystems; hardlinks only work within the same filesystem.
    • You can delete the target of a symlink (or even create one that points at nothing), but a hardlink always points to real data: the file isn’t actually freed until its last hardlink is removed.

    In both cases, the only additional space consumed is the metadata for the link itself. The contents of the file on disk are not copied.
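
    A quick sketch in Python that shows the difference (the file names are just placeholders for the demo, and it assumes a POSIX filesystem):

    import os

    with open("original.txt", "w") as f:
        f.write("hello\n")

    os.link("original.txt", "hard.txt")      # hardlink: another directory entry for the same inode
    os.symlink("original.txt", "soft.txt")   # symlink: a tiny file that just stores a path

    orig = os.stat("original.txt")
    hard = os.stat("hard.txt")
    print(orig.st_ino == hard.st_ino)   # True: same inode, no data copied
    print(orig.st_nlink)                # 2: the inode now has two names

    os.remove("original.txt")
    print(open("hard.txt").read())      # still "hello": data survives until the last hardlink is gone
    print(os.path.exists("soft.txt"))   # False: exists() follows the symlink, which now dangles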



  • No, but you’ll have much more overhead. I have a VM that hosts all Docker deployments that don’t need much disk space (most of them).

    This is a big point. One of the key advantages of Docker is its layering: you can build up a pretty sizeable stack of isolated services on the same set of core OS layers, which means significant disk space savings.

    Sure, 200-700 MB for a set of core layers sounds small, but multiply that by a lot of containers (which is effectively what you’d be doing without layer sharing) and it adds up.
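
    If you’re curious how much your images actually share, here’s a rough sketch using the Docker SDK for Python (the third-party docker package; that’s my assumption, not something from the setup above). It counts layer digests that appear in more than one local image:

    from collections import Counter

    import docker  # Docker SDK for Python; needs the daemon running

    client = docker.from_env()

    layer_counts = Counter()
    for image in client.images.list():
        # RootFS.Layers lists the content-addressed layer digests in this image
        for digest in image.attrs.get("RootFS", {}).get("Layers", []):
            layer_counts[digest] += 1

    shared = sum(1 for n in layer_counts.values() if n > 1)
    print(f"{len(layer_counts)} unique layers on disk, {shared} shared by two or more images")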


  • Ultimately it’s a matter of personal choice and risk tolerance.

    The Z1 will be simpler and have larger usable capacity, but if a drive fails you’ll need to replace it quickly, or risk having to rebuild/restore if a second drive follows the first one to the grave.

    Your Z2 setup right now can have two drives fail and still be online, and having a wider spread of power-on hours is usually a good thing in terms of failure probability.

    I manage a large number (roughly 14,000) of on-site RAID1 arrays in various environments, and there is definitely a trend for drives shipped at the same time to fail at roughly the same time. It’s common enough that we often intentionally swap drives out before shipping a new unit to the customer site.

    In my homelab, I’m much more tolerant of risk, since I trust my 3-2-1 backup solution and my NAS going down won’t substantially affect anything while I wait for a replacement drive.



  • felbane@lemmy.world to Selfhosted@lemmy.world · Post your Servernames! · 1 year ago

    There is no original thought.

    A friend of mine had some explaining to do when he screwed up a DHCP config change and started routing his guest Wi-Fi through his “personal” Pi-hole instead of the restricted guest one (he had family/children over often and did not want to be the reason nephew Timmy got an eyeful of wet bush or a beheading).

    His family-friendly Pi-hole was at holypi.lastname.local and his private one was creampi.lastname.local.


  • The other poster said it’s about convenience, but that’s not really true. The claim to fame for NVMe drives is speed: while SATA SSDs top out around 550 MB/s (the practical limit of SATA III), the latest NVMe drives can hit 7000+ MB/s.

    It’s for this reason that you should pay attention to which M.2 drive you choose (if speed is what you’re after). SATA-based M.2 drives exist – and they run at SATA speeds – so if you see a cheap M.2 drive for sale, it’s probably SATA and intended for bulk storage in laptops and SFF PCs without room for 2.5" drives. Double-check the specs to be sure of what you’re getting.


  • If you’re practicing 3-2-1 backups then you probably don’t need to bother with RAID.

    I can hear the mechanical keyboards clacking; hear me out. If you’re not committed to a regular backup strategy, RAID can be a good way to protect yourself against a sudden hard drive failure, at which point you can do an “oh shit” backup and reconsider your life choices. RAID does nothing else beyond that. If your data gets corrupted, the wrong bits will happily be synced to the mirror drives. If you get ransomwared, congratulations, you now have two copies of your inaccessible encrypted data.

    Skip the RAID and set up backups. It can be as simple as an external drive that you plug in once a week and rsync to, or you can pay for a service like Backblaze that has a client to handle things, or you can set up a NAS that receives a nightly backup from your PC and then pushes a copy up to something like B2 or S3 Glacier.
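
    For the external-drive flavor, a minimal sketch in Python (the paths are made up; point them at your own data and mount point, and it assumes rsync is installed):

    import os
    import subprocess
    import sys

    SOURCE = "/home/me/"           # hypothetical: what to back up (trailing slash = copy contents)
    DEST = "/mnt/backup/home-me/"  # hypothetical: folder on the external drive

    # Refuse to run if the drive isn't actually mounted at /mnt/backup
    if not os.path.ismount("/mnt/backup"):
        sys.exit("Backup drive isn't mounted; plug it in first.")

    # -a preserves permissions/timestamps; --delete mirrors deletions so DEST matches SOURCE
    subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST], check=True)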


  • Most people set up a reverse proxy, yes, but it’s not strictly necessary. You could certainly change the port mapping to 8080:443 and expose the application port directly that way, but then you’d obviously have to jump through some extra hoops for certificates, etc.

    Caddy is a great solution (and there’s even a container image for it 😉)


  • The great thing about containers is that you don’t have to understand the full scope of how they work in order to use them.

    You can start by learning how to use docker-compose to get a set of applications running. Once you understand that (which is relatively easy), go a layer deeper and learn how to customize a container, then how to build your own image from the ground up and/or containerize an application that doesn’t ship its own images.

    But you don’t need to understand that stuff to make full use of them, just like you don’t need to understand how your distribution builds an rpm or deb package. You can stop whenever your curiosity runs out.


  • You don’t actually have to care about defining IP addresses, CPU/RAM reservations, etc. Your docker-compose file just defines the applications you want and a port mapping or two, and that’s it.

    Example:

    ---
    version: "2.1"
    services:
      adguardhome-sync:
        image: lscr.io/linuxserver/adguardhome-sync:latest
        container_name: adguardhome-sync
        environment:
          - CONFIGFILE=/config/adguardhome-sync.yaml
        volumes:
          - /path/to/my/configs/adguardhome-sync:/config
        ports:
          - 8080:8080
        restart: unless-stopped
    

    That’s it. You run docker-compose up and the container starts, reads your config from your config folder, and exposes port 8080 to the rest of your network.