For anything important, use matrix instead of lemmy DMs.

  • 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 22nd, 2023

  • Warrantied drives still fail, they just happen to ship you a replacement.

    Commercial drive trashing solutions are basically a smaller, fancier version of the mechanism in a log splitter.
    You could probably rig a sketchy drive wedge/bending thing with a pump jack rather easily.
    Wear PPE.

    The odds of someone taking a failed drive and transplanting the platters to a working drive is pretty low to begin with.

    Me? I don’t have tons of drives to destroy, so I just unscrew the thing, get the platters out and smash those.





  • Yea, it’s not the first time I’ve seen this discussion either.
    I don’t wanna seem like I’m not believing you or belittling your experience, I just find it weird that we (we, users, as a whole, not just you and I) have such wildly different experiences with it.

    As is, I have a vastly better experience with my own nextcloud than with corporate’s onedrive, with more stuff on mine.

    Wish I knew why it’s so inconsistent.
    Even though my nextcloud experience is fine, I know plenty of people with the opposite.


  • Legit have had none of these issues.
    I do get a notification once in a while if I modify a picture fast enough, like a quick crop and it’s still uploading. Like snap pic and edit within the same 5 seconds or so.
    Basically just a: “there are multiple versions of the same file (which is true), which one do you wanna keep”.

    Then again mine is running on a pretty beefy server which might hide issues rooted in performance.
    I remember it being hell when I was running it on a RPi.




  • Some subjects you might wanna look into.

    1. NAT hairpin, also called NAT loopback. If you’re sending packets to your ISP’s public IP from inside your LAN and it fails, your ISP modem (or whichever device does the NAT) probably doesn’t support NAT hairpinning.

    2. Split-horizon DNS. That’s when you run your own DNS for your hosted services, with one config on your LAN (pointing at the services’ LAN IPs) and another config with your public DNS provider (pointing at your public IP).

    3. Carrier-grade NAT (CGNAT). This could kill your chances of having a reachable service, as your ISP likely won’t make a port-forwarding rule for you in their equipment.

    4. IPv6 address types. Link-local addresses are within fe80::/10 (kinda similar to how 169.254.0.0/16 is used in IPv4); those aren’t reachable from the outside.
      Global unicast addresses are all in 2000::/3; those are reachable from the outside.

    5. IPv6 DNS. Make sure to configure both A (IPv4) and AAAA (IPv6) records with the right info. Although if your LAN devices only have IPv4 addresses and you’re doing split-horizon, you could theoretically omit the AAAA on your LAN side.

    6. Phone DNS shenanigans.
      Some recent phones ignore the DNS server they receive through DHCP and use something like Google’s instead, which breaks split-horizon DNS and can confuse troubleshooting. On mine this wasn’t in the SSID settings, but in a global “Private DNS” setting.
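
    The link-local vs. global unicast distinction from point 4 is easy to sanity-check programmatically. A quick sketch with Python’s stdlib `ipaddress` module (the addresses are made-up examples):

    ```python
    import ipaddress

    # The two ranges from point 4
    LINK_LOCAL = ipaddress.ip_network("fe80::/10")
    GLOBAL_UNICAST = ipaddress.ip_network("2000::/3")

    def classify(addr: str) -> str:
        """Rough reachability hint for an IPv6 address."""
        ip = ipaddress.ip_address(addr)
        if ip in LINK_LOCAL:
            return "link-local (not reachable from outside)"
        if ip in GLOBAL_UNICAST:
            return "global unicast (reachable from outside)"
        return "other"

    print(classify("fe80::1c2f:abff:fe12:3456"))  # link-local (not reachable from outside)
    print(classify("2600::1"))                    # global unicast (reachable from outside)
    ```

    The AAAA record for your service should be pointing at a global unicast address, never a fe80:: one.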

    As for your problems, it depends.
    There might be a way to make this work without the VPS, but I don’t have all the info.
    That said, a VPS or something like a cloudflare tunnel could come in handy. I usually prefer to host directly but still, that’s an option if port forwarding doesn’t work with your ISP.
    You’d configure the DNS for your services to the VPS IP and configure the VPS to reach your stuff.
    Using the VPS kinda also gets rid of NAT hairpin problems, though going through the VPS from your own LAN is inefficient, and it stops working when your Internet is down.
    You can still combine the VPS with split-horizon DNS if you wanna keep local availability from your LAN when your Internet is down.
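
    That “configure the VPS to reach your stuff” part could look something like this nginx reverse-proxy sketch on the VPS. The hostname and IPs are placeholders, and it assumes something like a WireGuard tunnel from the VPS back to your home server:

    ```nginx
    # /etc/nginx/conf.d/cloud.conf on the VPS (hypothetical names/addresses)
    server {
        listen 80;
        server_name cloud.example.com;  # public DNS for this name points at the VPS

        location / {
            # 10.0.0.2 = home server, reached over the tunnel
            proxy_pass http://10.0.0.2:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    You’d still want TLS on top of this in practice, but that’s the basic shape.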

    Good luck




  • It can be in git even if you’re not doing ‘config as code’ or ‘infrastructure as code’ yet/ever.
    Even just a text file with notes in markdown is better than nothing. It can usually be rendered, tracked, versioned.
    You can also add some relevant files as needed too.

    Like, even if your stuff isn’t fully automated CI/CD magic, a copy of that one important file you just modified can be added as necessary.
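
    A minimal sketch of that “better than nothing” approach (paths and file contents here are made-up):

    ```shell
    # Start a plain git repo for notes plus snapshots of important files
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    printf '# Homelab notes\n\n- nextcloud: port 8443, data on /srv/nc\n' > notes.md
    # Snapshot "that one important file you just modified" next to the notes
    printf 'server { listen 8443; }\n' > nginx-site.conf  # stand-in for a real config
    git add notes.md nginx-site.conf
    git -c user.name=notes -c user.email=notes@localhost \
        commit -qm "notes + nginx config snapshot"
    ```

    From there, `git log` already gives you a timeline of what changed and when, even with zero automation.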





  • Yea I’ve been running “core” in docker-compose and not the “supervised” or whatever that’s called.
    It’s been pretty flawless tbh.
    It’s running in docker-compose in a VM in proxmox.
    At first, it was mostly because I wanted to avoid their implementation of DNS, which was breaking my split-horizon DNS.

    Honestly, once you figure out docker-compose, it’s much easier to manage than the supervised add-on thing. Although the learning curve is different.
    Just the fact that your add-ons don’t need to go down when you upgrade hass makes this much easier.
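
    For reference, the compose setup for Home Assistant Core is pretty small; something along these lines (the config path and timezone are examples, check the official container install docs for current details):

    ```yaml
    # docker-compose.yml sketch for Home Assistant Core in a container
    services:
      homeassistant:
        image: ghcr.io/home-assistant/home-assistant:stable
        volumes:
          - ./config:/config
        environment:
          - TZ=Etc/UTC
        network_mode: host  # simplest way to keep LAN device discovery working
        restart: unless-stopped
    ```

    Upgrading hass is then just pulling the new image and recreating that one container; nothing else in the stack has to restart.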

    I could technically run non-hass related containers in that docker, but the other important stuff is already in lxc containers in proxmox.
    Not everything works in containers, so having the option to spin up a VM is neat.

    I’m also using PCI passthrough so my home theater/gaming VM has access to the GPU and I need a VM for that.

    Even if they only want to use k8s or dockers for now, having the option to create a VM is really convenient.