• 0 Posts
  • 23 Comments
Joined 3 years ago
Cake day: June 16th, 2023

  • Hardware RAID limits your flexibility: if any part fails, you probably have to closely match the part when replacing it.

    Performance-wise, there’s not much to recommend them. Once upon a time the XOR calculations weighed on the CPU enough to matter, but CPUs far outpaced storage throughput and now it’s a rounding error. They retained some performance edge through battery-backed RAM, but now you can have NVMe as a cache. For random access, hardware RAID can actually be a liability, since it collapses all the drives’ command queues into one.

    The biggest advantage is simplifying booting from such storage, but that can be handled in other ways, so I wouldn’t care about that.


  • While SAS is faster, the difference is moot if you have even a modest NVMe cache.

    I don’t know that it’s all that much more reliable either; in particular, I would take new SATA over second-hand SAS any day.

    Hardware RAID means everything is locked together: lose a controller, and you have to find a compatible controller; lose a disk, and you have to match the previous disk pretty closely. JBOD would be my strong recommendation for home use, where you need flexibility in the event of failure.


  • The TLS-ALPN-01 challenge requires an HTTPS server that can generate a self-signed certificate on demand in response to a specific request (see the sketch below). So we have to shut down our usual traffic forwarder and let an ACME implementation control the port for a minute or so. It’s not a long downtime, but it’s irritatingly awkward to do, and it can disrupt traffic on our site: we have clients in every timezone, so there’s no universal ‘3 in the morning’ time, and even then our service is used as part of other clients’ ‘3 in the morning’ maintenance windows… Folks can generally tolerate a blip in the provider, but they don’t like that we generate a blip in their logs if they connect at just the wrong minute of the month…

    As to why they don’t support going straight to 443, I don’t know. I know they did TLS-ALPN-01 purely as a TLS extension, to stay out of the URL space of services; that had value for those who liked being able to handle it entirely in TLS termination, which is frequently nothing but a reverse proxy and so in principle has no business messing with the payload the way HTTP-01 requires. For nginx at least, though, this is awkward, since nginx doesn’t support it.
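
    For the curious, here is roughly what that on-demand certificate dance involves. This is a minimal sketch in Go, assuming golang.org/x/crypto/acme/autocert (the domain, cache path, and handler are hypothetical, not our actual setup): the manager answers the acme-tls/1 ALPN challenge inline on 443 while ordinary clients keep getting the real certificate.

    ```go
    package main

    import (
    	"fmt"
    	"log"
    	"net/http"

    	"golang.org/x/crypto/acme/autocert"
    )

    func main() {
    	m := &autocert.Manager{
    		Prompt:     autocert.AcceptTOS,
    		HostPolicy: autocert.HostWhitelist("example.com"), // hypothetical domain
    		Cache:      autocert.DirCache("/var/lib/acme"),    // hypothetical cache path
    	}

    	srv := &http.Server{
    		Addr: ":443",
    		// TLSConfig advertises the acme-tls/1 protocol and serves the
    		// self-signed challenge certificate when the CA's validator
    		// connects; everyone else gets the real certificate.
    		TLSConfig: m.TLSConfig(),
    		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    			fmt.Fprintln(w, "hello") // placeholder for the real service
    		}),
    	}
    	log.Fatal(srv.ListenAndServeTLS("", ""))
    }
    ```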


  • Frankly, another choice virtually forced on us by the broader IT organization.

    If the broader IT either provides or brokers a service, we are not allowed to independently spend money and must go through them.

    Fine, they will broker commercial certificates, so just do that, right? Well, to renew a certificate, we have to open a ticket and attach our CSR as well as a “business justification”, and our dept incurs a hundred-dollar internal charge just for opening that ticket. Then they will let it sit for a day or two until one of their techs can get to it. Then we are likely to get feedback about something like their policy changing to forbid EC keys, so we must use RSA instead, or vice versa, because someone changed their mind. They may email an unexpected manager for confirmation in accordance with some new review process they implemented. Then, eventually, their tech manually renews it with a provider and attaches the certificate to the ticket.

    It’s pretty much a loophole that we can use Let’s Encrypt: they don’t charge, and technically the restrictions only kick in when purchasing is involved. There was a security guy raising hell that some of our sites used that “insecure” Let’s Encrypt and demanding the standards change to explicitly ban it, but the bureaucracy required to do that was insurmountable, so we continue.



  • Ours is automated, but we incur downtime on renewal because our org forbids plain HTTP, so we have to do TLS-ALPN-01. It is a short downtime. I wish Let’s Encrypt would just allow HTTP challenges over HTTPS while skipping the certificate validation. It’s nuts that we have to meaningfully reply over port 80…

    Though I also think it’s nuts that we aren’t even allowed to send a redirect over port 80…


  • Most people I know haven’t even bothered to buy a new TV since Dolby Vision was created. A fair number still have 1080p sets.

    While some, like you, may certainly demand it, and supporting it would be a good idea, I think it’s a fair description to help people understand that the goal is an Android TV-like experience; a lot of people are oblivious to many of the details of picture quality.

    Such an overly dismissive statement is just a bit over the top, versus saying something like “Does it support Dolby Vision? I won’t be interested until it does.”


  • Actually, the lower level may well be less efficient, because it is oblivious to the nature of the data.

    For example, a traditional RAID1 mirror starts a rebuild across the entire potential data capacity of the storage immediately on creation, without a single byte of actual data having been written. So you spend the equivalent of a full drive wipe making “don’t care” bytes redundant.

    Similarly, for snapshotting, it can only track dirty blocks. So when you replace uninitialized data that means nothing with actual data, the snapshot layer is compelled to back up that uninitialized data, because it has no idea whether the replaced blocks were uninitialized junk or real content (see the sketch below).

    There are some mechanisms, in theory and in practice, to convey a bit of context to the block layer (TRIM/discard, for instance), but broadly speaking, by virtue of being a mostly oblivious block layer, you have to resort to the most naive and often inefficient approaches.

    That said, block capacity is cheap, and doing things at the block level can be done in a “dumb” way, which may be easier for an implementation to get right, versus a more clever approach with a bigger surface for mistakes.
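
    To make the snapshot point concrete, here is a toy sketch in Go (every name in it is made up for illustration): a copy-on-write layer that sees only block writes has to preserve whatever a block held before, junk or not.

    ```go
    package main

    import "fmt"

    const blockSize = 4096

    // blockDevice is a toy copy-on-write snapshot layer. It sees only
    // block reads and writes; it cannot tell which blocks hold real
    // filesystem data and which are uninitialized junk.
    type blockDevice struct {
    	blocks   map[int][]byte
    	snapshot map[int][]byte // old contents preserved at snapshot time
    	active   bool
    }

    func newBlockDevice() *blockDevice {
    	return &blockDevice{blocks: map[int][]byte{}, snapshot: map[int][]byte{}}
    }

    func (d *blockDevice) takeSnapshot() { d.active = true }

    func (d *blockDevice) write(i int, data []byte) {
    	if d.active {
    		if _, saved := d.snapshot[i]; !saved {
    			// Copy-on-write: preserve the old block for the snapshot,
    			// even if it only ever held uninitialized garbage.
    			old := d.blocks[i]
    			if old == nil {
    				old = make([]byte, blockSize) // junk the layer must still keep
    			}
    			d.snapshot[i] = old
    		}
    	}
    	d.blocks[i] = data
    }

    func main() {
    	dev := newBlockDevice()
    	dev.takeSnapshot()
    	dev.write(7, []byte("real file data")) // block 7 was never initialized
    	// The snapshot now holds a full block of meaningless bytes for block 7.
    	fmt.Println("blocks preserved by snapshot:", len(dev.snapshot))
    }
    ```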


  • Yep, and I see evidence of that overcomplication in some ‘getting started’ questions, where people ask about really convoluted design points and others reinforce it by doubling down or mentioning other weird, exotic stuff, when the asker might be served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, or by installing a package and just having it run, or maybe by running a podman or docker command for some things. If they are struggling with complicated networking and scaling across a set of systems, they are going way beyond what makes sense for a self-hosting scenario.


  • Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self-hosting. You throw in all sorts of complexity to imitate the things you are asked to do professionally, which are either actually bad but carried by hype/marketing, or may bring value, but only at scales beyond a household’s hosting needs. Far simpler setups will suffice, and they are nearly zero-touch day to day.


  • For 90% of static-site requirements, it scales fine. That entry-point reverse proxy is faster at fetching content to serve via filesystem calls than it is at making an HTTP call to another HTTP service. For self-hosting types of applications, I’d guess that percentage goes up to 99.9% (see the sketch at the end of this comment).

    If you are in a situation where serving the files directly through your reverse proxy does not scale, throwing more containers behind that proxy won’t help in the static-content scenario. You’ll need something like a CDN, and those like to consume straight directory trees, not containers.

    For a dynamic backend, maybe. Mainly because you might screw up, and your backend code needs to be isolated to mitigate security oopsies. Containers are often also useful for managing dependencies, but that facet is less useful for Go, where the resulting binary is pretty well self-contained, except maybe for some light usage of libc.
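
    As a sketch of that static/dynamic split (the paths and backend URL here are hypothetical, just to show the shape): the front end serves files straight off the filesystem and only proxies the paths that genuinely need backend code.

    ```go
    package main

    import (
    	"log"
    	"net/http"
    	"net/http/httputil"
    	"net/url"
    )

    func main() {
    	// Hypothetical app container for the truly dynamic paths.
    	backend, err := url.Parse("http://127.0.0.1:8080")
    	if err != nil {
    		log.Fatal(err)
    	}

    	mux := http.NewServeMux()
    	// Static content: plain filesystem reads at the entry point,
    	// no extra HTTP hop to another service.
    	mux.Handle("/", http.FileServer(http.Dir("/srv/www")))
    	// Dynamic content: proxied to isolated backend code.
    	mux.Handle("/api/", httputil.NewSingleHostReverseProxy(backend))

    	log.Fatal(http.ListenAndServe(":80", mux))
    }
    ```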