• 0 Posts
  • 46 Comments
Joined 2 years ago
Cake day: June 19th, 2023


  • So this whole Gemini thing is a tactic to push people to upgrade their phones again, right? They gave up on the whole “your phone is 6 months old and therefore won’t be getting security updates anymore so you need to buy a new phone with identical specs otherwise hackers are going to break into your bank account and set your dog on fire” because regulators were starting to get twitchy, so now it’s “your phone is brand new but you didn’t spend enough money on it so you better buy a new phone or you won’t be able to have a sentient assistant to help you do your job and manage your life and you’ll be passed over for promotion by a 16 year old AI Native and never get a date and your family will be angry at you because Aunt Mildred doesn’t like fish and you booked family dinner at the wrong restaurant”









  • As in, hardware RAID is a terrible idea and should never be used. Ever.

    With hardware RAID, you are moving your single point of failure from your drive to your RAID controller - when the controller fails, and they fail more often than you would expect, you are fucked, your data is gone, nice try, play again some time. In theory you could swap the controller out, but in practice it’s a coin flip whether that will actually work unless you can find exactly the same model controller with exactly the same firmware manufactured on the same production line while the moon was in the same phase, and even then your odds are still only 2 in 3.

    Do yourself a favour, look at an external disk shelf/DAS/drive enclosure that connects over SAS and do RAID in software. Hardware RAID made sense when CPUs were hewn from granite and had clock rates measured in tens of megahertz, so offloading parity work to dedicated silicon made things faster, but that’s not been the case this century.
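    To make the portability point concrete, here’s a minimal software-RAID sketch using Linux mdadm (device names and the RAID level are just examples, and this needs root and real or loop devices, so treat it as an illustration, not a recipe):

    ```shell
    # Build a 4-disk RAID-6 array in software (example devices)
    mdadm --create /dev/md0 --level=6 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0

    # The array metadata lives on the drives themselves, so when a
    # machine dies you can move the disks to any Linux box and do:
    mdadm --assemble --scan

    # Check array health
    cat /proc/mdstat
    ```

    That last part is the whole argument: no matching controller, firmware, or lunar phase required to get your data back.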




  • I’d considered doing something similar at some point but couldn’t quite figure out what the likely behaviour was if the workers lost connection back to the control plane. I guess containers keep running, but does kubelet restart failed containers without a controller to tell it to do so? Obviously connections to pods on other machines will fail if there is no connectivity between machines, but I’m also guessing connections between pods on the same machine will be an issue if the machine can’t reach coredns?



  • I’ve started a similar process to yours and am moving domains as they come up for renewal, with a slightly different technical approach:

    • I’m using AWS Route 53 as my registrar. They aren’t the cheapest, but still work out at about half the price of Gandi and one of my key requirements was to be able to use Terraform to configure DS records for DNSSEC and NS records in the parent zone
    • I run an authoritative nameserver on an OCI free tier VM using PowerDNS, and replicate the zones to https://ns-global.zone/ for redundancy. I’m investigating setting up another authoritative server on a different cloud provider in case OCI yank the free tier or something
    • I use https://migadu.com/ for email

    I have one .nz domain which I’ll need to find a different registrar for, cos for some reason Route 53 doesn’t support .nz domains, but otherwise the move is going pretty smoothly. Kinda sad to see where Gandi has gone - I opened a support ticket to ask how they can justify being twice the price of their competitors and got a non-answer
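    Once the DS and NS records are in place (however you manage them, Terraform or otherwise), a quick external sanity check of the delegation looks something like this - `example.net` is a placeholder domain, and the queries need network access:

    ```shell
    # DS record should be visible in the parent (.net) zone
    dig +short DS example.net @a.gtld-servers.net

    # NS records should point at your authoritative servers
    dig +short NS example.net

    # With DNSSEC validation material attached, answers carry an RRSIG
    dig +dnssec A example.net
    ```

    If the DS query comes back empty, the chain of trust from the parent zone isn’t established and validating resolvers will treat the zone as unsigned.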




    • An HP ML350p w/ 2x HT 8-core Xeons (I forget the model number) and 256GB DDR3, running Ubuntu and K3s as the primary application host
    • A pair of Raspberry Pis (one 3, one 4) as anycast DNS resolvers
    • A random mini PC I got for free from work running VyOS as my border router
    • A Brocade ICX 6610-48p as core switch

    Hardware is total overkill. Software-wise, everything runs in containers, deployed into Kubernetes using helmfile, Jenkins and Gitea