  • There are two ways to use an .ics link:

    • You can add it in Google Calendar using “add URL” and it will show up in the GC app. The downside is that you need to use the app, and it refreshes the link on its own schedule (you can’t set the interval).
    • You can use a calendar app that can import .ics links directly. The one I use is called Calengoo. This way you’ll be able to control when to refresh it, but it bypasses the normal Android calendars so it won’t be visible to other apps or widgets except the one that imported it.

    I noticed what you said about not using Google services. The Calengoo app has a version you can download from their website (as opposed to Google Play), and you can purchase a license code with a credit card or PayPal that is not tied to Google Play.
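
    If you want to inspect what a feed actually contains before subscribing, a minimal sketch along these lines will fetch and print the events. The URL is a placeholder and the requests/icalendar libraries are just one way to do it:

    ```python
    import requests                  # pip install requests
    from icalendar import Calendar   # pip install icalendar

    # Placeholder URL; substitute the .ics link you were given.
    ICS_URL = "https://example.com/calendar.ics"

    # Fetch the feed and parse it.
    resp = requests.get(ICS_URL, timeout=30)
    resp.raise_for_status()
    cal = Calendar.from_ical(resp.text)

    # Print a one-line summary of every event in the feed.
    for event in cal.walk("VEVENT"):
        start = event.get("DTSTART")
        summary = event.get("SUMMARY")
        print(f"{start.dt if start else '?'}  {summary}")
    ```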



  • Typical problems with parity arrays are:

    • They suffer from something called the “write hole”. If power fails while information is being written to the array, different drives can end up with conflicting versions of the data and no way to reconcile them. The software solution is to use ZFS, but ZFS has a pretty steep learning curve and is not easy to manage. The hardware solution is to make sure power to the array never fails, either with a UPS on the machine or by connecting the drives through a battery-backed PCI controller card, which lets them finish write operations even if power is lost.
    • Making a 4 TB volume out of 2x2 TB drives is not a good idea; you’re roughly doubling the failure probability of that particular “4 TB” volume (see the sketch after this list).
    • Parity arrays usually require the drives to all be the same size, meaning that if you want to upgrade your array you need to buy that many new drives before you can take advantage of the increased space. There are parity schemes like Unraid that work around this by using a single large parity drive (at least as big as the largest data drive) that computes parity across all the others regardless of their sizes; but Unraid is proprietary and requires a paid subscription.
    • If a drive fails, rebuilding the array after replacing it requires an intensive pass through all the surviving members of the array. This greatly increases the risk of another drive failing, and a RAID5 array would be lost if that occurred. That’s why people usually recommend RAID6, but RAID6 only makes sense with 5+ drives.
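
    To put a rough number on the “doubling” point above: if each 2 TB drive has an independent annual failure probability p, the spanned “4 TB” volume is lost when either drive fails. A quick back-of-the-envelope sketch (the 3% figure is just an assumed example rate):

    ```python
    # Assumed annual failure rate per drive; real rates vary by model and age.
    p = 0.03

    # A volume spanned across two drives is lost if EITHER drive fails.
    p_span = 1 - (1 - p) ** 2

    print(f"single 4 TB drive: {p:.2%} per year")
    print(f"2x2 TB spanned:    {p_span:.2%} per year (~2x for small p)")
    ```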

    Unrelated to parity:

    • Using a lot of small drives is very power-intensive and inefficient.
    • Whenever designing arrays you have to consider what you’ll do in case of drive failure. Do you have a replacement on hand? Will you go out and buy another drive? How long will it take for it to reach you?
    • What about backups?
    • How much of your data is really essential and should be preserved at all costs?





  • I think you misunderstood the advice. If your goal is to open your services to the internet, then any of the approaches can let in an attacker. It depends on whether any of the things you expose to the internet has a remotely exploitable vulnerability.

    Long-standing software like SSH or WireGuard that everybody relies on and scrutinizes all the time will have fewer vulnerabilities than a service written by one person that you expose through a reverse proxy; but none of them are 100% foolproof either.

    The Tailscale advice is about connecting your devices over a private mesh network that is never exposed to the internet.

    If you’re behind CGNAT and use a VPS to open up to the internet, then any method you use to tunnel traffic from the VPS into your LAN carries the same risk, because it’s the service behind the tunnel that’s the most vulnerable, not the tunnel itself.
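
    For illustration, that kind of VPS-to-LAN tunnel is often just a WireGuard peering like the sketch below (keys, addresses, hostname and port are all placeholders). The config file itself is rarely the weak spot; whatever service you forward traffic to is:

    ```
    # /etc/wireguard/wg0.conf on the home server (placeholder values throughout)
    [Interface]
    PrivateKey = <home-server-private-key>
    Address = 10.8.0.2/24

    [Peer]
    # The VPS with the public IP; the home server dials out through CGNAT.
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.8.0.1/32
    PersistentKeepalive = 25
    ```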


  • There’s one additional problem with Picard and bands with a long history that have released the same song on multiple albums and compilations: it won’t make much of an effort to group the tracks into as few albums as possible. You’ll end up with songs spread across many distinct albums, and sometimes it’s not even an album by the original artist but a multi-artist compilation like “The Sound of the 90s” and so on.
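
    If you want to see how bad the spread is in your own library, a small sketch like this one can list every title that ended up tagged with more than one album. The music path, the .mp3 extension and the mutagen library are just assumptions for illustration:

    ```python
    from collections import defaultdict
    from pathlib import Path

    from mutagen import File  # pip install mutagen

    albums_by_title = defaultdict(set)

    # Scan a music folder (placeholder path) and record which album
    # each track title ended up tagged with.
    for path in Path("~/Music").expanduser().rglob("*.mp3"):
        tags = File(path, easy=True)
        if not tags:
            continue
        title = (tags.get("title") or ["?"])[0]
        album = (tags.get("album") or ["?"])[0]
        albums_by_title[title].add(album)

    # Print titles that got scattered across more than one album.
    for title, albums in sorted(albums_by_title.items()):
        if len(albums) > 1:
            print(f"{title}: {sorted(albums)}")
    ```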