  • Whatever you do, and whoever you end up working with: document, document, document. Take. Notes.

    And I mean on paper, in a notebook, something that can’t crash or get accidentally deleted and doesn’t require electricity to operate.

    You’re doing this for yourself, not for a boss, which means you can take the time to keep track of the details. This will be especially important for ongoing maintenance.

    Write down a list of things you imagine having on your network, then classify them as essential vs. desired (needs and wants), then prioritize them.

    As you buy hardware, write down the name, model, serial number, and price (so that you can list it on your renter’s/homeowner’s insurance). As you set up each device, add its MAC and assigned IP address(es) to the device description, and list the specific services running on it. If you buy something new that comes with a support contract, write down that information too.

    Draw a network diagram (it doesn’t have to be complicated or super professional, but visualizing the layout and connections between things is very helpful).

    When you set up a service, write down what it’s for and what clients will have access to it. Write down the reference(s) you used. And then write down the login details. I don’t care what advice you’ve heard about writing down passwords, just do it in the notebook so that you can get back into the services you’ve set up. Six months from now when you need to log in to that background service to update the software you will have forgotten the password. If a person you don’t trust has physical access to your home network notebook, you have a much more serious problem than worrying about your router password.



  • You can just use openssl to generate X.509 certificates locally. If you only need to do this for a few local connections, the simplest thing to do is create them manually and then manually place them in the certificate stores for the services that need them (a minimal example follows the list below). You might get warnings about self-signed certificates/an unrecognized CA, but obviously you know why that’s the case.

    This method becomes a problem when:

    1. You need to scale - manually transferring certs is fine maybe half a dozen times, after that it gets real tedious and you start to lose track of where they are and why.
    2. You need other people to access your encrypted services - self-signed certs won’t work for public access to an HTTPS website because every visitor will get a warning that you’re signing your own encryption certs, and most will avoid it. For friends and family you might be able to convince them that your personal cert is safe, but you’ll have to have that conversation every time.
    3. You need to implement expiration - the purpose of cert expiration is to mitigate the damage if the cert private key leaks, which happens a lot with big companies that have public-facing infrastructure and bad internal security practices (looking at you, Microsoft). As an individual, it is still worthwhile to update your certs every so often (e.g. every year) if for no other reason than to remind yourself how your SSL infrastructure is connected. It’s up to you whether or not it’s worth the effort to automate the cert distribution.
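
    For reference, the manual generation itself is a one-liner. A minimal sketch - the names, addresses, and validity period here are placeholders, and -addext needs OpenSSL 1.1.1 or newer:

    ```sh
    # Generate a 4096-bit RSA key and a self-signed cert valid ~1 year.
    # CN/SAN values and file names are just examples - use your own.
    openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
      -keyout myservice.key -out myservice.crt \
      -subj "/CN=myservice.lan" \
      -addext "subjectAltName=DNS:myservice.lan,IP:192.168.1.10"
    ```

    Then drop myservice.crt into the trust store of each client that needs to talk to that service, and keep myservice.key on the server only.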

    I’ve used Letsencrypt to get certs for the proxy, but the traffic between the proxy and the backend is still plain HTTP. Do I need to worry about securing that traffic considering it’s behind a VPN?

    In spite of things you may have read, and the marketing of VPN services, a VPN is NOT a security tool. It is a privacy tool, as long as the encryption key for it is private.

    I’m not clear on what you mean by “between the proxy and the backend”. Is this referring to the VPS side, or your local network side, or both?

    Ultimately the question is, do you trust the other devices/services that might have access to the data before it enters the VPN tunnel? Are you certain that nothing else on the server might be able to read your traffic before it goes into the VPN?

    If you’re talking about a rented VPS from a public web host, the answer should be no. You have no idea what else might be running on that server, nor do you have control over the hypervisor or the host system.
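
    For illustration, this is the hop in question in a typical reverse-proxy setup - a hypothetical nginx sketch with made-up names and addresses. The Letsencrypt encryption ends at the proxy; everything past proxy_pass is plain HTTP, protected only by the VPN tunnel and whoever else can see that interface:

    ```nginx
    server {
        listen 443 ssl;
        server_name media.example.com;

        # Letsencrypt cert - TLS terminates here, at the proxy
        ssl_certificate     /etc/letsencrypt/live/media.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/media.example.com/privkey.pem;

        location / {
            # plain HTTP from here on (10.0.0.2 is a made-up tunnel address)
            proxy_pass http://10.0.0.2:8080;
        }
    }
    ```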


  • Perfect explanation.

    Thank you, I try. It’s always tricky to keep network infrastructure explanations concise and readable - the Internet is such a complicated mess.

    People like paying for convenience.

    Well, I would simplify that to people like convenience. Infrastructure of any type is basically someone else solving convenience problems for you. People don’t really like paying, but they will if it’s the most convenient option.

    Syncthing is doing this for you for free, I assume mostly because the developers wanted the infrastructure to work that way and didn’t want it to be dependent on DNS, and decided to make it available to users at large. It’s very convenient, but it also obscures a lot of the technical side of network services, which can make learning harder.

    This kind of thing shows why tech giants are giants and why selfhosted is a niche.

    There’s also always the “why reinvent the wheel?” question, and consider that the guy who is selling wheels works on making wheels as a full-time occupation and has been doing so long enough to build a business on it, whereas you are a hobbyist. There are things that guy knows about wheelmaking that would take you ten years to learn, and he also has a properly equipped workshop for it - you have some YouTube videos, your garage and a handful of tools from Harbor Freight.

    Sometimes there is good reason to reinvent the wheel (e.g. privacy from cloud service data gathering), but this is a real balancing act between cost (time and money, both up-front and long-term), risk (privacy exposure, data loss, failure tolerance), and convenience. If you’re going to do something yourself, you should have a specific answer to the “why reinvent the wheel?” question, and probably do a little cost-benefit checking.


  • But if I’m reading the materials correctly, I’ll need to set up a domain and pay some upfront costs to make my library accessible outside my home.

    Why is that?

    Because when your mobile device is out on the public internet, it can’t reach directly into your private home network. The IP addresses of the servers on your private network are not routable outside of it, so your mobile device can’t talk to them directly. From the perspective of the public internet, the only piece of your private network that is visible is your ISP gateway device.

    When you try to reach your Syncthing service from the public internet, none of the routers know where your private Syncthing instance is or how to reach it. To solve this, the Syncthing developers provide discovery servers on the public internet which contain the directions for the Syncthing app on your device to find your Syncthing service on your private network (assuming you have registered your Syncthing server with the discovery service).

    This is a whole level of network infrastructure that is just being done for you to make using Syncthing more convenient. It saves you from having to deal with the details of network routing across network boundaries.

    Funkwhale does not provide an equivalent service. To reach your Funkwhale service on your private network from the public internet you have to solve the cross-boundary routing problem for yourself. The most reliable way to do this is to use the DNS infrastructure that already exists on the public internet, which means getting a domain name and linking it to your ISP gateway address.

    If your ISP gateway had a static address you could skip this and configure whatever app accesses your Funkwhale service to always point to your ISP gateway address, but residential IP addresses are typically dynamic, so you can’t rely on it being the same long-term. Setting up DynamicDNS solves this problem by updating a DNS record any time your ISP gateway address changes.

    There are several DynDNS providers listed at the bottom of that last article, some of which provide domain names. Some of them are free services (like afraid.org) but those typically have some strings attached (afraid.org requires you to log in regularly to confirm that your address is still active, otherwise it will be disabled).
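
    Mechanically, the update is usually just hitting the provider’s update URL from a cron job whenever your address changes. A minimal sketch - the update URL format here is invented, so check your provider’s docs:

    ```sh
    #!/bin/sh
    # Ask an external service what our public IP currently is
    CURRENT_IP=$(curl -s https://ifconfig.me)
    LAST_IP=$(cat /var/tmp/last_ip 2>/dev/null)

    # Only ping the DynDNS provider when the address actually changed
    if [ "$CURRENT_IP" != "$LAST_IP" ]; then
        curl -s "https://dyndns.example.com/update?host=myhome.example.com&ip=${CURRENT_IP}"
        echo "$CURRENT_IP" > /var/tmp/last_ip
    fi
    ```

    Run that from cron every few minutes and the DNS record follows your gateway address around.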


  • They should be powered on if you want to retain data on them long-term. The controller should automatically check physical integrity and retire failing blocks as needed.

    I’m not sure if just connecting them to power would be enough for the controller to run error correction, or if they need to be connected to a computer. That might be model specific.

    What server OS are you using? Are you already using some SSDs for cache drives?

    Any backup is better than no backup, but SSDs are really not a good choice for long-term cold storage. You’ll probably get tired of manually plugging them in to check integrity and update the backups pretty fast.
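
    If you do want to check a drive manually, smartmontools can read the drive’s self-reported health data. A sketch, assuming a USB-attached SATA SSD (/dev/sdX is a placeholder, and not every USB bridge passes SMART commands through):

    ```sh
    # Print SMART health and attribute data; -d sat tells smartctl to
    # speak SATA through a USB bridge (many bridges need this)
    smartctl -a -d sat /dev/sdX

    # Kick off the drive's own extended self-test (runs in the background)
    smartctl -t long -d sat /dev/sdX
    ```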




  • Encrypting the connection is good, it means that no one should be able to capture the data and read it - but my concern is more about the holes in the network boundary you have to create to establish the connection.

    My point of view is: that’s not something you want happening automatically, unless you configured it yourself and know exactly how it works, what it connects to, and how it authenticates (and preferably have some kind of inbound/outbound traffic monitoring on that connection).


  • NaibofTabr@infosec.pub to Selfhosted@lemmy.world · Syncthing alternatives · 7 months ago

    Ah, just one question - is your current Syncthing use internal to your home network, or does it sync remotely?

    Because if you’re just having your mobile devices sync files when they get on your home wifi, it’s reasonably safe for that to be fire-and-forget, but if you’re syncing from public networks into your private one, that really should involve more specific configuration and active control.


  • NaibofTabr@infosec.pub to Selfhosted@lemmy.world · What do I actually need? · 8 months ago

    My main reasons are sailing the high seas

    If this is the goal, then you need to concern yourself with your network first and the computer/server second. You need as much operational control over your home network as you can manage:

    - Put this traffic in a separate tunnel from all of your normal network traffic, and have it pop up on the public network from a different location.
    - Own the modem that links you to your provider’s network, and the router that is the entry/exit point for your network. You cannot use the combo modem/router gateway device provided by your ISP.
    - Segregate the thing doing the sailing on its own network segment that doesn’t have direct access to any of your other devices (a minimal firewall sketch follows below).
    - Plan your internal network intentionally, and understand how, when, and why each device transmits on the network.
    - Understand your firewall configuration (on your network boundary, not on your PC).
    - Get PiHole up and running and start dropping unwanted inbound and outbound traffic.
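
    As a taste of what that segment isolation can look like on a Linux-based router, a minimal sketch using nftables - the subnets are made up, and this assumes an existing inet "filter" table with a "forward" chain:

    ```sh
    # Drop anything the "sailing" segment (192.168.50.0/24, hypothetical)
    # tries to send toward the trusted LAN (192.168.1.0/24)
    nft add rule inet filter forward \
        ip saddr 192.168.50.0/24 ip daddr 192.168.1.0/24 drop
    ```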

    OpSec first.





  • NaibofTabr@infosec.pub to Selfhosted@lemmy.world · An idiots guide? · 10 months ago

    Beyond your eventual technical solution, keep this in mind: untested backups don’t exist.

    I recommend reading some documentation about industry-leading solutions like Veeam… you won’t be able to reproduce all of the enterprise-level functionality, at least not without spending a lot of money, but you can try to reproduce the basic practices of good backup systems.

    Whatever system you implement, draft a testing plan. A simpler backup solution that you can test and validate is worth more than something complex and elaborate that never gets verified.
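
    A testing plan can be as small as a scheduled restore drill. A sketch using restic purely as an example tool (the repo path and sample file are placeholders; the same idea applies to whatever you actually use):

    ```sh
    #!/bin/sh
    # Restore one known file from the latest snapshot into a scratch
    # directory, then compare it byte-for-byte with the live copy.
    # Assumes the repo password is available (e.g. RESTIC_PASSWORD_FILE).
    SAMPLE="/home/user/documents/important.txt"
    restic -r /mnt/backup restore latest --target /tmp/restore-test --include "$SAMPLE"
    cmp "$SAMPLE" "/tmp/restore-test$SAMPLE" && echo "restore OK" || echo "BACKUP BROKEN"
    ```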





  • The issue is more that trying to upgrade everything at the same time is a recipe for disaster and a troubleshooting nightmare. Once you have a few interdependent services/VMs/containers/environments/hosts running, what you want to do is upgrade them separately, one at a time, then restart that service and anything that connects to it and make sure everything still works, then move on to updating the next thing.
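
    In a containerized setup that can be a short script. A sketch for a docker compose stack - the service names and health-check URL are placeholders:

    ```sh
    #!/bin/sh
    # Upgrade services one at a time, dependencies first, and stop the
    # moment anything fails its health check - so you know exactly
    # which upgrade broke things
    for svc in db backend frontend; do
        docker compose pull "$svc"
        docker compose up -d "$svc"
        sleep 15    # give the service a moment to come up
        curl -fs http://localhost:8080/health > /dev/null \
            || { echo "stack unhealthy after upgrading $svc"; exit 1; }
    done
    ```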

    If you do this shotgun approach for the sake of expediency, what happens is something halfway through the stack of upgrades breaks connectivity with something else, and then you have to go digging through the logs trying to figure out which piece needs a rollback.

    Even more fun if two things in the same environment have conflicting dependencies, and one of them upgrades and installs its new dependency version and breaks whatever manual fix you did to get them to play nice together before, and good luck remembering what you did to fix it in that one environment six months ago.

    It’s not FUD, it’s experience.