

Broadcast traffic (such as DHCP) doesn’t cross subnets without a router configured to forward it. It’s one of the reasons subnets exist.
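To make that concrete, here's a rough scapy sketch (the interface name and MAC are placeholders) of why a DHCP DISCOVER stays on its own subnet: it's addressed to the link-local broadcast, which a router won't forward unless it's explicitly configured as a DHCP relay (e.g. `ip helper-address` on Cisco gear).

```python
# Minimal sketch, assuming scapy and an interface named "eth0" (both are
# assumptions). A DHCP DISCOVER is a link-local broadcast; routers drop
# 255.255.255.255 by default instead of forwarding it between subnets.
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, sendp

discover = (
    Ether(dst="ff:ff:ff:ff:ff:ff", src="02:00:00:00:00:01")
    / IP(src="0.0.0.0", dst="255.255.255.255")  # broadcast never leaves the L2 domain
    / UDP(sport=68, dport=67)
    / BOOTP(chaddr=b"\x02\x00\x00\x00\x00\x01")
    / DHCP(options=[("message-type", "discover"), "end"])
)
sendp(discover, iface="eth0")  # requires root; iface name is an assumption
```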
What in the world is “a proprietary OS I cannot trust”? What’s your actual threat model? Have you actually run any risk analyses or code audits against these OSes vs. (I assume) Linux to know for sure that you can trust any given FOSS OS? You do realize there’s still an OS on your dumb switch, right?
This is a silly reason to not learn to manage your networking hardware.
A VLAN is (theoretically) equivalent to a physically separated layer 2 domain. The only way for machines to communicate between VLANs is via a gateway interface.
If you don’t trust the operating system, then you don’t trust that it won’t change its IP/subnet to just hop onto the other network, or even send packets with the other network’s header and spoof packets onto the other subnets.
It’s trivially easy to malform broadcast traffic and hop subnets, or to use various ARP table attacks to trick the switching device (see the sketch below). If you need to segregate traffic, you need a VLAN.
Edit: Should probably note that simply VLAN tagging from the endpoints on a trunk port isn’t any better than subnetting, since an untrusted machine can just tag packets however it wants. You need to use an 802.1Q-aware switch and gateway to use VLANs effectively.
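Here's a hedged scapy sketch of both tricks (root required; the interface and addresses are made up): forging a source IP from a subnet the host isn't "on", and slapping an arbitrary 802.1Q tag on a frame. Only a switch that strips/polices tags on access ports actually enforces the boundary.

```python
# Sketch only: nothing stops an untrusted host from writing whatever L3
# header or 802.1Q tag it wants onto the wire. Addresses and iface are
# illustrative assumptions.
from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

# Spoof a source address from a subnet this host isn't configured for:
spoofed = Ether() / IP(src="10.0.20.5", dst="10.0.20.1") / ICMP()

# Tag a frame for VLAN 20 from the endpoint, no switch cooperation needed:
tagged = Ether() / Dot1Q(vlan=20) / IP(dst="10.0.20.1") / ICMP()

sendp([spoofed, tagged], iface="eth0")  # requires root
```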
What you are asking will work. That’s the whole point of subnets. No you don’t need a VLAN to segregate traffic. It can be helpful for things like broadcast control.
However, you used the word “trust” which means that this is a security concern. If you are subnetting because of trust, then yes you absolutely do need to use VLANs.
Yup. Treating VMs like containers. The alternative, older-school method is cold snapshots of the VM: apply patches/updates (after pre-prod testing and validation), usually in an A/B or red/green phased rollout, and roll back snaps when things go tits up.
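Something like this, as a rough sketch (assumes libvirt/KVM; the VM name is made up):

```python
# Cold-snapshot patch cycle sketch, assuming libvirt/KVM and a VM named
# "app-vm" (name is illustrative).
import subprocess

def virsh(*args):
    subprocess.run(["virsh", *args], check=True)

virsh("shutdown", "app-vm")                        # cold = VM powered off first
virsh("snapshot-create-as", "app-vm", "pre-patch") # take the snapshot
virsh("start", "app-vm")                           # boot, apply patches, validate
# ...patch, test, validate...
# When things go tits up:
# virsh("shutdown", "app-vm")
# virsh("snapshot-revert", "app-vm", "pre-patch")
```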
If you are in a position to ask this question, it means you have no actual uptime requirements, and the question is largely irrelevant. However, in the “real” world where seconds of downtime matter:
Things not changing means less maintenance, and nothing will break compatibility all of a sudden.
This is a bit of a misconception. You have just as many maintenance cycles (e.g. “Patch Tuesdays”) because packages constantly need security updates. What it actually means is fewer, better-documented changes per maintenance cycle. This makes it easier and faster to determine what’s likely to break before you even enter your testing cycle.
Less chance to break.*
Sort of. Security changes frequently break running software, especially third-party software that just happened to need a certain security flaw or out-of-date library to function. The world has gotten much better about this, but it’s still a huge headache.
Services are up to date anyway, since they are usually containerized (e.g. Docker).
Assuming that the containerized software doesn’t need maintenance is a great way to run broken, insecure containers. Containerization helps to limit attack surfaces and outage impacts, but it isn’t inherently more secure. The biggest benefit of containerization is the abstraction of software maintenance from OS maintenance. It’s a lot of what makes Dev(Sec)Ops really valuable.
Edit since it’s on my mind: Containers are great, but amateurs always seem to forget they’re all sharing the host kernel. One container causing a kernel panic, or hosing things with misconfigured SHM settings, can take down the entire host. Virtual machines are much, much safer in this regard, but have their own downsides.
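Easy to see for yourself (assuming Docker is installed and can pull alpine):

```python
# Quick demonstration that containers share the host kernel. Assumes a
# working Docker install; image choice is arbitrary.
import subprocess

def kernel(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

host = kernel("uname", "-r")
container = kernel("docker", "run", "--rm", "alpine", "uname", "-r")
print(host, container, host == container)  # same kernel -> True
```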
And, for Debian especially, there’s some of the widest availability of services and documentation, since it’s THE server OS.
No it isn’t. THE server OS is the one that fits your specific use case best. For us self-hosted types, sure, we use Debian a lot. Maybe. For critical software applications, organizations want a vendor to support them, if for no other reason than to offload liability when something goes wrong.
It is running only rarely. Most of the time, the device is powered off. I only power it on a few times per month when I want to print something.
This isn’t a server. It’s a printing appliance. You’re going to have a similar experience of needing updates with every power-on, but with CoreOS, you’re going to have many more updates. When something breaks, you’re going to have a much longer list of things to track down as the culprit.
And, last but not least, I’ve lost my password.
JFC, uptime and stability aren’t your problem. You also very probably don’t need to wipe the OS to recover a password.
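For the record, one common recovery path looks something like this (assumes GRUB and console access; exact steps vary by distro, and "youruser" is a placeholder):

```sh
# Sketch only, assuming GRUB and physical/console access:
#  1. At the GRUB menu, press "e" and append "init=/bin/bash" to the
#     line starting with "linux", then boot with Ctrl-x.
#  2. At the resulting root shell:
mount -o remount,rw /   # root fs comes up read-only
passwd youruser         # placeholder username
sync
# then reboot normally
```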
My Raspberry Pi on the other hand is only used as a print server, running Octoprint for my 3D printer. I have installed Octoprint there in the form of OctoPi, which is a Raspbian fork distro where Octoprint is pre-installed, which is the recommended way.
That is the answer to your question. You’re running this RPi as a “server” for your 3d printing. If you want your printing to work reliably, then do what Octoprint recommends.
What it sounds like is you’re curious about CoreOS and how to run other distributions. Since breakage is basically a minor inconvenience for you, have at it. Unstable distros are great learning experiences and will keep you up to date on modern software better than “safer” things like Debian Stable. Once you get it doing what you want, it’ll usually keep doing that. Until it doesn’t, and then learning how to fix it is another great way to get smarter about running computers.
E: Reformatting
This is an XY problem: after looking real hard at ways to replace systemd, you’re eventually going to solve the actual problem, which isn’t actually systemd.
Or else you’re going to find yourself in an increasingly painful maintenance process trying to retrofit rc scripts into constantly evolving distributions.
There’s a lot I prefer about the old SysV, and I’m still not thrilled that everything is becoming more dependent on these large monolithic daemons. But I’ve yet to find a systemd problem that wasn’t just me not knowing how to use systemd.
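For what it's worth, the learning curve is smaller than it looks. Here's a sketch of what a basic rc-script-style service becomes as a unit file (service name and paths are made up):

```ini
# Illustrative unit file; the daemon name and path are assumptions.
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```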
Depends on your specific VPN, but look for a feature or setting called “split tunnel.” It should create a separate non-VPN route for the local network.
It’s usually a client-side setting, but not always; sometimes the routes are pushed from the server side when the tunnel is built on connection.
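With WireGuard, for example, it's basically just the `AllowedIPs` line in the client's peer section (addresses and hostname here are illustrative):

```ini
# Client-side [Peer] section sketch; endpoint, key, and subnets are
# illustrative assumptions.
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Full tunnel would be: AllowedIPs = 0.0.0.0/0
# Split tunnel: only route the VPN subnet through the tunnel, so local
# LAN traffic keeps its normal route.
AllowedIPs = 10.8.0.0/24
```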
No. ARP bridges layers 2 and 3. It’s switch-local. With a VLAN, it becomes VLAN-local, in the sense that 802.1Q creates a “virtual” switch.
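A scapy sketch of what ARP actually does (root required; the address is illustrative): ask the local broadcast domain who has an IPv4 address, and get back a MAC. It never crosses a router, and on a tagged network it only reaches hosts in the same VLAN.

```python
# Sketch only: ARP glues L3 addresses (IPv4) to L2 addresses (MAC) within
# one broadcast domain. Target address is an assumption.
from scapy.all import Ether, ARP, srp

ans, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.10"),
    timeout=2,
)
for _, reply in ans:
    print(reply.psrc, "is at", reply.hwsrc)
```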