But that’s very hypothetical. I’ve been running servers for more than a decade now and have never had an unbootable server, because that’s extremely unlikely. The services are contained in separate user accounts and launch on top of the operating system. If they fail, that’s not really an issue for the server booting. In fact, the webserver (or any service) shouldn’t even have permission to mess with the system. It’ll just show a red line in systemctl and the service won’t start. And the package managers are very robust. You might end up with some conflicts if you really mess up and do silly things, but with most of them the system will still boot.
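For illustration, this is roughly how a service gets locked down so it can’t mess with the system: a sketch of a systemd unit using standard sandboxing directives (the unit name, user, and binary path are made up):

```ini
# /etc/systemd/system/mywebapp.service (hypothetical unit)
[Unit]
Description=Example web app running as an unprivileged user
After=network.target

[Service]
# Dedicated unprivileged account; the service only owns its own files.
User=webapp
ExecStart=/usr/local/bin/mywebapp
# Sandboxing: /usr, /boot and /etc become read-only, /home invisible, /tmp private.
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
# The process cannot gain privileges via setuid binaries.
NoNewPrivileges=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

If a service like this crashes, `systemctl status mywebapp` shows it as failed (the red line), but booting and the rest of the system are unaffected.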
So containers are useful and have their benefits. But I think this is a non-issue.
I don’t think so. I’ve also started small. There are entire operating systems, like YunoHost, that forgo containers. All the packages in Debian are laid out to work like that. It’s really not an issue by any means.
And I’d say it’s questionable whether the benefits of containers apply to your situation. If, for example, you have a reverse proxy and do authentication there, all anyone needs to do is break that single container and they’ll be granted access to all the other containers behind it as well… If you mess up your database connection, it doesn’t really matter whether it runs in a container or in a user account / namespace: the “hacker” will gain access to all the data stored there in both cases. I really think a lot of the complexity and the places to mess up are a level higher, and not something you’d tackle with your container approach. You still need the background knowledge. Containers help you with other things, less so with this.
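As a sketch of that trust boundary (all service and image names here are made up), the usual layout exposes exactly one proxy container, and everything behind it trusts that the proxy already did the authentication:

```yaml
# docker-compose.yml (illustrative only)
services:
  proxy:
    image: example/reverse-proxy:latest   # hypothetical image doing TLS + auth
    ports:
      - "443:443"                         # the only container exposed to the outside
    networks: [backend]

  app1:
    image: example/app1:latest            # hypothetical app with no auth of its own
    networks: [backend]                   # reachable only through the proxy...

  app2:
    image: example/app2:latest
    networks: [backend]                   # ...unless the proxy itself is broken

networks:
  backend: {}
```

Compromise the `proxy` service and you can talk to `app1` and `app2` directly on the backend network, containers or not; the isolation doesn’t help at that level.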
I don’t want to talk you out of using containers. They do isolate stuff. And they’re easy to use. There isn’t really a downside. I just think your claim doesn’t hold up, because it’s too general. You just can’t say it that way.
Well, hear me out… This is a self-hosting sub; I just run an *arr suite (let’s face it, many here do), and I do so in containers… They’re not really distributed as packages, AFAIK…
BTW, my main nitpick with Debian is its outdated Podman packages… it wasn’t practical to run it there. Otherwise I, too, was content with Debian. I did mention this.
Sure. I think we could construct an argument for both sides here. You’re looking for something stable and rock-solid that doesn’t break your stuff. I’d argue Debian does exactly that: it has long release cycles and doesn’t give you any big Podman update, so you never have to deal with a major release upgrade. That’s kind of what you wanted. But at the same time you want the opposite of that, too, and that’s just not something Debian can do.
It’s going to get better, though. With software that has been moving fast (like Podman?), you’re going to experience that. But the major changes will slow down as the project matures, and we’ll get Debian Trixie soon (it’s already in hard freeze as of now), which comes with Podman 5.4.2. It’ll be less of an issue in the future. At least with that package.
The question remains: are you going to handle updates of your containers and base system better than, or worse than, Debian? If you don’t handle security updates of the containers in a timely manner for all time to come, you might be worse off. If you keep at it, you’ll see some benefits. Updates are now in your hands, with both the downsides and the benefits that brings… You should be fine, though. Most projects do an alright job with the containers they publish on Docker Hub.
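What “updates in your hands” looks like in practice depends on how you tag images. A minimal sketch (the image name and version are made up):

```yaml
# Illustrative compose fragment: pinning a tag makes updates deliberate.
services:
  app:
    # Hypothetical image. With a pinned tag, updating means editing this
    # line yourself, so you always know which version you're running...
    image: example/app:1.2.3
    # ...whereas ":latest" means a re-pull can silently jump major versions.
    restart: unless-stopped
```

With pinned tags, applying an update is an explicit edit followed by `docker compose pull && docker compose up -d`; with `:latest`, you re-pull on a schedule and trade that control for convenience. Either way, the timeliness of security fixes is now on you rather than on Debian’s security team.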
I’ve always relied on Docker Hub and the compose files shared on the project pages there, and never really delved deeper. It’s nice to hear a recent Podman is coming in the next release… So maybe it’ll become a viable option again.
I read that RHEL (and its derivatives) is the standard for Podman. But lately they’ve been riddled with licensing issues and big corporate nonsense, so I found Alpine instead…
I think Alpine has a release cycle of 6 months, so it should be a better option if you want software from 6 months ago packaged and available. Debian does something like 2 years(?), so naturally it might have very old versions of software. On the flip side, you don’t need to put in a lot of effort for 2 years.
I don’t think there is such a thing as a “standard” when it comes to Linux software. I mean, Podman is developed by Red Hat, and Red Hat also does Fedora. But we’re not Apple here, with a tight ecosystem. It’s likely going to run on a plethora of other Linux distros as well, and it’s not going to run better or worse just because of the company that made it…
Hmm, I see… Probably because server popularity comes down to either Debian or RHEL forks… Yeah, I guess that’s the good thing about open source: inter-compatibility.
BTW, I’m still personally testing this Alpine setup… I still need to achieve long-term stability, but I’m hopeful after what I’ve been reading of others’ experiences… Thanks!
Right. Do your testing. Nothing here is black and white, everyone has different requirements, and it’s also hard to get your own requirements right.
Plus, they even change over time. I’ve used Debian with all the services configured myself, moved to YunoHost, then to Docker containers, then to NixOS, and partially back to YunoHost over time… It all depends on what you’re trying to accomplish, how much time you have to spare, and what level of customizability you need… It’s all there for a reason, and there isn’t a perfect solution. At least in my opinion.
I guess you can take more risks if you know what you’re doing :P
On Gentoo, yes, they are distributed as packages.