

Exactly my thoughts too. Lots of theory about why it won't work, but little attention to the fact that if people use it, maybe it does work, and when it stops working, they will stop using it.
But the estimate assumes each NC instance gets only half a vCPU and 1GB of memory.
This is a super conservative estimate that doesn't include anything besides a tiny Fargate deployment and the Aurora instances.
Edit: Fargate ($40/month), the tiniest Aurora instances at 20% utilization and with merely 50GB of storage ($120/month). Missing S3, which will easily cost $50 in storage and transfer (for only a few TB), plus ALBs and network traffic, especially outbound (easily $50-100 depending on volumes).
This basic solution's real cost is already in the $150-300/month range. I don't know NC well enough to estimate DB volumes and overall usage, but I assume there is going to be a lot of data moving in and out (backups, media, etc.).
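For what it's worth, here is that back-of-the-envelope math as a tiny sketch. The figures are just the rough guesses above (not actual AWS quotes), and summing them lands at the top of, or slightly past, that range:

```python
# Rough monthly cost tally using the ballpark figures from this comment.
# These are guesses, not AWS quotes; plug in your region's pricing and real volumes.
line_items = {
    "fargate_tasks":       (40, 40),    # tiny Fargate deployment
    "aurora_smallest":     (120, 120),  # ~20% utilization, 50GB storage
    "s3_storage_transfer": (50, 50),    # a few TB stored/moved
    "alb_and_egress":      (50, 100),   # outbound traffic is the wildcard
}

low = sum(lo for lo, _ in line_items.values())
high = sum(hi for _, hi in line_items.values())
print(f"estimated total: ${low}-${high}/month")
```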
For a heavily used NC instance (assuming a company offering it as a service), the cost is going to become massive pretty fast.
Also, as a side note: if a company is offering NC as a service but doesn't manage a single piece of the NC deployment… what is the company's product? And most importantly, how are they going to make money when AWS is going to eat a linearly scaling chunk of their revenue forever?
Well yeah, it wouldn't break the bank, but a conservative cost estimate (without considering network costs, for example, which are quite relevant for a data-intensive app) would bring this setup to about $40/month. That is about 5 times more expensive than a VPS with 4x the resources.
OP said this is some sort of “enterprise self-hosting” solution, which I guess then kind of makes sense. For a company providing nextcloud as a service I would never vendor lock myself and let AWS take a huge chunk of my revenue forever, but I can imagine folks have different opinions.
In that case, the Pulumi permissions are too broad IMHO for what it has to do; an enterprise should adhere to least privilege. Likewise, as I wrote in another comment, the egress security groups are unclear to me (why is unrestricted outbound traffic needed at all?), and the image consumed should be pinned to a digest. Or better yet, it should come from a private enterprise registry, ideally with an attestation that can be verified at runtime.
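To make those two points concrete, here is a minimal Pulumi sketch in Python; the resource names, config key, registry URL and digest are hypothetical placeholders, not the project's actual code:

```python
import pulumi
import pulumi_aws as aws

# Hypothetical: the VPC id would come from the existing network stack.
vpc_id = pulumi.Config().require("vpcId")

# Egress restricted to what the tasks actually need (here: HTTPS only),
# instead of an allow-all outbound rule.
nextcloud_sg = aws.ec2.SecurityGroup(
    "nextcloud-tasks",
    vpc_id=vpc_id,
    egress=[aws.ec2.SecurityGroupEgressArgs(
        protocol="tcp",
        from_port=443,
        to_port=443,
        cidr_blocks=["0.0.0.0/0"],  # tighten further to VPC endpoints / known CIDRs if you can
    )],
)

# Image pinned to a digest, ideally coming from a private registry (placeholder values).
NEXTCLOUD_IMAGE = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/nextcloud@sha256:..."
```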
I am not sure ECS Fargate makes sense vs an EC2 instance to run the workload. This setup alone will cost about $30/month for the compute (half a vCPU per replica with Fargate), plus about $12 for the memory (1GB/task). 2x t2.micro instances could be run for ~$20 without even considering reservation discounts etc. Obviously the gap will become even larger at scale, which I suppose might be very relevant for an enterprise.
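Rough numbers behind that comparison, as a sketch; the rates are approximate us-east-1 on-demand prices and will differ by region and over time:

```python
# Fargate vs EC2 for 2 replicas at 0.5 vCPU / 1GB each, ~730 hours per month.
# Approximate on-demand rates; check the current AWS pricing pages.
HOURS = 730

fargate_vcpu_hr = 0.04048          # $ per vCPU-hour
fargate_gb_hr = 0.004445           # $ per GB-hour
fargate = 2 * (0.5 * fargate_vcpu_hr + 1.0 * fargate_gb_hr) * HOURS

t2_micro_hr = 0.0116               # $ per hour, no reservations or savings plans
ec2 = 2 * t2_micro_hr * HOURS

print(f"Fargate: ~${fargate:.0f}/month vs 2x t2.micro: ~${ec2:.0f}/month")
```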
Plus, at this point, why not use a managed Nextcloud directly (or alternatives)? If you are using managed storage, a managed runtime and a managed database anyway, with the vendor lock-in…
Oh yeah, I am aware. Mostly I would question the idea of having multi-AZ redundancy and using a managed service for the DB (which indeed is expensive). All of this when a $5 VPS could host the same thing (maybe still using S3 for storage) and you could accept the few hours of downtime in the rare event your VPS explodes and you need to restore it from a backup.
So from my PoV this is absolutely overkill, but I concede that it depends a lot on the requirements. I can't imagine ever having requirements so tight that they need such infra for my personal stuff (in fact, I think not even most businesses have these requirements; I have written on the topic at https://loudwhisper.me/blog/hating-clouds/).
Everyone is free to pick their poison, but I have to ask… why? What is the target audience here? This is a massively overkill architecture IMHO. Not to mention the fact that you now need 3 managed services (Fargate, S3 and Aurora at least) for a single self-hosted tool, and that is being generous (not counting CloudWatch, ALBs, etc.).
I also use Porkbun; their API is not a masterpiece, but it works and lets you get, set and update records. In fact, their API is now supported by some of the common DDNS scripts out there.
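If anyone wants to roll their own instead of using an existing DDNS script, the gist looks roughly like this. Endpoint paths and field names are from memory of their docs, so double-check them; the keys and domain are placeholders:

```python
import requests

API = "https://porkbun.com/api/json/v3"
AUTH = {"apikey": "pk1_...", "secretapikey": "sk1_..."}  # placeholders

# 'ping' authenticates and conveniently returns your public IP.
my_ip = requests.post(f"{API}/ping", json=AUTH, timeout=10).json()["yourIp"]

# Update the A record for home.example.com with the current IP.
r = requests.post(
    f"{API}/dns/editByNameType/example.com/A/home",
    json={**AUTH, "content": my_ip, "ttl": "300"},
    timeout=10,
)
print(r.json())  # {"status": "SUCCESS"} on success
```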
deSEC.io is a good option. To be honest, using Cloudflare just for DNS is completely OK. It's not a service that lets them spy on you or consolidate their monopoly.
Well, I did not mean a replacement (in fact, most orgs run in clouds, which use VMs), but I meant that a lot of orgs moved from VMs to containers/Kubernetes as the way to slice their compute. Often the technologies are combined, so you are right.
but that also shows that most modern software is poorly written
Does it? I mean, this is especially annoying with old software, maybe dynamically linked, or PHP, or stuff like that. Modern tools (Go, Rust) don't actually even have this problem. Dependencies are annoying in general; I don't think it's a property of modern software.
Yes, that's exactly my point. There are many options, yet people stick with Docker and Docker Hub (which is anything but open).
Who are these people? There are tons of registries that people use: GitHub has its own, quay.io, etc. You can also simply publish Dockerfiles and let people build the images themselves. Of course Docker has the edge because it was the first mainstream tool, and it's still a great choice for single-machine deployments, but it's far from the only one used. Kubernetes abandoned Docker as the default runtime years ago, for example… who are you referring to?
Yes… maybe we just need some automation/orchestration tool for that. This is like saying that it's way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell in an isolated namespace… Docker, as you said, provides a convenient API, but that doesn't mean we can't do the same for systemd.
But systemd also uses unshare, chroot, etc. They are at the same level of abstraction. Docker (and container runtimes) are simply specialized tools, while systemd is not. Why wouldn't I use a tool that is meant for this when it's available? I suppose bubblewrap does something similar too (it's used by Flatpak), and I am sure there are more.
Completely proprietary… like QEMU/libvirt? :P
Right, because organizations generally run QEMU, not VMware, Nutanix and another handful of proprietary platforms… :)
Most of the pro-Docker arguments go around security
Actually, Docker and the success of containers are mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere. Security is a side effect and definitely not the reason containers picked up.
1) systemd can provide as much isolation as docker containers and 2) there are other container solutions that are at least as safe as Docker and nobody cares about them.
Yes, and it’s much harder to achieve the same.
In systemd you need to use 30 different options to get what containers give you almost instantly and with much less hassle. I made an example on my blog where I decided to run blocky in systemd and not in Docker. It's just less convenient and accessible, harder to debug, and it also relies on each individual user to do the hardening, while with containers a lot gets packed into the image and is therefore harder to mess up.
Docker isn’t totally proprietary
There are many container runtimes (CRI-O, Podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are implemented entirely with native Linux features (namespaces, seccomp, capabilities, cgroups), and images follow an open standard (OCI).
I will avoid commenting on what looks like a rant, but I want to simply remind you that containers are the successor of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendors, while containers use only native OS features and are therefore a step towards openness.
I wouldn't say that namespaces are virtualization either. Containers don't virtualize anything: namespaces are all inherited from the root namespaces and are therefore completely visible from the host (with the right privileges). It's just a completely different technology.
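A quick illustration of that point, assuming Linux, Python 3.12+ (for os.unshare) and enough privileges (root or a user namespace): the process gets its own hostname, but from the host it is still just an ordinary process and the host's hostname doesn't change.

```python
import os
import socket
import subprocess

os.unshare(os.CLONE_NEWUTS)            # new hostname (UTS) namespace, nothing else
socket.sethostname("not-a-container")  # visible only to this process and its children

subprocess.run(["hostname"])           # prints: not-a-container
# Meanwhile `hostname` in another shell is unchanged, and this process still
# shows up as a regular PID on the host: nothing is being virtualized.
```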
I used to run systemd units that just start docker-compose files; that's also a thing, I suppose. Also, it's generally easy to manage the container directly (killing/restarting) without needing the lifecycle a systemd unit adds, I would say.
Yeah, and it also requires quite a few options, some with hard-to-predict outcomes. For example, RootDirectory can be used to effectively chroot the process, but that carries implications such as the application no longer having access to CA certificates, which in containers is generally a solved problem.
I would also add security, or at least accessible security. Containers provide a number of isolation features out-of-the-box or extremely easy to configure which other systems require way more effort to achieve, or can’t achieve.
Ironically, after some conversation on the topic here on Lemmy I compiled a blog post about it.
I run Prometheus on a separate cluster, so I plug my servers with node_exporter and scrape the metrics from there. I then alert with Grafana. To be honest, the setup is heavier (resource-usage-wise) than I would like for my use case, but it's what I am used to, and it scales well to multiple machines.
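For anyone unfamiliar with the setup: node_exporter just serves plain-text metrics over HTTP (port 9100 by default) and Prometheus pulls them on a schedule. A quick sketch of what is actually being scraped (the hostname is a placeholder):

```python
from urllib.request import urlopen

# node_exporter listens on :9100 by default and serves plain-text metrics.
metrics = urlopen("http://my-server:9100/metrics", timeout=5).read().decode()

# e.g. the 1-minute load average, the kind of value a Grafana alert keys on
for line in metrics.splitlines():
    if line.startswith("node_load1 "):
        print(line)  # e.g. node_load1 0.42
```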
Yes, I was in this situation and I did exactly that. You need a splitter and then MoCA adapters in the rooms (a bit expensive, at least 5-6 years ago where I lived).