

Ports 80 and 443.
The CLI is easy, and you could just cron (scheduled task) a bunch of commands to open the firewall, renew the cert, and close the firewall again. That’s how I do it for some internal systems.
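As a rough sketch of that cron approach (hypothetical schedule and paths; assumes `ufw` and `certbot`, so adjust for your own firewall tool):

```
# /etc/cron.d/certbot-renew — open web ports, attempt renewal, close them again
0 3 * * 1 root ufw allow 80,443/tcp && certbot renew --quiet; ufw delete allow 80,443/tcp
```

The trailing `;` (rather than `&&`) before the delete means the ports get closed again even if the renewal fails.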
I’m not sure what you’re running, but I would look into certbot.
Either using the basic web plugin or the DNS plugin. Nginx would be simpler; you’d just have to open your web ports during certificate generation to pass the challenge.
I know some proxy tools have Let’s Encrypt support built in, such as Traefik.
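For the web-plugin route, a minimal invocation looks something like this (the domain is a placeholder; ports 80/443 must be reachable from the internet while the challenge runs):

```shell
# Obtain a cert and configure nginx to use it, via certbot's nginx plugin
certbot --nginx -d example.com
```

After that, `certbot renew` handles renewals; most distro packages ship a cron job or systemd timer for it already.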
SQLite doesn’t like NFS; the file locking isn’t stable/fast enough, so any latency in the storage can cause data loss, corruption, or just slow things down.
However, migrating SQLite to MySQL is relatively peanuts; Postgres less so…
Still, it’s a nice move for those who don’t run containers on a single host with local filesystems.
This is the best answer.
So my production setup is 2x10Gb bonded NICs for networking and 2x10Gb bonded NICs for Ceph/cluster traffic. I suspect that when Ceph is being heavily used you may see bottlenecks; however, with host-level failure domains, in theory your data should be closer to the correct host and not have an issue. But on a basic level it’s like having 3 copies of the data, one on each host, so it doesn’t save you any storage, it just reduces the risk during failure.
Thinking about it, you may actually see better results with ZFS and replication jobs, as there’s less overhead and ZFS sends are incremental. You’d obviously just lose X minutes of data instead of Ceph’s X seconds.
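Roughly what an incremental replication cycle does under the hood (pool/dataset/host names here are hypothetical; Proxmox’s built-in replication jobs automate this for you):

```shell
# Take a new snapshot, then send only the delta since the previous one to the peer
zfs snapshot tank/vm-100-disk-0@rep2
zfs send -i tank/vm-100-disk-0@rep1 tank/vm-100-disk-0@rep2 | \
  ssh node2 zfs receive -F tank/vm-100-disk-0
```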
Ceph works best if you have identical OSDs (quantity, type, and capacity) across the cluster, and it also works best on a 3+ node cluster.
I ran a mixed SATA SSD/HDD 256GB/4TB cluster and it was always a bit pants. Now I have 7x1TB SSDs per node (4 nodes) and it works fantastically.
Proxmox uses replica 3/2 with failure at the host level, but you may find that EC works better for your mixed infra. As you noticed, you can’t meet the 3-host failure domain, and setting it to OSD failure level means data may be kept on a single host, so it would need to traverse the network to the other machine.
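If you do try EC, the knob that matters here is the CRUSH failure domain. A hypothetical profile (the name and k/m values are just examples, not a recommendation for your cluster) would look like:

```shell
# 2 data + 1 coding chunk, spread across OSDs rather than whole hosts
ceph osd erasure-code-profile set ec-k2m1 k=2 m=1 crush-failure-domain=osd
ceph osd pool create ecpool erasure ec-k2m1
```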
You may also need more than a single 10Gb NIC, as you might start hitting bandwidth issues.
This compose file looks like it should work. I’m not at a PC to test, but it’s near identical to my own; I would maybe swap OnlyOffice for Collabora, otherwise try this.
OP states they are using a NAS and a server, so if NFS is being used you may need :Z on the end of any bind volume (or a non-NFS mount point if you’re using podman / extended ACLs don’t work).
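For reference, the relabel flag goes on the end of the bind mount in the compose file (service name and paths below are hypothetical):

```yaml
services:
  nextcloud:
    image: nextcloud
    volumes:
      # :Z asks the runtime to apply a private SELinux label to the host path
      - /mnt/nas/nextcloud-data:/var/www/html/data:Z
```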
FYI, docker images binding to an NFS mount can be tricky because ACL extensions aren’t supported. Podman is especially bad for this.
TIL. Thanks!
Derek Derek1 DerekNew Derek2 NewDerek Ted DerekNew2 DerekTheServer Derek-Derek DerekMini
IPv6 doesn’t support NAT… or am I woefully out of date?
But your home router will just firewall like it does already; you simply don’t have NAT as a fallback for “security”. It does make running internal services much easier, as you no longer need to port forward. So you can run two webservers on port 80 and they can both be allowed inbound without doing horrible load balancing or NAT translation.
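As a hypothetical sketch (using documentation-prefix addresses), the router’s forward chain just needs accept rules for each host instead of port-forwards:

```
# nftables: allow inbound 80/tcp to two internal IPv6 hosts; everything else stays dropped
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    ct state established,related accept
    ip6 daddr { 2001:db8::10, 2001:db8::20 } tcp dport 80 accept
  }
}
```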
This seems like a bad idea and would only increase load for all federated instances with no real benefit to the community (maybe if you were an instance with, say, 10 million users).
However, it is exciting and cool; it’s just not a recommendation I would personally give.
😂
What are they?
Ah no, sorry, I wrongly assumed you had a DMZ/public IP.
Some routers may have automated ways to open ports, but that would be highly dependent on the router, etc.