

Also, proxy_buffering
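For context, it’s a one-line toggle per location; a minimal sketch, assuming a typical reverse-proxy block (backend port hypothetical):

```nginx
# inside a server { } block
location / {
    proxy_pass http://127.0.0.1:8080;  # hypothetical upstream service
    proxy_buffering off;               # stream the response instead of buffering it first
}
```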
Sounds like you need to instrument it yourself.
It could be as “easy” as calling the endpoints yourself, saving the sensor states in any kind of storage Grafana supports, and then building a dashboard on top of that data. Something like the sketch below.
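A minimal sketch of that approach, assuming a hypothetical sensor endpoint and using SQLite as the storage (Grafana can read it through a datasource plugin, or swap in Postgres/InfluxDB):

```python
import sqlite3
import time

import requests

SENSOR_URL = "http://192.168.1.50/api/sensors"  # hypothetical endpoint returning JSON

db = sqlite3.connect("sensors.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (ts INTEGER, sensor TEXT, value REAL)")

while True:
    data = requests.get(SENSOR_URL, timeout=5).json()  # e.g. {"temp": 21.3, "humidity": 40}
    now = int(time.time())
    for name, value in data.items():
        db.execute("INSERT INTO readings VALUES (?, ?, ?)", (now, name, value))
    db.commit()
    time.sleep(60)  # one sample per minute is plenty for a dashboard
```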
Maybe Zabbix could also work
Thanks, especially for that OpenWrt mesh bit - that might end up being the best solution.
Looking into it, ty!
Good tip, thanks!
Kicking low-signal devices didn’t occur to me, and should be easy to implement on the OpenWrt one, thanks!
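If it helps, my understanding is that recent OpenWrt exposes the relevant hostapd knobs per wifi-iface - a hedged sketch for /etc/config/wireless (thresholds made up; verify the options exist on your build):

```
config wifi-iface 'ap_main'
    # refuse association below -75 dBm (hostapd: rssi_reject_assoc_rssi)
    option rssi_reject_assoc_rssi '-75'
    # don't even answer probe requests from weak clients
    option rssi_ignore_probe_request '-75'
```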
The TP-Link is stock sadly, but I could replace it with a more capable one (MikroTik L009 probably; I don’t care about single-band in this case because it literally covers a single, open-space room).
Yeah, I didn’t add that bit before; edited it in. The Archer is here as just a dumb AP/routing box for the furthest room, connected to the Omnia by ethernet (so yes, the Archer acts as a client device @ .1.20 and forwards everything to the Omnia).
EDIT: Sadly I don’t have OpenWrt on the TP-Link, but the plan was to replace it with the more capable MikroTik so that I could set up the more advanced bits (Mobility Domain, “roaming”).
Ha, I didn’t specify it but both routers are connected by normal ethernet cable (TP-Link -> Turris).
I don’t think an extender (as in forwarder) is a good solution here, as it would needlessly increase latency for the secondary, though I will check! Maybe there are some important bits about the mobility domain and roaming in it.
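For the roaming bit: as far as I know, it comes down to enabling 802.11r with the same mobility domain on every AP. On OpenWrt that’s a few wifi-iface options - a sketch with made-up values:

```
config wifi-iface 'ap_main'
    option ssid 'home'
    option encryption 'psk2'
    option ieee80211r '1'
    # 4 hex chars, must be identical on every AP in the roaming group
    option mobility_domain '4f57'
    # derive FT keys locally from the PSK, no key distribution needed
    option ft_psk_generate_local '1'
```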
From a quick Google search, it seems your only option for HA is a Xiaomi scale with Bluetooth, paired to an MCU running ESPHome.
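For reference, the ESPHome side looks tiny - a hedged sketch using its xiaomi_miscale platform on an ESP32 (the MAC is hypothetical; use your scale’s BLE address):

```yaml
esp32_ble_tracker:

sensor:
  - platform: xiaomi_miscale
    # hypothetical MAC; the scale's BLE address
    mac_address: 'AA:BB:CC:DD:EE:FF'
    weight:
      name: "Xiaomi Scale Weight"
```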
I would really recommend just trying it out too (when RL time allows). Most of the low-level stuff is well hidden, or not something you have to deal with unless you need it, and everything having mostly one solution is a nice refresh compared to the hells of scripting languages.
- A long-time Python dev.
“Import-time” execution was a huge mistake.
Your use case and situation seem very close to mine, except I specifically do not host communities.
First of all, you can run as many services from a single nginx as you want (or can handle). Usually you do this by having each service on its own (sub)domain and pointing them all at the same IP; nginx then proxies the requests to the corresponding service running locally on a given port (see: nginx reverse proxy).
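A rough sketch of the shape (domains and ports hypothetical, TLS left out for brevity):

```nginx
server {
    listen 80;
    server_name photos.example.com;
    location / {
        proxy_pass http://127.0.0.1:2342;  # the service's local port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name git.example.com;
    location / {
        proxy_pass http://127.0.0.1:3000;  # another service, same box
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```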
I would definitely recommend the docker images unless you have specific needs; afaik the ansible recipe installs and manages a docker compose project too (unless they also added an official bare-bones ansible setup). Might be wrong here - I do docker and manage it myself, and updating is usually a file edit and two commands away.
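The “file edit and two commands” I mean, assuming a docker compose setup:

```sh
# bump the image tag in docker-compose.yml, then:
docker compose pull   # fetch the new images
docker compose up -d  # recreate only the containers whose image changed
```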
About the VPS being enough - from my monitoring, every foreign subscribed community increases the load, with bigger/more active communities increasing it more.
The main limiting resource for my setup is disk space: some time ago I calculated that my database grows by about 1 GB per month with about 500 subscribed communities, and that’s only the postgresql database size, without any media.
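If you want to track the same number yourself, it’s a single query (assuming the database is named lemmy):

```sql
-- pg_size_pretty/pg_database_size are postgres built-ins
SELECT pg_size_pretty(pg_database_size('lemmy'));
```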
The stats from my s3 provider (you can host images locally too) hint that I am gaining 1-5 GB of media per month.
I don’t have any metrics on how much active users drain the server, as my instance is intentionally small, but I can imagine that having 10-100-1000 active users at the same time would drastically increase the load on at least postgres, as well as the bandwidth.
And about my setup, for comparison: I am renting a dedicated server from Hetzner (AX41-NVMe) running a bunch of other services as well (minecraft server, factorio server, file sharing service, …), and over the last 30 days my monitoring reports the “average” load average (same for all of 1/5/15m) at around 1 core (out of a 12-thread processor, 6 cores * 2 SMT).
Memory sits at about 50% monthly average, out of 64G.
Though, most of the services are really under-utilized (minecraft) or don’t require much (factorio).
Rule of thumb: if your users subscribe to a lot of outside communities, expect at least increased disk space consumption, and at worst increased bandwidth and load as well.
If any of your hosted communities gets popular on the wider fediverse, definitely expect increased bandwidth and load - more servers hitting your server with more data (upvotes, comments, edits…) means nginx, lemmy and postgres all need to process more.
At baseline there will be a lot of spiky but small chatter from other instances, and the biggest resource drain will be postgres.
I wouldn’t personally go into this with anything less than 4 vCPUs, 32G of RAM and non-shared/virtual storage (disk latency kills postgres performance).
Maybe this: https://github.com/Alinto/sogo
I wouldn’t recommend putting ssh behind any vpn connection unless you have secondary access to the machine (for example a virtual tty/terminal from your provider, or local-network ssh). At best, ssh should be the only publicly accessible service (unless you host other services that need to be publicly accessible).
I usually move the ssh port to some higher number just to get rid of the basic scanners/skiddies.
Also disable password login (only keys) and no root login.
And for extra hardening, explicitly allow ssh for only users that need it (in sshd config).
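All three of the above in sshd_config terms (port and usernames hypothetical; run sshd -t before restarting so a typo doesn’t lock you out):

```
# /etc/ssh/sshd_config
# non-default port sheds most scanners
Port 22222
# keys only
PasswordAuthentication no
PermitRootLogin no
# only these accounts may log in over ssh
AllowUsers alice bob
```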
I don’t use nginx proxy manager, but websocket support has to be enabled for apps that use websockets (duh) - you would have to dive into the docs or example infra configs to check whether a given service uses them.
Rule of thumb here would be to enable it for everything. Optionally you could check if the service works with/without it.
E: Websockets are used when a website needs to talk in “real-time” with the server - live views, graphs and notifications will usually use them. Generally, if the website doesn’t fully reload/redraw but the data seems to change, there is a high chance it uses websockets under the hood (though there are ways to do it without ws, e.g. SSE).
Example: Grafana uses websockets, but the qBittorrent web UI uses other means (SSE) and does not require ws.
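For reference, NPM’s websocket toggle more or less boils down to these raw nginx lines (backend hypothetical):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;        # hypothetical upstream
    proxy_http_version 1.1;                  # websockets need HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;  # pass the upgrade handshake through
    proxy_set_header Connection "upgrade";
}
```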
borg backup with rsync.net
Borg does de-duplication and compression; I’ve used it for multiple things like backing up minecraft servers, and it can reduce the final backup size by a lot (like 1-2 TB down to around a hundred GB, though that was with content that was highly compressible and didn’t change much over time, so the deduplication did a lot too).
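The day-to-day workflow is small - a hedged sketch against an ssh-reachable repo (host and paths hypothetical):

```sh
# one-time: create an encrypted repo on the remote
borg init --encryption=repokey ssh://user@rsync.net/./backups

# each run: snapshot with compression, then thin out old archives
borg create --compression zstd ssh://user@rsync.net/./backups::{hostname}-{now} /srv/minecraft
borg prune --keep-daily 7 --keep-weekly 4 ssh://user@rsync.net/./backups
```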
There is also borgbase.com, which looks a bit nicer and focuses only on borg repositories, as opposed to rsync.net, which is also compatible with just about any of the usual tools (e.g. rsync, rclone, etc.).
I would try momentarily replacing the defined dns servers with nameserver 1.1.1.1 and seeing if stuff improves, though the pull error would hint that docker did resolve the name but somehow didn’t get an answer.
Hard to guess what else could be the problem apart from the obvious stuff - check that the internet connection is healthy and stable (ping something, watch for latency spikes or drops; also, any outgoing firewall filters?).
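If the resolv.conf swap does help, you can also pin DNS for the docker daemon itself (dns is a real daemon.json key; restart the daemon afterwards):

```sh
# note: this overwrites any existing /etc/docker/daemon.json
echo '{ "dns": ["1.1.1.1", "8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```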
There is also FX, which can do this too; additionally, you can browse/download/upload files to/from the phone locally from a PC through the browser (the app opens up a web server).
We’ve consolidated all our code into a single repository – just clone ente-io/ente on GitHub, and you will have at your disposal a state-of-the-art, end-to-end encrypted, full-stack (mobile/web/desktop clients, the server, and a CLI to boot) alternative to Google Photos and Apple Photos.
Sadly, it seems today’s law allows them to force you to unlock it; otherwise they straight up treat you as a terrorist. At least in the UK, it seems (source: Britannica youtuber getting detained when returning home to the UK).