

The user can choose. Please note you first must accept the other client by its fingerprint.
I use Inoreader. Not self-hosted, but that is the only thing I have at that company. The articles are public anyway, so I don't care that much.
My workflow is to maximize the article and hide everything else, like the menu bar and the list of sources. I use j/k to scroll and v/space to open an article. I use the Vimium extension, so d closes the tab with the article. Articles are automatically marked as read as I scroll. It takes 5 minutes per day to go through. If I were to self-host, I think I would try Tiny Tiny RSS.
I don't use it on my phone. No need, I am at my computer most of the time.
45 to 55 watts.
But I make use of it for backups and as a firewall. No cloud shit.
You will find many more at feddit.nu
Data loss is never fun. File systems in general need a long time to iron out all the bugs. Hope it is in a better state today. I remember when ext4 was new and crashed on a laptop. Ubuntu adopted it too early, or maybe I was not on LTS.
But as always, make sure to have a proper backup in a different physical location.
Why?
I am looking more into Btrfs for backup because:
- I run Linux, not BSD
- ZFS requires more RAM
- I only have one disk
- I want to benefit from snapshots, compression and deduplication
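Not from the original comment, but as an illustration of those three features on a single-disk Btrfs setup (the mount points, snapshot paths and the duperemove tool are my own placeholder choices, not a description of the actual setup):

```sh
# Transparent compression: mount the filesystem with zstd compression
mount -o compress=zstd /dev/sda2 /mnt/data

# Read-only snapshot named by date, usable as a backup source
btrfs subvolume snapshot -r /mnt/data /mnt/data/.snapshots/$(date +%F)

# Incremental send of the new snapshot to a backup location
# (the target must also be Btrfs), using an older snapshot as the parent
btrfs send -p /mnt/data/.snapshots/2024-01-01 /mnt/data/.snapshots/$(date +%F) \
  | btrfs receive /mnt/backup/snapshots

# Offline deduplication with duperemove (separate package)
duperemove -dr /mnt/data
```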
If there is a problem measuring at low power usage, you can easily solve it by temporarily adding a known load. Say, plug in a 40 watt lamp or something, then subtract it from the reading afterwards: if the meter shows 52 W with the lamp, the device itself draws about 12 W.
I use very simple software for this. My firewall can do route monitoring and failover, and use policy-based routing. I just send all traffic to another machine that does the diagnosis part. It pings through the firewall and fetches some info from the firewall. The page itself is not pretty, but it says what is wrong - enough for my parents to read what the error is. I also send DNS traffic to a special DNS server that responds with the same static IP address for everything - enough for the browser to continue with an HTTP GET, which the firewall forwards to my landing page. It is a bit sad that I haven't had any more problems since I changed ISP.
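The comment doesn't name the DNS software, but as one way to do the "answer everything with the same static address" trick, dnsmasq has a catch-all address rule (the 192.0.2.80 landing-page address and the file path are placeholders):

```sh
# Answer every DNS query with the landing page's address while the link is down.
# In dnsmasq, address=/#/<ip> matches all domains.
cat > /etc/dnsmasq.d/outage.conf <<'EOF'
address=/#/192.0.2.80
EOF
systemctl restart dnsmasq
```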
Had a scenario where the page said the gateway was reachable but nothing more. ISP issue. The DHCP lease slowly ran out. There was a fiber cut between our town and the next. Not much I could do about it. I just configured the IP statically and could reach some friends in the same city through IRC so we could talk about it.
The web page itself was written in PHP; it read the ICMP logs and showed the relevant up and down entries. Very simple.
The DVDs are fine for offline use, but I don't know how to keep them updated. It would probably end up taking loads of space, as I guess they are equivalent to a repo mirror.
I use it with Kubuntu. Doing apt update is now much faster. I did some testing and found some good public mirrors, so I could max out my connection (100 Mbit) with about 15 ms latency to the server. But I think the problem was that there are so many small files. Running nala to fetch the files in parallel helps, of course. With apt-cacher-ng I don't need nala at all. The low latency and having the files on a gigabit connection to my server leads to fast access. I just need to find a good way to fill it with new updates.
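Assuming the cache is apt-cacher-ng on its default port 3142, pointing a client at it is a single apt configuration line (the hostname is a placeholder):

```sh
# On each client machine: send apt's HTTP traffic through the local cache
echo 'Acquire::http::Proxy "http://apt-cache.lan:3142";' \
  > /etc/apt/apt.conf.d/01proxy
```

After that, apt update and apt upgrade --download-only hit the local cache instead of the public mirror.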
A second problem is figuring out whether anything can be done to speed up apt upgrade itself, which I guess is not possible. A workaround with snapshots and sending diffs does not sound efficient either, even on older hardware.
apt update: 4 seconds vs 16 seconds.
apt upgrade --download-only: 10 seconds vs 84 seconds.
First off, if the Internet goes down I have an HTTP captive portal that does some diagnosis, showing where the problem is: link on the network interface, gateway reachable, DNS working and DHCP lease (a rough sketch of those checks is below). Second, now that it is down, it shows the timestamp for when it went down. Third, the phone numbers of the ISP and the city fiber network owner.
Fourth, I check my local RSS feed and email folder. I also have something downloaded locally to watch, from YouTube or Twitch gaming streams.
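For reference, a rough sketch of the kind of checks behind such a portal on a Linux box; the interface name, test hostname and lease-file path are placeholders and depend on the actual setup:

```sh
#!/bin/sh
# Minimal connectivity diagnosis: link, gateway, DNS, DHCP lease
IF=eth0                                          # WAN interface (placeholder)
GW=$(ip route | awk '/^default/ {print $3; exit}')

# 1. Link on the network interface
ip -br link show "$IF"

# 2. Gateway reachable
ping -c 1 -W 2 "$GW" >/dev/null && echo "gateway ok" || echo "gateway DOWN"

# 3. DNS working
getent hosts example.com >/dev/null && echo "dns ok" || echo "dns DOWN"

# 4. DHCP lease (file location depends on the DHCP client in use)
tail -n 20 /var/lib/dhcp/dhclient.leases 2>/dev/null | grep -i expire
```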
Use Veeam. If you hit the limit, just configure it to send to an SMB share and you need no license.
It might be enough to just rsync stuff to the secondary regularly, have the inactive machine monitor the active one, and start all services when the active machine stops responding.
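A rough sketch of that idea on the standby machine; the hostname, path and service name are placeholders, and a real setup would also need some protection against split-brain:

```sh
#!/bin/sh
# Keep a copy of the data (run this part regularly, e.g. from cron)
rsync -a --delete active.lan:/srv/app/ /srv/app/

# Watch the active machine and take over after a few consecutive failures
FAILS=0
while true; do
    if ping -c 1 -W 2 active.lan >/dev/null; then
        FAILS=0
    else
        FAILS=$((FAILS + 1))
    fi
    if [ "$FAILS" -ge 3 ]; then
        systemctl start myapp.service   # start the services locally
        break
    fi
    sleep 10
done
```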
I think 3 nodes are required for that
24 MiB is too little. Not even enough for nginx/apache. What installation instructions did you follow?
This is why you should have file history versioning turned on. With backups this is a must.