

Maybe get a reputable one; the other ones are sadly malware-infected in way too many cases. It’s a way for the manufacturer to make an extra buck from the sale.
If you have an AVM Fritz!Box home router you can simply create a new profile that disallows internet access and set the devices you want to “isolate” to that profile. They will be able to access the local network and be accessed by the local network just fine, but they won’t have any outgoing (or incoming) connectivity.
If only modern kernels weren’t a problem. I wish you could just install new OSs like on a PC.
I’ve used restic before and it worked great with OVH’s object storage. Moved away from cloud backups because of the cost though.
Yeah, has anyone ever actually tried restoring from them? I only remember one disgruntled redditor posting about it, but that’s about it.
Depends a lot on what backup software you use. Backblaze B2 is just an S3-like object storage service. It’s the underlying storage for many different things, one of which can be backup software. They do have their own backup solution though, but in that case B2 is the wrong product for you to look at.
But Borg does not work with object storage, it needs a borg process on the receiving side.
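To illustrate the difference: restic can talk to an S3-compatible bucket directly, while Borg expects a borg binary reachable on the remote side, usually via SSH. A rough sketch of what the repository URLs look like (bucket, host and path names are made up):

```shell
# restic: talks to the object store directly, nothing needs to run server-side
# (bucket name "my-backup-bucket" is a placeholder)
restic -r b2:my-backup-bucket:server1 init
restic -r b2:my-backup-bucket:server1 backup /home

# borg: the target must be able to spawn a borg process, so the repo is an SSH path
# (host "backup-host" is a placeholder)
borg init --encryption=repokey ssh://user@backup-host/./borg-repo
borg create ssh://user@backup-host/./borg-repo::{now} /home
```

That’s the practical consequence: dumb object storage works for restic out of the box, but Borg needs a shell account or a borg-serve setup on the receiving end.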
I am very happy with mine and have only ever had one hiccup during updating, which was due to my Dockerfile removing one dependency too many. I’ve run it bare metal (Apache, MariaDB) as well as containerized (derived custom image, Traefik, MariaDB). Both were okay in speed after applying all the steps from the documentation.
Having the database on your fastest drive is definitely very important. Whenever I look at htop while making big copies or moves, it’s always mariadb that’s shuffling stuff around.
In my opinion there are two things that make Nextcloud (appear) slow:
1. Managing the ton of metadata in the DB that Nextcloud uses to provide the enhanced functionality.
2. It is/was a webpage rendered mostly on the server.
The first issue is hard to tackle, because it is intrinsic and also has different optimums for different deployment scales. Optimizing databases is beyond my skillset and therefore I stick to the recommendations.
The second issue is slowly being worked around: many applications on Nextcloud now resemble SPAs that are highly interactive and rendered by your browser. That reduces page reloads and makes it feel smoother.
All that said, I barely use the web interface, because I rarely use the collaboration features. If I have to create a share I usually do that in the app, because that’s where I send the link to people. Most of my use case is just syncing files, calendars and contacts.
That might be due to your ISP’s routing and interconnects. They usually have good routes to big services and might lack good connections between home users in different countries or on different continents.
I did too, but shortly after decommissioning that server the drive became unresponsive. I really dodged a bullet without even realizing it at the time. SMART data did not work over the adapter; otherwise it might have alerted me.
Also, unrelated to SMART data, the server failed to do reboots because the USB-SATA adapter did not properly reset without a full power cycle (which did not happen with that mainboard’s USB on reboots). It always got stuck searching for the drive. Restarting the server therefore meant shutting it down and calling someone to push the button for me, or using Wake-on-LAN, which thankfully worked but was still a dodgy workaround.
From what I read online that can lead to instabilities and was therefore disabled on Linux.
And you typically don’t get SMART data through USB adapters.
Intel’s low-power offerings are sometimes even less power-hungry than a RPi and handle more stuff. I like ASRock’s line of CPU-onboard motherboards and use one myself. You get the convenience of a full x86 machine but it sips power. Mine peaks at ~36W with full load on CPU, GPU, RAM and 4 SSDs or disks; usually it is much, much lower. You can always go smaller with an Atom x5-Z8300 (~2W idle without disks or network, 6W with both and some load), but those are getting a little old and newer stuff is better and more feature-rich. Maybe an N100 machine with 4 or 8 gigs of RAM is a good option for you? Don’t go overboard with RAM if you are using Docker for everything anyway. I use 8 but 4 would be more than enough for me and my countless containers. I run Nextcloud, Jellyfin, Paperless-ngx, Resilio, Photoprism and a few more. Only the Minecraft server benefits from more than four. Very happy with my J5005 board.
The encoder engine is the same for all Arc GPUs, meaning you can buy the lowest-end one and it has the same encoding/decoding performance as the top-tier one.
Fair point. These logs are only useless chatter anyway for everyone with proper key auth.
If you want to use Zigbee or Zwave you simply need a USB dongle for that. Search for “Sonoff Zigbee Stick” if you want to see what that looks like. As you are already running Armbian it should be easy to install HomeAssistant with ZHA or zigbee2mqtt on it.
At current power costs in Germany (~0.30€/kWh) that 10W becomes ~26€ per year. If you are still on a contract from a few months ago (~0.40€/kWh), you’ll be paying ~35€ per year. Over a span of around 3 to 5 years of continued use, the cheap AliExpress knockoffs definitely become reasonable at that price point. You’d still need a proper 230V-to-12V PSU though, which costs <20€. I first tried one of the industrial ones with a metal cage, but they have horrible efficiency, especially at low loads. I would always suggest getting a consumer PSU with an ErP rating and other certifications. Adding all the costs up, a good regular ATX PSU with 80 PLUS certification might be competitive with such a combo too.
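The arithmetic behind those numbers, as a quick sketch (wattage and prices taken from the comment above):

```python
def annual_cost_eur(watts: float, price_per_kwh: float, hours: float = 24 * 365) -> float:
    """Yearly electricity cost for a device drawing `watts` continuously."""
    kwh_per_year = watts / 1000 * hours  # 10 W -> 87.6 kWh per year
    return kwh_per_year * price_per_kwh

print(round(annual_cost_eur(10, 0.30), 2))  # ~26 EUR at current prices
print(round(annual_cost_eur(10, 0.40), 2))  # ~35 EUR on an older contract
```

Scaling that over 3 to 5 years makes it obvious why shaving even a few idle watts can pay for new hardware.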
I am not too sure this problem is entirely about the PSU though. Maybe the mainboard is simply designed very cheaply and inefficiently. There’s no way of testing a new PSU combo without trying it though.
I’ll have to look into using dynamic file configuration more heavily then. But how do you personally set up networking, if traefik doesn’t handle it all for you? Can you just tell traefik the container name from docker-compose in the dynamic file config? Also, what is IaC? Cause it sounds great.
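For what it’s worth, my understanding is that yes, in a file-based dynamic config you can point traefik at the container by its compose service name, as long as traefik and the container share a Docker network so the name resolves. A hypothetical sketch (hostname, router and service names are all made up):

```yaml
# dynamic/myapp.yml — loaded via traefik's file provider
http:
  routers:
    myapp:
      rule: "Host(`app.example.com`)"
      entryPoints:
        - websecure
      service: myapp
  services:
    myapp:
      loadBalancer:
        servers:
          # "myapp" is the docker-compose service name, resolved by Docker's internal DNS
          - url: "http://myapp:8080"
```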
Podman is on my todo list! I like the ideas behind Podman and because I am already familiar with docker containers, I hope that I can transfer most of my stuff over almost pain free. But I heard the linuxserver.io images are unsupported on podman/rootless docker and might give me trouble. We’ll see!
On the other hand, I have recently fallen in love with NixOS and would love to consolidate on a common Nix config for all my servers, Raspberries and maybe eventually desktops. It’s the perfect time to try out podman!
What? I’ve never had the feeling that nextcloud assumes that. Are you using a special all-in-one docker image? Because I am using the regular one and pair it with db, redis etc. containers and am absolutely happy with it.