

It’s because AI cannot think. AI cannot be clever. You cannot model creativity by absorbing the collective thoughts of all of the past. Generative AI will be nothing more than autocomplete.
I had a double NAT setup like that. Run a firewall like OPNSense as a Proxmox VM, and give it a WAN interface on the ISP router’s IP range; then run everything else on a different subnet, using OPNSense as the gateway. On the ISP router, put OPNSense’s WAN IP in the DMZ. Then, do all your hardening using OPNSense’s firewall rules. Bonus points for setting up a VLAN on a physical switch to isolate the connection.
The ISP router will send everything to OPNSense’s WAN IP, and it will basically bypass the whole double NAT situation.
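A rough sketch of the addressing, with made-up example subnets (adjust to whatever your ISP router actually hands out):

```
ISP router LAN:        192.168.1.0/24  (ISP router at 192.168.1.1)
  OPNsense WAN vNIC:   192.168.1.2     <- put this IP in the ISP router's DMZ
OPNsense LAN vNIC:     10.0.0.1/24     <- gateway for everything else
  Other VMs/clients:   10.0.0.x        <- default gateway 10.0.0.1, firewall rules live here
```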
The orange menace apparently just defunded it so we’ll see
Does Caddy use certbot to do the renewal? A long time ago the DNS challenge was a pain, but now it seems like a lot of providers are supported.
If you are really looking for hassle-free, this is it. Let’s Encrypt root certificates are already trusted by most devices, so when your friends come over and wanna control the media library or whatever, you don’t need to install your locally hosted CA’s self-signed certificate on their phones.
Also certbot and a cron or systemd timer is all you need; people have rolled all these fancy solutions but I say keep it simple.
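For example, a single cron entry is plenty; the schedule below is arbitrary, since certbot renew does nothing unless a certificate is close to expiry, and many distro packages already ship an equivalent systemd timer:

```
# /etc/cron.d/certbot - try renewal twice a day; only acts when a cert is near expiry
0 3,15 * * * root certbot renew --quiet
# add --deploy-hook "systemctl reload nginx" (or whatever serves the cert) if it needs a reload
```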
I’m partial to AdGuardHome myself, but PiHole does the job well
Why would anyone use containers without compose?
Especially people who are newer? It’s far easier.
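A compose file is just the docker run flags written down in YAML so they’re reproducible. A minimal sketch using Pi-hole as an example (image, ports, and paths are placeholders to adapt):

```
# compose.yaml - same as a long `docker run`, but versionable and repeatable
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80"
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```

Then docker compose up -d starts it, and docker compose pull followed by docker compose up -d updates it.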
This is the way to go. Way more flexible than hardware raid and getting better all the time.
In ZFS-speak, instead of RAID 10 you’ll be doing “mirrored vdevs”
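Roughly like this; the device names are placeholders and you’d normally use /dev/disk/by-id paths:

```
# Two mirrored pairs striped together - the ZFS equivalent of RAID 10
zpool create tank mirror ata-DISK1 ata-DISK2 mirror ata-DISK3 ata-DISK4

# Grow the pool later by adding another mirrored pair
zpool add tank mirror ata-DISK5 ata-DISK6
```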
Quick, now learn a firewall with a good IDS
and fail2ban
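A minimal fail2ban jail for SSH looks something like this; the retry and ban values are just example numbers:

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600
```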
NUT works with many UPS models and provides monitoring and control
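A basic standalone setup is mostly two small config files; the UPS name and credentials below are made up, you’d also set MODE=standalone in nut.conf and define the user in upsd.users, and older NUT versions spell the last field of MONITOR as master instead of primary:

```
# /etc/nut/ups.conf - usbhid-ups covers most consumer USB UPSes
[myups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf - shut the host down when the UPS reports low battery
MONITOR myups@localhost 1 upsmon_user secret primary
```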
You can run into this issue with any two sync programs that operate on virtual files, as another commenter said. This isn’t specifically a OneDrive or NextCloud problem. You can safely run both at the same time on the same machine, as long as they are syncing entirely separate directories.
That being said, this is obscure enough that I feel like there should be some kind of check in these clients to make sure they’re not about to interfere with each other - users aren’t gonna know to check for this, especially since these clients are hiding what they’re actually doing behind the scenes!
You cannot specify ports in a DNS A or AAAA record. www.example.com cannot resolve to 1.2.3.4:443 and app.domain.com cannot resolve to 1.2.3.4:5555
If the application (be it a game or whatnot) supports it, SRV records can identify a port for a hostname. So, you could have minecraft1.domain.com and an SRV record to specify port 25565, and minecraft2.domain.com SRV 25566.
This means you can have multiple Minecraft servers with the same IP address, but you won’t need to give people the port numbers to remember; the hostname allows the game to look up the port via the SRV record.
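In a zone file the first example would look roughly like this; Minecraft looks up _minecraft._tcp for the hostname, and the TTL and target host here are placeholders (the target needs an A record pointing at your IP):

```
_minecraft._tcp.minecraft1.domain.com.  3600 IN SRV 0 5 25565 mc.domain.com.
_minecraft._tcp.minecraft2.domain.com.  3600 IN SRV 0 5 25566 mc.domain.com.
```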
This is great for selfhosters because we generally only get one IP (until they roll out IPv6; probably half the reason they don’t)
NFS is always cranky for me, and symlinks are a pain: the client resolves them locally, so a link that points outside the export just breaks (yeah, Samba’s follow-symlinks handling is janky, but at least it exists)
It’s UID/GID 10000 on the host because you are using an unprivileged LXC container. Unprivileged means that “root” inside the container (which is just a user namespace on the host with access restrictions) maps to user 10000 on the host. That way files and processes inside the container never run as the real UID zero, so a malicious file they plant, or a malicious program that escapes containment, doesn’t end up with root access on the host.
Quickest way to make this work over Samba is to force user and force group to the account that owns UID/GID 10000 on the host. That way everything connecting to Samba would see the files as its own.
Honestly the better solution is to make the software inside the containers run as a local non-root user (which would map to something like 10001 on the host) and then force Samba to use that. Then nothing is running as root in or out of the containers. Samba will still limit access to shares based on the Samba login, but for file access it will use the read/write permissions of your non-root user (because of the force directives).
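In smb.conf terms that’s roughly the following; the share path, the user list, and the appuser account (whatever owns the files on the host) are placeholders:

```
# /etc/samba/smb.conf - everyone who authenticates reads/writes files as appuser
[media]
    path = /tank/media
    valid users = alice, bob
    force user = appuser
    force group = appuser
    read only = no
```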
Is the container running out of disk space for its DB? Is the container running out of memory during the backup process and crashing?
WebDAV, as others have said
That’s right, all it is is an auto-copy program. It doesn’t host a shared folder like NextCloud; it just saves you the clicks (or commands) of copying your newly-changed files to all the places you want a copy to be.
If you edit a file on your machine and your wife edits her copy, you might even end up with a conflict. (I don’t use Syncthing, so I don’t know how it handles that.)
Only on Ubuntu-based distros AFAIK, but sudo do-release-upgrade is the correct command.
Grocy is a neat project for stuff like this. Also available as a HomeAssistant add-on, if that’s your style
This is how they manage to get users to not blink an eye when they notice their keyboard app is 1.1 GB because of all the tracking and analytics libraries