
  • If you shut the computer down gracefully before you power the disks off, it should be OK more often than not, but you should really try to have everything on the same system so all of this can be coordinated by the OS and the hardware.

    As others have said, avoid powering the disks off before the OS has had a chance to shut down, or your disks may not be in a recoverable state when everything comes back online.

    I’m not even sure the setup you are describing would benefit at all from a different storage method; even “regular” writes could still be sitting in memory or in controller buffers. External drives are not meant to have their power cut.
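
    To make the buffering point concrete, here is a minimal sketch (assuming a Linux host and a hypothetical mount point /mnt/external) of what “letting the OS finish first” looks like before cutting power to an external drive:

    ```
    import os
    import subprocess

    MOUNT_POINT = "/mnt/external"  # hypothetical mount point for the external drive

    # Flush dirty pages from the OS page cache out to the drives.
    os.sync()

    # Unmount so the filesystem is marked clean before power is cut; on most
    # filesystems the kernel also asks the drive to flush its own write cache
    # as part of the unmount.
    subprocess.run(["umount", MOUNT_POINT], check=True)
    ```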




  • I had a double NAT setup like that. Run a firewall like OPNSense as a Proxmox VM, and give it a WAN interface on the ISP router’s IP range; then run everything else on a different subnet, using OPNSense as the gateway. On the ISP router, put OPNSense’s WAN IP in the DMZ. Then, do all your hardening using OPNSense’s firewall rules. Bonus points for setting up a VLAN on a physical switch to isolate the connection.

    The ISP router will send everything to OPNSense’s WAN IP, which basically bypasses the whole double NAT situation.
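
    As a purely hypothetical example of the addressing (your ranges will differ), the layout might look like this:

    ```
    Internet ── ISP router         LAN 192.168.1.0/24, DMZ host set to 192.168.1.2
                  └─ OPNSense VM   WAN: 192.168.1.2 (static, on the ISP router's range)
                                   LAN: 10.0.10.1/24 (the gateway for everything else)
                                     └─ other VMs / LAN clients: 10.0.10.x
    ```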










  • You can run into this issue with any two sync programs that operate on virtual files, as another commenter said. This isn’t specifically a OneDrive or NextCloud problem. You can safely run both at the same time on the same machine, as long as they are syncing entirely separate directories.

    That being said, this is obscure enough that I feel like there should be some kind of check in these clients to make sure they’re not about to interfere with each other. Users aren’t going to know to check for this, especially since these clients hide what they’re actually doing behind the scenes!
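
    As a rough illustration of the kind of check I mean (the function name and paths here are hypothetical, not anything either client actually does), a client could refuse to start when two sync roots are the same or nested:

    ```
    from pathlib import Path

    def sync_roots_overlap(a: str, b: str) -> bool:
        """Return True if one sync root is the same as, or nested inside, the other."""
        pa, pb = Path(a).resolve(), Path(b).resolve()
        return pa == pb or pa in pb.parents or pb in pa.parents

    # Hypothetical sync roots for two clients on the same machine:
    print(sync_roots_overlap("/home/me/OneDrive", "/home/me/Nextcloud"))        # False -> safe
    print(sync_roots_overlap("/home/me/OneDrive", "/home/me/OneDrive/Shared"))  # True  -> conflict risk
    ```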




  • It’s UID/GID 10000 on the host because you are using an unprivileged LXC container. Unprivileged means that “root” inside the container (which is really just a restricted user space on the host) is user 10000 on the host. This is so that files and processes inside the container don’t run with the real UID zero; if they did, a compromised container could plant a malicious file, or run a program that escapes containment and ends up with root access on the host.

    The quickest way to make this work over Samba is to force user 10000 and force group 10000. That way, everyone connecting to Samba sees the files as their own.

    Honestly, the better solution is to make the software inside the containers run as a local non-root user (which would be something like 10001 on the host) and then force Samba to use that. Then nothing runs as root inside or outside the containers. Samba will still limit access to shares based on the Samba login, but for file access it will use the read/write permissions of your non-root user (because of the force user / force group directives).
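
    As a rough sketch of that approach (the share name, path, logins, and the lxcshare account are all hypothetical; force user takes a user name, so you’d create a host account whose UID/GID matches what your container user maps to), the smb.conf share could look something like:

    ```
    # Hypothetical smb.conf share; lxcshare is a host account created with the
    # same UID/GID that the container's non-root user maps to on the host.
    [appdata]
        # Samba logins still control who may connect to the share
        valid users = alice, bob
        path = /tank/appdata
        # every file operation on this share runs as this host user/group
        force user = lxcshare
        force group = lxcshare
        read only = no
    ```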




  • That’s right, all it is is an auto-copy program. It doesn’t host a shared folder like NextCloud; it just saves you the clicks (or commands) of copying your newly-changed files to all the places you want a copy to be.

    If you edit a file on your machine and your wife edits her copy, you might even end up with a conflict. (I don’t use Syncthing, so I don’t know how it handles that.)