I believe you’re correct. I didn’t realize that I had my containers set to privileged. That would explain why I’ve never had issues with mounting shares.
I’m sorry, I think I gave you bad information. I have my containers set to unprivileged=no. I forgot about the “double negative” in how that flag was described.
Apparently my containers are privileged, so I don’t think I’ve ever tried to do what you’re doing.
I’m leaving this here for continuity, but don’t follow what I said here. I have my containers set as privileged. I was wrong.
I have a server that runs Proxmox and a server that runs TrueNAS, so a very similar setup to yours. As long as your LXC is tied to a network adapter that has access to your file server (it almost certainly is unless you’re using multiple NICs and/or VLANs), you should be able to mount shares inside your LXC just like you do on any other Linux machine.
Can you ping your fileserver from inside the container? If so, then the issue is with the configuration in the container itself. Privileged or unprivileged shouldn’t matter here. How are you trying to mount the CIFS share?
Edit: I see that you’re mounting the share in Proxmox and mapping it to your container. You don’t need to do this. Just mount it in the container itself.
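For reference, mounting inside the container is roughly this (a minimal sketch, assuming a privileged container; the address, share name, and credentials are placeholders):

```
# inside the LXC — address, share name, and credentials are placeholders
apt install cifs-utils                      # or your distro's equivalent
mkdir -p /mnt/share
mount -t cifs //192.168.1.50/share /mnt/share \
  -o username=myuser,password=mypass,uid=1000,gid=1000
```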
Holy shit. I’m paying less than 10¢ per kWh even in the “high usage” tier.
I’m right around the same level, and it actually keeps my server room / workshop at comfortable temperature during the winter. I also have my gaming PC mounted in my server rack; when that’s running, there are times where my AC will still kick in even when it’s 40 degrees outside.
For two servers (one with a lot of spinning rust), two switches, and a few other miscellaneous network appliances, my server rack averages around 600-650W. During periods of high demand (nightly backups, for instance), that can peak at around 750W.
It’s actually surprising how much just having a person in the room can alter the temperature and humidity levels. In my master bathroom, I have my bathroom fan set to activate when the dew point reaches a certain level (I’ve found that dew point produces better results than just humidity); the idea is that the bathroom will be ventilated when someone takes a shower and for however long it takes for the humidity to dissipate after they’re done. The funny thing is that every so often, I’ll take an excessively long poop (let’s be honest, I’m scrolling on my phone), and the fan will kick on. Just being in the bathroom will alter the dew point enough that it triggers the fan.
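For anyone curious, dew point folds temperature and relative humidity into one number, which is why it makes a better trigger than raw humidity. A rough sketch using the Magnus approximation (the sample values are made up):

```
# dew point (°C) via the Magnus approximation; 17.62 / 243.12 are the standard coefficients
temp_c=25; rh=60
awk -v t="$temp_c" -v rh="$rh" 'BEGIN {
  g = log(rh/100) + (17.62 * t) / (243.12 + t)
  printf "dew point: %.1f C\n", 243.12 * g / (17.62 - g)
}'
# -> dew point: 16.7 C
```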
I also have a room that contains all my server/networking equipment. It’s climate-controlled, and I’m constantly monitoring temperatures. When I’m in the room working, I can see a noticeable spike in the temperature graph, even though the only variable that’s changed is that there’s a person in the room.
So my point is: OP might not have been having fun that night; it’s entirely possible someone just came in and went to bed.
I did some research on this, and it turns out you’re absolutely correct. I was under the impression that ECC was a requirement for a ZFS cache. It does seem like ECC is highly recommended for ZFS, though, due to the large amount of data it stores in memory. I’m not sure I’d feel comfortable using non-ECC memory for ZFS, but it is possible.
Anecdotally, I did have one of my memory modules fail in my TrueNAS server. It detected this, corrected itself, and sent me a warning. I don’t know if this would have worked had I been using non-ECC memory.
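If anyone wants to see whether their machine is logging corrected memory errors, this is roughly how I’d check on a Linux box (assuming the EDAC driver for your memory controller is loaded; TrueNAS surfaces this through its own alerting):

```
# corrected (ce) and uncorrected (ue) ECC error counts, per memory controller
grep . /sys/devices/system/edac/mc/mc*/ce_count
grep . /sys/devices/system/edac/mc/mc*/ue_count
# or, if edac-utils is installed:
edac-util -v
```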
One thing to keep in mind if you go with an i5 or i7 is that you won’t have the option to use ECC memory. If you’re running TrueNAS, you’ll need ECC memory for the ZFS cache. A Xeon E5 v2 server is old, but still has more than enough power for your use case, and they’re not particularly expensive.
If you need something more powerful, you can find some decent Xeon Gold systems on eBay, but they’ll be a bit more pricey. The new Xeon W chips are also an option, but at least for me, they’re prohibitively expensive.
I have ReVanced on my phone, STN on my TVs, and uBlock+SponsorBlock on my PCs. I was looking for an alternative that I could run on a server and would replace the various different apps I’m using. TubeArchivist ended up working perfectly for me; your mileage may vary.
I decided to give up on it. Looking through the docs, they recommend that due to “reasons,” it should be restarted at least daily, preferably hourly. I don’t know if they have a memory leak or some other issue, but that was reason enough for me not to use it.
I installed TubeArchivist, and it suits my needs much better. Not only do I get an archive of my favorite channels, but when a new video is released, it gets automatically downloaded to my NAS and I can play it locally without worrying about buffering on my painfully slow internet connection.
I’m strongly in favor of keeping things compartmentalized. I have two main servers: One is a Proxmox host with a powerful CPU and a few hard drives set up in a fast but not-so-redundant array (I use ZFS, but my setup is similar to RAID10). Then I have a second server that runs TrueNAS; the CPU is slower, but it has a large amount of storage (120TB physical) arrayed in an extremely fault-tolerant configuration.
My Proxmox box runs every service on my network, but all that gets stored on its hard drives is the main boot disks. It backs up daily, so I’m not so concerned about drive failure. All my data is stored on the NAS, and it’s shared with the VMs via NFS, SMB, or iSCSI, depending on which is more appropriate.
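As a concrete example, an NFS mount on one of my VMs looks roughly like this in /etc/fstab (the NAS address and export path here are made up):

```
# /etc/fstab on a VM — NAS address and export path are placeholders
192.168.1.50:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0 0
```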
For you, I’d recommend building a NAS and keeping all your important data there. Your NUC can host your services, and they can pull data from the NAS. The 256GB on your NUC will be more than enough to host whatever services you need.
4 Mbit is exceptionally slow by today’s standards; when I signed up for internet access (there’s only one provider available where I live), I told them “I will pay for whatever the fastest connection is that you can offer.” Turns out that’s just single-channel DSL. They won’t even install bonded DSL where I live, and believe me, I’ve tried. I do have Starlink as well, but because of the topology of the land around me, it’s always going to be obstructed; when I calculated how high I would need to raise my antenna to avoid obstructions, it was several hundred feet. My pfSense box does a good job of routing traffic between my DSL connection and my Starlink connection (and falling back when Starlink is obstructed), but for hosting anything, I need a stable connection. That leaves me with just my DSL connection.
I’m a big fan of Jellyfin. I run it at home with a dedicated Nvidia A2000 for hardware transcoding. It’s able to transcode multiple 4k streams with tonemapping faster than they can play.
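If you want to verify that streams are actually hitting the GPU rather than the CPU, watching the encoder/decoder utilization is a quick check:

```
# per-second GPU stats; the enc/dec columns show hardware transcode activity
nvidia-smi dmon -s u
```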
As much as I’d love to use Jellyfin, there are two major issues. My internet connection is so slow that I’d be lucky to stream 720p at a low bitrate. I’d pay for a faster connection, but I live in an area that doesn’t even get cell phone service; my options are DSL and Starlink, and I have both. The DSL is just slow, and Starlink’s uplink speed isn’t much better, plus I have plenty of obstructions that make it somewhat unreliable. The second problem is that Jellyfin has too steep a learning curve. Telling my relatives “oh, if it starts buffering, just lower the bitrate” isn’t an option. Not to mention, I’d have to run it on a VPS, and renting a VPS with the resources required for this is way too expensive for me.
My personal opinion is that Docker just makes things more difficult. Containers are fantastic, and I use plenty of them, but Docker is just one way to implement containers, and a bad one. I have a server that runs Proxmox; if I need to set up a new service, I just spin up an LXC and install what I need. It gives all the advantages of a full Linux installation without taking up the resources of a full-fledged OS. With Docker, I would need a VM running the Docker host, then I’d have to install my Docker containers inside that host, then forward any ports or resources between the hypervisor, the Docker host, and the Docker container.
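For example, spinning up a new container on Proxmox is a couple of commands (a sketch only; the VMID, template, and storage names here are illustrative):

```
# create and start a Debian LXC — VMID, template, and storage names are illustrative
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname myservice --memory 1024 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --storage local-lvm
pct start 110
```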
I just don’t get the use-case for Docker. As far as I can tell, all it does is add another layer of complexity between the host machine and the container.
Oops. Yeah, I meant ZFS. Fixed.
That’s a very valid point, and certainly a reason to obfuscate the calendar event. I would argue that in general, if the concern is the government forcing you to decrypt the data, there’s really no good solution. If they have a warrant, they will get the encrypted data; the only barrier is how willing you are to refuse to hand over the encryption key. I think some jurisdictions prevent compelled disclosure on Fifth Amendment grounds, but I’m not a lawyer.
I have a full-height server rack with large, loud, power-inefficient servers, so I can’t provide much of a good suggestion there. I did want to say that you might want to seriously reconsider using a single 10TB hard drive.
Hard drives fail, and with a single drive, a failure means your data is gone. Using several smaller drives in an array provides redundancy: in the event of a drive failure, parity information exists on the other drives, and as long as you replace the failed drive before anything else fails, you don’t lose any data. There are multiple ways to do this, but I’ll use RAID as an example. In RAID5, one drive’s worth of parity information is distributed across the array. If any one drive fails, the array keeps running (albeit slower); you just replace the failed drive and let your controller rebuild the array. In a RAID5 configuration, you lose the space of one drive to parity, so four 4TB drives in RAID5 give you a total of 12TB of usable space. RAID6 lets you lose two drives, but you also lose two drives’ worth of space to parity, meaning your array would be more fault-tolerant, but you’d only have 8TB of space.
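The capacity math is simple enough to sketch (assuming equal-size drives):

```
# usable capacity for simple parity RAID, assuming equal-size drives
drives=4; size_tb=4
echo "RAID5: $(( (drives - 1) * size_tb )) TB usable"   # one drive's worth of parity -> 12 TB
echo "RAID6: $(( (drives - 2) * size_tb )) TB usable"   # two drives' worth of parity -> 8 TB
```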
There are many different RAID configurations; far too many for me to go into them all here. There’s also ZFS, a file system with many similarities to RAID (and a LOT of extra features… like snapshots). As an example, I have 12 10TB hard drives in my NAS. Two groups of 6 drives are configured as RAIDZ2 (similar to RAID6), for a total of 40TB usable space in each array. Those two arrays are then striped (like RAID0, so data is written across both arrays with no redundancy at the striped level). In total, that means I have 80TB of usable space, and in a worst-case scenario, I could have 4 drives fail (two in each array) without losing data.
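For reference, a pool laid out like mine is created with something like this (pool and device names are placeholders; ZFS stripes across vdevs automatically):

```
# two 6-disk RAIDZ2 vdevs in one pool; writes are striped across the vdevs
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl
```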
I’m not suggesting you need a setup like mine, but you could probably fit three 4TB drives in a small case, use RAID5 or ZFS RAIDZ1, and still have some redundancy. To be clear, RAID is not a substitute for a backup, but it can go a long way toward ensuring you never need to use your backup.
As someone who uses Nextcloud, why do you suggest obfuscating the name of the calendar event? My Nextcloud instance is only accessible from outside my LAN via HTTPS, so there’s no concern about someone using a packet sniffer on public WiFi or something of that sort. The server is located on my property, so physical security isn’t a real concern unless someone breaks in with a USB drive or physically pulls the server from the rack and steals it. If someone were to gain access to my network remotely, they’d still need login credentials for Nextcloud or for Proxmox in order to clone the VM drive.
To be clear, I’m not disagreeing with you; I’m wondering if I may be overestimating data security on my home network. Considering you’re posting from infosec.pub, I’m assuming you know more about this than I do.
Also, I feel like I need to say that the fact that OP even needs to consider data security for something like this really makes me wonder how parts of our society have gone so wrong.
This is also true of Jellyfin, though. I have apps on my Windows PC, my Android phone, multiple Nvidia Shield boxes on my TVs, plus the web interface if I need it.
I switched over from Plex several years ago, and while it takes a bit more time to configure, client compatibility seems just as good with Jellyfin as it is with Plex.
Most importantly, Jellyfin is strictly client/server: no “cloud” bullshit, and no remote account required. I don’t want Plex phoning home with a list of the media on my file server.