

under a pile of pillows
maybe not literally though, hard drives do need some cooling…


Thanks for the write-up! I’ve been trying and failing to do DooD (Docker-outside-of-Docker) and PooP (Podman-outside-of-Podman) runners via Forgejo, but I haven’t had the time or energy to really dig in and figure out the issue. At this point I just want something to work, so I’ll give your setup a try 😎
please share, I’m interested in doing the same
Yes, if you’re going to run TrueNAS (or another solution based on ZFS) you should really get rid of the PERC and get an LSI SAS card in IT mode so that the system can see the raw disks.
When you start your SATA swap, either use the onboard SATA ports (if there are enough) or get a SATA card (more ports, probably slightly better performance than sharing the onboard controller) and start the process I described before.
Yes, you’re going to want SATA drives that are the same size or bigger than your SAS drives. The mini-SAS cable breaks out into 4x SAS connectors, but you don’t have to swap 4 at a time: disconnect one SAS drive from the breakout cable, connect one replacement SATA drive to a SATA port (on your motherboard, or on a SATA card if you don’t have enough mobo ports), and run `zpool replace` so ZFS resilvers onto the new drive. Then do a scrub. Once it finishes with no errors, repeat those steps for the next drive. Once all drives are off the SAS card and your final scrub is done, you can remove the SAS card entirely.
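As a rough sketch, the per-drive swap loop could look something like this. The pool name and device IDs are placeholders (check `zpool status` for your real ones), and the commands are only echoed as a dry run:

```shell
# One-at-a-time SAS -> SATA swap, assuming a pool named "tank" and
# made-up by-id device names. Drop the echos to actually execute.
POOL=tank
for pair in "scsi-SAS_1:ata-SATA_1" "scsi-SAS_2:ata-SATA_2"; do
  old=${pair%%:*}   # SAS drive being retired
  new=${pair##*:}   # SATA replacement on a mobo or SATA-card port
  echo "zpool replace $POOL /dev/disk/by-id/$old /dev/disk/by-id/$new"
  echo "zpool scrub $POOL   # after the resilver finishes, scrub before the next swap"
done
```

The key point is that ZFS resilvers onto each new drive one at a time, so the pool stays online (and redundant) throughout the migration.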
For this use case there’s not really an advantage using SAS over SATA. I’d suggest buying SATA drives in the future just because you don’t need a SAS card for them, and SATA drives are usually cheaper.
If you use the H700 for hardware RAID and switch to SATA later, your best bet is probably to copy the data over (or better, use the opportunity to test your backup/restore process).
If you could run the SAS disks in JBOD mode (which is possible if you sell the H700 and use another SAS card), you could set up your drives in a RAIDZ1 mode and later switch to SATA drives by replacing one drive at a time and doing a scrub between each swap.
This is a PERC H700 which does not support IT mode (even if you cross-flash to an LSI firmware).
You could use that card as-is, but for TrueNAS I’d suggest grabbing a proper SAS card. I got one off eBay (an LSI 9207) for about US$35, already flashed and ready to go.


Can confirm, this is my setup and it works great.


I am a sysadmin with over 30 years of experience managing servers and networks for businesses of all sizes as well as for myself, friends, and family.
The FUTO guide is extremely detailed, accurate, and accessible. It does not always follow best practices, and it’s not a comprehensive guide to all of the possibilities for self-hosting. It’s not trying to be. It is a guide for someone with no technical expertise (but with basic technical ability) to degoogle/deapple themselves at a reasonable level of cost and effort.
You do not have to do everything in the list; you can pick and choose the parts you’re interested in. That said, I would recommend reading through the whole article as you have time, because it does a very good job of explaining the concepts involved in building a self-hosted setup, and understanding how everything works is the biggest step toward being able to effectively troubleshoot problems when they inevitably crop up.
If you have specific questions about things that aren’t answered in the guide or via a quick web search, post them here.




Thank you for the detailed write up. I’m going to give this a shot and see if I can save myself some space.


Subscribe.
I’ve got TrueNAS running on a reasonably recent PC but not a ton of space on the drives. I’d love to transcode the handful of 8GB+ movies and 40GB+ seasons sitting around taking up space. How complicated was it to set up tdarr and how long does it usually take to transcode?
I wouldn’t recommend running docker/podman in LXC, but that’s just because it seems to run better as a full VM in my experience.
No sense running it in the hypervisor, agreed.
LXC is great for everything else.


It’s not worth the headache IMO. Just run a Docker VM and use LXC for the one-off systems that you want to experiment with.
I have a “production” docker VM and a “sandbox” docker VM and prod only ever runs compose files that I’ve vetted in sandbox. Super stable, basically bulletproof, and still has the flexibility to experiment and break stuff without affecting my core services.
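A sketch of what that promote step might look like: push the same compose file to each VM in turn. The hostnames and paths here are made up, and the commands are echoed rather than run:

```shell
# Hypothetical sandbox -> prod promotion. "sandbox", "prod", and the
# /opt/stacks path are placeholders for your own hosts and layout.
STACK=myapp
deploy() {
  host=$1
  echo "scp compose.yaml $host:/opt/stacks/$STACK/"
  echo "ssh $host \"cd /opt/stacks/$STACK && docker compose up -d\""
}
deploy sandbox   # soak-test the stack here first
# ...once it has run cleanly for a while...
deploy prod      # identical compose file, so no drift between the two
```

Because prod only ever receives files that already ran in sandbox, the two environments can’t drift apart.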


Turns out the old gag “You can’t get there from here; You have to go somewhere else and start” is actually true for btrfs’s RAID support.


Bro The RAID Fuckin’ Sucks
ZFS for “RAID” is fine. Btrfs for a single disk (or on top of mdraid or hardware raid) is also fine.


In my experience, Pulumi can best be described as a waste of time.
Crocoslut really started going downhill after the license change and conversion to Node.js in v9.


I can attest to Projectivy and SmartTube; they are great. I went with the internet’s recommendation of the $20 Walmart/onn Google TV 4K box, with Projectivy as the launcher instead of the default.
My only gripe so far is that the remote doesn’t consistently turn the box on; I have to unplug the box every so often to reset it. Probably some misconfiguration that’s keeping it from waking from sleep correctly.
Despite that issue, it’s a 10/10 experience: ad-free YouTube, fast Jellyfin in 4K, a fully customizable UI…
Why use netcat when you can just tap the bits into an ethernet cable with a bench power supply like a telegram operator?