DEAD ACCOUNT. Lemmy.one does not have active administration and I need to move on. Catch me over at dbzer0: https://lemmy.dbzer0.com/u/empireOfLove2
Yet another Reddit refugee from the great 3rd party app purge of 2023. Obligatory fuck /u/Spez.
There’s no such thing as overkill, only extra overhead to do more things with. Hell, if you found yourself with a ton of excess resources and good cooling, you could run a distributed computing project like BOINC on some of the spare cores and help out some scientists.
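For reference, getting BOINC going is about two commands on a Debian-based distro (the project URL and account key below are placeholders; you get the real key when you register with whatever project you pick):

```bash
# Install the BOINC client and its CLI (Debian/Ubuntu package name assumed)
sudo apt install boinc-client

# Attach to a project -- URL and key are placeholders, use your own project and account key
boinccmd --project_attach https://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY

# See what it's crunching
boinccmd --get_tasks
```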
You wouldn’t see much of a bump in CPU performance; going from 6 cores to 8 cores with a 200MHz clock speed improvement isn’t groundbreaking.
Going to 8GB of memory will give you some caching benefits, though.
But… that’s all well and good. What I found the most beneficial on an OPi 5, and the entire reason I bought it over other boards, is the onboard NVMe M.2 slot. Yes, the Orange Pi 5 can support 2230 and 2242 M.2 NVMe drives at PCIe 3.0 x1 speeds, and it makes a WORLD of difference in performance. Like you would not even believe how fast compiling and installing software becomes when it’s not bottlenecked by the ~500 IOPS an SD card can struggle through. SD cards are ungodly slow, and OS-level writes tend to kill them every few months (they’re not designed to handle that kind of work). Even the cheapest AliExpress M.2 drives (I bought a 512GB KingSpec for like $16) blow SD cards out of the water, and will last for YEARS with a typical Pi’s workload compared to the few months of an SD card. Plus they’re big enough to even do a bit of file hosting on.
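If you want to see the gap on your own board, something like this fio run will show it (a rough sketch; point --directory at whatever mount you’re testing, and the size/runtime are just small sanity-check values):

```bash
# 4k random-write test -- run once against the SD card mount, once against the NVMe mount.
# Small file size and short runtime so it finishes quickly; the IOPS gap will still be obvious.
fio --name=randwrite-test --directory=/mnt/target \
    --rw=randwrite --bs=4k --size=256M \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=30 --time_based
```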
You’re welcome to call their tech support and ask about using custom routing hardware, at least. It’s possible they already have an approved list of SFP’s that they would be willing to work with, just don’t hold your breath since they probably want (and need) to have management control over the entire router endpoint, not just the module.
2GB of memory is fine for openWRT. Routing is surprisingly light tbh, consider that most all home/SOHO routers run integrated SoC’s with <256MB of memory.
Routing speed is more dependent on CPU and cache speed than on memory.
I’d eat my boot if a residential ISP let you run your own SFP fiber module. They have to control those things pretty tightly to keep signal levels right and wavelengths in the right spots. Plus they’ll need to reconfigure it from upstream somewhat frequently as the local network changes, and if it’s not their hardware, they’ll get mad.
I highly doubt Home Assistant will ever abuse the Pi 4’s CPU enough to make active cooling necessary. As long as you have a heatsink of some form on the SoC you’ll likely be perfectly fine. Obviously keep an eye on core temps once it’s installed.
Remember the Pi also uses the PCB’s copper ground plane as a heatsink.
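Checking the temps is a one-liner anyway (assuming Raspberry Pi OS, where vcgencmd is available; the sysfs path works on most ARM boards):

```bash
# Poll the SoC temperature every 5 seconds
watch -n 5 vcgencmd measure_temp

# Or read it straight from sysfs (value is in millidegrees C)
cat /sys/class/thermal/thermal_zone0/temp
```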
I don’t know of any specific Docker/web app, but it’s actually a very simple command-line job in Linux if you just need to record what “normal” routing paths look like for reference.
Set up a cron job to run a script once a day. In said bash script, get the current date using `date "+%Y-%m-%d"`, store it in a variable, then redirect the output of traceroute into a text file named after that date (`traceroute <target> > "$DATE.txt"`).
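Roughly like this (a sketch; the target host, script path, and output directory are just examples):

```bash
#!/bin/bash
# Example: /usr/local/bin/route-snapshot.sh
# Saves one traceroute snapshot per run, named by date.

TARGET="8.8.8.8"                  # example target, pick whatever host matters to you
OUTDIR="$HOME/route-snapshots"    # example output directory
mkdir -p "$OUTDIR"

DATE=$(date "+%Y-%m-%d")
traceroute "$TARGET" > "$OUTDIR/$DATE.txt"
```

And the cron entry (via crontab -e), here running daily at 6am:

```bash
0 6 * * * /usr/local/bin/route-snapshot.sh
```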
Then you’ll have a day by day snapshot of your typical routing stack saved to your hard disk.
It won’t actively monitor it, but it’ll save a record. You can increase the frequency to as often as you want (hourly?) too.
Well, whatever you end up using for documentation, print it out and actively maintain an up to date paper hard copy in a 3-ring binder somewhere. That way when all your shit falls over and you have a nonfunctional LAN you can still remember how everything was set up. Don’t ask me how I know…
The Orange Pi 5 can boot from SD and then use an NVMe M.2 2242 SSD as storage, or boot directly from the NVMe if you set up their SPI flash bootloader. Good memory and CPU options, but they’re pricey for what you get. Still, an actual M.2 SSD will last much, MUCH longer than any SD card when running an OS and moving program data around…
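If you go the boot-from-SD, data-on-NVMe route, it’s just a normal mount once the drive is partitioned and formatted (sketch assumes the drive shows up as /dev/nvme0n1 with an ext4 partition; the mount point is arbitrary):

```bash
# Find the drive and its UUID
lsblk -f /dev/nvme0n1

sudo mkdir -p /mnt/nvme

# /etc/fstab entry -- use the UUID lsblk reported; noatime cuts down on needless writes
# UUID=your-uuid-here  /mnt/nvme  ext4  defaults,noatime  0  2

# Test the fstab entry without rebooting
sudo mount -a
```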
They had a period of about 6 years starting around the Intel Ivy Bridge era where they actually made some really nice stuff. People started recognizing them as quality, but it’s like they put that effort in, saw results, and then went back into coast mode.
Yay! Too bad they’re going to suck and likely to be not worth buying.
ASUS has massive QC issues and nonexistent support these days. They’ve been riding the coattails of their gaming heyday, can’t build anything worth a shit, and then when it’s a problem, refuse to admit they fucked up. Jayz talks about it best.
Intel just punted the ball so it’s not their problem anymore. Do NOT fucking buy these.
I’d recommend one if I had tried any of them. The only one I’ve bought is the Orange Pi 5, which runs significantly more than the RPi’s base $35 and which I figured was outside the power envelope OP really needed.
RPi’s and RPi compatibles got co-opted by a huge number of commercial and industrial control systems companies, using them as cheap full-fat embedded systems where a simple microcontroller isn’t enough but industrial PLCs are overkill or unsourceable. Everything they produce, which is not a lot given COVID supply chain whiplash, has been going toward those customers’ contracts, and fuck the little-guy consumer they were meant for.
If you want to get into the SBC ecosystem leave rpi in the dust, they’re dead to the enthusiasts and won’t be coming back. There are much better options. See Linus tech tips video on them.
Intel’s not good for power efficiency. I’d recommend any old entry-level B550 AM4 motherboard, drop a Ryzen 5600X in it, and buy 32 gigs of dirt-cheap DDR4. You’ll be out the door around $250 and in amazing shape using half the power of Intel’s hungry 13th gen 14nm+++++++++. If you wanna spend a little more, get a 5800X for 2 more cores. It’ll handle all your server stuff no sweat.
For LAN, no. If you have a router NAT’ting traffic and providing DHCP service there’s really no need for IPv6. Almost every IPv6-enabled service provides both v6 and v4 and NAT figures it out, and many still provide only v4, meaning you can’t just get rid of IPv4 entirely.
If your ISP has modernized and is actually providing an IPv6 address, I suppose there’s probably a tiny benefit of being able to go IPv6-to-IPv6 when routing, but almost all devices nowadays can handle NAT translation between IPv4 and IPv6 with no routing penalty. I don’t know if there are any ISPs out there who can provide static IPv6 addresses to your entire LAN without a NAT router, though.
If you’re buying a VPS or something, IPv6 is easier to get a static address for.
That of course leaves the last good reason: why not? If you’re doing homelab hosting stuff, why not experiment with IPv6 and fully modernize your network? The addresses suck to type in, but it’s fun to know your stuff is brand new and using the “best”.
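Checking whether your ISP is actually handing you global IPv6 is quick from any Linux box, for what it’s worth (google.com here is just an example of a dual-stack host):

```bash
# Do you have a global (non link-local) v6 address at all?
ip -6 addr show scope global

# Can you actually reach the v6 internet?
ping -6 -c 3 google.com

# And what the v6 path looks like
traceroute -6 google.com
```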
And Intel is really trying to clean up their ballooning manufacturing costs. Getting rid of a whole additional PCB, chip, power supply, case molding, etc. supply chain for a low volume product will be a good cost savings. Hope it helps them refocus on the core business and get their endless fab issues figured out.
Visible from lemmy.one.
Aside from the already mentioned benefits in terms of maintenance and hardware cost… A VPS simplifies a lot of routing to the public internet. Many residential ISP’s intentionally make their network design outright hostile to any kind of inbound connection that could be used for hosting services… If you want your services to be accessible from anywhere outside of your home LAN a VPS greatly simplifies such things and guarantees it will always be up rather than being subject to the whims of an ISP’s routing stack.
yeah consumer SD cards cannot handle sustained writes like most OS’es generate. Even endurance cards can have issues past a few months in an always-on Pi depending on what it’s hosting.
Also worth noting: ARM support for many apps can be missing, which is another mark against SBCs. They have their place if you’re willing to fuck with them, though.
Laptops are OK if you can disable charging or, ideally, have a removable battery. However, SFF office PCs are typically the cheapest to buy unless you find laptops with already-dead batteries that nobody else wants. If power draw is not a concern, the SFF PC is probably your best bet.
SBCs like Raspberry Pis are OK, but tend to suck for media serving tasks like Jellyfin. Their CPUs have poor IPC and will choke on transcoding video, and they’re pretty I/O limited too. They’ll work, but they’re not scalable if you’re expecting more than one or two users. And straight up forget using the onboard SD cards for storage; they’re slow and will commit suicide as fast as you can replace them.
Some SBCs with better I/O and CPUs exist, such as the Orange Pi 5 I just bought, which supports an NVMe M.2 2242 drive and has a CPU with roughly 4x the overall performance of an RPi. However, they’re also not cheap: a decently specced OPi 5 with 8GB of memory will still run you $100. And they require aftermarket cooling to not thermal throttle constantly, which will take some tinkering. Given how bloody cheap flash memory has gotten, though (you can get lower-end 1TB M.2 drives for like $60), they’re a half-decent option for a very power-efficient and compact little set-and-forget self-host board that can hold a respectable amount of media.
Next best thing would be to shop around Facebook Marketplace or Craigslist and find a SFF office PC like an OptiPlex 9020. Try to get an i5 CPU for 4 cores, and a minimum of 4GB of RAM, although 8GB would be ideal. They’re small enough, although they will use significantly more power than an SBC (typical low-load idle for my 9020 is around 30W from the wall; under load it’s 110-120W).
Laptops are more power efficient than SFFs, but you have to be careful that they have removable batteries. Li-ion cells can and will start to puff up when run fully charged at high temperatures, such as in a closed laptop doing server duty.
Generally, conduit is not required by code for low-voltage communications wire. It certainly helps, because you can jam a piece of rigid conduit straight up through a wall more easily, and it lets you seal the hole on either end to prevent air or pest intrusion. But it’s extra cost and labor. You can use standard grey PVC conduit from a hardware store if you wish.
It’s called fish tape. Definitely will want some.
Buy bulk cable and pull it bare then terminate the ends at a patch panel. Pulling preterminated patch cables is a recipe for a bad time. If you use conduit, buy conduit a size or two larger than you think you need.
Depending on the complexity of the runs, don’t be afraid to call an electrician. They have magic ways of getting wires to places.
2.5GbE is still really new in the networking space and nobody has hit economies of scale yet. I very much also want to build out my home LAN to be entirely 2.5G compatible, since 1G is limiting for my NAS use case (video storage), 10G is overkill and not supported by my client devices, and I only need 16-24 ports. But good God, the hardware just isn’t reasonably priced yet.
You pretty much have to bite the bullet if you really want 2.5G right now. What might honestly be worthwhile is finding a used enterprise 1G switch with the number of ports you need that will still be “enough,” as those can be had for only a couple hundred dollars. Sit on that for 2-3 years until the 2.5G and 5G hardware market starts to fill out, and decide how badly you need 2.5G then.