A person with way too many hobbies, but I still continue to learn new things.

  • 5 Posts
  • 105 Comments
  • Joined June 7th, 2023 (2 years ago)
  • But why doesn’t it ever empty the swap space? I’ve been using vm.swappiness=10 and I’ve tried vm.vfs_cache_pressure at both 100 and 50. Checking ps I’m not seeing any services that would be idling in the background, so I’m not sure why the system thought it needed to put anything in swap (the per-process check below should show exactly what’s sitting there). And FWIW, I run two servers with identical services that I load balance between, but the other machine has barely used any swap space, which adds to my confusion about the differences.

    Why would I want to reduce the amount of memory in the server? Isn’t all that cache memory being used to help things run smoother and reduce drive I/O?
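    To actually see which processes hold pages in swap (rather than eyeballing ps), summing the VmSwap entries out of /proc works well enough. A rough one-liner, nothing here is specific to my setup:

for f in /proc/[0-9]*/status; do awk '/^Name|^VmSwap/ {printf "%s ", $2} END {print ""}' "$f"; done | sort -k2 -n | tail

    That lists the top swap users by name, which usually makes it obvious whether it’s a real service or just stale pages the kernel pushed out once and never needed to pull back in (the kernel doesn’t proactively swap pages back in, which is part of why swap never appears to empty).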


  • And how does cache space figure into this? I have a server with 64GB of RAM, of which 46GB is being used by system cache, but I only have 450MB of free memory and 140MB of free swap. The only ‘volatile’ service I have running is slapd, which can run in bursts of activity; otherwise the only things of consequence running are webmin and some VMs which collectively can use up to 24GB (though they actually use about half that), but there’s no reason those should hit swap space. I just don’t get why the swap space is being run dry here.
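    For what it’s worth, when I’m staring at numbers like that I go by the ‘available’ column rather than ‘free’, since cache is reclaimed on demand. A quick sanity check:

free -h
grep -E 'MemAvailable|SwapFree' /proc/meminfo

    If ‘available’ is still tens of GB then the box isn’t actually under memory pressure; the low ‘free’ number just means the kernel is using otherwise-idle RAM for cache, and it will hand it back the moment something needs it.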




  • But is it decentralized? Do the results from multiple spiders get combined so everyone gets the same quality of results, or do I need to crawl the whole internet myself?

    [edit] I was looking at this earlier and couldn’t find the info. Started searching again just now and found it immediately… of course… (The answer is YES)


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Decentralized Search Engine · 2 months ago

    Yep, that’s exactly what I was looking at (https://github.com/searx/searx). As I said, it was a QUICK dive but the wording was enough to make me shy away from it. For all the years I’ve been running servers, I won’t put up anything that requires the latest/greatest of any code, because that’s where about 90% of the zero-days seem to come from. Almost all the big ones I’ve seen in the last few years were things that made me panic until I realized that, oh, if your installed versions are more than a year old then none of this affects you. And the one that DID affect me had already been patched through a security release.


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Decentralized Search Engine · 2 months ago

    I just did a quick dive into this and have some concerns. SearX appears to no longer be maintained and was last updated three years ago. SearXNG was forked from it to use more recent libraries, but there were concerns that those libraries are not always stable or fully vetted. There were also concerns that SearXNG doesn’t hold to the same standards for user privacy. It’s a shame that SearX shut down; that one actually sounds like a project I would have jumped on.


  • More drives also means higher power consumption (rough numbers below), so you would need a larger battery backup.

    It also means more components prone to failure, which increases your chance of losing data. More drives means more moving parts and electrical connections (data and power cables, backplanes), plus more generated heat that you need to cool down.

    I’d be more curious about how many failures you’re seeing that make you think smaller drives would be the better option. I have historically used old drives from ebay or manufacturer refurbs, and even the worst of those have been reliable enough that I only have to replace a drive once every year or two. With RAID6 or raidz2 you should have plenty of protection during a rebuild to prevent data loss. I wouldn’t consider using a lot of little drives unless it was the only option I had or someone gave them away for free.
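    Just to put rough numbers on the power point above (assuming a ballpark of 7-9W per spinning 3.5" drive): twelve 4TB drives runs on the order of 85-110W continuously, while six 8TB drives of the same raw capacity is more like 45-55W. That difference works out to roughly 400+ kWh over a year, and it’s also that much more load the UPS has to carry through an outage.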



  • Are you sure about that? Ever hear about these supposedly predictable network names in recent linux versions? Yeah, those can change too. I was trying to set up a new firewall with two internal NICs plus a 4-port card, and they kept moving around. I finally figured out that if I cold-booted, the NICs would come up in one order, and if I warm-booted they would come up in a completely different order (the ports on the card would reverse the order in which they were detected). This was entirely the fault of systemd, because when I installed an older linux and used udev to map the ports, it worked exactly as predicted. These days I trust nothing.
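    For anyone hitting the same thing, the old udev approach still works: pin each port to its MAC address in a rules file such as /etc/udev/rules.d/70-persistent-net.rules (sketch only, the MACs and interface names here are placeholders):

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="wan0"

    After a reboot the interfaces come up with those names regardless of what order the kernel happens to enumerate them in.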


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Help with ZFS Array · edited · 6 months ago

    OP – if your array is in good condition (and it looks like it is) you have an option to replace drives one by one, but this will take some time (probably over a period of days). The idea is to remove a disk from the pool by its old name, then re-add the disk under the corrected name, wait for the pool to rebuild, then do the process again with the next drive. Double-check, but I think this is the proper procedure…

    zpool offline poolname /dev/nvme1n1p1

    zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename

    Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, this same process should work for you to transfer over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small possibility zfs may recognize the drive contents and just start working immediately?), so be prepared in case a drive does actually fail during the process.
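    For reference, the status check I mentioned is just watching the resilver until it reports as finished, something like:

zpool status -v poolname

    where poolname is obviously a placeholder for your actual pool; don’t start on the next drive until the resilver shows as complete.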


  • That is definitely true of zfs as well. In fact I have never seen a guide which suggests anything other than using the names found under /dev/disk/by-id/ or /dev/disk/by-uuid/, and that is to prevent this very problem. If the proper convention is used then you can plug the drives in through any available interface, in any order, and zfs will easily re-assemble the pool at boot (see the import example below).

    So now this begs the question… is proxmox using some insane configuration to create drive clusters using whatever names they happen to boot up with???
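    For what it’s worth, if a pool has already been imported with the raw /dev/sdX names, I believe you can usually switch it over without any rebuild by exporting and re-importing it with the by-id paths (sketch from memory, the pool name is a placeholder):

zpool export tank
zpool import -d /dev/disk/by-id tank

    After that, zpool status should show the stable by-id names instead of whatever the drives happened to enumerate as on that boot.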


  • I was considering PoE as an option, and this camera does have an ethernet port (although I can’t tell yet whether that’s only for configuration or if the video will also stream over it directly). I don’t really need a constant stream, and this camera also provides motion detection options, so maybe it would only send video as needed (although during a heavy storm all of the cameras would probably fire at once).


  • I played with Zoneminder years ago but would like to get something set up for home security. I have a full internal network plus servers and about 60TB of free storage space, so there are really no limits to what I could set up. Ideally I’d like to just hit a local IP from a cell phone to check the cameras (remote access isn’t really needed), so that’s where I was trying to go with my previous questions.

    The software side seems easy enough, but finding compatible IP cameras has been stumping me. I see the Reolink 4K TrackMix wifi cameras on Amazon for $130, and other than a few hiccups it looks likely that this piece of hardware would work, unless anyone knows of any “gotchas” that I’ve missed? Otherwise I’ll do a bit more research and then order one of the cameras to see how far I can get with it.
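    One thing I’ll try first when a camera shows up: most of these expose an RTSP stream, so a quick test from a desktop is just pointing ffplay (or VLC) at it. The exact path below is a guess on my part and varies by model/firmware, and the IP and credentials are placeholders:

ffplay rtsp://admin:password@192.168.1.50:554/h264Preview_01_main

    If that plays over the wired port then the NVR side is easy, and checking it from a phone on the local network is just a matter of pointing an RTSP-capable app (or the NVR’s web UI) at the same address.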