

Can’t understand why this is interesting, as phones now have a lot of storage space, even the ones that don’t have SD card slots. Just store the music that interests you directly on the phone.
I haven’t looked in a few years but 20TB is probably plenty. I agree that Wikipedia lost its way once it got all that attention online and all that search traffic. Everyone should have their own copy of Wikipedia. I used to download the daily incremental data dumps but got tired of it. I still have a few TB of them around that I’ve been wanting to merge.
The text is in not-exactly-convenient database dumps (see other commenter’s link) and there are daily diffs (mostly bot noise), but then there are the images and other media, which are way up in the terabytes by now. There are some docs, maybe out of date, about how to run the software yourself. It’s written in PHP and it’s big and complicated.
How much do you expect to pay for the 24 NVMe disks?
It’s possible for a while, but it turns into a whack-a-mole game if you’re doing anything they would care about. So you’ll have to keep moving it around. VPS forums will have some info.
Are you familiar with git hooks? See
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
Scroll to the part about server side hooks. The idea is to automatically propagate updates when you receive them. So git-level replication instead of rsync.
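Something like this in hooks/post-receive on the primary would do it. Just a sketch, written in Python for readability (hooks can be any executable); the replica URLs are placeholders:

```python
#!/usr/bin/env python3
# hooks/post-receive on the primary bare repo: after each accepted
# push, mirror all refs out to the replicas. Git runs hooks from
# the repo directory, so plain "git push" works here.
import subprocess
import sys

# Placeholder remotes -- point these at your actual replicas.
MIRRORS = [
    "git@replica1.example.com:repo.git",
    "git@replica2.example.com:repo.git",
]

failed = []
for url in MIRRORS:
    # --mirror pushes all refs, including deletions, so replicas
    # track the primary exactly.
    if subprocess.run(["git", "push", "--mirror", url]).returncode != 0:
        failed.append(url)

if failed:
    # Exiting nonzero here can't undo the push (it already landed),
    # but the message does show up in the pusher's terminal.
    print("warning: replication failed for: " + ", ".join(failed),
          file=sys.stderr)
    sys.exit(1)
```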
I see, fair enough. Replication is never instantaneous, so do you have definite bounds on how much latency you’ll accept? Do you really want independent git servers online? Most HA systems have a primary and a failover, so users only see one server. If you want to use Ceph, in practice all servers would be in the same DC. Is that ok?
I think I’d look in one of the many git books out there to see what they say about replication schemes. This sounds like something that must have been done before.
Why do you want 5 git servers instead of, say, 2? Are you after something more than high availability? Are you trying to run something like GitHub where some repos might have stupendous concurrent read traffic? What about update traffic?
What happens if the servers sometimes get out of sync for 0.5 sec or whatever, as long as each is in a consistent state at all times?
Anyway my first idea isn’t rsync, but rather, use update hooks to replicate pushes to the other servers, so the updates will still look atomic to clients. Alternatively, use a replicated file system under Ceph or the like, so you can quickly migrate failed servers. That’s a standard cloud hosting setup.
What real world workload do you have, that appeared suddenly enough that your devs couldn’t stay on top of it, and you find yourself seeking advice from us relatively clueless dweebs on Lemmy? It’s not a problem most git users deal with. Git is pretty fast and most users are ok with a single server and a backup.
I wonder if you could use HAProxy for that. It’s usually used with web servers. This is a surprising request though, since git is pretty fast. Do you have an actual real world workload that needs such a setup? Otherwise, why not have a normal setup with one server being mirrored, and a failover IP as lots of VPS hosts can supply?
And, can you use round robin DNS instead of a load balancer?
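If you try round robin, you can at least sanity-check what clients would see; a quick probe (git.example.com is a stand-in for your real hostname):

```python
import socket

# With round robin DNS, one name resolves to several A records;
# most resolvers rotate the order between lookups.
infos = socket.getaddrinfo("git.example.com", 22, proto=socket.IPPROTO_TCP)
print(sorted({info[4][0] for info in infos}))
```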
What does this even mean? You want to replicate between git repositories? Can you do that with receive/update hooks on the servers?
Added: also look at kimsufi.com as an alternative to Hetzner.
Dedi will perform a lot better and be more consistent and reliable. They’re not THAT expensive if you’re making nontrivial use of them. Otherwise maybe you can keep moving around between Contabo products. Keep in mind too that HDD performance will seem a lot better when you’re not sharing it with dozens of other users. I have an HDD server and it’s fine for browsing. Might not be great for large seek-intensive databases, but I’m not currently doing that.
Anyway you can also ask on lowendspirit.com which is a forum about budget vps.
Get a Hetzner dedicated server. Don’t mess with VPS once it gets that large. Look on hetzner.com/sb for auction servers that might be better deals than regular ones. And does your 800GB of storage really have to be all SSD? Hard disks still exist and cost lots less.
AI has become an abbreviation for “bad” and I wouldn’t want that, but yes, I’ve been interested for a while in building language models into search engines, to give queries more reach into the document semantics. Unfortunately, naive approaches like looking for matching vector embeddings instead of (or alongside) search terms seem near useless and just clutter up the results.
I’d be interested in knowing what approaches you’re using. FOSS, I hope.
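For what it’s worth, the one variant that seems less prone to cluttering things up is using embeddings only to re-rank what the keyword engine already found, so the vectors can’t drag in unrelated documents. Rough sketch, with embed() and keyword_search() as stand-ins for whatever model and engine you have:

```python
# Hybrid idea: keyword search stays the primary filter; embeddings
# only reorder its candidates by semantic similarity to the query.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank(query: str, docs: list[str], embed) -> list[str]:
    q = embed(query)
    scored = [(cosine(q, embed(d)), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)]

# candidates = keyword_search(query)          # narrow first
# results = rerank(query, candidates, embed)  # then re-rank
```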
I used Squirrelmail briefly. It had a minor security bug which was easy to fix, but when I reported it to the devs, I couldn’t convince them that it was actually a bug. I decided that they weren’t paranoid enough to be working on that type of software, so I stopped using it.
Currently I’m not self-hosting email but am using mxroute.com which has a FOSS mail client that seems ok. I can’t check right now what it is, but maybe later.
Fastmail’s webmail is pretty good and they said something a while back about releasing it as FOSS but idk if that has happened.
Right now I mostly use Thunderbird rather than webmail. It sucks in many ways but I’ve had too much going on to pursue alternatives.
I think Google got it right early on when they realized that email clients should be backed by a serious search engine. The search features of a typical IMAP server aren’t enough and the one in Thunderbird is crap. So I think this is an area where FOSS clients could use some work, if it hasn’t already been done.
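SQLite’s FTS5 is probably the easiest building block for that, and most Python builds ship with it. A toy sketch (schema and messages are made up):

```python
# A minimal local full-text index of the kind a mail client could
# keep, using SQLite's built-in FTS5 extension.
import sqlite3

con = sqlite3.connect("mail-index.db")
con.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS mail
    USING fts5(subject, sender, body)
""")
con.execute(
    "INSERT INTO mail VALUES (?, ?, ?)",
    ("Server migration", "ops@example.com", "The git mirrors move tonight."),
)
con.commit()

# FTS5 supports ranked boolean queries that IMAP SEARCH can't match.
for row in con.execute(
    "SELECT subject, sender FROM mail "
    "WHERE mail MATCH 'git AND mirrors' ORDER BY rank"
):
    print(row)
```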
https://feddit.org/post/9959466/5697405
[why blocked?] "a contributor made a push from a sanctioned region is what i saw. not even a main dev, and they didn’t receive any warning is my understanding. i might be way off, i’m not a final source."
Avoids the need for a network connection or server, though I guess you could run it on a local socket. The UI might be preferable too.
If the Kobo hardware device can read DRM’d EPUBs, it is “using DRM” to do so. I’m asking if Calibre can read those same DRM’d EPUBs. Do you know if it can, maybe by adding a plugin? I know there was something like that for Kindle files. Thanks.
Thanks, yeah, I don’t have a Kobo reader so I was asking if there was a way to read paid-for, DRM’d Kobo downloads, similar to how DeCSS lets you watch DVDs that you bought. I don’t mind paying for books but don’t want a locked-down reading device with its own crappy software and possible invasive phoning home.
50GB of FLAC = maybe 20GB of Vorbis, amirite? Is that 450GB of FLAC in your screenshot? It would fit on a 256GB phone even without an SD card. A 512GB card is quite affordable these days. Just make sure to buy a phone with a slot, and think of it as next level degoogling ;).
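(Back-of-envelope, assuming FLAC averages ~900 kbps and Vorbis around q6 is ~190 kbps: 50GB of FLAC is about 120 hours of audio, which re-encodes to roughly 10-11GB, so 20GB is if anything generous.)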
Yeah I know there’s lots of music in the world but who wants to listen to all of it on a moment’s notice anyway?