

The number of confidently incorrect responses is exactly what one could expect from Lemmy.
First: TCP and UDP can listen on the same port number; DNS is a great example of this. You'd generally want both listeners in the same process, since a given port is typically bound by a single process, but more on this later.
Second: the Minecraft server and the website both use TCP. TCP sits at layer 4, transport, whereas HTTP(S) and the Minecraft protocol sit at layer 7, application. If you really wanted to, you could cram HTTP(S) over UDP (QUIC/HTTP3 effectively does this), and with changes to the protocol itself plus some server and client edits, you could cram Minecraft over UDP too. People need to brush up on their OSI layers before making bold claims.
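If the same-port point sounds surprising, here's a minimal sketch showing that TCP and UDP are separate namespaces for the same port number; the port below is just an arbitrary unprivileged example, not anything Hypixel uses:

```python
import socket

# One process, one port number, two listeners: 9053/tcp and 9053/udp
# coexist because TCP and UDP ports live in separate namespaces.
PORT = 9053

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcp.bind(("0.0.0.0", PORT))
tcp.listen()

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("0.0.0.0", PORT))

print(f"Listening on {PORT}/tcp and {PORT}/udp from the same process")
```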
Third: the web server and the Minecraft server are not running on the same machine. For something at that scale, each service is served from a cluster dedicated to just that workload.
Finally: Hypixel uses a reverse proxy that sits between the user and their actual servers. Specifically, they are most likely using Cloudflare Spectrum to proxy their traffic. A user request reaches a point of presence, where a reverse proxy service is listening on the applicable port (443/25565) and protocol (HTTPS/Minecraft); depending on the traffic type and routing rules, the request then gets forwarded to the actual server behind the scenes. There is speculation that they no longer use Cloudflare, but I don't believe that's the case. If you dig their mc.hypixel.net domain, you get a bunch of directly assigned IP addresses, but if you trace the route from multiple locations, you still end up going through Cloudflare infrastructure. It is highly likely that they're still leaning on Cloudflare for this service, with a BYOIP arrangement to reduce the risk of DDoS traffic aimed at them overflowing onto other customers.
In no uncertain terms:
- Clients connect to mc.hypixel.net, but there is also an SRV record for _minecraft._tcp.hypixel.net set for 25565 on mc.hypixel.net.
- The mc.hypixel.net domain has a CNAME record for mt.mc.production.hypixel.io., which is flattened to a bunch of their own directly assigned IP addresses.
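You can check those records yourself. A quick sketch, assuming the dnspython package is installed; what you actually get back will depend on whatever their setup is at the time you run it:

```python
import dns.resolver  # pip install dnspython

# SRV record advertising the Minecraft port
for srv in dns.resolver.resolve("_minecraft._tcp.hypixel.net", "SRV"):
    print(f"SRV -> {srv.target}:{srv.port}")

# CNAME behind mc.hypixel.net (answers depend on their current setup)
try:
    for cname in dns.resolver.resolve("mc.hypixel.net", "CNAME"):
        print(f"CNAME -> {cname.target}")
except dns.resolver.NoAnswer:
    # Flattened: only A records come back
    for a in dns.resolver.resolve("mc.hypixel.net", "A"):
        print(f"A -> {a.address}")
```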
Using Ollama to try a couple of models right now for an idea. I've tried running Llama 3.2 and Qwen 2.5 3B, both of which fit in my 3050's 6 GB of VRAM. For fun, I've also tried Qwen 2.5 32B, which fits in my RAM (I've got 128 GB), but it could only reply at a couple of tokens per second, making it very much a non-interactive experience. I'll need to explore the response-time piece a bit further to see if there are ways I can lean on larger models and live with the longer delays.
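For anyone wanting to put numbers on that, here's a rough sketch against a local Ollama instance on its default port; the model tag is just an example of something already pulled:

```python
import json
import time
import urllib.request

# Rough tokens-per-second check against a local Ollama instance on its
# default port. The model tag is an example of something already pulled.
def benchmark(model: str, prompt: str) -> None:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    start = time.time()
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    elapsed = time.time() - start
    tokens = body.get("eval_count", 0)
    print(f"{model}: {tokens} tokens in {elapsed:.1f}s ({tokens / elapsed:.1f} tok/s)")

benchmark("qwen2.5:3b", "Explain TCP vs UDP in one paragraph.")
```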
Locks can happen at the registrar (e.g. ninjala, Cloudflare, Namecheap, etc.) or at the registry (e.g. gen.xyz, Identity Digital, Verisign, etc.).
Typically, registry locks cannot be resolved through your registrar, and the registrant may need to work with the registry directly to resolve the problem. This can be complicated by Whois privacy, as you may not be considered the registrant of the domain.
In all cases, most registries do not take domain suspensions lightly and generally only lock over legal issues. Check your Whois record's EPP status codes for hints as to what may be happening.
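A quick way to pull those status codes, assuming the standard whois CLI is installed; output formats vary by registry, but the status lines are conventionally labelled:

```python
import subprocess

# Surface a domain's EPP status codes from whois output. Assumes the
# standard whois CLI is installed; registries vary, but the lines usually
# look like: "Domain Status: clientTransferProhibited https://icann.org/epp#..."
def epp_statuses(domain: str) -> list[str]:
    output = subprocess.run(
        ["whois", domain], capture_output=True, text=True, check=False
    ).stdout
    return [line.strip() for line in output.splitlines() if "status:" in line.lower()]

for status in epp_statuses("example.com"):
    print(status)
```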
So just because they don't know technology like you do, they should be left behind the times instead of taking advantage of advancements? A bit elitist and gatekeeping, don't you think?
Everyone has their own choices to make, and for most, they've already decided they'd rather benefit from advancements than care about what you care about.
And here's the reason why laypeople should not: they're much more likely to make that one wrong move and suffer irrecoverable data loss than they are to be hurt by some faceless corporation selling their data.
At the end of the day, those of us who are technical enough will take the risk and learn, but for the vast majority of people, it is and will remain a non-starter for the foreseeable future.
If memory serves, the default docker compose exposes the database port with a basic hard-coded password, too. So imagine using that compose file without reading it too closely; next thing you know, you're running a free Postgres database for the world.
Edit: yep, it's still publishing the db port with a hard-coded password…
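If you're not sure whether you've done exactly that, a quick reachability test from outside your network will tell you; the host below is a placeholder for your server's public address:

```python
import socket

# Quick reachability check for the database port. Run it from outside your
# network; HOST is a placeholder for your server's public address.
HOST = "your.server.example"
PORT = 5432  # default Postgres port

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"{HOST}:{PORT} is reachable; that database is open to the internet")
except OSError:
    print(f"{HOST}:{PORT} is not reachable from here")
```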
No multi-region unless you roll it yourself. Their offerings are primarily web-hosting centric, so you'd need to do the heavy lifting yourself if you want more infra. Also worth noting that they're definitely not in the same league as the big players; they're just an old vendor that isn't likely to disappear on you.
BuyVM has a $24/yr KVM server that you can attach storage to at $5/TB/month, so 5 TB should set you back roughly $324/yr all in ($24 + 5 TB × $5/TB × 12 months). They've been around for quite some time (I've been a client since 2011), so they're not likely to disappear anytime soon.
There are two ways around the symptoms you're trying to treat:
Probably worth calling out that although option 1 feels like it has more hops (and it absolutely does), with any decent internet connection you're probably not going to feel it. This is because the edge server is usually situated very close to your ISP (that's how they make sure everything responds quickly), so your overall round trip is only lengthened by a negligible amount of time that you most likely won't notice.
The RAID rebuild time is going to be longer than the OEM warranty… love it!
If you're feeling that something you're deploying will take too much time to maintain, there may also be a toolset/skillset mismatch. Take Docker/K8s, which you've called out, for example; they're the graduated steps for deploying things in the industry. Things deployed via Docker drastically reduce the time to get up and running by eliminating large swaths of dependency management, and they give you the option to use platform tooling to manage updates if that's something you want (though automated updates can introduce failures where manual upgrade steps are required). You'd graduate to K8s as your infrastructure footprint starts to grow. Learning the right tools could reduce both the barrier to entry and the time requirements on the apps front.
Having said that, it is probably better to ask the inverse: what is it that you’re trying to achieve and why?
Without a reason that resonates with you, you're not going to find time in your allegedly already-busy life to keep it working, nor will you be willing to find the time to learn the right tools to deploy these things.
I played with it forever ago, but from memory, that is most likely due to the way it is designed to conserve battery. The app waits for significant-location-change notifications from the OS and then sends the updated location to the tracking server. It doesn't (or I should say didn't, as I don't know about now) actively poll the location at fixed intervals.
Have you seen OwnTracks?
It's not necessarily just YAML (there are things YAML cannot do well, and even ignoring that, Traefik can also use TOML or container labels); rather, the entire concept of infrastructure as code is way better than GUIs. Infrastructure as code allows for much better linting, testing, and version control, thereby providing better stability and reproducibility.
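As a small illustration of the linting/testing part: once the routing lives in a file, you can sanity-check it in CI before it ever hits the server. A rough sketch, assuming PyYAML is installed and a Traefik dynamic config at a hypothetical path dynamic.yml:

```python
import sys
import yaml  # pip install pyyaml

# Fail the pipeline if any router in the dynamic config is missing a rule
# or a service reference; the kind of pre-deploy check a GUI can't give you.
with open("dynamic.yml") as f:
    config = yaml.safe_load(f) or {}

problems = []
for name, router in (config.get("http", {}).get("routers", {}) or {}).items():
    if "rule" not in router:
        problems.append(f"router {name!r} has no rule")
    if "service" not in router:
        problems.append(f"router {name!r} has no service")

if problems:
    print("\n".join(problems))
    sys.exit(1)
print("routers look sane")
```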
Most providers offer some kind of OS reload and you may be able to use custom ISOs for the process. However, that doesn’t change the fact that if you don’t want to change OS (especially if you’re already using something more commonly seen in production environments like Debian), then you shouldn’t change the OS.
The name servers themselves are not part of the equation. The common thread in everything linked there is sending email from Namecheap's shared hosting (email/website), not their name servers. Sending email from shared hosting is asking for trouble no matter who you're hosting with, because those IP ranges are constantly abused, especially at the larger providers, simply due to the larger exposure. The detection mechanism here is really simple and observable in the raw mail headers by checking the Received: line. Filtering email on this information is a typical part of the anti-spam model; a common implementation is via DNSBL providers such as Spamhaus, SORBS, and the like. The solution is always to use a trusted transactional email service to deliver email from the website instead.
That, however, is a very different problem from dedicated email services like Google Workspace Gmail, because you wouldn't be sending from your web server's IP address, but rather from Google's dedicated ranges. As such, the Received: line is much less likely to yield a match in the DNSBLs. Validation for these is then done via the SPF/DKIM/DMARC records on your domain: checking whether your configuration permits delivery from the server in the Received: line (look for Received-SPF) and whether or not you have the appropriate signing (look for Authentication-Results: and the bits about the various stages of DKIM and DMARC).
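If you want to see those headers for yourself, save a raw message and pull them out; a small sketch using Python's standard email module, with message.eml as a placeholder path:

```python
from email import policy
from email.parser import BytesParser

# Print the headers the anti-spam checks key off of: the Received chain,
# plus the SPF/DKIM/DMARC verdicts the receiving server stamped on.
with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

for hop in msg.get_all("Received", []):
    print("Received:", " ".join(str(hop).split()))

for header in ("Received-SPF", "Authentication-Results"):
    for value in msg.get_all(header, []):
        print(f"{header}:", " ".join(str(value).split()))
```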
No, it does not make any sense. There are literally thousands of domain registrars out there, and almost every single one of them offers free DNS service with registration. Also, more specifically, looking up who hosts a domain's DNS is not even part of the email delivery flow.
The most well-known spammy registrar is GoDaddy, since they spam ads everywhere and everyone and their third cousin's dog knows about them. NameCheap is a large registrar but isn't that big a fish comparatively speaking. Regardless, blocking any registrar of that size the way you're describing would break far more businesses and hurt the recipient provider's own reputation. This is honestly starting to sound more and more like a smear campaign than anything grounded in actual technology.
I don’t understand how this could be the issue.
If you're using Google Workspace, Google will give you the appropriate DMARC, DKIM and SPF records to add to your DNS. The name servers should simply serve those records and hand the recipient server the values you've entered, thereby ensuring delivery.
Does the free DNS on NameCheap no longer allow certain record types? Aren't those mail-specific DNS records all just TXT/CNAME records now (no more weird legacy SPF record type), which are fairly basic and typical?
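An easy way to confirm the records actually resolve is to query them directly; a quick sketch assuming dnspython is installed, with example.com standing in for your domain:

```python
import dns.resolver  # pip install dnspython

DOMAIN = "example.com"  # placeholder for your domain

# SPF lives in a TXT record at the domain apex; DMARC at _dmarc.<domain>.
for name in (DOMAIN, f"_dmarc.{DOMAIN}"):
    try:
        for txt in dns.resolver.resolve(name, "TXT"):
            print(name, "TXT", txt.to_text())
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(name, "has no TXT record")
```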
If you can serve content locally without a tunnel (i.e. no CGNAT or port blocking by your ISP), you can configure your server to respond only to the Cloudflare IP ranges and your intranet IP range; slap the Cloudflare origin certificate for your domain on the server and trust it for local traffic; enable the orange cloud; and tada: access from anywhere without a VPN. Traffic is encrypted externally between user <> Cloudflare and Cloudflare <> your service, and internally between user <> service, and only someone on the intranet or coming in via Cloudflare can reach it. You can still put the Zero Trust SSO on your subdomain so Cloudflare authenticates all users before proxying the actual request.
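A rough sketch of the "respond only to Cloudflare plus your intranet" piece, assuming an nginx front end; the intranet range below is a placeholder, and Cloudflare publishes its current ranges at the two URLs used here:

```python
import urllib.request

# Generate nginx allow/deny rules: Cloudflare's published ranges plus a
# placeholder intranet range get in, everything else is denied.
INTRANET = "192.168.1.0/24"
SOURCES = (
    "https://www.cloudflare.com/ips-v4",
    "https://www.cloudflare.com/ips-v6",
)

rules = [f"allow {INTRANET};"]
for url in SOURCES:
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            if line.strip():
                rules.append(f"allow {line.strip()};")
rules.append("deny all;")

print("\n".join(rules))
```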