• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • Wow, there isn’t a single solution in here with the obvious answer?

    You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this, for free (just their DNS, not their other services), even though I bought my domains from Namecheap.

    Then, you can either set up Let’s Encrypt on the device and have it generate certs in a location Jellyfin knows about (I’m not sure exactly what this entails, as I don’t use this approach), or you can do what I do:

    1. Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
    2. Your reverse proxy should have ports 443 and 80 exposed, and it should redirect HTTP requests to HTTPS.
    3. Add Jellyfin as a service and route in your reverse proxy’s config.

    On your router, forward port 443 to your Pi’s secure port (which, for simplicity’s sake, should also be 443). You’ll likely also need to forward port 80 so that Let’s Encrypt’s HTTP-01 challenge can reach your proxy.

    If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback (hairpinning), you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port on your router. If you do this, though, local transfers will be unencrypted.

    Make sure you have secure passwords in Jellyfin. Note that a vulnerability in Jellyfin or Traefik will expose you if one is found, so keep your software updated.

    If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS service - as docker-compose services.
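
    For example, here’s a trimmed-down sketch of the compose side. The domain, email, and paths are placeholders, and it omits the dynamic DNS container:

        services:
          traefik:
            image: traefik:v3
            ports:
              - "80:80"     # Let's Encrypt HTTP-01 challenge + redirects
              - "443:443"   # the port you forward from your router
            command:
              - --providers.docker=true
              - --providers.docker.exposedbydefault=false
              - --entrypoints.web.address=:80
              - --entrypoints.websecure.address=:443
              # redirect plain HTTP to HTTPS
              - --entrypoints.web.http.redirections.entrypoint.to=websecure
              - --certificatesresolvers.le.acme.httpchallenge=true
              - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
              - --certificatesresolvers.le.acme.email=you@example.com   # placeholder
              - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
            volumes:
              - /var/run/docker.sock:/var/run/docker.sock:ro
              - ./letsencrypt:/letsencrypt

          jellyfin:
            image: jellyfin/jellyfin
            volumes:
              - ./jellyfin/config:/config   # placeholder paths
              - ./media:/media:ro
            labels:
              - traefik.enable=true
              - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)   # placeholder domain
              - traefik.http.routers.jellyfin.entrypoints=websecure
              - traefik.http.routers.jellyfin.tls.certresolver=le
              # Jellyfin's default HTTP port
              - traefik.http.services.jellyfin.loadbalancer.server.port=8096

    The dynamic DNS piece would be one more small service alongside these; which image makes sense depends on who hosts your DNS.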


  • Look up “LLM quantization.” The idea is that each parameter is a number; by default each one is stored with 16 bits of precision, but if you store them at lower precision, you use less space and lose some accuracy while keeping the same number of parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable the lower you go. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

    If you’re using a 4-bit quantization, each parameter takes about half a byte, so you need roughly half the parameter count, in GB, of VRAM (plus overhead for context). Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.

    For example, Llama 3.3 70B, with 70.6 billion parameters, comes in the following sizes for some of its quantizations:

    • fp16: 141 GB
    • Q8_0: 75 GB
    • Q6_K: 58 GB
    • Q5_K_M: 50 GB
    • Q4_K_M (the default): 43 GB
    • Q4_0: 40 GB
    • Q3_K_M: 34 GB
    • Q2_K: 26 GB

    This is why I run a lot of Q4_K_M 70B models on two 3090s.
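
    Back of the envelope, those sizes are roughly parameter count × bits per weight ÷ 8 bytes, plus a little overhead. Q4_K_M averages about 4.85 bits per weight - that’s my approximation; K-quants keep some tensors at higher precision, so it lands above a flat 4:

        \[ \text{size (GB)} \approx \frac{N \times b}{8 \times 10^{9}} \]

        \[ \text{fp16: } \frac{70.6 \times 10^{9} \times 16}{8 \times 10^{9}} \approx 141 \text{ GB} \qquad \text{Q4\_K\_M: } \frac{70.6 \times 10^{9} \times 4.85}{8 \times 10^{9}} \approx 43 \text{ GB} \]

    A 43 GB file fits across two 24 GB cards with room left over for context.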

    Generally speaking, there’s no perceptible quality drop going from 8-bit quantization down to Q6_K (though I’ve heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to Q5 and again to Q4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.

    TheBloke on Hugging Face has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.


  • I recommend a used 3090, as it has 24 GB of VRAM and can generally be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090, and while admittedly more expensive than the cheap 24 GB Nvidia Tesla card (the P40?), it also has much better performance and CUDA support.

    I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.


  • Giphy has a documented API that you could use. There have been bulk downloaders, but I didn’t see any with recent activity. You might still be able to use one as a model for your own script, though, like https://github.com/jcpsimmons/giphy-stacks

    There were downloaders for Gfycat - gallery-dl supported it at one point - but the site is gone now. You might still be able to find collections that other people downloaded and are now hosting, though. You could also use the Internet Archive - they have documented tools and APIs.

    There’s a Tenor mass downloader that uses the Tenor API and an API key that you provide.

    Imgur hosts GIFs and is supported by gallery-dl, so that’s an option.

    Also, read over https://github.com/simon987/awesome-datahoarding - there may be something useful for you there.

    In terms of hosting, it would depend on my user base and whether I wanted users to be able to upload GIFs, too. If it were just my close friends, then Immich would probably be fine, but if people I didn’t know directly were using it, I’d want a more refined solution.

    There’s Gifable, which is nicely focused but looks like it has a small following. I haven’t used it myself to see how suitable it is. If you self-host it (or something else that uses S3), note that you can use MinIO or LocalStack for the S3 container rather than using AWS directly. I’m using MinIO as part of my stack now, though for a completely different app.
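
    For what it’s worth, a minimal MinIO service in docker-compose looks something like this (credentials and paths are placeholders):

        services:
          minio:
            image: minio/minio
            command: server /data --console-address ":9001"
            environment:
              MINIO_ROOT_USER: change-me         # placeholder credentials
              MINIO_ROOT_PASSWORD: change-me-too
            volumes:
              - ./minio-data:/data               # placeholder path
            ports:
              - "9000:9000"   # S3-compatible API
              - "9001:9001"   # web console

    You’d then point the app’s S3 endpoint at http://minio:9000 (or whatever you name the service).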

    MediaCMS is another option. Less focused on GIFs but more actively developed, and intended to be used for this sort of purpose.


  • If you use that docker compose file, I recommend you comment out the build section and uncomment the image section in the lemmy service.

    I also recommend using a reverse proxy and Docker networks rather than exposing the Postgres instance on port 5433, but if you aren’t familiar with Docker networks, you can leave it as is for now. If you’re running locally and don’t open that port in your router’s firewall, it’s a non-issue unless there’s an attacker on your LAN. But given that you gain nothing from exposing it (unless you regularly need to connect to the DB directly - as a one-off, you could temporarily add the port mapping), it doesn’t make sense to increase your attack surface for no benefit.
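
    To illustrate the Docker networks point, the shape of it is something like this - the names here are illustrative, not necessarily what that compose file uses:

        services:
          lemmy:
            image: dessalines/lemmy
            networks:
              - proxynet    # where your reverse proxy reaches Lemmy
              - dbnet

          postgres:
            image: postgres:16
            # note: no "ports:" mapping - Postgres is only reachable from
            # containers that share the dbnet network
            networks:
              - dbnet

        networks:
          proxynet:
          dbnet:
            internal: true   # no host exposure, no outbound internet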


  • I haven’t personally used any of these, but looking them over, Tipi looks the most encouraging to me, followed by YunoHost, based largely on the variety of apps available but also because it looks like Tipi lets you customize the configuration much more. FreedomBox doesn’t seem to list the apps in its catalog at all, and its site seems basically useless, so I ruled it out on that basis alone.


  • I am trying to avoid having to have an open port 22

    If you’re working locally, you don’t need an open port at all.

    If you’re on a different machine on the same network, you don’t need to expose port 22 through your router’s firewall. Using key-based auth and disabling password-based auth makes this even safer.

    If you want remote access, you still don’t have to expose port 22, as long as you have a VPN set up.

    That said, you don’t need to use a terminal to manage your Docker containers. I use Portainer to manage all but my core containers - Traefik, Authelia, and Portainer itself - which are all part of a single docker-compose file. Portainer stacks accept docker-compose files, so adding and configuring applications is straightforward.

    I’ve configured around 50 apps on my server using Docker Compose with Portainer but have only needed to modify the Dockerfile itself once, and that was because I was trying to do something that the original maintainer didn’t support.

    Now, if you’re satisfied with what’s available and with how much you can configure it without using Docker, then it’s fine to avoid it. I’m just trying to say that it’s pretty straightforward if you focus on understanding the important parts, mainly:

    • docker compose
    • docker networks
    • docker volumes

    If you decide to go that route, I recommend TechnoTim’s tutorials on YouTube. I personally found them helpful, at least.
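
    If it helps, here’s a toy compose file that touches all three of those concepts (the image and names are arbitrary):

        services:
          whoami:
            image: traefik/whoami   # any small web app works for testing
            networks:
              - demo                # custom network: containers on it reach each other by name
            volumes:
              - demodata:/data      # named volume: survives container rebuilds

        networks:
          demo:

        volumes:
          demodata: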


  • I’m not addressing anything Gitea has specifically done here (I’m not informed enough on the topic to have an educated opinion yet), but just this specific part of your comment:

    And they also demand a CLA from contributors now, which is directly against the idea of FOSS.

    Proprietary software is antithetical to FOSS, but CLAs themselves are not, and were endorsed by RMS as far back as 2002:

    In contrast, I think it is acceptable to … release under the GPL, but sell alternative licenses permitting proprietary extensions to their code. My understanding is that all the code they release is available as free software, which means they do not develop any proprietary software; that’s why their practice is acceptable. The FSF will never do that - we believe our terms should be the same for everyone, and we want to use the GPL to give others an incentive to develop additional free software. But what they do is much better than developing proprietary software.

    If contributors allow an entity to relicense their contributions, that enables the entity to write proprietary software that includes those contributions. One way to ensure they have that freedom is to require contributors to sign a CLA that allows relicensing, so clearly CLAs can enable behavior antithetical to FOSS… but they can also enable FOSS development by generating another revenue stream. And many CLAs don’t allow relicensing (e.g., Apache’s).

    Many FOSS companies require contributors to sign CLAs. For example, the FSF has required them since 2005 at least, and its CLA allows relicensing. They explain why, but that explanation doesn’t touch on why license reassignment is necessary.

    Even if a repo requires contributors to sign a CLA, nobody’s four freedoms are violated, and nobody who modifies such software is forced to sign a CLA when they share their changes with the community - they can share their changes in their own repo, submit them to a fork that doesn’t require a CLA, or only share the code with users who purchase the software from them. All they have to do is adhere to the license the project was under.

    The big issue with CLAs is that they’re asymmetrical (as opposed to DCOs, which serve a similar purpose). That’s understandably controversial, but it’s not inherently a FOSS issue.

    Some of the same arguments against considering the SSPL to be FOSS (it isn’t, because it’s so copyleft that it’s impractical) could similarly be made in favor of CLAs. Not in favor of signing them as a developer, mind you, but in favor of considering projects that use them to be aligned with FOSS principles.


  • I’ve never used Radicale, but I just looked it up, and the homepage talks about enabling authentication. It also supports auth via reverse proxy headers, which is great for anyone who wants to use Authelia, Keycloak, or another similar solution. By contrast, as far as I can tell, Baikal doesn’t support reverse proxy auth, though it does seem to let you set up auth through the web interface.
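
    If you went the reverse-proxy-header route with Traefik and Authelia, the labels would look something like this - the hostnames are placeholders, and this is from memory, so treat Authelia’s integration docs as the authority:

        labels:
          - traefik.enable=true
          - traefik.http.routers.radicale.rule=Host(`radicale.example.com`)   # placeholder
          # forward each request to Authelia first; it responds with Remote-User etc.
          - traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.example.com
          - traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email
          - traefik.http.routers.radicale.middlewares=authelia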



  • If you need/want a robust multi-user experience, specifically with private personal library support, then Photoprism isn’t going to work, unfortunately.

    • Free:
      • You can create multiple Admin users, but they can all see and delete everything (unless you don’t give Photoprism delete access).
    • Paid (Essentials or Plus):
      • You can create “User” users who can upload photos - but they still have access to your full library.
      • You can create “Viewer” users who can’t see private photos (but they also can’t upload photos).
      • You can share links to albums that are viewable by anyone with the link.

    I’ve been using it single-user and it’s been great, though I should add the caveat that I upload my photos to my server using PhotoSync and don’t give Photoprism write/delete access to my library, so no uploads come from it. I’d been using PhotoSync for years before even hearing about Photoprism, so it fit very neatly into my existing process.

    Multi-user features are effectively paywalled, and they’re not technically FOSS since commercial use isn’t allowed, but roles are documented at https://docs.photoprism.app/user-guide/users/roles/ and there’s more info at https://docs.photoprism.app/user-guide/users/libraries/

    If the Photoprism Plus/Essentials features would work for you but the ongoing subscription is an issue, then you should know that - unless this has changed - you can sub for one month on Patreon or GitHub, use the info they provide to upgrade to the Essentials or Plus features, and then cancel the subscription. I still have an ongoing subscription, but I didn’t connect it to my Patreon account or anything, so I don’t think anything would change if I canceled it (other than me no longer getting support, if I needed it).



  • Have you considered not using Home Assistant OS? You don’t need it to use Home Assistant. You can instead set your host up with some other OS, like Debian, then run Home Assistant in a Docker container (or containers, plural) alongside any other containers you want.

    I’m not doing this myself so can’t speak to its limitations, but from what I’ve heard, if you’re familiar with Docker then it’s pretty straightforward.
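
    For reference, the compose side can be as small as this, going off Home Assistant’s container install docs (the config path is a placeholder):

        services:
          homeassistant:
            image: ghcr.io/home-assistant/home-assistant:stable
            volumes:
              - ./ha-config:/config               # placeholder path
              - /etc/localtime:/etc/localtime:ro
            network_mode: host    # simplest option for device discovery
            restart: unless-stopped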

    A lot of apps use hard-coded paths, so using a subdomain per app makes it much easier to run them all. That said, Traefik has middleware, including stripPrefix, which lets you strip a path prefix before forwarding the request to the app - have you tried that approach?
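
    For reference, stripPrefix as Docker labels looks something like this (the router/middleware names and the /myapp path are made up for the example):

        labels:
          - traefik.enable=true
          - traefik.http.routers.myapp.rule=PathPrefix(`/myapp`)
          # strip /myapp before the request is forwarded to the app
          - traefik.http.middlewares.myapp-strip.stripprefix.prefixes=/myapp
          - traefik.http.routers.myapp.middlewares=myapp-strip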


  • hedgehog@ttrpg.network to Selfhosted@lemmy.world · “Why docker” · 1 year ago

    I don’t think you understood the context of the comment you replied to. As a reply to “Here are all these drawbacks to Docker vs hosting on bare metal,” it makes perfect sense to point out that the risks are there regardless.

    Unless I misread your comment and you’re suggesting that you think devs not having to deal with OS-specific code is a disadvantage of Docker. Or maybe you meant your second paragraph to be directed at OP?