A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 1 Post
  • 163 Comments
Joined 11 months ago
Cake day: June 25th, 2024

  • hendrik@palaver.p3x.de to Selfhosted@lemmy.world · How to reverse proxy?

    Maybe have a look at https://nginxproxymanager.com as well. I don’t know how difficult it is to install since I’ve never used it, but I heard it has a relatively straightforward graphical interface.

    Configuring good old plain nginx isn’t super complicated. It depends a bit on your specific setup, though. Generally, you’d put a config file like the following into /etc/nginx/sites-available/servicexyz and enable it with a symlink in /etc/nginx/sites-enabled/ (or just put it in the default site).

    # Redirect plain HTTP to HTTPS
    server {
        listen 80;
        server_name jellyfin.yourdomain.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name jellyfin.yourdomain.com;

        # TLS certificate and settings
        ssl_certificate /etc/ssl/certs/your_ssl_certificate.crt;
        ssl_certificate_key /etc/ssl/private/your_private_key.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;

        # Hand everything over to Jellyfin on its internal port
        location / {
            proxy_pass http://127.0.0.1:8096;
            # keep WebSocket connections working
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        access_log /var/log/nginx/jellyfin.yourdomain_access.log;
        error_log /var/log/nginx/jellyfin.yourdomain_error.log;
    }
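
    After saving the file, you’d enable it and reload nginx, roughly like this (skip the symlink if you edited the default site):

    sudo ln -s /etc/nginx/sites-available/servicexyz /etc/nginx/sites-enabled/
    sudo nginx -t && sudo systemctl reload nginx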
    

    It’s a bit tricky to search for tutorials these days… I got that from: https://linuxconfig.org/setting-up-nginx-reverse-proxy-server-on-debian-linux

    Nginx would then take all requests addressed to jellyfin.yourdomain.com and forward them to your Jellyfin instance, which hopefully runs on port 8096. You’d use a similar file for each service, just adapted to the internal port and (sub-)domain.

    You can also have all of this on a single domain (and not sub-domains). That’d be the difference between “jellyfin.yourdomain.com” and “yourdomain.com/jellyfin”. It’s accomplished with one file containing a single “server” block, but with several “location” blocks inside it, like location /jellyfin (rough sketch below).
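
    A minimal sketch of that variant (just an example; note that some apps, Jellyfin included, also need their “base URL” setting changed before they work under a sub-path):

    location /jellyfin/ {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
    }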

    Alright, now that I wrote it down, it certainly requires some knowledge. If that’s too much: lots of other people here recommend Caddy, so maybe have a look at that as well. It seems to be packaged in Debian, too.

    Edit: Oh yes, and you probably want to set up Letsencrypt so you connect securely to your services. The reverse proxy would be responsible for encryption.
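
    On Debian that’d be roughly the following (the domain is a placeholder; the certbot nginx plugin edits your server blocks and sets up renewal for you):

    sudo apt install certbot python3-certbot-nginx
    sudo certbot --nginx -d jellyfin.yourdomain.com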

    Edit2: And many projects cover this in their documentation. Jellyfin, for example, has instructions for the major reverse proxies: https://jellyfin.org/docs/general/post-install/networking/advanced/nginx


  • hendrik@palaver.p3x.de to Selfhosted@lemmy.world · How to reverse proxy?

    You’d install just one reverse proxy and have it forward to the individual services. Popular choices include nginx, Caddy and Traefik. I always try to rely on packages from the repository. They’re maintained by your distribution and tied into your system. You might want to take a different approach if you use containers, though. I mean, if you run everything in Docker, you might want to do the reverse proxy in Docker as well.

    That one reverse proxy would get ports 443 and 80. All the services like Jellyfin, Immich… get random higher ports, and your reverse proxy internally connects (and forwards) to those. That’s the point of a reverse proxy: making multiple distinct services available via one and the same port.
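
    With Caddy, for comparison, the same idea is just a few lines per service, and it fetches Letsencrypt certificates automatically. A sketch of a Caddyfile (domains are placeholders; 8096 and 2283 are the usual Jellyfin and Immich defaults):

    jellyfin.yourdomain.com {
        reverse_proxy 127.0.0.1:8096
    }

    immich.yourdomain.com {
        reverse_proxy 127.0.0.1:2283
    }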


  • Right. Do your testing. Nothing here is black and white only. And everyone has different requirements, and it’s also hard to get your own requirements right.
    Plus they even change over time. I’ve used Debian before with all the services configured myself, moved to YunoHost, to Docker containers, to NixOS, partially back to YunoHost over time… It all depends on what you’re trying to accomplish, how much time you got to spare, what level of customizability you need… It’s all there for a reason. And there isn’t a perfect solution. At least in my opinion.


  • I think Alpine has a release cycle of 6 months. So it should be a better option if you want software from 6 months ago packaged and available. Debian does something like 2 years(?), so naturally it might have very old versions of some software. On the flip side, you don’t need to put in a lot of effort for 2 years.

    I don’t think there is such a thing as a “standard” when it comes to Linux software. I mean, Podman is developed by Red Hat. And Red Hat also does Fedora. But we’re not Apple here with a tight ecosystem. It’s likely going to run on a plethora of other Linux distros just as well. And it’s not going to run better or worse just because of the company that made it…


  • Sure. I think we could make an argument for both sides here. You’re looking for something stable and rock solid, which doesn’t break your stuff. I’d argue Debian does exactly that. It has long release cycles and doesn’t give you any big Podman update, so you don’t have to deal with a major release update. That’s kind of what you wanted. But at the same time you want the opposite of that, too. That’s just not something Debian can do.

    It’s going to get better, though. With software that has been moving fast (like Podman?), you’re going to experience that. But the major changes will slow down as the project matures, and we’ll get Debian Trixie soon (it’s already in hard freeze as of now), which comes with Podman 5.4.2. It’ll be less of an issue in the future. At least with that package.

    Question remains: are you going to handle updates of your containers and base system better or worse than Debian does… If you don’t handle security updates of the containers in a timely manner for all time to come, you might be worse off. If you keep at it, you’ll see some benefits. Updates are now in your hands, with both the downsides and the benefits… You should be fine, though. Most projects do an alright job with the containers they publish on Docker Hub.
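
    As a sketch, that routine boils down to something like this (paths are made up, assuming Docker Compose projects):

    # base system: let Debian install security updates automatically
    sudo apt install unattended-upgrades

    # containers: pull newer images and recreate the services
    cd /opt/jellyfin
    docker compose pull && docker compose up -d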


  • I don’t think so. I’ve also started small. There are entire operating systems like YunoHost that forgo containers. All the packages in Debian are laid out to work like that. It’s really not an issue by any means.

    And I’d say it’s questionable whether the benefits of containers apply to your situation. If you, for example, have a reverse proxy and do authentication there, all an attacker needs to do is break that single container, and they’ll be granted access to all the other containers behind it as well… If you mess up your database connection, it doesn’t really matter whether the database runs in a container or under a user account / namespace; the “hacker” will gain access to all the data stored there in both cases. I really think a lot of the complexity and the places to mess up are one level higher, and not something you’d tackle with your container approach. You still need the background knowledge. Containers help you with other things, less so with this.

    I don’t want to talk you out of using containers. They do isolate stuff. And they’re easy to use. There isn’t really a downside. I just think your claim doesn’t hold up, because it’s too general. You just can’t say it that way.


  • But that’s very hypothetical. I’ve been running servers for more than a decade now and never ever had an unbootable server. Because that’s super unlikely. The services are contained in separate user accounts, and they launch on top of the operating system. If they fail, that’s not really an issue for the server booting. In fact, the webserver or any other service shouldn’t even have permission to mess with the system. It’ll just give you a red line in systemctl and not start the service. And the package managers are very robust. You might end up with some conflicts if you really mess up and do silly things. But with most of them, the system will still boot.
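
    That’s roughly what the packaged services do via systemd: run under a dedicated user with no permission to touch the rest of the system. A hand-written sketch (the service name and paths are made up):

    # /etc/systemd/system/myservice.service
    [Unit]
    Description=Example service confined to its own user account
    After=network.target

    [Service]
    User=myservice
    ExecStart=/usr/local/bin/myservice
    # sandboxing: read-only OS, private /tmp, no privilege escalation
    ProtectSystem=full
    PrivateTmp=true
    NoNewPrivileges=true
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target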

    So containers are useful and have their benefits. But I think this is a non-issue.





  • Sure. With data that might be skipped, I meant something like the Jellyfin server, which probably consists of pirated TV shows and music or movie rips. Those tend to be huge in size and easy to recreate. With personal content, pictures and videos, there is no chance of getting it back. And I’d argue that with a lot of documents and data, it’s not even worth the hassle to decide what might be stored somewhere else, maybe in paper form… Just back them up; storage is cheap, and most people don’t generate gigabytes’ worth of content each month. For large data that doesn’t change a lot, something like one or two rotated external disks might do it. And for smaller documents and current projects which see a lot of changes, we have things like Nextcloud, Syncthing, an $80-a-year VPS or other cloud storage solutions.


  • Next to paying for cloud storage, I know people who store an external HDD at their parents’ place or with friends. I don’t do the whole backup thing for all the recorded TV shows and ripped Blu-rays… If my house burns down, they’re gone. But that makes the amount of data a bit more manageable. And I can replace those. I currently don’t have a good strategy. My data is somewhat scattered between my laptop, the NAS, an external HDD which is in a different room but not off-site, one cheap virtual server I pay for, and critical things like the password manager, which are synced to the phone as well. The main thing I’m worried about is one of the mobile devices getting stolen, so I focus on having those backed up to the NAS or synced to Nextcloud. But I should work on a solid strategy in case something happens to the NAS.

    I don’t think the software is a big issue. We got several good backup tools which can do incremental or full backups, schedules, encryption and whatever else someone might need.
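
    Restic, for example, does all of that. A sketch (the repository path and folders are placeholders):

    # create an encrypted repository on the external disk
    restic -r /mnt/backup-disk/repo init

    # incremental, encrypted snapshot
    restic -r /mnt/backup-disk/repo backup ~/Documents ~/Pictures

    # thin out old snapshots
    restic -r /mnt/backup-disk/repo forget --keep-daily 7 --keep-weekly 4 --prune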


  • Privacy would be the main concern. Every single one of your words, documents and pictures will probably end up in some large database over at OpenAI. I don’t like that at all. And for a company, for example, it might even be against the law to share certain information about clients with third parties.

    Then you don’t get any of the freedoms we got with Free Software. It’s a service you rely on, with very little opportunity to customize it, look inside or tinker. There is little control for the user whatsoever. Additionally, we’ve already seen companies cease service. So it might become unavailable tomorrow, which is a bad thing if you’re attached to it, invested, or have built things around it.

    And since “the internet is for porn”… we also have a noteworthy community doing those kinds of things. And well… go ahead and ask the big services to generate a lewd story. Most of them even refuse to write a murder mystery story for me; instead they’ll lecture me on how it is not ethical to murder someone. So those would be use cases where local AI outperforms any of the market leaders.

    Personally, I’m a bit opposed to the entire concept of letting other people’s algorithms dictate my life. I don’t want to rely on them. I also don’t want them to pick the bias for my perspective on the world. The algorithms in social media are dwarfed by how dangerous it’s gonna be once people rely on AI more and more, and it gets to choose which information to show and which to drop, what kind of bias to introduce in summaries, and so on. It’d effectively teach people how to think. And I already don’t like the way all the big AI chatbots talk to me with a lot of emojis and in an “explain like I’m 5” way.

    So to go back to the original question… I think the more “useful” AI is, the more reasons there are to retain some control yourself. What do you think?


  • By the way, you can still run the YunoHost installer on top of your Debian install… if you want to… It’s Debian-based anyway, so it doesn’t really matter whether you use its own install media or run the script on an existing Debian install. Though I feel like adding: if you’re looking for Docker… YunoHost might not be your best choice. It’s made to take control itself, and it doesn’t use containers. Of course you can circumvent that and add Docker containers nonetheless… But that isn’t really the point, and you’d end up dealing with the underlying Debian and just making it more complicated.

    It is a very good solution if you don’t want to deal with the CLI. But it stops being useful once you want too much customization, or unpackaged apps. At least that’s my experience. But that’s kind of always the case: simpler, with more things automatic and pre-configured, means less customizability (or more effort to actually customize it).




  • hendrik@palaver.p3x.de to Selfhosted@lemmy.world · Off-grid hosting

    Some people do it. For example we have this solar-powered website: https://solar.lowtechmagazine.com/

    You’d need an energy source like a solar panel, a battery and some computing device, like a single-board computer (Raspberry Pi). You can also run webservers on smartphones, or even on a microcontroller. The server part works without an internet connection. But you obviously need some way to connect to it: a Wi-Fi router, or a computer connected via an ethernet cable.

    The tech isn’t too complicated. Just install nginx if you have a Raspberry Pi, open a Wi-Fi access point and put your website on it (rough sketch below). If you choose a phone, try Termux and a supported webserver. Both Linux and smartphones are designed to work even without an internet connection ;-)
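
    A sketch for the Raspberry Pi route (the access-point part via hostapd is its own topic; paths are examples):

    sudo apt install nginx
    sudo cp -r ~/my-website/* /var/www/html/

    # or on an Android phone, inside Termux:
    pkg install nginx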


  • Not really. I could use a good selfhosted search engine, though. I mean, all the existing projects (which is just YaCy, to my knowledge) are a bit dated. Nowadays we only got metasearch engines, and we’re relying on Google, Bing etc.

    But I don’t need any chatbot enhancements. That’s usually something I skip when using Google or Bing, because it doesn’t work well. The AI summaries tend to be wrong, and they’re bad at looking up niche information, which is exactly what I need a search engine to find. The AI just cites the most common slop, or at best the Wikipedia article. But I don’t really need any fancy software to get there… So for me, we don’t need any AI augmentation.

    And I think the old way of googling was fine. Just teach people to put in the words that are likely to be in the article they want to find. That’d be something like “Rust new features 2023” or “homelab backup blog”. Sure, you can strap on a chatbot and put in entire natural-language questions. But I think that’s completely unnecessary. We have brains, and we’re perfectly able to translate our questions into search queries with little effort… if somebody teaches us what to type into the search bar, and why.


  • I’d go with full-disk encryption. You can be sure everything is encrypted that way. Any additional complexity adds ways to mess up and compromise security. Entering the password is a bit cumbersome, but that’s part of the deal. I just carry my computer keyboard over to my NAS and enter the password each time I need to reboot. Which doesn’t happen that often. There also used to be a tutorial somewhere on how to put a Dropbear SSH server into the initrd, so you can enter the password over the network (rough sketch below).
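
    On Debian, that tutorial boils down to something like this (the authorized_keys path has moved between releases, so treat it as a sketch):

    sudo apt install dropbear-initramfs
    # allow your SSH key into the initramfs
    echo "ssh-ed25519 AAAA... you@laptop" | sudo tee -a /etc/dropbear/initramfs/authorized_keys
    sudo update-initramfs -u

    # after a reboot, from another machine:
    ssh root@your-nas cryptroot-unlock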


  • Last time I checked, Waydroid was one of the more common ways to launch Android apps on Linux. I mean, you can’t just package the bare app file, since you need the whole runtime and graphical environment of Android. Plus an app could include machine code for a different architecture than your desktop computer. So either you use some layer like Waydroid, or you bundle all of this together with the app in a Linux package…

    Android includes a lot more than just a Linux kernel. An app could request access to your GPS, or to your contacts, calendar or storage. And that’s not part of Linux. In fact, not even running something in the background or opening a window translates directly to Linux. An Android app can do none of that unless the framework to deal with it is in place. That’s why we need emulation or translation layers (see the sketch below).
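
    If you want to try the Waydroid route, the basic flow looks roughly like this (it needs a Wayland session, and packaging varies by distro):

    sudo apt install waydroid     # if your distro packages it
    sudo waydroid init            # fetches the Android (LineageOS-based) image
    waydroid session start
    waydroid app install ./some-app.apk
    waydroid show-full-ui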