

that’s pretty obvious. Their body panels are falling off and showing how little there actually is to their vehicles :D
Assuming that Tesla goes bankrupt, actually shuts down forever, and shuts its servers down…
At a minimum, someone would have to find out where the software sends and receives data from. Then you’d have to reverse engineer the software to control the vehicles.
Then you’d have to reprogram the software to talk to your own C&C server. I don’t think hosting that would really take all that much… it’s getting there that’s the difficult part.
I’d have to have friends across the internet that wanted files first…
100%. That’s how I started, and that’s how I continue to operate. Currently I have a few HP ProDesk and EliteDesk mini PCs, my old desktop converted into a Proxmox node that runs OPNsense as a VM, and an even older desktop that runs TrueNAS. However, I would like to replace the current TrueNAS system with something newer and lower power, as it consumes quite a bit for what it’s doing.
To add:
I follow these and some others whose names I can’t think of right now, but they’re some great resources!
Just found Redirecterr and set that up, but that’s just for me since no one else seems to use Overseerr.
Purchased a new-to-me EOL enterprise switch that will let me expand my network while replacing existing hardware that was limiting me. It also lets me move to 10G networking, woot!
Find something that interests you, and look at the docs for how to get started. It literally is the easiest way to learn and get involved in self-hosting.
Sounds like you should be good there then!
@xanza@lemm.ee has a great response and also suggests using AdGuard Home instead, which is what I run as well. The biggest benefit AGH has over PiHole for my family is that you can very easily define a Client and the IPs that pertain to that client… so I can define a single client for all of my devices, a single client for each of my kids, etc.
From there I can block or allow specific services, like social media platforms, per client group. And similar to PiHole, I can set up all the blocklists that I want and it’ll block them across all clients.
For my kids, this means it’s blocking all those pesky ads that pop up in games getting them to go and download more mind numbing and draining games…
Finally, I can keep tabs on my network traffic and see which domains individual devices are accessing; since it works at the DNS level, though, I can’t see the individual web pages they visit.
I have two AGH instances set up on two different hosts, plus an AdGuardHome-sync container that syncs between the two instances to make sure all settings are mirrored.
Honestly, I think this might be a better way than what I’m using now. I’ve subbed to dockerrelease.io (edit: docker-notify.com) and releasealert.dev… and I get spammed all day every day, either because the devs keep pushing all sorts of updates to old branches or because those sites aren’t configured well.
I agree that you’ll want to figure out inter-pod networking.
In Docker, you can create a specific “external” network (external meaning it’s created outside the compose stack rather than managed by it), attach the containers in your compose stack to that network, and have them talk to each other using the containers’ hostnames.
Personally, I would avoid host network mode, since the containers bind directly to the host’s interfaces and get exposed to the outside (good if you want that, bad if you don’t)… possibly the same with using the public IP address of your instance.
You could alternatively bind the ports to 127.0.0.1, which keeps them from being exposed to the internet… (see above)
So just depends on how you want to approach it.
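To make that concrete, here’s a rough sketch of the same idea using the Python Docker SDK rather than a compose file (the network name, images, and ports are just placeholders I made up): two containers share a user-defined network and reach each other by container name, and the only published port is bound to 127.0.0.1 so it isn’t reachable beyond the host.

```python
# Illustrative sketch using the docker Python SDK (pip install docker).
# Names, images, and ports are placeholders; a compose file declaring a
# top-level network with "external: true" achieves the same thing.
import docker

client = docker.from_env()

# A user-defined bridge network created outside any compose stack, so
# multiple stacks/containers can attach to it (an "external" network).
client.networks.create("shared-net", driver="bridge")

# Backend container: no published ports, only reachable over the network.
client.containers.run(
    "nginx:alpine",
    name="backend-app",
    network="shared-net",
    detach=True,
)

# Frontend container: can reach the backend at http://backend-app:80,
# since containers on the same user-defined network resolve each other
# by name. Its one published port is bound to 127.0.0.1, so it is not
# exposed to the rest of the network / internet.
client.containers.run(
    "nginx:alpine",
    name="frontend-app",
    network="shared-net",
    detach=True,
    ports={"80/tcp": ("127.0.0.1", 8080)},
)
```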
I am running AdGuard Home DNS, not PiHole… but same idea. I have AGH running in two LXC containers on Proxmox. I have all DHCP scopes configured to point to both instances, and I never reboot both at the same time. Additionally, I check the status of the service to make sure it’s running before I reboot the other instance.
Outside of that, there’s really no other approach.
You would still need at least two DNS servers, but you could set up some sort of virtual IP or load-balancing IP and configure DHCP to point to that IP, so that when one instance goes down, it fails over to the other.
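As a rough sketch of that “check one instance before touching the other” step, something like the below (IPs and the test domain are placeholders, and it assumes dnspython is installed) can confirm a resolver is actually answering queries before you reboot its twin:

```python
# Rough sketch: verify each DNS instance is answering before rebooting
# the other one. IPs and the test domain are placeholders; requires
# dnspython (pip install dnspython).
import dns.exception
import dns.resolver

AGH_INSTANCES = ["192.168.1.2", "192.168.1.3"]  # hypothetical AGH IPs
TEST_NAME = "example.com"

def is_resolving(server_ip: str) -> bool:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server_ip]
    resolver.lifetime = 2  # seconds before giving up
    try:
        resolver.resolve(TEST_NAME, "A")
        return True
    except dns.exception.DNSException:
        return False

if __name__ == "__main__":
    for ip in AGH_INSTANCES:
        print(f"{ip}: {'OK' if is_resolving(ip) else 'NOT RESPONDING'}")
```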
Like others, I started with ownCloud, but when Nextcloud forked I switched within a year. I haven’t looked back; it’s been working without any issues and is performant.
I don’t really care about the enterprise shit since it’s not being shoved in my face 🤷🏼♂️
Man I’m lame.
Used to be {env}-function##
Now it’s {env}-{vlanlocation}-function##
VLAN location being something like DMZ, Infra, Jump for jump boxes, IOTSec or IOTInsec, etc.
I wouldn’t say you’re doing it wrong, but a reverse proxy not only lets you use a specific domain with multiple backends behind it, etc… it also means you don’t need a port open for every single service you run on the backend.
RPs can certainly act as load balancers, but for home lab / self-hosted purposes we usually don’t need one.
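To make the “one open port, many backends” point concrete, here’s a toy host-based router in Python (hostnames and backend ports are made up; in practice you’d use something like nginx, Caddy, or Traefik): a single listening port routes requests to different internal services based on the Host header.

```python
# Toy illustration of what a reverse proxy does: one listening port,
# requests routed to different backends based on the Host header.
# Hostnames and backend ports are made up; use nginx/Caddy/Traefik for real.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.error import URLError
from urllib.request import Request, urlopen

# Hypothetical mapping: public hostname -> internal backend
BACKENDS = {
    "app1.example.com": "http://127.0.0.1:8081",
    "app2.example.com": "http://127.0.0.1:8082",
}

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        backend = BACKENDS.get(host)
        if backend is None:
            self.send_error(502, "Unknown host")
            return
        try:
            # Forward the request path to the selected backend
            with urlopen(Request(backend + self.path), timeout=5) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except URLError:
            self.send_error(502, "Backend unreachable")

if __name__ == "__main__":
    # Only this one port needs to be reachable; the backends stay internal.
    ThreadingHTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```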
I use both WireGuard and OpenVPN to VPN into my home network.
However, it doesn’t matter whether you use a domain or just an IP… if you get blocked from accessing either / both, you’re screwed. 🤷🏼♂️
You are looking for a disaster recovery plan. I believe you are going down the right path, but it’s something that will take time.
I backup important files to my local NAS or directly store them on the local NAS.
This NAS then backs up to an off-site cloud backup provider, Backblaze B2 storage.
Finally, I have a virtual machine that has all the same directories mounted and backs up to a different cloud provider.
It’s not quite 3-2-1… but it works.
I only back up important files. I do not do full system backups for my Windows clients. I do technically back up full Linux VMs from within Proxmox to my NAS… but that’s because I’m lazy and didn’t write a backup script to back up specific files and such. The idea that you’ll be able to pull a full system image back down quickly from a cloud provider will bite you in the ass.
In theory, when backing up containers, you want to back up the configurations, data, and databases… but you shouldn’t worry about backing up the container image, which can usually just be pulled again when necessary. I don’t store any of my Docker container data in named volumes… I map folders on the host into the containers, so I can just back up directories on the host instead of trying to figure out the best way to back up a randomly named Docker volume. This way I know exactly what I’m backing up.
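As a rough sketch of what that kind of per-directory backup could look like (all paths here are made-up placeholders, not my actual setup): archive the bind-mounted host directories rather than whole volumes or images.

```python
# Rough sketch of a per-directory backup: tar up the host paths that are
# bind-mounted into containers, so only configs/data get captured, not
# the container images. All paths here are placeholders.
import tarfile
from datetime import datetime
from pathlib import Path

# Hypothetical bind-mount locations on the Docker host
APP_DIRS = [
    Path("/srv/docker/adguardhome"),
    Path("/srv/docker/nextcloud"),
]
DEST = Path("/mnt/nas/backups")  # NAS share mounted on the host

def backup(dirs, dest):
    dest.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    for d in dirs:
        archive = dest / f"{d.name}-{stamp}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(d, arcname=d.name)
        print(f"backed up {d} -> {archive}")

if __name__ == "__main__":
    backup(APP_DIRS, DEST)
```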
Any questions, just ask!
Somehow, I have never seen this list… and easily over half of those projects I’ve never heard of but could add some great functionality to my home. Thanks for posting it!
Certainly has me concerned. I’ll have to dig a bit more into the company’s financial solvency to better understand whether they’re at least covering their bills and such… but honestly it sounds like they aren’t and haven’t been.
Going to need to start looking for alternative S3-compatible storage.