

I think the restrictions are just for publishing containers on Docker Hub. If you aren’t doing that, you aren’t impacted.
My pleasure! Getting this stuff together can be a pain, so I’m always trying to pay it forward. Good luck and let me know if you have any questions!
Here you go. I commented out what is not necessary. There are some passwords noted that you’ll want to set to your own values. Also, pay attention to the volume mappings… I left my values in there, but you’ll almost certainly need to change those to make sense for your host system. Hopefully this is helpful!
services:
  mongodb:
    image: "mongo:6.0"
    volumes:
      - "/mnt/user/appdata/mongo-graylog:/data/db"
      # - "/mnt/user/backup/mongodb:/backup"
    restart: "on-failure"
    # logging:
    #   driver: "gelf"
    #   options:
    #     gelf-address: "udp://10.9.8.7:12201"
    #     tag: "mongodb"

  opensearch:
    image: "opensearchproject/opensearch:2.13.0"
    environment:
      - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "action.auto_create_index=false"
      - "plugins.security.ssl.http.enabled=false"
      - "plugins.security.disabled=true"
      - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=[yourpasswordhere]"
    ulimits:
      nofile: 64000
      memlock:
        hard: -1
        soft: -1
    volumes:
      - "/mnt/user/appdata/opensearch-graylog:/usr/share/opensearch/data"
    restart: "on-failure"
    # logging:
    #   driver: "gelf"
    #   options:
    #     gelf-address: "udp://10.9.8.7:12201"
    #     tag: "opensearch"

  graylog:
    image: "graylog/graylog:6.2.0"
    depends_on:
      opensearch:
        condition: "service_started"
      mongodb:
        condition: "service_started"
    entrypoint: "/usr/bin/tini -- wait-for-it opensearch:9200 -- /docker-entrypoint.sh"
    environment:
      GRAYLOG_TIMEZONE: "America/Los_Angeles"
      TZ: "America/Los_Angeles"
      GRAYLOG_ROOT_TIMEZONE: "America/Los_Angeles"
      GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
      GRAYLOG_PASSWORD_SECRET: "[anotherpasswordhere]"
      GRAYLOG_ROOT_PASSWORD_SHA2: "[aSHA2passwordhash]"
      GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
      GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
      GRAYLOG_ELASTICSEARCH_HOSTS: "http://opensearch:9200/"
      GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"
    ports:
      - "5044:5044/tcp"     # Beats
      - "5140:5140/udp"     # Syslog
      - "5140:5140/tcp"     # Syslog
      - "5141:5141/udp"     # Syslog - dd-wrt
      - "5555:5555/tcp"     # RAW TCP
      - "5555:5555/udp"     # RAW UDP
      - "9000:9000/tcp"     # Server API
      - "12201:12201/tcp"   # GELF TCP
      - "12201:12201/udp"   # GELF UDP
      - "10000:10000/tcp"   # Custom TCP port
      - "10000:10000/udp"   # Custom UDP port
      - "13301:13301/tcp"   # Forwarder data
      - "13302:13302/tcp"   # Forwarder config
    volumes:
      - "/mnt/user/appdata/graylog/data:/usr/share/graylog/data/data"
      - "/mnt/user/appdata/graylog/journal:/usr/share/graylog/data/journal"
      - "/mnt/user/appdata/graylog/etc:/etc/graylog"
    restart: "on-failure"

volumes:
  mongodb_data:
  os_data:
  graylog_data:
  graylog_journal:
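A couple of notes on using this: GRAYLOG_ROOT_PASSWORD_SHA2 is the SHA-256 hash of your web login password (not the password itself), so you can generate it with something like echo -n "yourpassword" | sha256sum. Once the passwords and volume paths are filled in, docker compose up -d brings the stack up, and the web interface is on port 9000.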
Can you clarify what your concern is with “heavy” logging solutions that require a database/Elasticsearch? If you’re worried about system resources, that’s one thing, but if it’s just that it seems “complicated,” I have a Docker Compose file that handles Graylog, OpenSearch, and MongoDB. Just give it a couple of persistent storage volumes and it’s good to go. You can send logs directly to it via syslog or GELF, or set up a Filebeat container to ingest file logs.
There’s a LOT you can do with it once you’ve got your logs into the system, but you don’t NEED to do anything else. Just something to consider!
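If you go the Filebeat route, the config side is minimal. Here’s a rough sketch of a filebeat.yml (the log path, input id, and hostname are placeholders you’d replace with your own). Graylog’s Beats input speaks the Logstash wire protocol, which is why Filebeat’s logstash output points at the Beats port (5044 in the compose file above):

filebeat.inputs:
  - type: filestream
    id: myapp-logs                 # hypothetical input id
    paths:
      - "/var/log/myapp/*.log"     # hypothetical path; mount your real log dir into the container

output.logstash:
  # Graylog's Beats input accepts the Logstash protocol,
  # so Filebeat ships to it with the standard logstash output.
  hosts: ["graylog.example.lan:5044"]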
I’m far from an expert, but it seems to me that if you’re setting up your containers according to best practice, you would only be mapping the specific ports needed for the service, which renders a wayward “open port” useless. If there’s some kind of UI exploit, that’s a different story; perhaps this is why most people suggest not exposing your containerized services to the WAN. If we’re talking about a virus that might affect files, it can only see the files that are mapped into the container, which limits the damage that can be done. If you are exposing sensitive files to your container, it might be worth vetting that container more thoroughly (and making sure you have good backups).
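To make that concrete, here’s a generic sketch of what I mean (the service name, port, and paths are made up): publish only the one port the UI needs, and mount the one host path the service needs, read-only if possible.

services:
  someapp:                    # hypothetical service
    image: "someapp:1.2.3"
    ports:
      - "8080:8080/tcp"       # only the port the web UI actually needs
    volumes:
      # read-only mount: even a compromised container can't modify these files
      - "/mnt/user/media/photos:/photos:ro"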
I always use a version tag, but I don’t spend any time reading release notes for 95% of my containers. I’ll go through and update versions a couple of times a year. If something breaks, at least I know it broke because I updated it, and I can troubleshoot then. The main consideration for me is not accidentally updating and then having a surprise problem to deal with.
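In compose terms it’s just the difference between these two lines (the image is arbitrary):

services:
  app:
    image: "nginx:1.25.4"     # pinned: only changes when I edit this line
    # image: "nginx:latest"   # floating: can change out from under you on the next pull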
I think the main page has been untouched for a few years now. I think JDM went all in on the forum and Discord and stopped focusing on the static webpage.
The Discord is active. There is some problem with the hosting (I don’t remember the details), but for the time being they are recommending people use the Internet Archive to find information posted on the forum.
I think this is just a terminology difference. The documentation says that “Add Ons” are not supported in Container and Core, but “Add Ons” means the easy button you press to install those services. All of those Add On services are just containers that HAOS manages for you. Every single one of them can be set up as a container manually and function the same as the official “Add Ons.”
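As a rough illustration (image tags and paths are just examples), the official Mosquitto broker Add On is more or less equivalent to running the broker as its own container next to Home Assistant:

services:
  homeassistant:
    image: "ghcr.io/home-assistant/home-assistant:stable"
    volumes:
      - "/mnt/user/appdata/homeassistant:/config"
    network_mode: "host"      # Home Assistant generally wants host networking for device discovery
    restart: "unless-stopped"
  mosquitto:                  # stands in for the official "Mosquitto broker" Add On
    image: "eclipse-mosquitto:2"
    ports:
      - "1883:1883/tcp"       # MQTT
    volumes:
      - "/mnt/user/appdata/mosquitto:/mosquitto"
    restart: "unless-stopped"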
I don’t know for sure, but I wonder if the reason for this is that it’s not technically possible for a container to manage other external containers. Does anybody know about this?
To be fair, Add Ons are just other containers. If you’re using a Docker install for Home Assistant, I think the idea is that you already have a handle on your Docker host and you’re capable of adding whatever other containers you might need.
I have server2 (which replaced server1). I also have ‘nvr1’.
telegraf is so easy to use and extend
Definitely… you can write custom scripts that Telegraf will run and write that data to Influx. For instance, I have one that writes the Gateway status information from pfSense so I can track and graph any internet downtime.
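For anyone curious about the mechanism: it’s Telegraf’s inputs.exec plugin, which runs your script on an interval and ingests whatever it prints (influx line protocol is the usual format). Here’s a rough compose-side sketch, with the relevant config summarized in comments (the paths and script name are mine/hypothetical):

services:
  influxdb:
    image: "influxdb:2.7"
    volumes:
      - "/mnt/user/appdata/influxdb:/var/lib/influxdb2"
  telegraf:
    image: "telegraf:1.30"
    depends_on:
      - influxdb
    volumes:
      # telegraf.conf would contain something like:
      #   [[inputs.exec]]
      #     commands = ["/scripts/pfsense_gateways.sh"]   # hypothetical script
      #     data_format = "influx"
      - "/mnt/user/appdata/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro"
      - "/mnt/user/appdata/telegraf/scripts:/scripts:ro"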
CPU/RAM/Disk/Network etc. get written to InfluxDB via Telegraf and visualized with Grafana.
Logging and errors go to a Graylog stack (MongoDB, OpenSearch, Graylog).
Unraid
If you look at the markings on the baffle in the T320, it’s marked to indicate the second CPU as well as the second bank of RAM slots. I think it’s safe to say it’s identical.
Is there not a native Nextcloud app for your tablet?
That doesn’t make any sense… the fact that it’s only used in part of the world makes it even more useful for the bot to define it.
Is there a reason why your bot doesn’t define CSAM?
I use a Graylog/OpenSearch/MongoDB stack to log everything. I spent a good amount of time writing parsers for each source, but the benefit is that everything is normalized, which makes searching easier. I’m happy with it as a solution!
That’s like adding insult to injury… Docker Desktop is already way worse than running on Linux!