In looking for an app to view logs that doesn’t require a lot of overhead, I stumbled upon Logwatch. After running it through its paces, it seems pretty capable, covering everything from Docker and fail2ban to system logs.

I got to wondering if there are other log viewers I could try in the same genre. Logwatch doesn’t create pretty graphics and dialed-out dashboards, but it’s fairly quick, and it lets me view a range of dates and times across a variety of logs.
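
For example, something along these lines pulls the last week of sshd activity to stdout (the service name and range here are just an example; the exact syntax varies a bit by distro):

logwatch --service sshd --range 'between -7 days and -1 days' --detail High --output stdout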

I checked out GoAccess, but it seemed geared towards web-related logs like webpage hits, etc. Other options that require Elasticsearch, databases, and so on just seemed too heavy for my application.

Anyone have any suggestions? So far, Logwatch does what it says on the tin, but I’m curious what others have tried or still use.

ETA: Thanks all for the recommends. I’m still going over a couple of them, but lnav seems like what I’m looking for.

  • tko@tkohhh.social · 19 hours ago

    Can you clarify what your concern is with “heavy” logging solutions that require a database/Elasticsearch? If you’re worried about system resources, that’s one thing, but if it’s just that it seems “complicated,” I have a docker compose file that handles Graylog, OpenSearch, and MongoDB. Just give it a couple of persistent storage volumes and it’s good to go. You can send logs directly to it with syslog or GELF, or set up a Filebeat container to ingest file logs.

    There’s a LOT you can do with it once you’ve got your logs into the system, but you don’t NEED to do anything else. Just something to consider!
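
    If you go the Filebeat route, a minimal filebeat.yml sketch looks roughly like this; the log paths and hostname are placeholders, and 5044 matches the Beats port in my compose file:

    filebeat.inputs:
      - type: filestream
        id: system-logs                 # arbitrary id for this input
        paths:
          - /var/log/syslog
          - /var/log/auth.log

    # Graylog's Beats input speaks the Beats/Logstash protocol
    output.logstash:
      hosts: ["graylog.example.lan:5044"]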

    • irmadlad@lemmy.world (OP) · 18 hours ago

      If you’re worried about system resources that’s one thing

      My thoughts were that, even tho I know Graylog et al. are fantastic apps, if I could get away with something light like Logwatch or lnav that lets me read logs fairly easily while staying lighter on resources, I could channel those resources to other projects. I’m working from a remote VPS with 32 GB RAM, so yes, I can run the big apps, and I know just enough about Docker that it’s not way over my head as far as complexity goes. This particular VPS has only one user, so I’m not generating tons of user logs, etc. IDK, it all made sense when I was thinking about it. LOL. I do like a nice, dialed-out UI tho.

      I have a docker compose file that handles Graylog, Opensearch, and Mongodb

      I certainly would like the opportunity to take a look at it, maybe run it on my test server and see how it does.

      'presh

      • tko@tkohhh.social · 16 hours ago

        Here you go. I commented out what is not necessary. There are some passwords noted that you’ll want to set to your own values. Also, pay attention to the volume mappings… I left my values in there, but you’ll almost certainly need to change those to make sense for your host system. Hopefully this is helpful!

        services:
          mongodb:
            image: "mongo:6.0"
            volumes:
              - "/mnt/user/appdata/mongo-graylog:/data/db"
        #      - "/mnt/user/backup/mongodb:/backup"
            restart: "on-failure"
        #    logging:
        #      driver: "gelf"
        #      options:
        #        gelf-address: "udp://10.9.8.7:12201"
        #        tag: "mongodb"
        
          opensearch:
            image: "opensearchproject/opensearch:2.13.0"
            environment:
              - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
              - "bootstrap.memory_lock=true"
              - "discovery.type=single-node"
              - "action.auto_create_index=false"
              - "plugins.security.ssl.http.enabled=false"
              - "plugins.security.disabled=true"
              - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=[yourpasswordhere]"
            ulimits:
              nofile: 64000
              memlock:
                hard: -1
                soft: -1
            volumes:
              - "/mnt/user/appdata/opensearch-graylog:/usr/share/opensearch/data"
            restart: "on-failure"
        #    logging:
        #      driver: "gelf"
        #      options:
        #        gelf-address: "udp://10.9.8.7:12201"
        #        tag: "opensearch"
        
          graylog:
            image: "graylog/graylog:6.2.0"
            depends_on:
              opensearch:
                condition: "service_started"
              mongodb:
                condition: "service_started"
            entrypoint: "/usr/bin/tini -- wait-for-it opensearch:9200 --  /docker-entrypoint.sh"
            environment:
              GRAYLOG_TIMEZONE: "America/Los_Angeles"
              TZ: "America/Los_Angeles"
              GRAYLOG_ROOT_TIMEZONE: "America/Los_Angeles"
              GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
              GRAYLOG_PASSWORD_SECRET: "[anotherpasswordhere]"
              GRAYLOG_ROOT_PASSWORD_SHA2: "[aSHA2passwordhash]"
              GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
              GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
              GRAYLOG_ELASTICSEARCH_HOSTS: "http://opensearch:9200/"
              GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"
        
            ports:
            - "5044:5044/tcp"   # Beats
            - "5140:5140/udp"   # Syslog
            - "5140:5140/tcp"   # Syslog
            - "5141:5141/udp"   # Syslog - dd-wrt
            - "5555:5555/tcp"   # RAW TCP
            - "5555:5555/udp"   # RAW UDP
            - "9000:9000/tcp"   # Server API
            - "12201:12201/tcp" # GELF TCP
            - "12201:12201/udp" # GELF UDP
            - "10000:10000/tcp" # Custom TCP port
            - "10000:10000/udp" # Custom UDP port
            - "13301:13301/tcp" # Forwarder data
            - "13302:13302/tcp" # Forwarder config
            volumes:
              - "/mnt/user/appdata/graylog/data:/usr/share/graylog/data/data"
              - "/mnt/user/appdata/graylog/journal:/usr/share/graylog/data/journal"
              - "/mnt/user/appdata/graylog/etc:/etc/graylog"
            restart: "on-failure"
        
        # note: these named volumes aren't referenced by the services above (they use host bind mounts); keep or drop as you prefer
        volumes:
          mongodb_data:
          os_data:
          graylog_data:
          graylog_journal:
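
        One note on the placeholders: GRAYLOG_ROOT_PASSWORD_SHA2 expects the SHA-256 hash of the admin password you’ll log in with, and GRAYLOG_PASSWORD_SECRET is just a long random string (16+ characters). Something like this generates both:

        # SHA-256 hash for GRAYLOG_ROOT_PASSWORD_SHA2
        echo -n 'YourAdminPassword' | sha256sum | cut -d ' ' -f1
        # long random value for GRAYLOG_PASSWORD_SECRET
        openssl rand -hex 48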
        
        • irmadlad@lemmy.world (OP) · 15 hours ago

          Dude! Thanks so much. You’re very generous with your time. I guess now I have no choice and no excuse. I’ll run it up the flagpole sometime this weekend.

          • tko@tkohhh.social · 15 hours ago

            My pleasure! Getting this stuff together can be a pain, so I’m always trying to pay it forward. Good luck and let me know if you have any questions!

    • Xanza@lemm.ee · 5 days ago

      lmao this is exactly what I’ve been lookin for… Thanks! I just knew if I was a lazy fuck and sat on my hands someone would do the work for me eventually!

    • kernel_panic@feddit.uk · 4 days ago

      I can attest to Lnav being great, short of implementing a full Grafana/Loki stack (which is what I use for most of my infrastructure).

      Lnav makes log browsing/filtering in the terminal infinitely more enjoyable.
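
      For anyone who hasn’t tried it, the basic workflow is just pointing it at files or a directory and filtering interactively (the paths below are only examples):

      lnav /var/log/syslog /var/log/auth.log    # open one or more files or directories
      # then, inside lnav:
      #   :filter-in sshd       keep only lines matching a regex
      #   :filter-out DEBUG     hide lines matching a regex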

      • irmadlad@lemmy.world (OP) · 4 days ago

        I can attest to Lnav being great

        I’m sitting here running it through some logs. So far, it’s on top of the stack.

  • tuckerm@feddit.online · 5 days ago

    I installed Grafana, simply because it was the only one I had heard of, and I figured that becoming familiar with it was probably useful from a professional development standpoint.

    It’s definitely massive overkill for my use case, though, and I’m looking to replace it with something else.

    • irmadlad@lemmy.world (OP) · 4 days ago

      I’ll be the first to admit that I’m a sucker for dialed-out dashboards. However, logs are confusing enough for me. LOL. I need just the facts, ma’am. Grafana is a great package tho, useful for a lot of metrics.

    • irmadlad@lemmy.world (OP) · 4 days ago

      It is my understanding that while you can use Dozzle to view other logs besides Docker logs, you have to deploy separate instances. While Dozzle is awesome, I’m not sure I want to spin up 5 or 6 separate Dozzle instances. I do use Dozzle a lot for Docker logs and it’s fantastic for that.
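
      For reference, the usual single-host Dozzle setup is just the container with the Docker socket mounted read-only, which is also why it’s so Docker-centric; a rough sketch:

      services:
        dozzle:
          image: amir20/dozzle:latest
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock:ro   # Dozzle reads container logs via the Docker API
          ports:
            - "8080:8080"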

  • non_burglar@lemmy.world · 5 days ago

    Wow, you just gave me flashbacks to my first Linux/unix job in 2008. Tripwire and logwatch reports to review every morning.

  • fubarx@lemmy.world · 5 days ago

    Saw a posting this past week on SSD drive failures. They’re blaming a lot of it on ‘over-logging’ – writing too much trivial, unnecessary data to logs. I imagine it gets worse when realtime data like OpenTelemetry gets involved.

    Until I saw that, I never thought there was such a thing as ‘too much logging.’ I wonder if there are any ways around it, other than putting logs on spinny disks.

    • MangoPenguin@lemmy.blahaj.zone · 5 hours ago

      That would be wild if it were caused by logging; even a cheap piece-of-crap SSD is usually rated for around 500 TBW. Even if you were generating 1 TB of logs per month, that would still be about 41 years before it wears out.
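
      Spelled out, that back-of-the-envelope estimate is just 500 TBW ÷ (1 TB/month × 12 months/year) ≈ 41.7 years of continuous log writes.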

      My eBay used enterprise SSDs are rated for 3.6 PBW, and they were cheaper than a basic consumer Samsung drive at the time.

    • irmadlad@lemmy.world (OP) · 5 days ago

      Oh, I’m not moving that much data to logs, and the logs I read are all the normal stuff, nothing exotic. I guess if it were a huge corporation that had every Nagios plugin known to man logging and log-rotating around the clock, then yeah, I could see it.