Recently I have been trying to find the best Docker logging mechanism using the ELK stack, and I have some questions about the workflows that companies use in production. Our system has a typical software stack including Tomcat, PostgreSQL, MongoDB, Nginx, RabbitMQ, Couchbase, etc., and it currently runs on a CoreOS cluster. Please find my questions below.
This is a subjective question, but I am sure this is a problem that people solved long ago, and I am not keen on re-inventing the wheel.
Good questions, and the answer, as in many other cases, is "it depends".
Shipping logs - we run rsyslog inside Docker containers internally, and logstash-forwarder in some cases. The advantage of logstash-forwarder is that it encrypts and compresses the logs, which matters in some setups. I find rsyslog to be very stable and light on resources, so we use it as the default shipper. The full Logstash might be too heavy for small machines (some more details about Logstash: http://logz.io/blog/5-logstash-pitfalls-and-how-to-avoid-them/ ).
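For reference, pointing rsyslog at a central listener is a one-line config; this is a minimal sketch in which the hostname and port are placeholders, not something from the answer above:

```
# /etc/rsyslog.d/60-forward.conf
# "@@" forwards over TCP; a single "@" would use UDP instead.
*.* @@logs.example.com:5514
```

The shipper container then only needs this file plus access to the log sources it should read.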
We're also fully dockerized and use a separate Docker container for each rsyslog/lumberjack instance. This makes them easy to maintain, upgrade, and move around when needed.
Yes, definitely use Redis. I wrote a blog post about how to build a production ELK stack ( http://logz.io/blog/deploy-elk-production/ ) in which I describe what I find to be the right architecture for deploying ELK in production.
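In that architecture, Redis sits between the shippers and the Logstash indexers as a buffer. On the indexer side the pipeline config might look like this; a sketch only, assuming shippers push events onto a Redis list named `logstash` on a host called `redis-broker` (both names are placeholders):

```
input {
  redis {
    host      => "redis-broker"   # placeholder broker hostname
    data_type => "list"
    key       => "logstash"       # Redis list the shippers push to
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]  # placeholder Elasticsearch address
  }
}
```

The buffer lets you restart or upgrade the indexers without dropping events, since shippers keep writing to Redis in the meantime.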
I am not sure what exactly you are trying to achieve with that.
HTH
As of Aug 2015, Docker has a "logging driver" mechanism, so you can ship logs to other destinations. These are the supported ways to ship logs remotely.
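For example, a container's stdout/stderr can be sent straight to a syslog endpoint instead of the default json-file driver; a sketch, where the address and image name are placeholders:

```
docker run --log-driver=syslog \
    --log-opt syslog-address=tcp://192.168.0.42:514 \
    my-app
```

Note that with a non-default driver, `docker logs` on that container no longer works, since the output is no longer stored locally as JSON.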
I would recommend against putting the logging forwarder into each Docker image. That adds unneeded complexity and bloat to your Docker containers. A cleaner solution is to put the log forwarder (the latest log forwarder from Elastic being Filebeat, which replaces logstash-forwarder) into its own container and mount the host machine's /var/lib/docker directory as a volume for that container:
docker run --detach --name=docker-filebeat -v /var/lib/docker:/var/lib/docker
/var/lib/docker contains the logs for every container running on the host's Docker daemon. The data in the log files in this directory is the same data you would get from running docker logs <container_id> on each container.
Then, in the filebeat.yml configuration file, put:
filebeat:
  prospectors:
    -
      paths:
        - /var/lib/docker/containers/*/*.log
Then configure Filebeat to forward to the rest of your ELK stack and start the container. All the Docker container logs on that machine will be forwarded to your ELK stack automatically.
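The forwarding part of filebeat.yml is just an output section; a minimal sketch, assuming a Logstash endpoint listening on the hypothetical address logstash:5044 with the Beats input enabled:

```
output:
  logstash:
    hosts: ["logstash:5044"]   # placeholder Logstash host and port
```

Filebeat can alternatively write straight to Elasticsearch via an `elasticsearch` output block, if you don't need Logstash processing in between.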
The cool thing about this approach is that it allows you to forward the rest of the host system's logs as well if you want to. Simply add another volume pointing to the host log files you want to forward and add that path to your filebeat.yml config as well.
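Concretely, if you also mounted the host's /var/log into the Filebeat container (e.g. with an extra `-v /var/log:/var/log/host:ro` flag; the /var/log/host mount point is a made-up example), the prospector paths could be extended like this:

```
filebeat:
  prospectors:
    -
      paths:
        - /var/lib/docker/containers/*/*.log
        - /var/log/host/*.log   # host logs via the extra volume mount
```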
I find this method cleaner and more flexible than alternatives such as the Docker logging drivers, because the rest of your Docker setup stays the same. You don't have to add logging-driver flags to each docker run command (or to the Docker daemon parameters).