
Mac OS X, Docker: Get container logs out of Docker for Mac VM into an ELK Stack in the same Docker for Mac

Problem:
- ELK Stack (7.6.2) running in Docker for Mac (2.2.0.5)
- Learned from "Docker container log file not found on Mac" that container logs on a Mac are kept inside the Docker for Mac VM (see the sketch after this list).
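
You can see this for yourself: docker inspect prints the path of a container's JSON log file, and on macOS that path exists only inside the VM, not on the Mac filesystem. A minimal sketch, where "mycontainer" is a placeholder container name:

# Print the log file path Docker uses for a container ("mycontainer" is a placeholder)
docker inspect --format '{{.LogPath}}' mycontainer
# Typically prints /var/lib/docker/containers/<id>/<id>-json.log - a path inside the VM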

Question:
How can I continuously get (some of) the container logs out of the VM and into the ELK Stack running in the same Docker for Mac?

Yes, I know this would be much easier on Linux, but currently I only have my Mac.
Yes, I know I could copy the files out of the VM into the normal Mac FS with some Mac magic and then feed them into the ELK stack, but I want to avoid this manual step if possible. Yes, I know I could set up a cron job that does it automatically, but I want to avoid that, too.

So, any idea how to achieve this?

Thanks, Alex

OK, after more research and some trial and error, here is my solution. It may not be the best, but it works.

Steps:
1) Create network:

docker network create elastic  
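
If you want to verify the network was created before attaching containers to it, docker network ls can filter by name:

# Confirm the "elastic" network exists
docker network ls --filter name=elastic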

2) Run Elasticsearch:

docker run -d --name elasticsearch  --network elastic --restart unless-stopped -v /Data/elastic:/usr/share/elasticsearch/data -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.6.2  
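
Elasticsearch takes a moment to start. Once it is up, a quick sanity check against the published port should return a JSON blob with the cluster name and version 7.6.2:

# Sanity check from the Mac side
curl -s http://localhost:9200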

3) Run Kibana:

docker run -d --name kibana --network elastic --restart unless-stopped -p 5601:5601 kibana:7.6.2
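
This works without further configuration because the official Kibana image points at http://elasticsearch:9200 by default, which the elastic network resolves to the container from step 2. If your Elasticsearch container had a different name, you could (as a sketch, with "my-elasticsearch" as a hypothetical name) set the address explicitly via the ELASTICSEARCH_HOSTS environment variable:

# Hypothetical variant: point Kibana at a differently named Elasticsearch container
docker run -d --name kibana --network elastic --restart unless-stopped -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://my-elasticsearch:9200 kibana:7.6.2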

4) Create a logstash.conf:

input {
  # Listen for syslog messages sent by Docker's syslog log driver
  syslog {
    port => 9500
    type => "docker"
  }
}

filter {
  # Parse Apache-style timestamps, if a "timestamp" field is present
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  # Ship events into the "docker_logs" index on the Elasticsearch container
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "docker_logs"
  }
}
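
Before running Logstash for real, you can ask it to just parse the config and exit; a sketch using the same image and mount as step 5:

# Dry-run: validate logstash.conf without starting the pipeline
docker run --rm -v $PWD/logstash.conf:/usr/share/logstash/config/logstash.conf \
  logstash:7.6.2 bin/logstash -f /usr/share/logstash/config/logstash.conf --config.test_and_exit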

5) Run Logstash:

docker run -d --rm -v $PWD/logstash.conf:/usr/share/logstash/config/logstash.conf -p 9500:9500 --name logstash --network elastic logstash:7.6.2 bin/logstash -f /usr/share/logstash/config/logstash.conf
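
Logstash also needs some time to boot. To confirm the syslog input is listening, watch the container logs and probe the published port from macOS:

# Watch Logstash start up; the syslog input logs once it is listening on 9500
docker logs -f logstash

# Probe the published port (nc ships with macOS)
nc -vz localhost 9500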

6) Run a container:

docker run --log-driver syslog --log-opt syslog-address=tcp://1.2.3.4:9500 alpine echo hello world

Here, 1.2.3.4 is the IP of the Docker host.
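
Once the hello-world container has run, you can check that the message actually landed in Elasticsearch by querying the docker_logs index directly:

# Search the index for the test message; expect at least one hit
curl -s 'http://localhost:9200/docker_logs/_search?q=hello&pretty'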

7) Open Kibana:

open http://1.2.3.4:5601

8) Do the Kibana stuff I wanted to learn (e.g. create an index pattern, as sketched below).
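
Part of that Kibana stuff is creating an index pattern for docker_logs. This can be done in the UI (Management → Index Patterns) or, as a sketch assuming Logstash's default @timestamp field, via Kibana's saved objects API:

# Create an index pattern "docker_logs*" with @timestamp as the time field
curl -X POST 'http://localhost:5601/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"attributes":{"title":"docker_logs*","timeFieldName":"@timestamp"}}'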

OK, that's my solution...
