Running Docker as a syslog-ng destination fails
I have a Vagrant-created VM running stock Ubuntu Trusty 64, with one host CPU allocated to it. Within that VM, I have a Docker image running stock Python 3.4.3:
FROM python:3.4.3-slim
ENTRYPOINT ["/usr/local/bin/python"]
When I execute an arbitrary Python script:
import time

while True:
    time.sleep(1)
Like this:
sudo docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py
Everything is fine: the container runs and just sits there doing very little. If I add print statements to the Python script, their output is sent to stdout as expected.
I also have syslog-ng installed in that VM, and my intention is to use my containerized Python script to act as a syslog-ng destination:
source s_foo {
    unix-stream("/dev/log");
};
destination d_foo {
    program("'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py'");
};
log {
    source(s_foo);
    destination(d_foo);
};
But when I reload the config, syslog-ng consumes about 20% of the VM's CPU and 100% of the host's CPU, and the container never gets created (running sudo docker ps -a yields no containers). Running sudo syslog-ng-ctl stats tells me that it is trying to execute the program:
dst.program;d_foo#0;'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py';a;dropped;0
dst.program;d_foo#0;'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py';a;processed;2
dst.program;d_foo#0;'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py';a;stored;0
My feeling is that because syslog-ng is using 20% of the VM's CPU but 100% of the host's, it's I/O bound and the VM is working extra hard to keep up. To that end I tried consuming and flushing stdin and stdout in the Python script, but as far as I can tell it isn't even getting as far as the script, since the container is never created.
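For reference, a minimal sketch of what such a stdin-consuming, flushing script might look like (the handle() helper is hypothetical, not taken from the asker's actual test.py):

```python
import sys

def handle(line):
    # Hypothetical handler: echo each log line back out and flush
    # immediately, so the pipe from syslog-ng's program() destination
    # never fills up and blocks the sender.
    sys.stdout.write(line)
    sys.stdout.flush()

if __name__ == "__main__":
    # syslog-ng writes one log message per line to the program's stdin.
    for line in sys.stdin:
        handle(line)
```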
So my next thought was that there must be some combination of docker's -a, -d, -i, and -t flags that I've not tried, but I'm sure I have tried every permissible combination to no avail.
What have I missed?
If you start syslog-ng in the foreground (syslog-ng -Fedv), you can see that syslog-ng starts and stops the program destination in a loop, which causes the 100% CPU spinning. After investigating the problem locally: you should use the program destination without the single quotes ('):

program("sudo docker run -i -v /scripts:/scripts python-test /scripts/test.py");

Br,
Micek
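Applied back to the question's own image and mount paths, the corrected destination block would look something like this (a sketch; the only change from the original config is dropping the inner single quotes around the command):

```
destination d_foo {
    program("docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py");
};
```

With the quotes present, syslog-ng treats the whole quoted string as the executable name, fails to exec it, and retries in a tight loop, which matches the observed CPU spin and the missing container.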