
docker stop spark container from exiting

I know Docker only watches PID 1, and if that process exits (or turns into a daemon), it assumes the program has exited and shuts the container down.

When apache-spark is started with the ./start-master.sh script, how can I keep the container running?

I do not think while true; do sleep 1000; done is an appropriate solution.

E.g. I used command: sbin/start-master.sh to start the master, but it keeps shutting down.

How do I keep it running when it is started with docker-compose?
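For reference, a minimal sketch of the compose service the question describes (the image name, paths, and port are illustrative assumptions, not from the original):

version: "2"
services:
  master:
    image: my-spark-image            # hypothetical image with Spark unpacked at /spark
    command: sbin/start-master.sh    # the script daemonizes and returns, so PID 1 exits
    ports:
      - "8080:8080"                  # master web UI

Because start-master.sh forks the master into the background and then exits, PID 1 is gone and the container stops, which is exactly the behavior described above.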

As mentioned in "Use of Supervisor in docker", you could use phusion/baseimage-docker as a base image, in which you can register scripts as "services".

The my_init script included in that image will take care of exit signal management.

The processes launched by start-master.sh would still be running.
Again, that assumes you are building your apache-spark image starting from phusion/baseimage-docker.
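A minimal sketch of that approach, assuming Spark is unpacked at /spark (the service name, paths, and base-image tag are illustrative):

FROM phusion/baseimage:0.11

# Register the Spark master as a runit "service"; my_init supervises it
# and forwards exit signals to it.
RUN mkdir -p /etc/service/spark-master
COPY spark-master.sh /etc/service/spark-master/run
RUN chmod +x /etc/service/spark-master/run

# my_init stays PID 1 for the life of the container.
CMD ["/sbin/my_init"]

Here spark-master.sh would need to run the master in the foreground, for example with exec /spark/bin/spark-class org.apache.spark.deploy.master.Master, so that runit can supervise it.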

As commented by thaJeztah, using an existing image works too: gettyimages/spark/~/dockerfile/. Its default CMD will keep the container running.

Both options are cleaner than relying on a tail -f trick, which won't handle kill/exit signals gracefully.

Here is another solution. Create a file spark-env.sh with the following contents and copy it into the spark conf directory.

SPARK_NO_DAEMONIZE=true

If your CMD in the Dockerfile looks like this:

CMD ["/spark/sbin/start-master.sh"]

the container will not exit.
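Putting the two pieces together, a sketch of the relevant Dockerfile lines, assuming Spark is unpacked at /spark:

# spark-env.sh contains SPARK_NO_DAEMONIZE=true, so start-master.sh
# runs the master in the foreground instead of forking a daemon.
COPY spark-env.sh /spark/conf/spark-env.sh
CMD ["/spark/sbin/start-master.sh"]

With the master running in the foreground it remains PID 1, so the container keeps running until the master itself stops.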

Another option is simply to tail the Spark log file after starting the master:

tail -f -n 50 /path/to/spark/logfile

This will keep the container alive and also provide useful info if you run with -it (interactive mode). You can also run with -d (detached) and it will stay alive.
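A sketch of how that might look in a Dockerfile, assuming Spark is unpacked at /spark and using its default log directory (the glob is illustrative):

CMD /spark/sbin/start-master.sh && tail -f -n 50 /spark/logs/*.out

Note the trade-off mentioned earlier: the Spark master is not PID 1 here, so stop signals hit the shell running tail and are not forwarded to the master gracefully.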
