
Docker: stop Spark container from exiting

I know Docker only watches PID 1, and if that process exits (or turns into a daemon), it considers the program finished and shuts the container down.

When Apache Spark is started with the ./start-master.sh script, how can I keep the container running?

I do not think while true; do sleep 1000; done is an appropriate solution.

E.g. I used the command sbin/start-master.sh to start the master, but the container keeps shutting down.

How to keep it running when started with docker-compose?

As mentioned in "Use of Supervisor in docker", you could use phusion/baseimage-docker as a base image, in which you can register scripts as "services".

The my_init script included in that image will take care of exit-signal management.

The processes launched by start-master.sh would still be running.
Again, this assumes you are building your apache-spark image on top of phusion/baseimage-docker.
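
A minimal sketch of that setup, assuming Spark is unpacked under /spark (adjust the paths and the baseimage tag to your build); start-master.sh is registered as a runit service, and SPARK_NO_DAEMONIZE keeps the master in the foreground so the supervisor can manage it:

# Dockerfile sketch: supervise the Spark master under my_init/runit
FROM phusion/baseimage:0.11
# ... install Java and unpack Spark under /spark here ...
RUN mkdir -p /etc/service/spark-master
# runit expects an executable "run" script that execs a foreground process
RUN printf '#!/bin/sh\nexport SPARK_NO_DAEMONIZE=true\nexec /spark/sbin/start-master.sh\n' \
      > /etc/service/spark-master/run \
 && chmod +x /etc/service/spark-master/run
# my_init runs as PID 1, reaps children, and forwards signals to the services
CMD ["/sbin/my_init"]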

As commented by thaJeztah, using an existing image works too: gettyimages/spark/~/dockerfile/. Its default CMD will keep the container running.

Both options are cleaner than relying on a tail -f trick, which won't handle the kill/exit signals gracefully.

Here is another solution. Create a file spark-env.sh with the following contents and copy it into the spark conf directory.

SPARK_NO_DAEMONIZE=true

If your CMD in the Dockerfile looks like this:

CMD ["/spark/sbin/start-master.sh"]

the container will not exit.
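
For illustration, a minimal sketch assuming Spark is installed under /spark and the spark-env.sh file (containing SPARK_NO_DAEMONIZE=true) sits next to the Dockerfile:

# Dockerfile sketch: the env file keeps start-master.sh in the foreground
COPY spark-env.sh /spark/conf/spark-env.sh
CMD ["/spark/sbin/start-master.sh"]

And since the question mentions docker-compose, a minimal service definition for that image could look like this (the service name and ports are just the usual Spark master defaults):

version: "3"
services:
  spark-master:
    build: .
    ports:
      - "8080:8080"   # master web UI
      - "7077:7077"   # master RPC port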

Another option is to end your start command with a tail on the Spark log file:

tail -f -n 50 /path/to/spark/logfile

This keeps the container alive and also shows useful log output when you run it with -it (interactive mode). It also stays alive when you run it with -d (detached).
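
As a concrete sketch, assuming Spark is installed under /spark and writes its master log into /spark/logs (the default location), the last line of the Dockerfile could be:

CMD /spark/sbin/start-master.sh && tail -f /spark/logs/spark-*Master*.out

start-master.sh daemonizes and returns, and tail then becomes the long-running foreground process that keeps PID 1 alive. As noted above, signals sent to the container reach the tail rather than the Spark process, so shutdown is not graceful with this approach.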
