
Run two Python scripts with docker compose

My folder structure looked like this:

[image: screenshot of the folder structure]

My Dockerfile looked like this:

FROM python:3.8-slim-buster

WORKDIR /src

COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

COPY src/ .

CMD [ "python", "main.py"] 

When I ran these commands:

docker build --tag FinTechExplained_Python_Docker .
docker run free

my main.py file ran and gave the correct print statements as well. Now I have added another file, tests.py, in the src folder. I want to run tests.py first and then main.py.

I tried modifying the CMD in my Dockerfile like this:

CMD [ "python", "test.py"]  && [ "python", "main.py"]

but then it only gives me the print statements from the first file, test.py.

I read about docker-compose and added this docker-compose.yml file to the root folder:

version: '3'

services:
  tests:
    image: free
    command: >
      /bin/sh -c 'python tests.py'

  main:
    image: free
    command: >
      /bin/sh -c 'python main.py'

Then I changed my Dockerfile by removing the CMD:

FROM python:3.8-slim-buster

WORKDIR /src

COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

COPY src/ .

Then I ran the following commands:

docker compose build
docker compose run tests
docker compose run main

When I run these commands separately, I get the correct print statements for both tests and main. However, I am not sure if I am using docker-compose correctly or not.

  • Am I supposed to run both scripts separately? Or is there a way to run one after the other using a single docker command?
  • What is my Dockerfile supposed to look like if I am running the Python scripts from the docker-compose.yml instead?

Edit:

Ideally I'm looking for solutions based on docker-compose.

In the Bourne shell, in general, you can run two commands in sequence by putting && between them. It sounds like you're already aware of this.

# without Docker, at a normal shell prompt
python test.py && python main.py

The Dockerfile CMD has two syntactic forms. The JSON-array form does not run a shell, and so it is slightly more efficient and has slightly more consistent escaping rules. If it's not a JSON array then Docker automatically runs it via a shell. So for your use you can use the shell form:

CMD python test.py && python main.py
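If you prefer to keep the JSON-array form, you can spell out the shell yourself; this is equivalent to the shell-form line above:

CMD ["/bin/sh", "-c", "python test.py && python main.py"]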

In comments to other answers you ask about providing this as an override in the docker-compose.yml file. Compose will not normally run a shell for you, so you need to explicitly specify it as part of the command: override.

command: /bin/sh -c 'python test.py && python main.py'

Your Dockerfile should generally specify a CMD and the docker-compose.yml often will not include a command:. This makes it easier to run the image in other contexts (via docker run without Compose; in Kubernetes) since you won't have to retype the command every different way you want to run the container. The entrypoint wrapper pattern highlighted in @sytech's answer is very useful in general and it's easy to add to a container that uses a CMD without an ENTRYPOINT; but it requires the Dockerfile to use CMD as a normal well-formed shell command.
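Putting that together for this project, the Compose file can then stay minimal. A rough sketch, assuming the shell-form CMD above is in the Dockerfile and the Dockerfile sits at the project root (the service name main is just a placeholder):

version: '3'

services:
  main:
    build: .
    # no command: override; the image's CMD (python test.py && python main.py) runs as-is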

You have to change CMD to ENTRYPOINT, and run the first script in the background as a daemon using &.

ENTRYPOINT ["/docker_entrypoint.sh"]

docker_entrypoint.sh

#!/bin/bash

set -e

# run the test script in the background
python tests.py &
# replace the shell with main.py so it runs as the main process
exec python main.py
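For this to work, the script also has to be copied into the image and made executable. A minimal sketch, assuming docker_entrypoint.sh sits next to the Dockerfile:

COPY docker_entrypoint.sh /docker_entrypoint.sh
RUN chmod +x /docker_entrypoint.sh
ENTRYPOINT ["/docker_entrypoint.sh"]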

In general, it is a good rule of thumb that a container should run only a single process, and that essential process should be PID 1.

Using an entrypoint can help you do multiple things at runtime and optionally run user-defined commands using exec, as described in the best practices guide.

For example, if you always want the tests to run whenever the container starts, you can run them in the entrypoint and then execute the command defined in CMD.

First, create an entrypoint script (be sure to make it executable with chmod +x):

#!/usr/bin/env bash

# always run tests first
python /src/tests.py

# then run user-defined command
exec "$@"

Then configure the Dockerfile to copy the script and set it as the entrypoint:

#...
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "main.py"]

Then when you build an image from this Dockerfile and run it, the entrypoint will first execute the tests and then run the command that runs main.py.

The command can also still be overridden by the user when running the image, like docker run ... myimage <new command>, which will still result in the entrypoint tests being executed, but the user can change the command being run.
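In Compose terms, the same override goes through command:. A rough sketch, assuming the image built from this Dockerfile is tagged free as in the question; the entrypoint still runs the tests first and then exec's whatever command is given:

version: '3'

services:
  main:
    image: free
    # overrides CMD only; the ENTRYPOINT wrapper still runs tests.py first
    command: ["python", "main.py"]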

You can achieve this by creating a bash script (let's name it entrypoint.sh) containing the Python commands. If you want, you can also run those as background processes.

#!/usr/bin/env bash
set -e

python tests.py
python main.py

Edit your Dockerfile as follows:

FROM python:3.8-slim-buster

# Create work dir
WORKDIR /code
ENV PYTHONPATH=/code

# upgrade pip here if you like
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy code
COPY . .

RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]

In the docker-compose file, add the following line to the service:

entrypoint: [ "./entrypoint.sh" ]
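In context, the service might look roughly like this (the service name main is just a placeholder):

version: '3'

services:
  main:
    build: .
    entrypoint: [ "./entrypoint.sh" ]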

Have you tried this in your docker-compose.yaml?

version: '3'

services:
  main:
    image: free
    command: >
      /bin/sh -c 'python3 tests.py & python3 main.py & wait'

Both scripts will run in the background.

Then run in the terminal:

docker-compose up --build
