My project uses the Flask and Celery libraries, and I have deployed the application on AWS ECS Fargate. Here are the two Dockerfiles, one for Flask and one for Celery.
# Flask Dockerfile
FROM python:3.6
RUN apt-get update -y
RUN pip3 install pipenv
ENV USER dockeruser
RUN useradd -ms /bin/bash $USER
ENV APP_PATH /home/$USER/my_project
RUN mkdir -p $APP_PATH
COPY . $APP_PATH
WORKDIR $APP_PATH
RUN chown -R $USER:$USER $APP_PATH
RUN pipenv install --system --deploy
USER $USER
EXPOSE 5000
CMD gunicorn run:my_app -b 0.0.0.0:5000 -w 4
# Celery Dockerfile
FROM python:3.6
RUN apt-get update -y
RUN pip3 install pipenv
ENV USER dockeruser
RUN useradd -ms /bin/bash $USER
ENV APP_PATH /home/$USER/my_project
RUN mkdir -p $APP_PATH
COPY . $APP_PATH
WORKDIR $APP_PATH
RUN chown -R $USER:$USER $APP_PATH
RUN pipenv install --system --deploy
USER $USER
CMD celery -A celery_tasks.celery worker -l INFO -E --autoscale=2,1 -Q apple,ball,cat
Both Dockerfiles are the same for the Celery and Flask applications. Is there a way to create a common base image for both Dockerfiles? I am using AWS ECR to store the Docker images.
You can start a Dockerfile FROM any image you want, including one you built yourself. If you built the Flask image as

docker build -t me/flaskapp .

then you can build a derived image that just overrides its CMD:

FROM me/flaskapp
CMD celery -A celery_tasks.celery worker -l INFO -E --autoscale=2,1 -Q apple,ball,cat
If you prefer, you can have a base image that includes the source code but no default CMD. Since you can't un-EXPOSE a port, this has the minor advantage that it doesn't look like your Celery worker has a network listener. ("Expose" as a verb means almost nothing in modern Docker, though.)

FROM me/code-base
EXPOSE 5000
CMD gunicorn run:my_app -b 0.0.0.0:5000 -w 4
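The shared base image itself could be the question's Dockerfile with the EXPOSE and CMD lines removed (a sketch; the me/code-base tag is an assumed name, built with docker build -t me/code-base .):

```dockerfile
# Shared base image: dependencies and source code, but no default
# command. Both the Flask and Celery images can derive FROM this.
FROM python:3.6
RUN apt-get update -y
RUN pip3 install pipenv
ENV USER dockeruser
RUN useradd -ms /bin/bash $USER
ENV APP_PATH /home/$USER/my_project
RUN mkdir -p $APP_PATH
COPY . $APP_PATH
WORKDIR $APP_PATH
RUN chown -R $USER:$USER $APP_PATH
RUN pipenv install --system --deploy
USER $USER
```

Each application then needs only a two- or three-line Dockerfile on top of this.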
@Frank's answer suggests a Docker Compose path. If you're routinely using Compose you might prefer that path, though note that Compose doesn't have an easy way to build multiple images in the correct dependency order. All of the ways to run a container have a way to specify an alternate command (from extra docker run options through a Kubernetes pod command: setting), so this isn't an especially limiting approach. Conversely, in a CI environment you can generally specify multiple things to build in sequence, but you'll probably want to use an ARG to specify the image tag.
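For example, the derived Dockerfiles could take the base image tag as a build argument (a sketch; the BASE_IMAGE argument name and the me/code-base default are assumptions, not part of the original setup):

```dockerfile
# Derived Celery image parameterized on the base tag, so CI can
# substitute an ECR tag via --build-arg BASE_IMAGE=...
ARG BASE_IMAGE=me/code-base
FROM ${BASE_IMAGE}
CMD celery -A celery_tasks.celery worker -l INFO -E --autoscale=2,1 -Q apple,ball,cat
```

A CI pipeline would then build in sequence, e.g. docker build -t me/code-base . followed by docker build --build-arg BASE_IMAGE=me/code-base -t me/celeryworker -f Dockerfile.celery .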
I think you can use docker-compose (https://docs.docker.com/compose/). You can specify more than one service inside a docker-compose YAML config file and run them based on the same Docker image.

One example, test.yaml:
version: '2.0'
services:
  web:
    image: sameimage
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis  # assumes a redis service is defined elsewhere in the file
    command: ["gunicorn", "run:my_app", "-b", "0.0.0.0:5000", "-w", "4"]
  celery:
    image: sameimage
    command: ["celery", "-A", "celery_tasks.celery", "worker"]
volumes:
  logvolume01: {}
You can run it by:
docker-compose -f test.yaml -p sameimage up --no-deps
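If you also want Compose to build the shared image instead of pulling a prebuilt one, a build: section on one service could do it (a sketch; the "." build context is an assumption, and only one service should carry the build: key so the image is built once):

```yaml
# Sketch: Compose builds "sameimage" from the web service's context,
# and the celery service reuses the already-built image by tag.
version: '2.0'
services:
  web:
    build: .
    image: sameimage
    command: ["gunicorn", "run:my_app", "-b", "0.0.0.0:5000", "-w", "4"]
  celery:
    image: sameimage
    command: ["celery", "-A", "celery_tasks.celery", "worker"]
```

The caveat from the other answer still applies: Compose offers no ordering guarantee for building one image FROM another, so a shared base image is best built outside Compose.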