I want to create a Docker image based on an existing one, with some additional Python packages installed. So I'm considering using pip in the Dockerfile to install the extra packages into the image. It looks like I can either install them individually, e.g.:
RUN pip install foo==1.2.*
RUN pip install bar==3.4.*
...
Or put them in requirements.txt
and do something like this:
COPY requirements.txt /opt/app/requirements.txt
WORKDIR /opt/app
RUN pip install -r requirements.txt
I wonder which way is considered better practice, i.e. which will build faster and/or lead to a smaller image. I need an approach that is fast and produces a small image.
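For comparison, the per-package RUN lines above can also be collapsed into a single RUN instruction, which produces one image layer instead of one per package (foo and bar are the placeholder names from the question; --no-cache-dir is my addition to keep pip's download cache out of the layer):

```dockerfile
# One layer for all packages; --no-cache-dir avoids baking pip's cache into the image
RUN pip install --no-cache-dir foo==1.2.* bar==3.4.*
```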
Use an Alpine base image and a multi-stage build. Example:
FROM python:3.7-alpine as base
FROM base as builder
RUN mkdir /install
WORKDIR /install
COPY requirements.txt /requirements.txt
# pip's --install-option was deprecated and later removed; --prefix does the same job
RUN pip install --prefix=/install -r /requirements.txt
FROM base
COPY --from=builder /install /usr/local
COPY src /app
WORKDIR /app
CMD ["gunicorn", "-w", "4", "main:app"]
source: https://blog.realkinetic.com/building-minimal-docker-containers-for-python-applications-37d0272c52f3
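A common variant of the same multi-stage idea (a sketch of my own, not taken from the linked post) installs into a virtualenv instead of a --prefix directory, so the whole environment copies over as a single directory:

```dockerfile
FROM python:3.7-alpine as builder
# Installing into a venv keeps everything under one copyable directory
RUN python -m venv /venv
COPY requirements.txt .
RUN /venv/bin/pip install --no-cache-dir -r requirements.txt

FROM python:3.7-alpine
COPY --from=builder /venv /venv
# Put the venv's binaries first on PATH so its python and gunicorn are used
ENV PATH="/venv/bin:$PATH"
COPY src /app
WORKDIR /app
# Assumes gunicorn is listed in requirements.txt
CMD ["gunicorn", "-w", "4", "main:app"]
```

The final stage never runs pip, so no build tooling or cache ends up in the runtime image.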
This is a complicated question; both options have advantages and disadvantages. Let us compare the methods in terms of computing resources, dependency chains, user-friendliness, etc.
Method 1: Adding the packages to requirements.txt
Method 2: Using pip on the deployed container
Conclusion