
Dockerfile and Docker Compose for NestJS app with PSQL DB where env vars are expected at runtime

I'm Dockerizing a simple Node.js (NestJS -- but I don't think that matters for this question) web service and have some questions. This service talks to a Postgres DB. I would like to write a Dockerfile that can be used to build an image of the service (let's call it my-service) and then write a docker-compose.yml that defines a service for the Postgres DB as well as a service for my-service that uses it. That way I can build images of my-service on their own, but also have a Docker Compose config for running the service and its DB together. I think that's the way to do this (keep me honest, though). Kubernetes is not an option for me, just FYI.

The web service has a top-level directory structure like so:

my-service/
    .env
    package.json
    package-lock.json
    src/
    <lots of other stuff>

It's critical to note that in its present, non-containerized form, you have to set several environment variables ahead of time, including the Postgres DB connection info (host, port, database name, username, password, etc.). The application code fetches the values of these env vars at runtime and uses them to connect to Postgres.
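
Concretely, the runtime lookup just reads process.env. A minimal sketch (the PG_* names here are illustrative, not necessarily the service's real variable names):

// Sketch of the runtime env lookup; the PG_* names are assumptions.
const dbConfig = {
  host: process.env.PG_HOST,
  port: parseInt(process.env.PG_PORT ?? '5432', 10),
  database: process.env.PG_DATABASE,
  username: process.env.PG_USER,
  password: process.env.PG_PASSWORD,
};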

So, I need a way to write a Dockerfile and docker-compose.yml such that:

  • if I'm just running a container of the my-service image by itself, and want to tell it to connect to any arbitrary Postgres DB, I can pass those env vars in as (ideally) runtime arguments on the Docker CLI command (remembering, however, that the app expects them to be set as env vars); and
  • if I'm spinning up my-service and its Postgres together via the Docker Compose file, I need to also specify those as runtime args in the Docker Compose CLI, then Docker Compose needs to pass them on to the container's run arguments, and then the container needs to set them as env vars for the web service to use

Again, I think this is the correct way to go, but keep me honest!

So my best attempt -- a total WIP so far -- looks like this:

Dockerfile

FROM node:18

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

# creates "dist" to run out of
RUN npm run build

# ideally the env vars are already set at this point via
# docker CLI arguments, so nothing to pass in here (???)
CMD [ "node", "dist/main.js" ]

docker-compose.yml

version: '3.7'

services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${psql.password}
      POSTGRES_USER: ${psql.user}
      POSTGRES_DB: my-service-db
      PG_DATA: /var/lib/postgresql2/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql2/data
  my-service:
    container_name: my-service
    image: ???  any way to say "build what's in the repo"?
    environment:
      ??? do I need to set anything here so it gets passed to the my-service
          container as env vars?
volumes:
  pgdata:

Can anyone help nudge me over the finish line here? Thanks in advance!

??? do I need to set anything here so it gets passed to the my-service container as env vars?

Yes, you should pass the variables there. This is a principle of 12-factor design.

need to also specify those as runtime args in the Docker Compose CLI, then Docker Compose needs to pass them on to the container's run arguments

If you don't put them directly in the YAML, will this option work for you?

docker-compose --env-file app.env up

Ideally, you also put

depends_on:
  - postgres

So that when you start your service, the database will also start up.
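
One caveat: depends_on only waits for the postgres container to start, not for the database inside it to accept connections. If your service races the database at startup, the usual fix is a healthcheck plus a start condition. A sketch (condition: service_healthy requires a Compose version that supports it, such as the modern docker compose CLI):

services:
  postgres:
    image: postgres:14.3
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d my-service-db"]
      interval: 5s
      timeout: 5s
      retries: 5
  my-service:
    depends_on:
      postgres:
        condition: service_healthy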

If you want to connect to a different database instance, then you can either create a separate compose file without that database, or use a different set of variables (written out, or using env_file , as mentioned)
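
Note that these are two different mechanisms: docker-compose --env-file app.env makes the values available for ${...} interpolation inside the YAML itself, while the env_file directive injects them directly into the container's environment. The latter looks like this (a sketch, reusing the app.env name from above):

services:
  my-service:
    build: .
    env_file:
      - app.env   # each VAR=value line becomes an env var in the container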

Or you can use the npm dotenv or config packages and load different .env files for different database environments, based on other variables such as NODE_ENV, at runtime.
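
A minimal sketch of that approach, assuming the dotenv package and a .env.<NODE_ENV> file-naming convention (both are assumptions, not something from the question):

// Load a per-environment file. By default dotenv does NOT override
// variables already present in the environment, so container env vars win.
import * as dotenv from 'dotenv';

const envFile = `.env.${process.env.NODE_ENV ?? 'development'}`;
dotenv.config({ path: envFile });

console.log(`loaded ${envFile}; PG_HOST=${process.env.PG_HOST}`);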

??? any way to say "build what's in the repo"?

Use the build directive instead of the image directive.
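
In its simplest form, assuming the Dockerfile sits next to docker-compose.yml:

  my-service:
    build: .   # build the image from this directory's Dockerfile
    # or, spelled out:
    # build:
    #   context: .
    #   dockerfile: Dockerfile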

Kubernetes is not an option for me, just FYI

You could use Minikube instead of Compose... it doesn't really matter here, but kompose exists to convert a Docker Compose file into k8s resources.

Your Dockerfile is correct. You can specify the environment variables when doing docker run like this (note that all options must come before the image name):

docker run --name my-service -it -e PG_USER='user' -e PG_PASSWORD='pass' \
  -e PG_HOST='dbhost' -e PG_DATABASE='dbname' --expose <port> <image>

Or you can specify the environment variables with the help of an env file. Let's call it app.env. Its content would be:

PG_USER=user
PG_PASSWORD=pass
PG_DATABASE=dbname
PG_HOST=dbhost
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval

Now, instead of specifying multiple -e options to the docker run command, you can simply give the name of the file from which the environment variables should be picked up:

docker run --name my-service -it --env-file app.env --expose <port> <image>

In order to run postgres and your service with a single docker compose command, a few modifications need to be made in your docker-compose.yml. Let's first look at the full YAML:

version: '3.7'

services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PGDATA: /var/lib/postgresql/data   # the postgres image reads PGDATA, not PG_DATA
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
  my-service:
    container_name: my-service
    build: .   # instead of the image directive, use build to tell Docker which folder to build
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres   #note the name of the postgres service in compose yaml
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:

Now you can use the docker compose up command to run the services. If you wish to rebuild the my-service image each time, you can pass the optional --build flag, like this: docker compose up --build.
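
If you want to confirm that the variables actually reached the container, something like this works once the stack is up:

docker compose exec my-service env | grep PG_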

In order to pass the environment variables from the CLI, the most convenient way is an env file (they can also come from the shell environment). In the case of your docker-compose.yml, the app.env would look like:

PG_USER=user
PG_PASSWORD=pass
#PG_DATABASE=dbname   #not required as you're using 'my-service-db' as db name in compose file
#PG_HOST=dbhost       #not required as service name of postgres in compose file is being used as db host
OTHER_ENV_VAR1=someval
OTHER_ENV_VAR2=anotherval

Passing this app.env file via the docker compose CLI would look like this:

docker compose --env-file app.env up --build

PS: If you're rebuilding my-service each time just so code changes show up in the Docker container, you could use a bind mount instead. The updated docker-compose.yml in that case would look like this:

version: '3.7'

services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: $PG_PASSWORD
      POSTGRES_USER: $PG_USER
      POSTGRES_DB: my-service-db
      PGDATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
  my-service:
    container_name: my-service
    build: .
    volumes:
      - .:/usr/src/app    #note the use of volumes here
    environment:
      PG_USER: $PG_USER
      PG_PASSWORD: $PG_PASSWORD
      PG_HOST: postgres
      PG_DATABASE: my-service-db
      OTHER_ENV_VAR1: $OTHER_ENV_VAR1
      OTHER_ENV_VAR2: $OTHER_ENV_VAR2
    depends_on:
      - postgres
volumes:
  pgdata:

This way, you don't need to run docker compose build each time, making a code change in the source folder would get reflected in the docker container.
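
One caveat with that bind mount: mounting . over /usr/src/app also hides the node_modules and dist directories that were built into the image. A common workaround (a sketch, assuming the default NestJS start:dev watch script) is to add an anonymous volume for node_modules and run the watch server instead of the built output:

  my-service:
    build: .
    command: npm run start:dev        # watch mode; assumes the standard NestJS script
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules     # keep the image's installed node_modules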

You just need to put the path of your Dockerfile's directory in the build parameter of the docker-compose.yaml file, and list all the environment variables under environment.

version: '3.7'

services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${PSQL_PASSWORD}   # note: compose variable names can't contain dots
      POSTGRES_USER: ${PSQL_USER}
      POSTGRES_DB: my-service-db
      PGDATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
  my-service:
    container_name: my-service
    build: path_to_your_dockerfile_directory
    environment:
      your_environment_variables_here
volumes:
  pgdata:

I am guessing that you have a folder structure like this:

project_folder/
    docker-compose.yaml
    my-service/
        Dockerfile
        .env
        package.json
        package-lock.json
        src/
        <lots of other stuff>

and your .env contains the following:

API_PORT=8082
Environment_var1=Environment_var1_value
Environment_var2=Environment_var2_value 

So in your case your docker-compose file should look like this:

version: '3.7'

services:
  postgres:
    container_name: postgres
    image: postgres:14.3
    environment:
      POSTGRES_PASSWORD: ${PSQL_PASSWORD}
      POSTGRES_USER: ${PSQL_USER}
      POSTGRES_DB: my-service-db
      PGDATA: /var/lib/postgresql/data
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
  my-service:
    container_name: my-service
    build: ./my-service/   # build, not image: point it at the folder containing the Dockerfile
    environment:
      - API_PORT=8082
      - Environment_var1=Environment_var1_value
      - Environment_var2=Environment_var2_value 
volumes:
  pgdata:

FYI: for this Docker configuration, your database connection host should be postgres (as per the service name), not localhost.
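
For example, a connection URL assembled from those variables would look something like this (illustrative values):

postgres://user:pass@postgres:5432/my-service-db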
