
What is the best practice for deploying my application on my VPS using Docker?

I have a Python Flask application that I want to deploy to my VPS using GitLab CI and Docker.

On my server I want to have a production version and a staging version of my application. Both of them require a MongoDB connection.

My plan is to automatically build the application on GitLab and push it to GitLab's Docker Registry. If I want to deploy the application to staging or production, I do a docker pull, docker rm and docker run.

The plan is to store the config (e.g. secret_key) in .production.env (and .staging.env) and pass it to the application using docker run --env-file ./env.list

I already have MongoDB installed on my server, and both environments of the application will use the same MongoDB instance, but with different database names (configured in the .env files).
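
Concretely, the deploy step would look roughly like this (the image, container, and variable names are placeholders):

# .staging.env — example contents
SECRET_KEY=change-me
MONGO_DB=myapp_staging

# pull the new image, replace the running container, and pass the env file
docker pull registry.gitlab.com/<group>/<project>:latest
docker stop myapp-staging && docker rm myapp-staging
docker run -d --name myapp-staging \
  --env-file ./.staging.env \
  -p 5001:5000 \
  registry.gitlab.com/<group>/<project>:latest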

Is that the best practice for deploying my application? Do you have any recommendations? Thanks!

Here's the setup that has worked reasonably well for me across different organizations and project sizes:

To build:

  1. The applications are located in a git repository (GitLab in your case). Each application brings its own Dockerfile.
  2. I use Jenkins for building; you can, of course, use any other CI/CD tooling. Jenkins pulls the application's repository, builds the Docker image, and publishes it to a private Docker registry (Nexus, in my case). A rough sketch of these two steps follows this list.
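
A minimal sketch, assuming a Flask app served by gunicorn and a registry at nexus.example.com (both placeholders):

# Dockerfile shipped with each application (assumes gunicorn is in requirements.txt)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]

# what the CI job effectively runs afterwards
# (a docker login against the registry is required before the push)
docker build -t nexus.example.com/myteam/flask-app:1.0 .
docker push nexus.example.com/myteam/flask-app:1.0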

To deploy:

  1. I have one central, application-independent repository that holds a docker-compose file (or possibly multiple files that extend one central file for different environments). This file contains all service definitions and references the Docker images in my Nexus registry; a trimmed-down example follows this list.
  2. If I am using secrets, I store them in a HashiCorp Vault instance. Jenkins pulls them, and writes them into an .env file. The docker-compose file can reference the individual environment variables.
  3. Jenkins pulls the docker-compose repo and, in my case via scp, uploads the docker-compose file(s) and the .env file to my server(s).
  4. It then triggers a docker-compose up (for smaller applications) or re-deploys a docker stack into a swarm (for larger applications).
  5. Afterwards, Jenkins removes the uploaded files from the target server(s) again.
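
A trimmed-down sketch of steps 1 to 4; file contents, host names, and variable names are placeholders:

# .env — written by Jenkins from the Vault secrets (step 2)
SECRET_KEY=change-me
MONGO_DB=myapp_production

# docker-compose.yml (step 1) — references the image in the private registry
# and passes the variables from .env into the container
version: "3.8"
services:
  web:
    image: nexus.example.com/myteam/flask-app:1.0
    environment:
      - SECRET_KEY=${SECRET_KEY}
      - MONGO_DB=${MONGO_DB}
    ports:
      - "80:5000"

# steps 3 and 4, as run from Jenkins
scp docker-compose.yml .env deploy@my-vps:/srv/myapp/
ssh deploy@my-vps "cd /srv/myapp && docker-compose pull && docker-compose up -d"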

If you like, you can do step 3 via Docker Machine. I feel, however, that its benefits don't warrant the extra setup in my cases.

One thing I can recommend, as I've done it in production several times, is to deploy Docker Swarm with TLS-encrypted endpoints. This link talks about how to secure the swarm via certificates. It's a bit of work, but it will allow you to define services for your applications.

The services, once online, can have multiple replicas, and whenever you update a service (i.e. deploy a new image) the swarm takes care of making sure at least one replica stays online at all times.

docker service update <service name> --image <new image name>
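
For example, a minimal sketch (the service and image names are placeholders):

# one-time setup on the manager node
docker swarm init

# create a replicated service from the registry image
docker service create \
  --name flask-app \
  --replicas 2 \
  --publish published=80,target=5000 \
  registry.gitlab.com/<group>/<project>:1.0

# rolling update; by default the swarm replaces replicas one at a time
docker service update flask-app --image registry.gitlab.com/<group>/<project>:1.1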

Some VPS providers actually offer Kubernetes as a service (like DigitalOcean). If yours does, that is preferable. GitLab has an Auto DevOps feature and can remotely manage your Kubernetes cluster, but you could also deploy manually with kubectl.
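
A minimal manual-deploy sketch, assuming the image lives in GitLab's registry and all names below are placeholders:

# create a deployment and expose it
kubectl create deployment flask-app \
  --image=registry.gitlab.com/<group>/<project>:staging
kubectl expose deployment flask-app --port=80 --target-port=5000

# roll out a new image; substitute <container> with the container name
# shown by: kubectl describe deployment flask-app
kubectl set image deployment/flask-app <container>=registry.gitlab.com/<group>/<project>:production
kubectl rollout status deployment/flask-app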
