
How to expose multiple docker-compose pipelines on available ports?

I have a docker-compose pipeline that orchestrates a few containers: nginx on port 8000 serves static resources and routes all other requests to a backend container running Django on Gunicorn. I am trying to launch multiple instances of this pipeline. The only abstraction I've managed to find is the -p option, which sets a project-name identifier. However, that alone can't spin up a second instance of the pipeline, since port 8000 is already bound by the first one. What would be the best option to achieve this?

I don't think that replicas or scale is what I am looking for.

I want to be able to access <IP_ADDRESS>:8001 for docker-compose instance 1, <IP_ADDRESS>:8002 for docker-compose instance 2 and so on. Is this even a good approach?

Unfortunately in this scenario the django app is not designed to handle multiple users for the web app. Hence the need for an alternate strategy for multiplexing connections and linking a user to an entire docker-compose pipeline execution.
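One way to get exactly the `<IP_ADDRESS>:8001`, `<IP_ADDRESS>:8002` scheme described above is to parameterize the published host port in the compose file and pick a different value per project. This is a sketch: the variable name `WEB_PORT` and the service name `nginx` are assumptions, not taken from the actual compose file.

```yaml
# docker-compose.yml (excerpt) -- publish nginx on a host port taken
# from the WEB_PORT environment variable, defaulting to 8000.
services:
  nginx:
    ports:
      - "${WEB_PORT:-8000}:8000"   # host:container
```

Each instance is then launched as its own project with its own port, e.g. `WEB_PORT=8001 docker-compose -p instance1 up -d` and `WEB_PORT=8002 docker-compose -p instance2 up -d`; the `-p` flag keeps the containers, networks, and volumes of each instance separate, while the variable keeps their published ports from colliding.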

If you've got Eureka/Zuul, I think --scale would be the best option: it will round-robin load-balance your requests, and the port each replica runs on becomes irrelevant.

If you want to specify the port number, then I think you'd have to manually set the published port number, or script a solution that looks for the next available port in a sequence. I think this is a crappy approach, however.
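The "next available port in a sequence" idea can be sketched as a small helper that probes ports by trying to bind them; you'd run it before `docker-compose up` to pick the host port. This is an illustration, not an existing tool, and the starting port 8000 matches the setup in the question.

```python
import socket


def find_free_port(start: int = 8000, limit: int = 100) -> int:
    """Return the first TCP port >= start that nothing is bound to locally."""
    for port in range(start, start + limit):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("0.0.0.0", port))  # succeeds only if the port is free
                return port
            except OSError:
                continue  # port already in use; try the next one
    raise RuntimeError(f"no free port found in [{start}, {start + limit})")


if __name__ == "__main__":
    # Print the port so a wrapper script can do e.g.:
    #   WEB_PORT=$(python find_port.py) docker-compose -p instanceN up -d
    print(find_free_port())
```

Note that this is racy by nature: another process could grab the port between the probe and `docker-compose up`, which is part of why the answer above calls the approach crappy.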
