
Java docker container with embedded or standalone tomcat?

Currently I have a Tomcat web server hosting multiple .war microservices (if it matters: Spring Boot applications). When upgrading an application, I'm using Tomcat's parallel deployment feature by adding a myapp##005.war, myapp##006.war, etc. to achieve zero-downtime deployment.

I'd like to dockerize the applications. But what suits Java web service applications best?

Is it better to package a war file directly into the container, so that each redeployment requires a new Docker container? Or should Tomcat run as a container without applications and mount the war files from a shared host folder (thus allowing redeployment without recreating the Docker image)?

I can think of the following three possibilities:

  • Run each war file as a jar with embedded Tomcat instead, each in its own Docker container? Then each app is decoupled, but I can't use the parallel deployment feature anymore, as I have to kill the jar before another can take its place. If this is the best approach, the question is: how could I still get zero-downtime deployment with Docker containers?
  • Run each war file in a standalone Tomcat, each in its own Docker container? Each app would then be decoupled and could still make use of parallel deployment. But I'd have to launch a dedicated Tomcat web server for each application in every Docker container, which might hurt host system performance?
  • Run a single standalone Tomcat as a Docker container and place all *.war files in a shared folder for deployment? Here I could still use the parallel deployment feature. But isn't this against the idea of Docker? Shouldn't the war application be packed inside the container? Performance and resource requirements would probably be best here, as this requires only a single Tomcat.

Which approach suits Java microservices best?

Deploying a single jar per Docker container is definitely the best approach. As you mentioned, low coupling is something you want from a microservice. Rolling deployments, canary releases, etc. can easily be done with container orchestration tools like Docker Swarm and Kubernetes.
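
As a rough sketch (the eclipse-temurin base image and the myapp.jar name are just placeholders for your own build output), each service could be packaged into its own image with a minimal Dockerfile like this:

# Minimal per-service image: one Spring Boot fat jar with embedded Tomcat.
# Base image and jar name are assumptions - adjust them to your build.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/myapp.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]

Each new application version then becomes a new image tag (e.g. example-image:1.0, example-image:1.1), and the orchestrator replaces containers instead of hot-deploying wars into a shared Tomcat.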

If you want to play around with these concepts, Docker Swarm is fairly easy:

In your compose file:

version: '3'

services:
    example:
        build: .                 # note: 'docker stack deploy' ignores build; build and tag the image first
        image: example-image:1.0
        ports:
            - "8080:8080"
        networks:
            - mynet
        deploy:
            replicas: 6
            update_config:
                parallelism: 2
                delay: 10s
            restart_policy:
                condition: on-failure

# networks referenced by services must also be declared at the top level
networks:
    mynet:

The deploy part in your compose file is all Docker Swarm needs.

  • replicas means that 6 instances of your application will be deployed in the swarm
  • parallelism means that 2 instances will be updated at the same time (instead of all 6 at once)
  • Between update batches there will be a 10-second grace period (delay)

There are lots of other things you can do. Take a look at the documentation.

If you update your service, there will be no downtime, as Docker Swarm will serve all requests through the 4 containers that will still be running.
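
For example, assuming the stack is named example and a new image example-image:1.1 has been built, deploying and then rolling out an update would look roughly like this:

# One-time setup: make the Docker host a single-node swarm
docker swarm init

# Deploy the stack from the compose file above
docker stack deploy -c docker-compose.yml example

# Roll out a new version; Swarm replaces 2 containers at a time with a 10s delay
docker service update --image example-image:1.1 example_example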

I don't recommend Docker Swarm in a production environment, but it's a great way to play around with the concepts of container orchestration.

Kubernetes' learning curve is quite steep. If you're in the cloud (AWS, for example), services like EKS, Fargate, etc. can take a lot of that complexity away for you.
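
For comparison, the Kubernetes counterpart of Swarm's update_config is the rolling-update strategy on a Deployment. A minimal sketch (all names and the image tag are placeholders) could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
    name: example
spec:
    replicas: 6
    selector:
        matchLabels:
            app: example
    strategy:
        type: RollingUpdate
        rollingUpdate:
            maxUnavailable: 2    # at most 2 of the 6 pods are replaced at a time
            maxSurge: 0
    template:
        metadata:
            labels:
                app: example
        spec:
            containers:
                - name: example
                  image: example-image:1.0
                  ports:
                      - containerPort: 8080

Kubernetes then keeps at least 4 pods serving traffic during an update, which gives the same zero-downtime behaviour described above.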
