
Local Development Best Practices: Java, Docker, Kubernetes

I am trying to figure out the ultimate best practices for using Java in Docker containers deployed with Kubernetes in local/development environments. In the ideal state, Java developers should be able to move as fast as Python/JavaScript developers, but I am having a hard time matching that speed (or even coming close).

At the moment, I have a working, manually deployed k8s cluster. My Java Spring project is built by Maven after a build command is run manually (mvn clean install), then I run a script to build an image, then I run a script to start minikube (if it's not already running), and finally I apply a deployment manifest (which launches the containers into the pods).

What I am missing:

  1. All of this is done manually (there is clear room to automate building an image after the code is compiled and having k8s roll out the new image).
  2. Builds are triggered manually (Python relaunches on code save; to my knowledge there is no equivalent hot reloading in the Java world).
  3. I have yet to see an integration between a local development environment and a cloud-hosted k8s cluster. Ideally, a dev would test locally until they are ready to deploy to the cloud; at that point, it would be awesome to click a button and have a cluster read from a remote registry, pick up the Docker image changes, and reload.

Sadly, Skaffold, the tool I would be excited to use, does not work natively with Java. Is there another tool that Java devs are using to make their local deployments super fast and competitive with the duck-typed languages (Python, JS)?

You can build a Docker image directly from Maven with the docker-maven-plugin (running, e.g., `mvn clean package docker:build`). Add to your pom.xml:

<build>
  <plugins>
    ...
    <plugin>
      <groupId>com.spotify</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <version>VERSION GOES HERE</version>
      <configuration>
        <imageName>example</imageName>
        <dockerDirectory>docker</dockerDirectory>
        <resources>
           <resource>
             <targetPath>/</targetPath>
             <directory>${project.build.directory}</directory>
             <include>${project.build.finalName}.jar</include>
           </resource>
        </resources>
      </configuration>
    </plugin>
    ...
  </plugins>
</build>

I don't know your use case precisely, but deploying a k8s cluster on your dev machine may be overkill. You can test your Docker images with Docker Compose.
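For example, a minimal docker-compose.yml to run the image built by the Maven plugin next to a database might look like this (the image name matches the pom.xml above; the database, ports, and credentials are assumptions):

```yaml
# Sketch: docker-compose.yml for local testing (service names and ports are illustrative)
version: "3.8"
services:
  app:
    image: example          # the imageName configured in the Maven plugin
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev-only-password
```

A single `docker-compose up` then brings up the service and its dependency together.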

My take on your development workflow:

  • Like @Ortomala Lokni mentioned, use docker-maven-plugin to build Docker images directly from your Maven build.
  • You can use https://github.com/fabric8io/fabric8-maven-plugin to deploy directly to a Kubernetes cluster.
  • If your cluster is hosted in the cloud, your build machine should be able to reach the k8s API server. For that you might need SSH tunnels and bastion hosts, depending on whether your cloud cluster's API server is publicly available.
  • Look at minikube for a local k8s test cluster; even recent versions of Docker Desktop have a simple k8s server built in.
  • I have not used Skaffold, but a quick look at its documentation suggests it should also work for you, as it takes over the basic functions of watching your code, kicking off a Docker build, and deploying to k8s. These functions remain the same across languages. That said, the two plugins above integrate building the Docker image and deploying to k8s into your Maven workflow.
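As a sketch of the fabric8 option (the version shown is illustrative; check the project's documentation for a current one):

```xml
<!-- fabric8-maven-plugin: builds the image and generates/applies k8s manifests -->
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>4.4.1</version>
</plugin>
```

With this in place, `mvn fabric8:build fabric8:deploy` should build the image and deploy it to whichever cluster your kubeconfig points at.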

You mention Python/JS as being fast, but note that even for those languages the basic steps remain the same: build a Docker image, push it to a repository, update the k8s deployment.
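Sketched as a CI job (GitLab CI syntax here; the registry, image, deployment, and container names are all assumptions), those steps are language-agnostic:

```yaml
# Hypothetical .gitlab-ci.yml job: build, push, roll out
deploy:
  stage: deploy
  script:
    - docker build -t myregistry.com/myimage:$CI_COMMIT_SHORT_SHA .
    - docker push myregistry.com/myimage:$CI_COMMIT_SHORT_SHA
    - kubectl set image deployment/myapp myapp=myregistry.com/myimage:$CI_COMMIT_SHORT_SHA
```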

Hot reloading does also work with Java: in Eclipse, for example, with a Spring Boot based microservice you can use spring-boot-devtools for live reloads and automatic restarts. However, I am not aware of anything that handles live changes inside a running Docker container, and I would steer you away from that: Docker containers are supposed to be immutable.
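For reference, enabling that restart behavior in a Spring Boot project is a single dependency in pom.xml (these are the coordinates published by Spring Boot):

```xml
<!-- spring-boot-devtools: triggers an automatic restart when classes on the classpath change -->
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-devtools</artifactId>
  <scope>runtime</scope>
  <optional>true</optional>
</dependency>
```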

Sorry if I'm late, I'll try to give an answer for future readers, or maybe still for you!

First of all, the Docker build and the deployment to a Kubernetes cluster are two totally different phases of your software supply chain, so let's keep them as separate discussions.

  1. The build process should already be automated: if you need to run mvn clean install manually, you are losing one of the advantages of Docker: building repeatable, immutable software packages that can be delivered everywhere. Just add RUN mvn clean install to your Dockerfile (yes, you need to put Maven in your image first, but there are base images around that do the job for you). Now you should just set up a CI server that builds and pushes images on every repository check-in (I am intentionally skipping quality gates and pipeline workflow; they're up to you to automate). Deployment can also be managed by CI servers; there are a few main approaches:

a) Create a config repository with all the k8s manifests and run kubectl apply from your CI server on every push.

b) Keep the config alongside the microservice it concerns, tag the freshly built image with the commit hash, and at the end of the pipeline run something like kubectl apply -f env.yaml && kubectl set image deployment/myapp myapp=myregistry.com/myimage:${commitHash} (also tag the image as "latest" and reference the latest tag in your deployment spec; it helps reconstruct the current situation after a remove-and-apply of the configuration).

c) Deploy with Helm charts. This is similar to the previous approach, but you can leverage all the advantages of dependency management and deployment templating.
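The self-contained build described in point 1 can be sketched as a multi-stage Dockerfile, so Maven is only present in the build stage and the final image stays slim (the base image tags are assumptions):

```dockerfile
# Stage 1: build the jar inside the image, making the build repeatable anywhere
FROM maven:3-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
# Download dependencies in their own layer so they are cached between builds
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn clean install -DskipTests

# Stage 2: runtime image containing only the JRE and the application jar
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```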

  2. Hot reloads are nice when you do your TDD development but useless when the code is about to be delivered; you would not use them with Node/Python microservices either, because once your code is containerized you should shoot with an AK-47 every developer who tries to touch it. The real big thing here is to automate your integration/delivery/deployment. In my team we just need to open and accept a PR, and the magic happens.

  3. You need to do some debugging/integration between microservices on your laptop. I would not discourage this practice, but it is something done at a frequency for which speed is not so important to productivity. If you want to do it, you can build a "laptop" or "dev" environment with Docker Compose, pulling your dependencies from the registry (to reproduce the current "online" situation) and then building your microservice with its own configuration. Another way is port forwarding, a k8s capability that makes a pod appear connected to your local machine on a well-known port (e.g. `kubectl port-forward svc/my-dependency 8080:8080`, where the service name is illustrative), but that is a headache if there are many dependencies. A third way is to use tools like https://www.telepresence.io/, which promises to run a pod locally and connect it to the cluster with a pair of proxies in both directions.
