

How to share a jar amongst several Kubernetes containers?

My application is a Java application that we run in a dockerized Tomcat. We have four applications and have created four containers. We have a common library for authentication; in each container we copy this jar file into the /lib folder, and the applications work fine.

But whenever the jar file changes, we need to rebuild and redeploy all of the containers. Is there a way to share the jar file among the 4 containers so that we don't have to rebuild and redeploy all 4 containers, and only need to update the jar?

In effect, we want to share the Tomcat lib folder across containers in Kubernetes, so that whenever the jar file changes, the change is automatically replicated to all containers.

This is not standard practice and you shouldn't do it. It is also operationally tricky.

Docker images are generally self-contained and include all of their dependencies, in your case including the repeated jar file. In a Kubernetes cluster running software under active development, you should make sure every image has a unique image tag, perhaps a timestamp or a source-control commit ID, typically assigned by your build system. Then you update your Deployment with the new image tag; this triggers Kubernetes to pull the new images and restart the containers.
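For example, a Deployment spec could pin each release by tag like this (the names `myapp` and `registry.example.com` are invented placeholders, not anything from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # A unique, build-assigned tag; never a mutable tag like "latest".
          image: registry.example.com/myapp:20200228
```

You can roll out a new build either declaratively, by editing the tag in this file and re-applying it, or imperatively with something like `kubectl set image deployment/myapp myapp=registry.example.com/myapp:20200229`.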

This means that if you see a pod running an image tagged 20200228, you know exactly what software is in it, including the shared jar, and you can test exactly that image outside your cluster. If you discover something has gone wrong, perhaps even in the shared jar, you can change the Deployment's tag back to 20200227 to get yesterday's build while you fix the problem.

If you're hand-deploying jar files somehow and mounting them as volumes into pods, you lose all of this: you have to restart pods by hand to see the new jar files, you can't test images offline without manually injecting the jar file, and if there is an issue you have multiple things you need to try to revert by hand.


As far as the mechanics go, you would need some sort of Volume that can be read by multiple pods concurrently, and either written to from outside the cluster or writable by a single pod. The discussion of PersistentVolumes has the concept of an access mode, so you need something that's ReadOnlyMany (and externally writable) or ReadWriteMany. Depending on the environment available to you, your only option might be an NFS server. That's possible, but it's one additional piece to maintain, and you'd have to set it up yourself outside the cluster infrastructure.
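If you do go down this road anyway, an NFS-backed shared volume might look roughly like this (the server address, export path, and size are invented placeholders, and the NFS server itself must exist outside the cluster):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-lib-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany          # multiple pods may mount and read it concurrently
  nfs:
    server: nfs.example.com # NFS server you maintain outside the cluster
    path: /exports/shared-lib
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-lib-pvc
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""      # bind to the pre-provisioned PV, not a dynamic class
  resources:
    requests:
      storage: 1Gi
```

Each pod would then mount the claim read-only at Tomcat's lib directory via `volumes` and `volumeMounts` in its pod spec, but all of the drawbacks described above still apply.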
