
Convert Spring Boot Tomcat Azure K8s deployment to standalone application

I have created an Azure DevOps project for Java, Spring Boot and Kubernetes as a way to learn about the Azure technology set. It does work: the simple Spring Boot web application is deployed, runs, and is rebuilt if I make code changes.

However, the Spring Boot application uses a very old version of Spring (1.5.7.RELEASE), and it is deployed in a Tomcat server in k8s.

I am looking for some guidance on how to run it as a standalone Spring Boot version 2 application in Kubernetes. My attempts so far have resulted in the deployment timing out after 15 minutes in the Helm upgrade step.

The existing Dockerfile:

FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package

FROM tomcat:8
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY --from=build-env /app/target/*.war /usr/local/tomcat/webapps/ROOT.war

How do I change the Dockerfile to build the image of a standalone Spring Boot app?

I changed the pom to generate a jar file, then modified the Dockerfile to this:

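For reference, a minimal sketch of the pom.xml change (the exact edit is not shown in the question; this assumes a standard Spring Boot Maven setup, and the `ROOT` final name is an assumption chosen to match the `ROOT.jar` copied in the Dockerfile below):

```xml
<!-- Sketch only: switch the packaging from war to jar so `mvn package`
     produces an executable Spring Boot jar. -->
<packaging>jar</packaging>

<build>
  <!-- Assumption: name the artifact ROOT so the Dockerfile's
       COPY of target/ROOT.jar finds it. -->
  <finalName>ROOT</finalName>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
```

The `spring-boot-maven-plugin` repackages the jar so it is runnable with `java -jar`, which is what the new ENTRYPOINT relies on.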
FROM maven:3.5.2-jdk-8 AS build-env
WORKDIR /app
COPY . /app
RUN mvn package

FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY --from=build-env  /app/target/ROOT.jar .
RUN ls -la
ENTRYPOINT ["java","-jar","ROOT.jar"]

This builds; see the output from the log for the 'Build an image' step:

...
2019-06-25T23:33:38.0841365Z Step 9/20 : COPY --from=build-env  /app/target/ROOT.jar .
2019-06-25T23:33:41.4839851Z  ---> b478fb8867e6
2019-06-25T23:33:41.4841124Z Step 10/20 : RUN ls -la
2019-06-25T23:33:41.6653383Z  ---> Running in 4618c503ac5c
2019-06-25T23:33:42.2022890Z total 50156
2019-06-25T23:33:42.2026590Z drwxr-xr-x    1 root     root          4096 Jun 25 23:33 .
2019-06-25T23:33:42.2026975Z drwxr-xr-x    1 root     root          4096 Jun 25 23:33 ..
2019-06-25T23:33:42.2027267Z -rwxr-xr-x    1 root     root             0 Jun 25 23:33 .dockerenv
2019-06-25T23:33:42.2027608Z -rw-r--r--    1 root     root      51290350 Jun 25 23:33 ROOT.jar
2019-06-25T23:33:42.2027889Z drwxr-xr-x    2 root     root          4096 May  9 20:49 bin
2019-06-25T23:33:42.2028188Z drwxr-xr-x    5 root     root           340 Jun 25 23:33 dev
2019-06-25T23:33:42.2028467Z drwxr-xr-x    1 root     root          4096 Jun 25 23:33 etc
2019-06-25T23:33:42.2028765Z drwxr-xr-x    2 root     root          4096 May  9 20:49 home
2019-06-25T23:33:42.2029376Z drwxr-xr-x    1 root     root          4096 May 11 01:32 lib
2019-06-25T23:33:42.2029682Z drwxr-xr-x    5 root     root          4096 May  9 20:49 media
2019-06-25T23:33:42.2029961Z drwxr-xr-x    2 root     root          4096 May  9 20:49 mnt
2019-06-25T23:33:42.2030257Z drwxr-xr-x    2 root     root          4096 May  9 20:49 opt
2019-06-25T23:33:42.2030537Z dr-xr-xr-x  135 root     root             0 Jun 25 23:33 proc
2019-06-25T23:33:42.2030937Z drwx------    2 root     root          4096 May  9 20:49 root
2019-06-25T23:33:42.2031214Z drwxr-xr-x    2 root     root          4096 May  9 20:49 run
2019-06-25T23:33:42.2031523Z drwxr-xr-x    2 root     root          4096 May  9 20:49 sbin
2019-06-25T23:33:42.2031797Z drwxr-xr-x    2 root     root          4096 May  9 20:49 srv
2019-06-25T23:33:42.2032254Z dr-xr-xr-x   12 root     root             0 Jun 25 23:33 sys
2019-06-25T23:33:42.2032355Z drwxrwxrwt    2 root     root          4096 May  9 20:49 tmp
2019-06-25T23:33:42.2032656Z drwxr-xr-x    1 root     root          4096 May 11 01:32 usr
2019-06-25T23:33:42.2032945Z drwxr-xr-x    1 root     root          4096 May  9 20:49 var
2019-06-25T23:33:43.0909881Z Removing intermediate container 4618c503ac5c
2019-06-25T23:33:43.0911258Z  ---> 0d824ce4ae62
2019-06-25T23:33:43.0911852Z Step 11/20 : ENTRYPOINT ["java","-jar","ROOT.jar"]
2019-06-25T23:33:43.2880002Z  ---> Running in bba9345678be
...

The build completes, but the deployment fails in the Helm upgrade step, timing out after 15 minutes. This is the log:

2019-06-25T23:38:06.6438602Z ##[section]Starting: Helm upgrade
2019-06-25T23:38:06.6444317Z ==============================================================================
2019-06-25T23:38:06.6444448Z Task         : Package and deploy Helm charts
2019-06-25T23:38:06.6444571Z Description  : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running helm commands
2019-06-25T23:38:06.6444648Z Version      : 0.153.0
2019-06-25T23:38:06.6444927Z Author       : Microsoft Corporation
2019-06-25T23:38:06.6445006Z Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/helm-deploy
2019-06-25T23:38:06.6445300Z ==============================================================================
2019-06-25T23:38:09.1285973Z [command]/opt/hostedtoolcache/helm/2.14.1/x64/linux-amd64/helm upgrade --tiller-namespace dev2134 --namespace dev2134 --install --force --wait --set image.repository=stephenacr.azurecr.io/stephene991 --set image.tag=20 --set applicationInsights.InstrumentationKey=643a47f5-58bd-4012-afea-b3c943bc33ce --set imagePullSecrets={stephendockerauth} --timeout 900 azuredevops /home/vsts/work/r1/a/Drop/drop/sampleapp-v0.2.0.tgz
2019-06-25T23:53:13.7882713Z UPGRADE FAILED
2019-06-25T23:53:13.7883396Z Error: timed out waiting for the condition
2019-06-25T23:53:13.7885043Z Error: UPGRADE FAILED: timed out waiting for the condition
2019-06-25T23:53:13.7967270Z ##[error]Error: UPGRADE FAILED: timed out waiting for the condition

2019-06-25T23:53:13.7976964Z ##[section]Finishing: Helm upgrade

Now that I am more familiar with all the technologies involved, I have had another look at this and located the problem.

The helm upgrade step is timing out waiting for the newly deployed pod to become live, but this never happens because the k8s liveness probe defined for the pod is failing. This can be seen with this command:

kubectl get po  -n dev5998 -w
NAME                           READY   STATUS             RESTARTS   AGE
sampleapp-86869d4d54-nzd9f     0/1     CrashLoopBackOff   17         48m
sampleapp-c8f84c857-phrrt      1/1     Running            0          1h
sampleapp-c8f84c857-rmq8w      1/1     Running            0          1h
tiller-deploy-79f84d5f-4r86q   1/1     Running            0          2h

The new pod is repeatedly restarted and then killed. This seems to repeat forever, or until another deployment is run.

In the log for the pod:

kubectl describe po sampleapp-86869d4d54-nzd9f -n dev5998
Events:
  Type     Reason                 Age                    From                               Message
  ----     ------                 ----                   ----                               -------
  Normal   Scheduled              39m                    default-scheduler                  Successfully assigned sampleapp-86869d4d54-nzd9f to aks-agentpool-24470557-1
  Normal   SuccessfulMountVolume  39m                    kubelet, aks-agentpool-24470557-1  MountVolume.SetUp succeeded for volume "default-token-v72n5"
  Normal   Pulling                39m                    kubelet, aks-agentpool-24470557-1  pulling image "devopssampleacreg.azurecr.io/devopssamplec538:52"
  Normal   Pulled                 39m                    kubelet, aks-agentpool-24470557-1  Successfully pulled image "devopssampleacreg.azurecr.io/devopssamplec538:52"
  Normal   Created                37m (x3 over 39m)      kubelet, aks-agentpool-24470557-1  Created container
  Normal   Started                37m (x3 over 39m)      kubelet, aks-agentpool-24470557-1  Started container
  Normal   Killing                37m (x2 over 38m)      kubelet, aks-agentpool-24470557-1  Killing container with id docker://sampleapp:Container failed liveness probe.. Container will be killed and recreated.
  Warning  Unhealthy              36m (x6 over 38m)      kubelet, aks-agentpool-24470557-1  Liveness probe failed: HTTP probe failed with statuscode: 404
  Warning  Unhealthy              34m (x12 over 38m)     kubelet, aks-agentpool-24470557-1  Readiness probe failed: HTTP probe failed with statuscode: 404
  Normal   Pulled                 9m25s (x12 over 38m)   kubelet, aks-agentpool-24470557-1  Container image "devopssampleacreg.azurecr.io/devopssamplec538:52" already present on machine
  Warning  BackOff                4m10s (x112 over 34m)  kubelet, aks-agentpool-24470557-1  Back-off restarting failed container

So the URLs served by the application must differ depending on how it is deployed, Tomcat or standalone, which now seems obvious: the probes get a 404 from the standalone jar.
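One way to reconcile this (a sketch under assumptions, not a verified fix for this chart) is to point the chart's probes at a path the standalone jar actually serves, for example the Spring Boot 2 actuator health endpoint, which requires `spring-boot-starter-actuator` on the classpath. The field names below follow the standard Kubernetes probe schema; the port and delay values are assumptions:

```yaml
# Sketch of the deployment's probe settings. /actuator/health is the
# Spring Boot 2 default actuator health path (assumes the actuator
# starter is a dependency); 8080 is the default embedded-Tomcat port.
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 60   # give the JVM time to start before probing
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
```

Alternatively, if the chart's probe path cannot be changed, setting `server.servlet.context-path` in application.properties may restore whatever path the probes expected from the old Tomcat ROOT.war deployment.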
