
kubectl wait sometimes times out unexpectedly

I just added `kubectl wait --for=condition=ready pod -l app=appname --timeout=30s` as the last step of a Bitbucket Pipeline, to report a deployment failure if the new pod somehow produces an error.
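For context, the step can be sketched in `bitbucket-pipelines.yml` roughly like this (the step name and the `kubectl apply` line are illustrative assumptions; only the `kubectl wait` command comes from the question):

```yaml
pipelines:
  default:
    - step:
        name: Deploy and verify
        script:
          # Apply the manifest (assumed filename), then fail the step
          # if the new pods don't become Ready within the timeout.
          - kubectl apply -f deployment.yaml
          - kubectl wait --for=condition=ready pod -l app=appname --timeout=30s
```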

I've realized that the wait isn't really consistent. Sometimes it times out even though the new pod from the new image doesn't produce any errors and reaches the Ready state.

I've tried changing deployment.yaml or pushing a newer image every time to test this; the result is inconsistent.

BTW, I believe `kubectl rollout status` isn't suitable here; I think it just returns once the deployment is done, without waiting for the pods to be ready.

Note that there isn't much difference if I change the timeout from `30s` to `5m`, since the apply or rollout restart is almost instant.
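Since the timeouts appear to be transient rather than real failures, one workaround (not from the question; the helper name and retry count are made up) is to retry the wait a few times before failing the pipeline step:

```shell
#!/bin/sh
# Hypothetical helper: run a command up to N times, returning success as
# soon as it succeeds, so a single transient `kubectl wait` timeout does
# not fail the whole pipeline step.
wait_with_retry() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i/$attempts failed, retrying..." >&2
    i=$((i + 1))
  done
  return 1
}

# Usage in the pipeline step (command taken from the question):
# wait_with_retry 3 kubectl wait --for=condition=ready pod -l app=appname --timeout=30s
```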

  • kubectl version: 1.17
  • AWS EKS: latest 1.16

I'm posting this answer for better visibility: as noted in the comments, this indeed solves some problems with `kubectl wait` behavior.

I managed to replicate the issue and got some timeouts when my client version was older than the server version. You have to match your client version with the server's for `kubectl wait` to work properly.
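One way to catch this skew before it bites is to compare the minor versions reported by `kubectl version -o json`, which includes both a `clientVersion` and a `serverVersion` object. A small sketch (the function name is mine; the JSON field names match kubectl's actual output, where EKS minor versions can carry a `+` suffix):

```python
import json

def minor_skew(version_json: str) -> int:
    """Return the absolute difference between the kubectl client's and the
    API server's minor versions, parsed from `kubectl version -o json`."""
    data = json.loads(version_json)

    def minor(info):
        # EKS reports minors like "16+"; keep only the digits.
        return int("".join(ch for ch in info["minor"] if ch.isdigit()))

    return abs(minor(data["clientVersion"]) - minor(data["serverVersion"]))

# In a pipeline you would feed it the real output, e.g.:
#   out = subprocess.check_output(["kubectl", "version", "-o", "json"])
#   assert minor_skew(out) == 0, "client/server version mismatch"
```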

