Last pod in Kubernetes StatefulSet is ready only after restarting 3 times
I am deploying Cassandra in Kubernetes using Helm. While starting the StatefulSet with, for example, 6 pods, the last pod only starts after restarting 3 times (CrashLoopBackOff: "Back-off restarting failed container"). After those 3 restarts, the pod becomes ready.

Previously I used PodManagementPolicy: OrderedReady and did not face this problem. Because I want to start all the pods at the same time, I switched to PodManagementPolicy: Parallel, and now I face this problem.
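For reference, the setup described above corresponds roughly to a StatefulSet spec like the following sketch (field names follow the Kubernetes `apps/v1` StatefulSet API; the image, labels, and names are placeholders, not taken from the actual Helm chart):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra          # placeholder name
spec:
  serviceName: cassandra
  replicas: 6
  podManagementPolicy: Parallel   # all 6 pods are created at once
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11   # placeholder image/tag
```

With `Parallel`, the controller does not wait for each pod to become Running and Ready before creating the next one, which is what lets all 6 Cassandra nodes try to bootstrap simultaneously.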
You can't start multiple Cassandra instances in parallel. Each Cassandra node has to bootstrap (stream data) and join the cluster one at a time. If a joining node notices that another node is already joining, it will crash (Cassandra stops). This is why you're getting the CrashLoopBackOff message.
I recommend you revert your PodManagementPolicy to OrderedReady and set up a readinessProbe. Example script: https://github.com/instaclustr/cassandra-operator/blob/ace024626c9339650a5a76861f36af48423a35be/docker/cassandra/readiness-probe.sh
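A minimal sketch of the recommended configuration, assuming a Cassandra image that ships `nodetool` (the probe command, image, and timing values here are illustrative assumptions, not copied from the linked script):

```yaml
spec:
  podManagementPolicy: OrderedReady   # pods start one at a time, in order
  template:
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11       # placeholder image/tag
          readinessProbe:
            exec:
              # Report ready only once this node appears as Up/Normal ("UN")
              # in nodetool status; command is illustrative.
              command:
                - /bin/sh
                - -c
                - 'nodetool status | grep -E "^UN\s+$(hostname -i)"'
            initialDelaySeconds: 60
            periodSeconds: 15
            timeoutSeconds: 10
```

With OrderedReady, the StatefulSet controller waits for each pod's readiness probe to pass before creating the next pod, so only one Cassandra node is ever bootstrapping at a time.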