
Last Pod in a Kubernetes StatefulSet is ready only after restarting 3 times

I am deploying Cassandra in Kubernetes using Helm. When starting the StatefulSet with, for example, 6 pods, the last pod only becomes ready after restarting 3 times (crashloopbackoff: Back-off restarting failed container). After those 3 restarts, the pod is ready. Previously I used PodManagementPolicy: OrderedReady and did not face this problem. I want to start all the pods at the same time, so I set PodManagementPolicy: Parallel, and now I face this problem.

You can't start multiple Cassandra instances in parallel. Each Cassandra node has to bootstrap (stream data) and join the cluster one at a time. If a joining node notices another node is already joining, it will crash (stop Cassandra). This is why you're getting the crashloopbackoff message.

I recommend you revert your PodManagementPolicy to OrderedReady and set up a readinessProbe. Example script: https://github.com/instaclustr/cassandra-operator/blob/ace024626c9339650a5a76861f36af48423a35be/docker/cassandra/readiness-probe.sh
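As a sketch of that setup (the names, image tag, replica count, and probe timings here are illustrative, and the probe assumes a Cassandra image that ships `nodetool`):

```yaml
# Sketch: StatefulSet with OrderedReady plus a readiness probe.
# With OrderedReady, pod N+1 is only created once pod N passes its
# readiness probe, so each node finishes joining before the next starts.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 6
  podManagementPolicy: OrderedReady   # the default; listed here for clarity
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11
          readinessProbe:
            exec:
              # Ready only when this node reports Up/Normal ("UN") in
              # nodetool status, i.e. it has fully joined the ring.
              command:
                - /bin/sh
                - -c
                - nodetool status | grep "^UN.*$(hostname -i)"
            initialDelaySeconds: 60
            periodSeconds: 15
```

The probe is what makes the ordering useful: without it, Kubernetes would consider a pod ready as soon as the container starts, and the next pod could begin joining while the previous one is still bootstrapping.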

