
Kubernetes: Kafka pod shutdown after server stops

I am running a Kafka pod in Kubernetes with Rancher. I am using the stable Helm chart with the Confluent Kafka image 5.3.1, and it is connecting to ZooKeeper properly. I also added SSL encryption to the Helm deployment using this page. Kafka starts properly, then shuts down abruptly and the pod restarts. I am getting this error in the log:

[2019-11-15 19:41:49,943] INFO Terminating process due to signal SIGTERM (org.apache.kafka.common.utils.LoggingSignalHandler)
[2019-11-15 19:41:49,945] INFO Shutting down SupportedServerStartable (io.confluent.support.metrics.SupportedServerStartable)

What is the SIGTERM error in Kafka pods? How can I fix it?

Thank you

This is a liveness and readiness probe problem. While Kafka is reading its topic snapshots at startup, the liveness probe pings it and gets no response, so Kubernetes shuts the Kafka pod down. Remove the liveness and readiness probes.
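If you go this route, a minimal sketch of the values.yaml change, assuming the chart exposes the same livenessProbe/readinessProbe enabled toggles shown in the next answer:

livenessProbe:
  enabled: false    # disable the liveness probe entirely
readinessProbe:
  enabled: false    # disable the readiness probe entirely

Note that with the probes disabled, Kubernetes will no longer restart a genuinely hung broker, so raising the probe delays (see the next answer) is usually the safer fix.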

As mentioned by @Hamzatli, it is about the liveness and readiness probes. K8s decides that your pod is hitting a timeout and sends SIGTERM to the pod so that it shuts down.
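For context, the chart values render into a probe in the pod spec roughly like the sketch below. The TCP check on port 9092 is an assumption for illustration; the actual probe is chart-specific. The kubelet kills the container (SIGTERM) after failureThreshold consecutive probe failures:

livenessProbe:
  tcpSocket:
    port: 9092              # broker listener port (assumed; check your chart)
  initialDelaySeconds: 10   # too short for a broker still loading topic snapshots
  periodSeconds: 10         # probe every 10 seconds
  timeoutSeconds: 5         # each probe must answer within 5 seconds
  failureThreshold: 3       # after 3 consecutive failures the kubelet sends SIGTERM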

In your Helm chart's values.yaml, there should be options for the liveness and/or readiness probes. Increase initialDelaySeconds to a value you consider long enough for Kafka to come up, so that k8s doesn't send the shutdown signal too early during initial startup:

livenessProbe:
  enabled: true
  initialDelaySeconds: 60   # 60 seconds delay for the pod to start liveness probe
  timeoutSeconds: 5
readinessProbe:
  enabled: true
  initialDelaySeconds: 60   # 60 seconds delay for pod to start readiness probe
  timeoutSeconds: 5

You can read more about this here.
