
Stop restarting Kubernetes pod

Is there any way to stop a pod from restarting again and again when the container inside it fails?

A simple way to reproduce this is to make the container exit with a non-zero status, e.g. by running exit 1 as its command:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: alpine:latest
    command: ['sh', '-c', 'exit 1']   # exit with a non-zero status so the container fails

Note: the restartPolicy: Never option is not supported in my setup.
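
To confirm the restart loop, you can apply the manifest and watch the pod. These are standard kubectl commands; the file name pod.yaml is just an assumed name for the manifest above:

# Apply the manifest and watch the pod: with the default restartPolicy the
# RESTARTS count keeps growing and the status eventually shows CrashLoopBackOff.
kubectl apply -f pod.yaml
kubectl get pod myapp-pod --watch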

You will need to change the restart policy of the pod:

A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always. restartPolicy applies to all Containers in the Pod. restartPolicy only refers to restarts of the Containers by the kubelet on the same node. Exited Containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, and is reset after ten minutes of successful execution. As discussed in the Pods document, once bound to a node, a Pod will never be rebound to another node.

See the Kubernetes documentation on Pod lifecycle and restart policy for more details.
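
For a bare Pod (not one managed by a Deployment or StatefulSet, whose pod templates only allow Always), a minimal sketch of the same manifest with the restart policy changed might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  # Never (or OnFailure) stops the kubelet from restarting the container
  # after it exits; the pod then ends up in the Failed phase instead of looping.
  restartPolicy: Never
  containers:
  - name: myapp-container
    image: alpine:latest
    command: ['sh', '-c', 'exit 1']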

You're using a StatefulSet, which is responsible for restarting the pod if one of its containers fails. If your container is a sidecar and it doesn't matter whether it fails, you can wrap the container's command so that it always exits 0; the StatefulSet then never detects a failure. For example, you can use this command:

exit 1 &   # your container's real command goes here, run in the background
PID=$!
wait $PID  # wait for the command to finish, ignoring its exit status
exit 0     # always report success so the controller never sees a failure
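
A rough sketch of how such a wrapper could be inlined in the pod spec; the entrypoint /app/sidecar.sh is a made-up placeholder for the real sidecar command:

  containers:
  - name: sidecar
    image: alpine:latest
    # Run the real command in the background, wait for it, and always
    # exit 0 so the kubelet never sees the container as failed.
    command: ['sh', '-c', '/app/sidecar.sh & PID=$!; wait $PID; exit 0']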
