I have a bunch of pods in Kubernetes which are completed (successfully or unsuccessfully) and I'd like to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:
NAME                                           READY   STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1     ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1     Running            0          23h
redis-scheduler-dev-master-0                   1/1     Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1     Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1     Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1     Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2     Running            0          36m
snapshot-169-5af87b54                          0/1     Completed          0          20m
snapshot-169-8705f77c                          0/1     Completed          0          1h
snapshot-169-be6f4774                          0/1     Completed          0          1h
snapshot-169-ce9a8946                          0/1     Completed          0          1h
snapshot-169-d3099b06                          0/1     ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1     Completed          0          21m
snapshot-204-7c86df5a                          0/1     Completed          0          1h
snapshot-204-87f35e36                          0/1     ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1     Completed          0          1h
snapshot-204-c3d90db6                          0/1     Completed          0          1h
snapshot-245-3c9a7226                          0/1     ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1     Completed          0          21m
snapshot-245-71911b06                          0/1     Completed          0          1h
snapshot-245-a8f5dd5e                          0/1     Completed          0          1h
snapshot-245-b9132236                          0/1     Completed          0          1h
snapshot-76-1e515338                           0/1     Completed          0          22m
snapshot-76-4a7d9a30                           0/1     Completed          0          1h
snapshot-76-9e168c9e                           0/1     Completed          0          1h
snapshot-76-ae510372                           0/1     Completed          0          1h
snapshot-76-f166eb18                           0/1     ImagePullBackOff   0          30m
train-169-65f88cec                             0/1     Error              0          20m
train-169-9c92f72a                             0/1     Error              0          1h
train-169-c935fc84                             0/1     Error              0          1h
train-169-d9593f80                             0/1     Error              0          1h
train-204-70729e42                             0/1     Error              0          20m
train-204-9203be3e                             0/1     Error              0          1h
train-204-d3f2337c                             0/1     Error              0          1h
train-204-e41a3e88                             0/1     Error              0          1h
train-245-7b65d1f2                             0/1     Error              0          19m
train-245-a7510d5a                             0/1     Error              0          1h
train-245-debf763e                             0/1     Error              0          1h
train-245-eec1908e                             0/1     Error              0          1h
train-76-86381784                              0/1     Completed          0          19m
train-76-b1fdc202                              0/1     Error              0          1h
train-76-e972af06                              0/1     Error              0          1h
train-76-f993c8d8                              0/1     Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2     Running            0          36m
worker-6997bf76bd-kvjx4                        2/2     Running            0          25m
worker-6997bf76bd-prxbg                        2/2     Running            0          36m
and I'd like to get rid of the pods like train-204-d3f2337c. How can I do that?
You can do this a bit more easily now.
You can list all completed pods by:
kubectl get pod --field-selector=status.phase==Succeeded
delete all completed pods by:
kubectl delete pod --field-selector=status.phase==Succeeded
and delete all errored pods by:
kubectl delete pod --field-selector=status.phase==Failed
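If you want to clean up both kinds in every namespace at once, the same selectors combine with the --all-namespaces flag (a sketch, assuming you have permission to delete pods cluster-wide):
kubectl delete pod --field-selector=status.phase==Succeeded --all-namespaces
kubectl delete pod --field-selector=status.phase==Failed --all-namespaces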
If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit to control how many finished Jobs (and their pods) are kept around.
Example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/10 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        ...
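To try this out, a minimal sketch (my-cron-job.yaml is a hypothetical file holding the manifest above):
kubectl apply -f my-cron-job.yaml   # hypothetical file name for the manifest above
kubectl get jobs                    # after a few runs, only the 3 newest successful and 1 failed Job should remain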
You can do it in two ways.
$ kubectl delete pod $(kubectl get pods | grep Completed | awk '{print $1}')
or
$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod
Both solutions will do the job.
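If you want to preview what either command would delete, one sketch is to prepend echo so the delete command is printed instead of executed:
$ kubectl get pods | grep Completed | awk '{print $1}' | xargs echo kubectl delete pod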
If you would like to delete all pods that are not Running, you can first list them with one command:
kubectl get pods --field-selector=status.phase!=Running
and then delete them with the corresponding delete command:
kubectl delete pods --field-selector=status.phase!=Running
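Note that status.phase!=Running also matches Pending and Unknown pods. To see the effect without deleting anything, a client-side dry run is one option (a sketch):
kubectl delete pods --field-selector=status.phase!=Running --dry-run=client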
As previous answers mentioned, you can use the command:
kubectl delete pod --field-selector=status.phase=={{phase}}
to delete pods in a certain "phase". What's still missing is a quick summary of which phases exist, so the valid values for a pod phase are:
Pending, Running, Succeeded, Failed, Unknown
And in this specific case, to delete the "error" pods:
kubectl delete pod --field-selector=status.phase==Failed
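If you're unsure which phase each pod is currently in before picking a selector, custom columns give a quick overview (a sketch):
kubectl get pods -o custom-columns=NAME:.metadata.name,PHASE:.status.phase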
Here's a one-liner which will delete all pods which aren't in the Running or Pending state (note that if a pod name has Running or Pending in it, it will never get deleted by this one-liner):
kubectl get pods --no-headers=true |grep -v "Running" | grep -v "Pending" | sed -E 's/([a-z0-9-]+).*/\1/g' | xargs kubectl delete pod
Here's an explanation:
- kubectl get pods --no-headers=true lists the pods without the header line
- grep -v "Running" and grep -v "Pending" drop every line that contains Running or Pending
- sed strips each remaining line down to just the pod name
- xargs uses those names to delete each of the pods by name
Note, this doesn't account for all pod states. For example, if a pod is in the state ContainerCreating, this one-liner will delete that pod too.
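One way to sidestep the name-collision caveat mentioned above is to match on the STATUS column instead of the whole line (a sketch; in the default output, STATUS is the third column):
kubectl get pods --no-headers | awk '$3 != "Running" && $3 != "Pending" {print $1}' | xargs kubectl delete pod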
Here you go:
kubectl get pods --all-namespaces |grep -i completed|awk '{print "kubectl delete pod "$2" -n "$1}'|bash
You can replace Completed with CrashLoopBackOff or any other state...
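If you'd rather not pipe generated commands into bash, an equivalent sketch feeds the pod name and namespace to xargs, running one delete per line:
kubectl get pods --all-namespaces | grep -i completed | awk '{print $2" -n "$1}' | xargs -L1 kubectl delete pod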
I think pjincz handled your question well regarding deleting the completed pods manually.
However, I popped in here to introduce a newer feature of Kubernetes which can remove finished Jobs (and their pods) automatically on your behalf. You just define a time-to-live to auto-clean-up finished Jobs, like below:
apiVersion: batch/v1
kind: Job
metadata:
  name: remove-after-ttl
spec:
  ttlSecondsAfterFinished: 86400
  template:
    ...
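The same field also works inside a CronJob's jobTemplate.spec. To confirm it is available on your cluster version (it went GA in Kubernetes 1.23), kubectl explain shows the field's documentation:
kubectl explain job.spec.ttlSecondsAfterFinished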
Here is a single command to delete all pods in the Failed phase (terminated in Error, evicted, etc.) across all namespaces:
kubectl delete pods --field-selector status.phase=Failed -A --ignore-not-found=true
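Completed pods land in the Succeeded phase rather than Failed, so a companion command with the same flags covers those too:
kubectl delete pods --field-selector status.phase=Succeeded -A --ignore-not-found=true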
If you are using preemptible GKE nodes, you often see those pods hanging around.
Here is an automated solution I setup to cleanup: https://stackoverflow.com/a/72872547/4185100