
Kubernetes delete pod job

I wanted to know whether it is possible to have a job in Kubernetes that runs every hour and deletes certain pods. I need this as a temporary stopgap to fix an issue.

Yes, it's possible.

I think the easiest way is to call the Kubernetes API directly from a Job. Assuming RBAC is configured, something like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
spec:
  template:
    spec:
      # serviceAccountName belongs in the Pod template spec, not the Job spec
      serviceAccountName: service-account-that-has-access-to-api
      containers:
      - name: cleanup
        image: image-that-has-curl
        # Run through a shell so $(cat ...) is evaluated at runtime;
        # Kubernetes itself does not perform shell command substitution
        command:
        - /bin/sh
        - -c
        - >
          curl -ik -X DELETE
          -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
          https://kubernetes.default.svc.cluster.local/api/v1/namespaces/{namespace}/pods/{name}
      restartPolicy: Never
  backoffLimit: 4

You can also run a kubectl proxy sidecar so the main container can reach the API server over localhost, without handling TLS or tokens itself.
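A rough sketch of that sidecar approach (the bitnami/kubectl image, port 8001, and the sleep-based wait are my assumptions, not from the original answer):

      containers:
      - name: cleanup
        image: image-that-has-curl
        command:
        - /bin/sh
        - -c
        # Give the proxy sidecar a moment to start before calling it;
        # kubectl proxy handles authentication, so plain HTTP works
        - sleep 5 && curl -X DELETE http://localhost:8001/api/v1/namespaces/{namespace}/pods/{name}
      - name: kubectl-proxy
        # Hypothetical sidecar image; any image that ships kubectl works
        image: bitnami/kubectl
        command: ["kubectl", "proxy", "--port=8001"]

One caveat: in a plain Job the proxy sidecar keeps running after curl exits, so the Pod never reaches Completed; you'd need to terminate the proxy or use native sidecar support on newer clusters.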

Running plain kubectl in a pod is also an option: Kubernetes - How to run kubectl commands inside a container?
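A minimal sketch of that variant, assuming the bitnami/kubectl image (any image that ships kubectl works) and the same RBAC setup as above:

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-kubectl
spec:
  template:
    spec:
      serviceAccountName: service-account-that-has-access-to-api
      containers:
      - name: cleanup
        image: bitnami/kubectl
        # kubectl picks up the ServiceAccount token automatically in-cluster
        command: ["kubectl", "delete", "pod", "{name}", "-n", "{namespace}"]
      restartPolicy: Never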

Use a CronJob (1, 2) to run the Job every hour.
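For example, a CronJob wrapping the kubectl variant might look like this (a sketch; the names are illustrative):

apiVersion: batch/v1  # batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: hourly-cleanup
spec:
  # Run at the top of every hour
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: service-account-that-has-access-to-api
          containers:
          - name: cleanup
            image: bitnami/kubectl
            command: ["kubectl", "delete", "pod", "{name}", "-n", "{namespace}"]
          restartPolicy: Never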

The K8s API can be accessed from a Pod (3) with proper permissions. When a Pod is created, the default ServiceAccount of its namespace is assigned to it (4). The default ServiceAccount has no RoleBinding, so neither it nor the Pod has permission to invoke the API.

If a Role (with permissions) were created and bound to the default ServiceAccount, every Pod in the namespace would get those permissions. So it's better to create a new ServiceAccount instead of modifying the default one.

So, here are the steps for RBAC (5); a combined manifest sketch follows the list:

  • Create a ServiceAccount
  • Create a Role with proper permissions (deleting pods)
  • Map the ServiceAccount with the Role using RoleBinding
  • Use the above ServiceAccount in the Pod definition
  • Create a pod/container with the code/commands to delete the pods
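A minimal sketch of those RBAC objects (the pod-cleaner names and the default namespace are illustrative, not from the original answer):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cleaner
rules:
# Grant only the verbs the cleanup Job actually needs
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cleaner
subjects:
- kind: ServiceAccount
  name: pod-cleaner
  namespace: default  # adjust to the Job's namespace
roleRef:
  kind: Role
  name: pod-cleaner
  apiGroup: rbac.authorization.k8s.io

The Pod definition then references this account with serviceAccountName: pod-cleaner.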

I know it's a bit confusing, but that's the way K8S works.

There is possibly another workaround.

You could add a liveness probe (super easy if you have none already) that doesn't run until after one hour and then always fails.

livenessProbe:
  tcpSocket:
    port: 1234   # assumes nothing listens on this port, so the probe always fails
  initialDelaySeconds: 3600   # wait one hour before the first probe

This waits 3600 seconds (one hour) before the first probe, then tries to connect to port 1234; once the probe has failed failureThreshold times (3 by default), the kubelet restarts the container (not the pod!).
