
Kubernetes job pod completed successfully but one of the containers was not ready

I've got some strange looking behavior.

When a job is run, it completes successfully, but one of the containers reports that it is not (or was not) ready:

NAMESPACE     NAME                                                 READY     STATUS      RESTARTS   AGE       IP           NODE
default       **********-migration-22-20-16-29-11-2018-xnffp       1/2       Completed   0          11h       10.4.5.8     gke-******

Job YAML:

apiVersion: batch/v1
kind: Job
metadata:
  name: migration-${timestamp_hhmmssddmmyy}
  labels:
    jobType: database-migration
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: "${appApiImage}"
        imagePullPolicy: IfNotPresent
        command:
          - php
          - artisan
          - migrate
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=${SQL_INSTANCE_NAME}=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        volumeMounts:
          - name: cloudsql-instance-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials

What may be the cause of this behavior? There are no readiness or liveness probes defined on the containers.

If I do a describe on the pod, the relevant info is:

...
Command:
  php
  artisan
  migrate
State:          Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Thu, 29 Nov 2018 22:20:18 +0000
  Finished:     Thu, 29 Nov 2018 22:20:19 +0000
Ready:          False
Restart Count:  0
Requests:
  cpu:  100m
...

A Pod with a Ready status means it "is able to serve requests and should be added to the load balancing pools of all matching Services"; see https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions

In your case, you don't want to serve requests; you simply want to execute php artisan migrate once and be done. So you don't have to worry about this status. The important part is the State: Terminated with a Reason: Completed and a zero exit code: your command ran and then exited successfully.
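
If you want to verify success at the Job level rather than by the pod's READY column, you can query the Job status with kubectl. A minimal sketch, assuming your Job is named migration-<timestamp> (substitute the real generated name):

# Block until the Job reports the Complete condition (fails after the timeout otherwise)
kubectl wait --for=condition=complete job/migration-<timestamp> --timeout=60s

# Or read how many pods completed successfully straight from the Job status
kubectl get job migration-<timestamp> -o jsonpath='{.status.succeeded}'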

If the result of the command is not what you expected, investigate the logs from the container that ran the command with kubectl logs your-pod -c app (where app is the name of the container you defined); if the migration really failed, you would also expect the php artisan migrate command to exit with a non-zero code.
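
Concretely, the checks could look like this (a sketch: your-pod stands for the actual pod name from kubectl get pods, and app is the container name defined in the Job spec):

# Logs from the container that ran the migration
kubectl logs your-pod -c app

# Exit code of each container (empty for containers that are still running)
kubectl get pod your-pod -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state.terminated.exitCode}{"\n"}{end}'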

In my case, I was using Istio and experienced the same issue; removing the Istio sidecar from the Job's pod solved the problem.

My solution when using Istio:

  spec:
    template:
      metadata:
        annotations:
          sidecar.istio.io/inject: "false"
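
For context, here is where that annotation sits in the Job from the question (a trimmed sketch, only the relevant fields shown; the names come from the original manifest):

apiVersion: batch/v1
kind: Job
metadata:
  name: migration-${timestamp_hhmmssddmmyy}
spec:
  backoffLimit: 0
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"  # disable Istio sidecar injection for this Job's pods
    spec:
      restartPolicy: Never
      containers:
      - name: app
        image: "${appApiImage}"
        command: ["php", "artisan", "migrate"]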
