
K8S - livenessProbe - Restart pod if another pod is not ready / working (MySQL)

Happy New Year. I have two deployments, MySQL and an application. The application depends on the MySQL pod, and I have initContainers that make sure the application starts only after the MySQL pod is fully up and ready. But I'm trying to make the following scenario work.

I want the application pod to check the MySQL pod, and if port 3306 is not available, the application pod should restart itself, and this should keep happening until the MySQL pod is fully ready.

I'm using this in the Application deployment / pod

livenessProbe:
  httpGet:
    host: ???
    path: /
    port: 3306

but instead of "???" I don't know what to write. I know I cannot use the pod's DNS name there, because I was told that livenessProbe does not work with DNS, so I tried passing the IP address through an environment variable, but it still doesn't work.

How can I do this?

SQL Deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployment.mysql.name }}
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.deployment.mysql.name }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ .Values.deployment.mysql.name }}
    spec:
      containers:
      - image: {{ .Values.deployment.mysql.image }}
        name: {{ .Values.deployment.mysql.name }}
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: mysql-root-password
        ports:
        - containerPort: {{ .Values.deployment.mysql.port }}
          name: {{ .Values.deployment.mysql.name }} 
        volumeMounts:
        - name: sqlvol
          mountPath: /var/lib/mysql/
          readOnly: false
        # - name: db
        #   mountPath: /etc/notebook-db/
        # command:
        #   - mysql < /etc/notebook-db/crud.sql
        livenessProbe:
          tcpSocket:
            port: 3306
          initialDelaySeconds: 15
          periodSeconds: 20
      initContainers:
      - name: init-myservice
        image: busybox:1.28
        command: ['sh', '-c', "sleep 10"]
      volumes:
      - name: sqlvol
        persistentVolumeClaim:
          claimName: mysqlvolume
          readOnly: false

Application Deployment yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployment.nodejs.name }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Values.deployment.nodejs.name }}
    name: {{ .Values.deployment.nodejs.name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.deployment.nodejs.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.deployment.nodejs.name }}
    spec:
      containers:
      - name: {{ .Values.deployment.nodejs.name }}
        image: {{ .Values.deployment.nodejs.image }}:{{ .Values.deployment.nodejs.tag }}
        ports:
        - containerPort: {{ .Values.deployment.nodejs.targetPort }}
        livenessProbe:
          httpGet:
            host: $MYSQL_CLUSTERIP_SERVICE_HOST
            path: /
            port: 3306
      initContainers:
        - name: init-myservice
          image: busybox:1.28
          command: ['sh', '-c', "sleep 60"]

$MYSQL_CLUSTERIP_SERVICE_HOST - this is an environment variable (it did not work for me this way).

So how can I restart the application pod if the MySQL pod is not ready?
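A common refinement of the fixed `sleep` in the initContainers above is to poll the database until it accepts connections, so the application container never starts before MySQL is reachable. A sketch, assuming a Service named mysql-clusterip exposing port 3306 (the Service name is hypothetical; adapt it to your chart values):

```yaml
initContainers:
- name: wait-for-mysql
  image: busybox:1.28
  command:
  - sh
  - -c
  # mysql-clusterip is a hypothetical Service name; nc option support
  # varies between busybox builds, so verify the flags in your image
  - until nc -w 2 mysql-clusterip 3306 </dev/null; do echo waiting for mysql; sleep 2; done
```

Unlike a liveness probe, an initContainer runs inside the pod's network namespace, so in-cluster DNS names resolve normally here.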

  • Create a Service for the MySQL Deployment; this solves two problems:
    • the Service IP does not change
    • DNS lookups and reverse lookups work fine with Services
Example: 
kubectl get svc -n default
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
nginx        ClusterIP   10.104.97.252   <none>        8080/TCP   33m

FQDN of the above service (in the format <service-name>.<namespace>.svc.cluster.local):
nginx.default.svc.cluster.local

  • Then use the FQDN of the Service as the host in the livenessProbe of your application deployment.
Liveness probe with respect to the above service:

livenessProbe:
  httpGet:
    host: nginx.default.svc.cluster.local
    path: /
    port: 8080
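One caveat for this particular case: MySQL does not speak HTTP, so an httpGet probe against port 3306 will likely fail even when MySQL is healthy. A tcpSocket probe, which also accepts an optional host field, only checks that the port accepts connections. A sketch, assuming a hypothetical Service named mysql-clusterip in the default namespace:

```yaml
livenessProbe:
  tcpSocket:
    host: mysql-clusterip.default.svc.cluster.local  # hypothetical Service FQDN
    port: 3306
  initialDelaySeconds: 15
  periodSeconds: 20
  failureThreshold: 3
```

Note that probes with a host field are executed from the kubelet, so whether this DNS name resolves depends on the node's DNS configuration, as the other answer points out.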

Is your application supposed to crash if MySQL is not ready? If so, since your application is a Deployment, Kubernetes self-healing applies: your Pod will be restarted until it comes up successfully (in fact with an exponential back-off between restart attempts).

TL;DR

DNS doesn't work for liveness probes: the kubelet's network namespace basically cannot resolve any in-cluster DNS.

You can consider putting both of your services in a single pod as sidecars. That way they share the same address space, and if one container fails the whole pod is restarted.
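The sidecar option above could be sketched like this (deployment name, container names, and images are illustrative, not taken from the original manifests). Because both containers share the pod's network namespace, the application's probe can check MySQL on localhost, with no host field and no DNS involved:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-mysql          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-with-mysql
  template:
    metadata:
      labels:
        app: app-with-mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
      - name: app
        image: my-node-app:latest   # hypothetical application image
        livenessProbe:
          tcpSocket:
            port: 3306              # shared pod IP: this reaches the mysql sidecar
          initialDelaySeconds: 15
          periodSeconds: 20
```

With this layout the kubelet restarts the app container whenever port 3306 stops accepting connections, which matches the behavior asked for in the question.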

Another option is to create an operator for your pods/application and basically have it check liveness through the in-cluster DNS for both pods separately, restarting the pods through the Kubernetes API.

You can also just create your own script in a pod that uses curl to check for a 200 OK and kubectl to restart your pod if you get something else.

Note that for the two options above you need to make sure that CoreDNS is stable and solid; otherwise your health checks might fail, giving your services potential downtime.

from: Liveness-Probe of one pod via another
