
AWS EKS K8s Service and CronJob/Job on the same node

I have a k8s deployment which consists of a cron job (runs hourly), a service (runs the HTTP service) and a storage class (a PVC to store data, using gp2).

The issue I am seeing is that gp2 only supports ReadWriteOnce.

I notice that when the cron job creates a job and its pod lands on the same node as the service, it can mount the volume fine.

Is there something I can do in the service, deployment or cron job YAML to ensure the cron job and service always land on the same node? It can be any node, as long as the cron job goes to the same node as the service.

This isn't an issue in my lower environment as we have very few nodes, but in our production environments, where we have more nodes, it is an issue.

In short, I want my cron job, which creates a job and then a pod, to run that pod on the same node as my service's pod.

I know this isn't best practice, but our web service reads data from the PVC and serves it. The cron job pulls new data in from other sources and leaves it for the web server.

Happy for other ideas / ways.

Thanks

Focusing only on the part:

How can I schedule a workload (Pod, Job, CronJob) on a specific set of Nodes?

You can schedule your CronJob/Job with either:

  • nodeSelector
  • nodeAffinity

nodeSelector

nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.

-- Kubernetes.io: Docs: Concepts: Scheduling eviction: Assign pod node: Node selector

An example could be the following (assuming that your node has a specific label that is referenced in .spec.jobTemplate.spec.template.spec.nodeSelector):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector: # <-- IMPORTANT
            schedule: "here" # <-- IMPORTANT
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Running the above manifest will schedule your Pod (CronJob) on a node that has the schedule=here label:

$ kubectl get pods -o wide
NAME                     READY   STATUS      RESTARTS   AGE     IP          NODE                                   NOMINATED NODE   READINESS GATES
hello-1616323740-mqdmq   0/1     Completed   0          2m33s   10.4.2.67   node-ffb5                              <none>           <none>
hello-1616323800-wv98r   0/1     Completed   0          93s     10.4.2.68   node-ffb5                              <none>           <none>
hello-1616323860-66vfj   0/1     Completed   0          32s     10.4.2.69   node-ffb5                              <none>           <none>
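
For completeness, the schedule=here label referenced by the nodeSelector has to be present on the node before the Job can be scheduled there. A minimal sketch, assuming the node node-ffb5 from the output above is the one you want to target:

# label the node so the nodeSelector (and the nodeAffinity example below) can match it
$ kubectl label nodes node-ffb5 schedule=here

# verify the label is present
$ kubectl get nodes --show-labels | grep schedule=here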

nodeAffinity

Node affinity is conceptually similar to nodeSelector -- it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.

There are currently two types of node affinity, called requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution . You can think of them as "hard" and "soft" respectively, in the sense that the former specifies rules that must be met for a pod to be scheduled onto a node (just like nodeSelector but using a more expressive syntax), while the latter specifies preferences that the scheduler will try to enforce but will not guarantee.

-- Kubernetes.io: Docs: Concepts: Scheduling eviction: Assign pod node: Node affinity

An example could be the following (assuming that your node has a specific label that is referenced in .spec.jobTemplate.spec.template.spec.affinity.nodeAffinity):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # --- nodeAffinity part
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: schedule
                    operator: In
                    values:
                    - here
          # --- nodeAffinity part
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
$ kubectl get pods -o wide
NAME                     READY   STATUS      RESTARTS   AGE     IP           NODE                                   NOMINATED NODE   READINESS GATES
hello-1616325840-5zkbk   0/1     Completed   0          2m14s   10.4.2.102   node-ffb5                              <none>           <none>
hello-1616325900-lwndf   0/1     Completed   0          74s     10.4.2.103   node-ffb5                              <none>           <none>
hello-1616325960-j9kz9   0/1     Completed   0          14s     10.4.2.104   node-ffb5                              <none>           <none>
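
The manifest above uses the "hard" requiredDuringSchedulingIgnoredDuringExecution rule. As a rough sketch, the "soft" variant mentioned in the quoted docs would use preferredDuringSchedulingIgnoredDuringExecution instead; the weight value below and the reuse of the schedule=here label are assumptions for illustration:

          # "soft" preference: the scheduler tries to place the pod on a matching
          # node but will still schedule it elsewhere if none is available
          affinity:
            nodeAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100            # 1-100; higher weights are favoured more
                preference:
                  matchExpressions:
                  - key: schedule      # same example label as above (assumption)
                    operator: In
                    values:
                    - here

Note that with the soft rule the Job's pod can still end up on a different node than the service, so for a ReadWriteOnce volume the hard rule (or the nodeSelector) is the safer choice.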

Additional resources:

I'd reckon you could also take a look at this StackOverflow answer:
