
Assigning specific CPU resources to pod - kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container elasticsearch'

I've created an Elasticsearch service to use as the backend for Jaeger tracing, following this guide, on a Kubernetes cluster on GCP.

I have the elasticsearch service:

~/w/jaeger-elasticsearch ❯❯❯ kubectl get service elasticsearch
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   8m
~/w/jaeger-elasticsearch ❯❯❯

And its corresponding pod, called elasticsearch-0:

~/w/jaeger-elasticsearch ❯❯❯ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
elasticsearch-0                   1/1     Running   0          37m
jaeger-agent-cnw9m                1/1     Running   0          2h
jaeger-agent-dl5n9                1/1     Running   0          2h
jaeger-agent-zzljk                1/1     Running   0          2h
jaeger-collector-9879cd76-fvpz4   1/1     Running   0          2h
jaeger-query-5584576487-dzqkd     1/1     Running   0          2h
~/w/jaeger-elasticsearch ❯❯❯ kubectl get pod elasticsearch-0
NAME              READY   STATUS    RESTARTS   AGE
elasticsearch-0   1/1     Running   0          38m
~/w/jaeger-elasticsearch ❯❯❯ 

I've looked at my pod configuration on GCP, and I can see that my elasticsearch-0 pod has limited resources:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
  creationTimestamp: 2019-01-03T09:11:10Z
  generateName: elasticsearch-

I want to assign it a specific CPU request and CPU limit according to the documentation, so I proceed to modify the pod manifest, adding the following directives:

-cpus "2" in the args section:

args:
    - -cpus
    - "2"

And I am including a resources:requests field in the container spec to request 0.5 CPU, and a resources:limits field to specify a CPU limit, this way:

resources:
  limits:
    cpu: "1"
  requests:
    cpu: "0.5"

My complete pod manifest is this (see the numerals 1 through 5 in the comments marked with the # symbol):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      elasticsearch'
  creationTimestamp: 2019-01-03T09:11:10Z
  generateName: elasticsearch-
  labels:
    app: jaeger-elasticsearch
    controller-revision-hash: elasticsearch-8684f69799
    jaeger-infra: elasticsearch-replica
    statefulset.kubernetes.io/pod-name: elasticsearch-0
  name: elasticsearch-0
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: elasticsearch
    uid: 86578784-0f36-11e9-b8b1-42010aa60019
  resourceVersion: "2778"
  selfLink: /api/v1/namespaces/default/pods/elasticsearch-0
  uid: 82d3be2f-0f37-11e9-b8b1-42010aa60019
spec:
  containers:
  - args:
    - -Ehttp.host=0.0.0.0
    - -Etransport.host=127.0.0.1
    - -cpus # 1
    - "2" # 2
    command:
    - bin/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imagePullPolicy: Always
    name: elasticsearch
    readinessProbe:
      exec:
        command:
        - curl
        - --fail
        - --silent
        - --output
        - /dev/null
        - --user
        - elastic:changeme
        - localhost:9200
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 4
    resources: # 3
      limits:
        cpu: "1" # 4
      requests:
        cpu: "0.5" # 5
        # container has a request of 0.5  CPU 
        #cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-96vwj
      readOnly: true
  dnsPolicy: ClusterFirst
  hostname: elasticsearch-0
  nodeName: gke-jaeger-persistent-st-default-pool-81004235-h8xt
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: elasticsearch
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: data
  - name: default-token-96vwj
    secret:
      defaultMode: 420
      secretName: default-token-96vwj
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:40Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-01-03T09:11:10Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
    imageID: docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7
    lastState: {}
    name: elasticsearch
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-01-03T09:11:13Z
  hostIP: 10.166.0.2
  phase: Running
  podIP: 10.36.0.10
  qosClass: Burstable
  startTime: 2019-01-03T09:11:10Z

But when I apply my pod manifest file, I get the following output:

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
.
.
.
for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
~/w/jaeger-elasticsearch ❯❯❯

The complete output of my kubectl apply command is this:

~/w/jaeger-elasticsearch ❯❯❯ kubectl apply -f elasticsearch-0.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{\"kubernetes.io/limit-ranger\":\"LimitRanger plugin set: cpu request for container elasticsearch\"},\"creationTimestamp\":\"2019-01-03T09:11:10Z\",\"generateName\":\"elasticsearch-\",\"labels\":{\"app\":\"jaeger-elasticsearch\",\"controller-revision-hash\":\"elasticsearch-8684f69799\",\"jaeger-infra\":\"elasticsearch-replica\",\"statefulset.kubernetes.io/pod-name\":\"elasticsearch-0\"},\"name\":\"elasticsearch-0\",\"namespace\":\"default\",\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"blockOwnerDeletion\":true,\"controller\":true,\"kind\":\"StatefulSet\",\"name\":\"elasticsearch\",\"uid\":\"86578784-0f36-11e9-b8b1-42010aa60019\"}],\"resourceVersion\":\"2778\",\"selfLink\":\"/api/v1/namespaces/default/pods/elasticsearch-0\",\"uid\":\"82d3be2f-0f37-11e9-b8b1-42010aa60019\"},\"spec\":{\"containers\":[{\"args\":[\"-Ehttp.host=0.0.0.0\",\"-Etransport.host=127.0.0.1\",\"-cpus\",\"2\"],\"command\":[\"bin/elasticsearch\"],\"image\":\"docker.elastic.co/elasticsearch/elasticsearch:5.6.0\",\"imagePullPolicy\":\"Always\",\"name\":\"elasticsearch\",\"readinessProbe\":{\"exec\":{\"command\":[\"curl\",\"--fail\",\"--silent\",\"--output\",\"/dev/null\",\"--user\",\"elastic:changeme\",\"localhost:9200\"]},\"failureThreshold\":3,\"initialDelaySeconds\":5,\"periodSeconds\":5,\"successThreshold\":1,\"timeoutSeconds\":4},\"resources\":{\"limits\":{\"cpu\":\"1\"},\"requests\":{\"cpu\":\"0.5\"}},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\",\"volumeMounts\":[{\"mountPath\":\"/data\",\"name\":\"data\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"default-token-96vwj\",\"readOnly\":true}]}],\"dnsPolicy\":\"ClusterFirst\",\"hostname\":\"elasticsearch-0\",\"nodeName\":\"gke-jaeger-persistent-st-default-pool-81004235-h8xt\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"serviceAccount\":\"default\",\"serviceAccountName\":\"default\",\"subdomain\":\"elasticsearch\",\"terminationGracePeriodSeconds\":30,\"tolerations\":[{\"effect\":\"NoExecute\",\"key\":\"node.kubernetes.io/not-ready\",\"operator\":\"Exists\",\"tolerationSeconds\":300},{\"effect\":\"NoExecute\",\"key\":\"node.kubernetes.io/unreachable\",\"operator\":\"Exists\",\"tolerationSeconds\":300}],\"volumes\":[{\"emptyDir\":{},\"name\":\"data\"},{\"name\":\"default-token-96vwj\",\"secret\":{\"defaultMode\":420,\"secretName\":\"default-token-96vwj\"}}]},\"status\":{\"conditions\":[{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-01-03T09:11:10Z\",\"status\":\"True\",\"type\":\"Initialized\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-01-03T09:11:40Z\",\"status\":\"True\",\"type\":\"Ready\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-01-03T09:11:10Z\",\"status\":\"True\",\"type\":\"PodScheduled\"}],\"containerStatuses\":[{\"containerID\":\"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030\",\"image\":\"docker.elastic.co/elasticsearch/elasticsearch:5.6.0\",\"imageID\":\"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7\",\"lastState\":{},\"name\":\"elasticsearch\",\"ready\":true,\"restartCount\":0,\"state\":{\"running\":{\"startedAt\":\"2019-01-03T09:11:13Z\"}}}],\"hostIP\":\"10.166.0.2\",\"phase\":\"Running\",\"podIP\":\"10.36.0.10\",\"qosClass\":\"Burstable\
",\"startTime\":\"2019-01-03T09:11:10Z\"}}\n"},"creationTimestamp":"2019-01-03T09:11:10Z","resourceVersion":"2778","uid":"82d3be2f-0f37-11e9-b8b1-42010aa60019"},"spec":{"$setElementOrder/containers":[{"name":"elasticsearch"}],"containers":[{"args":["-Ehttp.host=0.0.0.0","-Etransport.host=127.0.0.1","-cpus","2"],"name":"elasticsearch","resources":{"limits":{"cpu":"1"},"requests":{"cpu":"0.5"}}}]},"status":{"$setElementOrder/conditions":[{"type":"Initialized"},{"type":"Ready"},{"type":"PodScheduled"}],"conditions":[{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"Initialized"},{"lastTransitionTime":"2019-01-03T09:11:40Z","type":"Ready"},{"lastTransitionTime":"2019-01-03T09:11:10Z","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://46eb2c664f947a2a0a35ac7799b04c77756aef0a9935855c2dadcf959bd27030","image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0","imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7","lastState":{},"name":"elasticsearch","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-01-03T09:11:13Z"}}}],"podIP":"10.36.0.10","startTime":"2019-01-03T09:11:10Z"}}
to:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "elasticsearch-0", Namespace: "default"
Object: &{map["kind":"Pod" "apiVersion":"v1" "metadata":map["selfLink":"/api/v1/namespaces/default/pods/elasticsearch-0" "generateName":"elasticsearch-" "namespace":"default" "resourceVersion":"11515" "creationTimestamp":"2019-01-03T10:29:53Z""labels":map["controller-revision-hash":"elasticsearch-8684f69799" "jaeger-infra":"elasticsearch-replica" "statefulset.kubernetes.io/pod-name":"elasticsearch-0" "app":"jaeger-elasticsearch"] "annotations":map["kubernetes.io/limit-ranger":"LimitRanger plugin set: cpu request for container elasticsearch"] "ownerReferences":[map["controller":%!q(bool=true) "blockOwnerDeletion":%!q(bool=true) "apiVersion":"apps/v1" "kind":"StatefulSet" "name":"elasticsearch" "uid":"86578784-0f36-11e9-b8b1-42010aa60019"]] "name":"elasticsearch-0" "uid":"81cba2ad-0f42-11e9-b8b1-42010aa60019"] "spec":map["restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "serviceAccountName":"default" "securityContext":map[] "subdomain":"elasticsearch" "schedulerName":"default-scheduler" "tolerations":[map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'\u012c' "key":"node.kubernetes.io/not-ready"] map["operator":"Exists" "effect":"NoExecute" "tolerationSeconds":'\u012c' "key":"node.kubernetes.io/unreachable"]] "volumes":[map["name":"data" "emptyDir":map[]] map["name":"default-token-96vwj" "secret":map["secretName":"default-token-96vwj" "defaultMode":'\u01a4']]] "dnsPolicy":"ClusterFirst" "serviceAccount":"default" "nodeName":"gke-jaeger-persistent-st-default-pool-81004235-h8xt" "hostname":"elasticsearch-0" "containers":[map["name":"elasticsearch" "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "readinessProbe":map["exec":map["command":["curl" "--fail" "--silent" "--output" "/dev/null" "--user" "elastic:changeme" "localhost:9200"]] "initialDelaySeconds":'\x05' "timeoutSeconds":'\x04' "periodSeconds":'\x05' "successThreshold":'\x01' "failureThreshold":'\x03'] "terminationMessagePath":"/dev/termination-log" "imagePullPolicy":"Always" "command":["bin/elasticsearch"] "args":["-Ehttp.host=0.0.0.0" "-Etransport.host=127.0.0.1"] "resources":map["requests":map["cpu":"100m"]] "volumeMounts":[map["name":"data" "mountPath":"/data"] map["name":"default-token-96vwj" "readOnly":%!q(bool=true) "mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"]] "terminationMessagePolicy":"File"]]] "status":map["qosClass":"Burstable" "phase":"Running" "conditions":[map["type":"Initialized" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"] map["type":"Ready" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:30:17Z"] map["type":"PodScheduled" "status":"True" "lastProbeTime":<nil> "lastTransitionTime":"2019-01-03T10:29:53Z"]] "hostIP":"10.166.0.2" "podIP":"10.36.0.11" "startTime":"2019-01-03T10:29:53Z" "containerStatuses":[map["name":"elasticsearch" "state":map["running":map["startedAt":"2019-01-03T10:29:55Z"]] "lastState":map[] "ready":%!q(bool=true) "restartCount":'\x00' "image":"docker.elastic.co/elasticsearch/elasticsearch:5.6.0" "imageID":"docker-pullable://docker.elastic.co/elasticsearch/elasticsearch@sha256:f95e7d4256197a9bb866b166d9ad37963dc7c5764d6ae6400e551f4987a659d7" "containerID":"docker://e7f629b79da33b482b38fdb990717b3d61d114503961302e2e8feccb213bbd4b"]]]]}
for: "elasticsearch-0.yaml": Operation cannot be fulfilled on pods "elasticsearch-0": the object has been modified; please apply your changes to the latest version and try again
~/w/jaeger-elasticsearch ❯❯❯

How can I modify my pod YAML file in order to assign it more resources and resolve the kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container elasticsearch' message?

Here's an article/guide on how to work with the limit-ranger and its default values [1]

[1] https://medium.com/@betz.mark/understanding-resource-limits-in-kubernetes-cpu-time-9eff74d3161b
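As a rough illustration of what the article describes, here is a minimal sketch of a LimitRange that sets default CPU values for containers in a namespace (the object name and the exact values are assumptions, not read from the cluster above):

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range      # hypothetical name
spec:
  limits:
  - type: Container
    default:                 # default CPU limit for containers that do not set one
      cpu: "1"
    defaultRequest:          # default CPU request injected by the LimitRanger admission plugin
      cpu: 100m

When an object like this exists in a namespace and a pod is created without an explicit CPU request, the LimitRanger admission plugin injects the default and records the kubernetes.io/limit-ranger annotation you see on elasticsearch-0; on GKE the default namespace typically has such a LimitRange with a 100m default request, which matches the cpu: 100m request visible in the server's copy of the pod in the error output above. You can inspect it with kubectl describe limitrange --namespace=default. Note also that the pod is owned by the elasticsearch StatefulSet and your saved manifest carries a stale resourceVersion (2778 vs. the server's 11515), so requests and limits are better set on the StatefulSet's pod template than applied to the running pod.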
