Kubernetes without pod metrics

I'm trying to deploy metrics to Kubernetes and something really strange is happening; I have one worker and one master. I have the following pod list:

NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE     IP               NODE                      NOMINATED NODE   READINESS GATES
default       php-apache-774ff9d754-d7vp9                       1/1     Running   0          2m43s   192.168.77.172   master-node               <none>           <none>
kube-system   calico-kube-controllers-6b9d4c8765-x7pql          1/1     Running   2          4h11m   192.168.77.130   master-node               <none>           <none>
kube-system   calico-node-d4rnh                                 0/1     Running   1          4h11m   10.221.194.166   master-node               <none>           <none>
kube-system   calico-node-hwkmd                                 0/1     Running   1          4h11m   10.221.195.58    free5gc-virtual-machine   <none>           <none>
kube-system   coredns-6955765f44-kf4dr                          1/1     Running   1          4h20m   192.168.178.65   free5gc-virtual-machine   <none>           <none>
kube-system   coredns-6955765f44-s58rf                          1/1     Running   1          4h20m   192.168.178.66   free5gc-virtual-machine   <none>           <none>
kube-system   etcd-free5gc-virtual-machine                      1/1     Running   1          4h21m   10.221.195.58    free5gc-virtual-machine   <none>           <none>
kube-system   kube-apiserver-free5gc-virtual-machine            1/1     Running   1          4h21m   10.221.195.58    free5gc-virtual-machine   <none>           <none>
kube-system   kube-controller-manager-free5gc-virtual-machine   1/1     Running   1          4h21m   10.221.195.58    free5gc-virtual-machine   <none>           <none>
kube-system   kube-proxy-brvdg                                  1/1     Running   1          4h19m   10.221.194.166   master-node               <none>           <none>
kube-system   kube-proxy-lfzjw                                  1/1     Running   1          4h20m   10.221.195.58    free5gc-virtual-machine   <none>           <none>
kube-system   kube-scheduler-free5gc-virtual-machine            1/1     Running   1          4h21m   10.221.195.58    free5gc-virtual-machine   <none>           <none>
kube-system   metrics-server-86c6d8b9bf-p2hh8                   1/1     Running   0          2m43s   192.168.77.171   master-node               <none>           <none>

When I try to get the metrics I see the following:

NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        1          3m58s
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl top pods --all-namespaces
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

Lastly, here is the log output (-v=6) of metrics-server:

free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl logs metrics-server-86c6d8b9bf-p2hh8  -n kube-system
I0206 18:16:18.657605       1 serving.go:273] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0206 18:16:19.367356       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 7 milliseconds
I0206 18:16:19.370573       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.373245       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.375024       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] listing is available at https://:4443/swaggerapi
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] https://:4443/swaggerui/ is mapped to folder /swagger-ui/
I0206 18:16:19.421207       1 healthz.go:83] Installing healthz checkers:"ping", "poststarthook/generic-apiserver-start-informers", "healthz"
I0206 18:16:19.421641       1 serve.go:96] Serving securely on [::]:4443
I0206 18:16:19.421873       1 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421891       1 reflector.go:240] Listing and watching *v1.Pod from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421914       1 reflector.go:202] Starting reflector *v1.Node (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421929       1 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.423052       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0 200 OK in 1 milliseconds
I0206 18:16:19.424261       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0 200 OK in 2 milliseconds
I0206 18:16:19.425586       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?resourceVersion=38924&timeoutSeconds=481&watch=true 200 OK in 0 milliseconds
I0206 18:16:19.433545       1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?resourceVersion=39246&timeoutSeconds=582&watch=true 200 OK in 0 milliseconds
I0206 18:16:49.388514       1 manager.go:99] Beginning cycle, collecting metrics...
I0206 18:16:49.388598       1 manager.go:95] Scraping metrics from 2 sources
I0206 18:16:49.395742       1 manager.go:120] Querying source: kubelet_summary:free5gc-virtual-machine
I0206 18:16:49.400574       1 manager.go:120] Querying source: kubelet_summary:master-node
I0206 18:16:49.413751       1 round_trippers.go:405] GET https://10.221.194.166:10250/stats/summary/ 200 OK in 13 milliseconds
I0206 18:16:49.414317       1 round_trippers.go:405] GET https://10.221.195.58:10250/stats/summary/ 200 OK in 18 milliseconds
I0206 18:16:49.417044       1 manager.go:150] ScrapeMetrics: time: 28.428677ms, nodes: 2, pods: 13
I0206 18:16:49.417062       1 manager.go:115] ...Storing metrics...
I0206 18:16:49.417083       1 manager.go:126] ...Cycle complete

Using the log output with v=10 I can even see the health details of each pod, but I get nothing when running kubectl get hpa or kubectl top nodes. Can someone give me a hint? Furthermore, my metrics-server manifest is:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        args:
          - /metrics-server
          - --metric-resolution=30s
          - --requestheader-allowed-names=aggregator
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --v=6
          - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
            #- --kubelet-preferred-address-types=InternalIP
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        beta.kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"

And I can see the following:

free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2020-02-06T18:57:28Z"
  name: v1beta1.metrics.k8s.io
  resourceVersion: "45583"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
  uid: ca439221-b987-4c13-b0e0-8d2bb237e612
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2020-02-06T18:57:28Z"
    message: 'failing or missing response from https://10.110.144.114:443/apis/metrics.k8s.io/v1beta1:
      Get https://10.110.144.114:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.110.144.114:443:
      connect: no route to host'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available

I have reproduced your issue (on Google Compute Engine) and tried a few scenarios to find a workaround/solution for it.

The first thing I want to mention is that you have provided only the ServiceAccount and Deployment YAML. You also need a ClusterRoleBinding, a RoleBinding, an APIService, etc. All the needed YAMLs can be found in this Github repo.
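
For reference, here is a sketch of two of the missing pieces, based on the deploy manifests in that repo for the v0.3.x line (exact fields may differ between versions):

---
# Registers the metrics.k8s.io/v1beta1 API with the aggregation layer
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
# Lets metrics-server delegate authentication/authorization to the API server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system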

To deploy metrics-server quickly with all the required configuration, you can use:

$ git clone https://github.com/kubernetes-sigs/metrics-server.git
$ cd metrics-server/deploy/
$ kubectl apply -f kubernetes/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
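
Once applied, you can verify the rollout and the API registration before testing kubectl top (label and names as in the manifests above):

$ kubectl -n kube-system rollout status deployment metrics-server
$ kubectl -n kube-system get pods -l k8s-app=metrics-server
$ kubectl get apiservice v1beta1.metrics.k8s.io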

The second thing I would advise is to check your CNI pods (calico-node-d4rnh and calico-node-hwkmd): they were created 4h11m ago but are still 0/1 Ready.
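
A quick way to check why they are not Ready (pod and container names taken from your output; calico-node is the container the readiness probe runs against):

$ kubectl -n kube-system describe pod calico-node-d4rnh
$ kubectl -n kube-system logs calico-node-d4rnh -c calico-node --tail=50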

The last thing concerns gathering CPU and memory data from pods and nodes.

Using Calico

If you are using single-node kubeadm it will work correctly; however, with more than one node in kubeadm it causes issues, and there are many similar threads about this on Github. I tried various flags in args:, but with no success. In the metrics-server logs (-v=6) you can see that metrics are being gathered. In this Github thread, one of the users posted an answer that works around the issue; it is also mentioned in the K8s docs about hostNetwork.

Adding hostNetwork: true is what finally got metrics-server working for me. Without it, nada. Without the kubelet-preferred-address-types line, I could query my master node but not my two worker nodes, nor could I query pods, which is obviously undesirable. Lack of kubelet-insecure-tls also results in an inoperable metrics-server installation.

spec:
  hostNetwork: true
  containers:
  - args:
    - --kubelet-insecure-tls
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP
    - --v=6
    image: k8s.gcr.io/metrics-server-amd64:v0.3.6
    imagePullPolicy: Always

If you deploy with this configuration (whichever way you apply it), it will work:

$ kubectl describe apiservice v1beta1.metrics.k8s.io
Name:         v1beta1.metrics.k8s.io
...
Status:
  Conditions:
    Last Transition Time:  2020-02-20T09:37:59Z
    Message:               all checks passed
    Reason:                Passed
    Status:                True
    Type:                  Available
Events:                    <none>

In addition, you can see the difference in iptables when hostNetwork: true is used: there are many more entries compared to a deployment without this setting.
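
A rough way to compare the two states yourself (an approximate check, not a precise diagnostic; kube-proxy tags its NAT rules with the service name):

$ sudo iptables-save -t nat | wc -l
$ sudo iptables-save -t nat | grep metrics-server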

After that, you can edit the deployment and remove or comment out hostNetwork: true.

$ kubectl edit deploy metrics-server -n kube-system
deployment.apps/metrics-server edited

$ kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)   
nginx-6db489d4b7-2qhzw   0m           3Mi             
nginx-6db489d4b7-9fvrj   0m           2Mi             
nginx-6db489d4b7-dgbf9   0m           2Mi             
nginx-6db489d4b7-dvcz5   0m           2Mi   

You will also be able to query the metrics directly using:

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

For better readability you can also use jq:

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq .
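
The raw endpoint also accepts a single node or a namespaced pod (node and pod names taken from your listings above):

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/master-node | jq .
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/php-apache-774ff9d754-d7vp9 | jq .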

Using Weave Net

If you use Weave Net instead of Calico, it will work without setting hostNetwork.

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

However, you will then need to deal with certificates. If you don't care about security, you can just use --kubelet-insecure-tls as in the previous example, where Calico was used.
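
One common way to avoid --kubelet-insecure-tls altogether (a sketch, and an assumption on my part rather than something from this setup) is to let each kubelet request a serving certificate signed by the cluster CA and then approve the resulting CSRs:

# Add to the kubelet config on each node (for kubeadm clusters this is
# typically /var/lib/kubelet/config.yaml), then restart the kubelet:
#   serverTLSBootstrap: true
# Afterwards, approve the kubelet-serving CSRs (<csr-name> is a placeholder):
$ kubectl get csr
$ kubectl certificate approve <csr-name>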
