
Kubernetes 1.10.1 Not Found Metric (HPA) (Fedora 28)

I have a problem with my Kubernetes cluster on a Fedora server: one master and two nodes. The configuration of etcd, flannel, Docker, and Kubernetes works fine.

I run

kubectl  run busybox --image=busybox --port 8080  \
         -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
         env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p  8080; done"
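For reference, on this Kubernetes version the `kubectl run` command above corresponds roughly to a Deployment manifest like the following sketch. The `resources.requests.cpu` field is my addition, not something `kubectl run` sets: the HPA's `--cpu-percent` target is computed as a percentage of the pod's CPU request, so without a request the percentage target cannot be evaluated.

```yaml
# Sketch of the Deployment created by kubectl run (assumption: apps/v1 form;
# the explicit CPU request is added for the HPA percentage target).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      run: busybox
  template:
    metadata:
      labels:
        run: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m   # assumption: needed for --cpu-percent to resolve
        args: ["sh", "-c", "while true; do { echo -e 'HTTP/1.1 200 OK\\r\\n'; env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"]
```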

and this works fine:

kubectl expose deployment busybox --type=NodePort

Now I create the HPA:

kubectl autoscale deployment busybox --min=1 --max=4 --cpu-percent=20
deployment "busybox" autoscaled
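The `kubectl autoscale` command corresponds to an HPA object roughly like this (an `autoscaling/v1` sketch):

```yaml
# Sketch of the HorizontalPodAutoscaler that kubectl autoscale creates.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: busybox
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: busybox
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 20
```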

When I describe the HPA, the target metric shows <unknown>:

NAME      REFERENCE            TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
busybox   Deployment/busybox   <unknown>/20%   1         4         1          1h

I tried installing the metrics-server from https://github.com/kubernetes-incubator/metrics-server:

git clone https://github.com/kubernetes-incubator/metrics-server.git

kubectl create -f metrics-server/deploy/1.8+/

but the metrics-server pod ends up in CrashLoopBackOff:

kubectl logs metrics-server-6fbfb84cdd-5gkth --namespace=kube-system

I0618 18:23:36.725579       1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0618 18:23:36.741334       1 heapster.go:72] Metrics Server version v0.2.1
F0618 18:23:36.752641       1 heapster.go:112] Failed to create source provide: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
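The missing token file is the key clue: in this Kubernetes version the ServiceAccount admission plugin is what injects the service-account token secret into every pod. With the plugin enabled, the pod spec gains a volume and mount roughly like the following sketch (the secret name `default-token-xxxxx` is illustrative, not a real name from this cluster):

```yaml
# Sketch of what the ServiceAccount admission plugin injects into a pod.
# The secret name "default-token-xxxxx" is illustrative.
spec:
  containers:
  - name: metrics-server
    volumeMounts:
    - name: default-token-xxxxx
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: default-token-xxxxx
    secret:
      secretName: default-token-xxxxx
```

Without the plugin, no such volume is mounted and the token path does not exist.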

and

kubectl describe hpa busybox

Name:                                                  busybox
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 18 Jun 2018 12:55:28 -0400
Reference:                                             Deployment/busybox
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 20%
Min replicas:                                          1
Max replicas:                                          4
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
  Type     Reason                        Age                 From                       Message
  ----     ------                        ----                ----                       -------
  Warning  FailedComputeMetricsReplicas  1h (x13 over 1h)    horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       49m (x91 over 1h)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       44m (x9 over 48m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
  Warning  FailedComputeMetricsReplicas  33m (x13 over 39m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       4m (x71 over 39m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)

I had deleted ServiceAccount from KUBE_ADMISSION_CONTROL in /etc/kubernetes/apiserver. Without the ServiceAccount admission plugin, pods are not given a service-account token volume, which is why metrics-server cannot find /var/run/secrets/kubernetes.io/serviceaccount/token.

Solved! Below is my working configuration on Fedora 28.

Config Files

cat /etc/kubernetes/apiserver

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_ALLOW_PRIV="--allow-privileged=true"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--insecure-port=8080"

KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/16"

KUBE_ENABLE_ADMISSION_PLUGINS="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"

KUBE_DISABLE_ADMISSION_PLUGINS="--disable-admission-plugins=PersistentVolumeLabel"

KUBE_CERT_FILE="--tls-cert-file=/etc/kubernetes/cert/apiserver.crt"

KUBE_TLS_PRIVATE_KEY_FILE="--tls-private-key-file=/etc/kubernetes/cert/apiserver.key"

KUBE_CLIENT_CA_FILE="--client-ca-file=/etc/kubernetes/cert/ca.crt"

KUBE_SERVICE_ACCOUNT_KEY_FILE="--service-account-key-file=/etc/kubernetes/cert/apiserver.key"

KUBE_API_ARGS="--requestheader-client-ca-file=/etc/kubernetes/cert/ca.crt"

cat /etc/kubernetes/controller-manager

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_MASTER="--master=http://127.0.0.1:8080"

KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"

KUBE_SERVICE_ACCOUNT_PRIVATE_KEY_FILE="--service-account-private-key-file=/etc/kubernetes/cert/apiserver.key"

KUBE_ROOT_CA_FILE="--root-ca-file=/etc/kubernetes/cert/ca.crt"

KUBE_CONTROLLER_MANAGER_ARGS=""

cat /etc/kubernetes/scheduler

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_MASTER="--master=http://127.0.0.1:8080"

KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"

KUBE_SCHEDULER_ARGS=""

cat /etc/kubernetes/proxy

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_MASTER="--master=http://127.0.0.1:8080"

KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"

KUBE_PROXY_ARGS=""

cat /etc/kubernetes/kubelet

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_ALLOW_PRIV="--allow-privileged=true"

KUBELET_ADDRESS="--address=0.0.0.0"

KUBELET_PORT="--port=10250"

KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"

KUBELET_CLUSTER_DNS="--cluster-dns=10.0.0.10"

KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false"

cat /etc/kubernetes/kubeconfig

kind: Config
clusters:
- name: local
  cluster:
    server: http://192.168.0.10:8080
    certificate-authority: /etc/kubernetes/cert/ca.crt
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/cert/admin.crt
    client-key: /etc/kubernetes/cert/admin.key
contexts:
- context:
    cluster: local
    user: admin
  name: default-context
current-context: default-context

Services Files

cat /lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR\
                        $KUBE_LOG_LEVEL\
                        $KUBE_ALLOW_PRIV\
                        $KUBE_API_ADDRESS\
                        $KUBE_API_PORT\
                        $KUBE_ETCD_SERVERS\
                        $KUBE_SERVICE_ADDRESSES\
                        $KUBE_ENABLE_ADMISSION_PLUGINS\
                        $KUBE_DISABLE_ADMISSION_PLUGINS\
                        $KUBE_CERT_FILE\
                        $KUBE_TLS_PRIVATE_KEY_FILE\
                        $KUBE_CLIENT_CA_FILE\
                        $KUBE_SERVICE_ACCOUNT_KEY_FILE\
                        $KUBE_API_ARGS
ExecStartPost=/usr/bin/echo $KUBE_APISERVER_OPTS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_MASTER\
            $KUBE_CONFIG\
            $KUBE_SERVICE_ACCOUNT_PRIVATE_KEY_FILE\
            $KUBE_ROOT_CA_FILE\
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_MASTER\
            $KUBE_CONFIG\
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_MASTER\
            $KUBE_CONFIG\
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_ALLOW_PRIV\
            $KUBELET_ADDRESS\
            $KUBELET_PORT\
            $KUBE_CONFIG\
            $KUBELET_CLUSTER_DNS\
            $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Generate config files to ssl

mkdir -p /etc/kubernetes/cert/
cd /etc/kubernetes/cert/

nano openssl.cnf

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.0.0.1
IP.2 = 192.168.0.10

nano worker-openssl.cnf

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.0.10

And generate the certificates:

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 365 -out ca.crt -subj "/CN=kube-ca"


openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.crt -days 365 -extensions v3_req -extfile openssl.cnf

openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key -out kubelet.csr -subj "/CN=kubelet" -config worker-openssl.cnf
openssl x509 -req -in kubelet.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet.crt -days 365 -extensions v3_req -extfile worker-openssl.cnf

openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 365
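As a sanity check, the whole chain can be exercised end-to-end in a scratch directory. This is a self-contained sketch that mirrors the commands above with a trimmed-down openssl.cnf (the SAN entries here are illustrative), then verifies that the API server certificate chains back to the CA and carries the SANs:

```shell
#!/bin/sh
# Exercise the CA -> apiserver cert chain from the section above.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Trimmed-down stand-in for openssl.cnf (SANs are illustrative).
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
IP.1 = 10.0.0.1
EOF

# CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 365 -out ca.crt -subj "/CN=kube-ca"

# API server cert signed by the CA, with the v3_req extensions applied
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr \
        -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -out apiserver.crt -days 365 -extensions v3_req -extfile openssl.cnf

# The cert must verify against the CA and contain the SANs
openssl verify -CAfile ca.crt apiserver.crt
openssl x509 -in apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
```

If the final grep prints nothing, the `-extensions v3_req -extfile` flags were likely dropped and the HPA/metrics-server TLS setup will fail later in less obvious ways.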

Now the metrics are available:

kubectl top nodes
NAME      CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%   
fedora    241m         12%       1287Mi          68% 

and

kubectl get hpa busybox
NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
busybox   Deployment/busybox   0%/10%    1         20        1          1h

and

kubectl top pod 
NAME                      CPU(cores)   MEMORY(bytes)   
busybox-fc4b45d7d-z6ljk   0m           1Mi 

References

Comment by floreks
Cluster configuration

Thank you Sebastian Florek and Nick Rak
