
Kubernetes 1.10.1 Not Found Metric (HPA) (Fedora 28)

I'm running into some problems with a k8s cluster on Fedora servers. My setup is 1 master and 2 nodes, running etcd, flannel, Docker and Kubernetes.

I run

kubectl  run busybox --image=busybox --port 8080  \
         -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
         env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p  8080; done"

and this works fine:

kubectl expose deployment busybox --type=NodePort
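
A quick way to sanity-check the service from outside the cluster (a sketch; 192.168.0.10 is the master address from the kubeconfig below, and the NodePort is whatever was allocated):

kubectl get svc busybox
NODE_PORT=$(kubectl get svc busybox -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.0.10:$NODE_PORT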

Now:

kubectl autoscale deployment busybox --min=1 --max=4 --cpu-percent=20
deployment "busybox" autoscaled

But the HPA reports its CPU target as unknown:

NAME      REFERENCE            TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
busybox   Deployment/busybox   <unknown>/20%   1         4         1          1h
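
Independently of the metrics pipeline, an HPA with --cpu-percent needs a CPU request on the container to compute a percentage against, and the kubectl run above did not set one. A minimal way to add it (100m is an arbitrary example value):

kubectl set resources deployment busybox --requests=cpu=100m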

I tried this: https://github.com/kubernetes-incubator/metrics-server

git clone https://github.com/kubernetes-incubator/metrics-server.git

kubectl create -f metrics-server/deploy/1.8+/

But the metrics-server pod ends up in CrashLoopBackOff:

kubectl logs metrics-server-6fbfb84cdd-5gkth --namespace=kube-system

I0618 18:23:36.725579       1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0618 18:23:36.741334       1 heapster.go:72] Metrics Server version v0.2.1
F0618 18:23:36.752641       1 heapster.go:112] Failed to create source provide: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
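
That missing token is a clue that the ServiceAccount admission plugin is not injecting token volumes into pods. One way to check whether token secrets are being issued at all (with the plugin enabled and the controller-manager's --service-account-private-key-file set, every namespace should have a default token secret):

kubectl get serviceaccounts --all-namespaces
kubectl get secrets -n kube-system | grep service-account-token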

kubectl describe hpa busybox

Name:                                                  busybox
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 18 Jun 2018 12:55:28 -0400
Reference:                                             Deployment/busybox
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 20%
Min replicas:                                          1
Max replicas:                                          4
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
  Type     Reason                        Age                 From                       Message
  ----     ------                        ----                ----                       -------
  Warning  FailedComputeMetricsReplicas  1h (x13 over 1h)    horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       49m (x91 over 1h)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       44m (x9 over 48m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
  Warning  FailedComputeMetricsReplicas  33m (x13 over 39m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       4m (x71 over 39m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
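
The (get pods.metrics.k8s.io) failures can be reproduced without the HPA by querying the aggregated metrics API directly; with a healthy metrics-server, both of these return data instead of an error:

kubectl get apiservices v1beta1.metrics.k8s.io
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"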

I removed ServiceAccount from KUBE_ADMISSION_CONTROL in /etc/kubernetes/apiserver.

On Fedora 28!

Config files

cat /etc/kubernetes/apiserver

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--insecure-port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/16"
KUBE_ENABLE_ADMISSION_PLUGINS="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
KUBE_DISABLE_ADMISSION_PLUGINS="--disable-admission-plugins=PersistentVolumeLabel"
KUBE_CERT_FILE="--tls-cert-file=/etc/kubernetes/cert/apiserver.crt"
KUBE_TLS_PRIVATE_KEY_FILE="--tls-private-key-file=/etc/kubernetes/cert/apiserver.key"
KUBE_CLIENT_CA_FILE="--client-ca-file=/etc/kubernetes/cert/ca.crt"
KUBE_SERVICE_ACCOUNT_KEY_FILE="--service-account-key-file=/etc/kubernetes/cert/apiserver.key"
KUBE_API_ARGS="--requestheader-client-ca-file=/etc/kubernetes/cert/ca.crt"
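
For the API aggregation layer that metrics-server registers with, kube-apiserver generally needs more requestheader flags than just the client CA. A sketch of what KUBE_API_ARGS could carry (the proxy-client certificate paths and the CN aggregator are assumptions; that cert would have to be signed by the CA referenced in --requestheader-client-ca-file):

KUBE_API_ARGS="--requestheader-client-ca-file=/etc/kubernetes/cert/ca.crt --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.crt --proxy-client-key-file=/etc/kubernetes/cert/proxy-client.key"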

cat /etc/kubernetes/controller-manager

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"
KUBE_SERVICE_ACCOUNT_PRIVATE_KEY_FILE="--service-account-private-key-file=/etc/kubernetes/cert/apiserver.key"
KUBE_ROOT_CA_FILE="--root-ca-file=/etc/kubernetes/cert/ca.crt"
KUBE_CONTROLLER_MANAGER_ARGS=""

cat /etc/kubernetes/scheduler

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"
KUBE_SCHEDULER_ARGS=""

cat /etc/kubernetes/proxy

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"
KUBE_PROXY_ARGS=""

cat /etc/kubernetes/kubelet

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBE_CONFIG="--kubeconfig=/etc/kubernetes/kubeconfig"
KUBELET_CLUSTER_DNS="--cluster-dns=10.0.0.10"
KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false"
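
One thing worth verifying on each node: --cgroup-driver=systemd must match the driver Docker itself uses, or the kubelet will fail to start pods. A quick check:

docker info 2>/dev/null | grep -i "cgroup driver"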

cat /etc/kubernetes/kubeconfig

kind: Config
clusters:
- name: local
  cluster:
    server: http://192.168.0.10:8080
    certificate-authority: /etc/kubernetes/cert/ca.crt
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/cert/admin.crt
    client-key: /etc/kubernetes/cert/admin.key
contexts:
- context:
    cluster: local
    user: admin
  name: default-context
current-context: default-context

Service files

cat /lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ALLOW_PRIV \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBE_ETCD_SERVERS \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ENABLE_ADMISSION_PLUGINS \
            $KUBE_DISABLE_ADMISSION_PLUGINS \
            $KUBE_CERT_FILE \
            $KUBE_TLS_PRIVATE_KEY_FILE \
            $KUBE_CLIENT_CA_FILE \
            $KUBE_SERVICE_ACCOUNT_KEY_FILE \
            $KUBE_API_ARGS
ExecStartPost=/usr/bin/echo $KUBE_APISERVER_OPTS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_MASTER\
            $KUBE_CONFIG\
            $KUBE_SERVICE_ACCOUNT_PRIVATE_KEY_FILE\
            $KUBE_ROOT_CA_FILE\
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_MASTER\
            $KUBE_CONFIG\
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_MASTER\
            $KUBE_CONFIG\
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

cat /lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR\
            $KUBE_LOG_LEVEL\
            $KUBE_ALLOW_PRIV\
            $KUBELET_ADDRESS\
            $KUBELET_PORT\
            $KUBE_CONFIG\
            $KUBELET_CLUSTER_DNS\
            $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Config files for generating the SSL certificates

mkdir -p /etc/kubernetes/cert/
cd /etc/kubernetes/cert/

nano openssl.cnf

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.0.0.1
IP.2 = 192.168.0.10
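
IP.1 here must be the first address of the --service-cluster-ip-range above (10.0.0.0/16), because that is the ClusterIP assigned to the in-cluster kubernetes service that metrics-server connects to. It can be cross-checked with:

kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'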

nano worker-openssl.cnf

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.0.10

And generate the certificate files:

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 365 -out ca.crt -subj "/CN=kube-ca"


openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.crt -days 365 -extensions v3_req -extfile openssl.cnf

openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key -out kubelet.csr -subj "/CN=kubelet" -config worker-openssl.cnf
openssl x509 -req -in kubelet.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet.crt -days 365 -extensions v3_req -extfile worker-openssl.cnf

openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 365
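
To confirm the SANs actually made it into the apiserver certificate (a common cause of TLS failures when in-cluster clients reach the apiserver through the service IP):

openssl x509 -in apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"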

Now it works:

kubectl top nodes
NAME      CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%   
fedora    241m         12%       1287Mi          68% 

kubectl get hpa busybox
NAME      REFERENCE            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
busybox   Deployment/busybox   0%/10%    1         20        1          1h

kubectl top pod 
NAME                      CPU(cores)   MEMORY(bytes)   
busybox-fc4b45d7d-z6ljk   0m           1Mi 
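
With the pod idle at 0m CPU the HPA will stay at 1 replica; to watch it actually scale, something has to generate load. A throwaway load generator in the style of the official HPA walkthrough (assuming cluster DNS resolves the busybox service name):

kubectl run load-generator --image=busybox --restart=Never \
    -- /bin/sh -c "while true; do wget -q -O- http://busybox:8080; done"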

References

Cluster configuration from the comments by floreks

Thanks Sebastian Florek and Nick Rak
