kubernetes node doesn't get registered

I'm trying to run kubernetes 1.5.2 on Container Linux by CoreOS alpha (1284.2.0) using rkt.

I have two coreos servers: one (controller+worker) with hostname coreos-2.tux-in.com, and a second one that will be a worker with hostname coreos-3.tux-in.com.

For now I'm installing the controller+worker on coreos-2.tux-in.com.

In general I followed the instructions in https://coreos.com/kubernetes/docs/latest/ and added some modifications.

Instead of using the deprecated --api-server parameter, I use a kubeconfig.
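
For context, the old approach passed the API endpoint directly as a flag, while the new one points everything at the kubeconfig file, roughly like this (sketch only):

# old, deprecated style (the kubelet's --api-servers flag)
--api-servers=http://127.0.0.1:8080

# kubeconfig style used here
--kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml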

The problem I'm having is that the kube-proxy pod fails with the following error messages:

Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.477192] kube-proxy[5]: E0114 23:27:34.900184       5 server.go:421] Can't get Node "coreos-2.tux-in.com", assuming iptables proxy, err: nodes "coreos-2.tux-in.com" not found
Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.479181] kube-proxy[5]: I0114 23:27:34.902440       5 server.go:215] Using iptables Proxier.
Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.480503] kube-proxy[5]: W0114 23:27:34.903771       5 server.go:468] Failed to retrieve node info: nodes "coreos-2.tux-in.com" not found
Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.481175] kube-proxy[5]: F0114 23:27:34.903829       5 server.go:222] Unable to create proxier: can't set sysctl net/ipv4/conf/all/route_localnet: open /proc/sys/net/ipv4/conf/all/route_localnet: read-only file system
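
The last line is the fatal one: kube-proxy sees a read-only /proc/sys inside its container. As a quick diagnostic (a sketch, not a fix), the sysctl it tries to set can be checked from the host itself:

# run on the host (coreos-2.tux-in.com)
sysctl net.ipv4.conf.all.route_localnet
sudo sysctl -w net.ipv4.conf.all.route_localnet=1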

The kubeconfig is located at /etc/kubernetes/controller-kubeconfig.yaml with the following content:

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: tuxin-coreos-cluster
contexts:
- context:
    cluster: tuxin-coreos-cluster
  name: tuxin-coreos-context
preferences:
  colors: true
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/apiserver.pem
    client-key: /etc/kubernetes/ssl/apiserver-key.pem
current-context: tuxin-coreos-context
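
As a sanity check, the same kubeconfig can be exercised directly with kubectl on the controller; it talks to the insecure 127.0.0.1:8080 endpoint the file points at (sketch):

kubectl --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml get nodes
kubectl --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml get pods -n kube-system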

This is the manifest for kube-apiserver:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://127.0.0.1:4001
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --advertise-address=10.79.218.2
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true
    - --anonymous-auth=false
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
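
Since the manifest exposes the insecure port on 127.0.0.1:8080 (the same endpoint the livenessProbe hits), a quick way to confirm the apiserver itself is up is:

curl http://127.0.0.1:8080/healthz
curl http://127.0.0.1:8080/api/v1/nodes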

And this is the manifest for kube-proxy:

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/kubernetes/controller-kubeconfig.yaml
      name: "kubeconfig"
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: "etc-kube-ssl"
      readOnly: true
    - mountPath: /var/run/dbus
      name: dbus
      readOnly: false
  volumes:
  - name: "ssl-certs"
    hostPath:
      path: "/usr/share/ca-certificates"
  - name: "kubeconfig"
    hostPath:
      path: "/etc/kubernetes/controller-kubeconfig.yaml"
  - name: "etc-kube-ssl"
    hostPath:
      path: "/etc/kubernetes/ssl"
  - hostPath:
      path: /var/run/dbus
    name: dbus
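
Once kube-proxy gets past startup it programs KUBE-* chains into iptables, so an empty result from the following check on the host means it never got that far (diagnostic sketch):

iptables-save | grep KUBE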

/etc/kubernetes/manifests also includes canal, kube-controller-manager, kube-scheduler and kubernetes-dashboard.

I have kubectl on my desktop configured with the following at ~/.kube/config:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/ca.pem
    server: https://coreos-2.tux-in.com
  name: tuxin-coreos-cluster
contexts:
- context:
    cluster: tuxin-coreos-cluster
    user: default-admin
  name: tuxin-coreos-context
current-context: tuxin-coreos-context
kind: Config
preferences: {}
users:
- name: default-admin
  user:
    username: kubelet
    client-certificate: /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/client.pem
    client-key: /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/client-key.pem

And when I execute kubectl get nodes I get No resources found.

So somehow the current node is not registered...
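
From the desktop, one way to tell "apiserver reachable but no nodes registered" apart from a plain connectivity problem is to query something other than nodes with the same ~/.kube/config (sketch):

kubectl cluster-info
kubectl get componentstatuses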

This is my kubelet.service file:

[Service]
Environment=KUBELET_IMAGE_TAG=v1.5.2_coreos.0
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log \
  --volume dns,kind=host,source=/etc/resolv.conf \
  --mount volume=dns,target=/etc/resolv.conf \
  --volume cni-bin,kind=host,source=/opt/cni/bin \
  --mount volume=cni-bin,target=/opt/cni/bin \
  --volume rkt,kind=host,source=/opt/bin/host-rkt \
  --mount volume=rkt,target=/usr/bin/rkt \
  --volume var-lib-rkt,kind=host,source=/var/lib/rkt \
  --mount volume=var-lib-rkt,target=/var/lib/rkt \
  --volume stage,kind=host,source=/tmp \
  --mount volume=stage,target=/tmp"
ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml \
  --register-schedulable=false \
  --network-plugin=cni \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --hostname-override=coreos-2.tux-in.com \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

I have --hostname-override=coreos-2.tux-in.com set, so I'd expect it to register the node, but it doesn't.
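
The kubelet logs its registration attempts to the journal, so a first place to look is something like this (the exact messages vary by version):

journalctl -u kubelet --no-pager | grep -i node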

What do I do from here?

It turned out I needed to add the --require-kubeconfig parameter to the kubelet-wrapper invocation in kubelet.service. This tells the kubelet to configure the API server from the kubeconfig file.
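
For completeness, this is the ExecStart from the unit above with only that flag added (everything else unchanged):

ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml \
  --require-kubeconfig \
  --register-schedulable=false \
  --network-plugin=cni \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --hostname-override=coreos-2.tux-in.com \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local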
