
kube-controller-manager outputs an error "cannot change NodeName"

I use Kubernetes on AWS with CoreOS and a flannel VLAN network (following this guide: https://coreos.com/kubernetes/docs/latest/getting-started.html ). The k8s version is 1.4.6.

I have the following node-exporter DaemonSet:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
spec:
  template:
    metadata:
      labels:
        app: node-exporter
        tier: monitor
        category: platform
      name: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:0.12.0
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
      hostNetwork: true
      hostPID: true

When I run this, kube-controller-manager repeatedly outputs an error as below:

E1117 18:31:23.197206       1 endpoints_controller.go:513]
Endpoints "node-exporter" is invalid:
[subsets[0].addresses[0].nodeName: Forbidden: Cannot change NodeName for 172.17.64.5 to ip-172-17-64-5.ec2.internal,
subsets[0].addresses[1].nodeName: Forbidden: Cannot change NodeName for 172.17.64.6 to ip-172-17-64-6.ec2.internal,
subsets[0].addresses[2].nodeName: Forbidden: Cannot change NodeName for 172.17.80.5 to ip-172-17-80-5.ec2.internal,
subsets[0].addresses[3].nodeName: Forbidden: Cannot change NodeName for 172.17.80.6 to ip-172-17-80-6.ec2.internal,
subsets[0].addresses[4].nodeName: Forbidden: Cannot change NodeName for 172.17.96.6 to ip-172-17-96-6.ec2.internal]

Just for information: despite this error message, node_exporter is accessible at e.g. 172.17.96.6:9100. My nodes are in a private network, including the k8s master.
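
The Endpoints object named in the error can be inspected directly to see the offending nodeName values (purely diagnostic; the Endpoints object shares the Service's name):

kubectl get endpoints node-exporter -o yaml
# compare subsets[].addresses[].nodeName against the names reported by:
kubectl get nodes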

But these log lines are emitted so often that they drown out other messages in our log console. How can I resolve this error?

Because I built my k8s cluster from scratch, the cloud-provider=aws flag was not enabled at first; I turned it on recently, but I am not sure whether that is related to this issue.
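
It probably is related: with cloud-provider=aws enabled, kubelets register nodes under their EC2 private DNS names (e.g. ip-172-17-64-5.ec2.internal) instead of plain IPs, which is exactly the rename the endpoints controller refuses to make. A quick way to check the registered names (my own diagnostic, not from the guide):

kubectl get nodes -o wide
# after enabling cloud-provider=aws, node names show up as *.ec2.internal rather than 172.17.x.x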

It looks like this is caused by another of my manifest files:

apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
  annotations:
    prometheus.io/scrape: 'true'
spec:
  clusterIP: None
  ports:
  - name: scrape
    port: 9100
    protocol: TCP
  selector:
    app: node-exporter
  type: ClusterIP

I thought this Service was necessary to expose the node-exporter DaemonSet above, but it can apparently introduce some sort of conflict when hostNetwork: true is set in the DaemonSet (actually, pod) manifest. I'm not 100% certain, though: after I deleted this Service the error disappeared, and I can still access 172.17.96.6:9100 from outside the k8s cluster.
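
If the headless Service is still needed (e.g. for Prometheus service discovery via the prometheus.io/scrape annotation), an alternative I have not verified is to keep the Service and delete only the stale Endpoints object once, so the endpoints controller recreates it with the new node names:

kubectl delete endpoints node-exporter
# the endpoints controller rebuilds this object from the Service selector with current nodeName values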

I just followed this post when setting up prometheus and node-exporter: https://coreos.com/blog/prometheus-and-kubernetes-up-and-running.html

In case others face the same problem, I'm leaving my comment here.
