
Kubernetes: External-IP is pending for some services (LoadBalancer) on AKS

I have a k8s template for deploying pods and services. I am using this template to deploy different services on AKS based on some parameters (different names, labels).

Some services got their external IPs, but a few services' external IPs are always stuck in the pending state:

NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                       AGE
service/ca1st-orgc            LoadBalancer   10.0.25.227    <pending>      7054:30907/TCP                                                17m
service/ca1st-orgc-db-mysql   LoadBalancer   10.0.97.81     52.13.67.9     3306:31151/TCP                                                17m
service/kafka1st              ClusterIP      10.0.15.90     <none>         9092/TCP,9093/TCP                                             17m
service/kafka2nd              ClusterIP      10.0.17.22     <none>         9092/TCP,9093/TCP                                             17m
service/kafka3rd              ClusterIP      10.0.02.07     <none>         9092/TCP,9093/TCP                                             17m
service/kubernetes            ClusterIP      10.0.0.1       <none>         443/TCP                                                       20m
service/orderer1st-orgc       LoadBalancer   10.0.17.19     <pending>      7050:30971/TCP                                                17m
service/orderer2nd-orgc       LoadBalancer   10.0.02.15     13.06.27.31    7050:31830/TCP                                                17m
service/peer1st-orga          LoadBalancer   10.0.10.19     <pending>      7051:31402/TCP,7052:32368/TCP,7053:31786/TCP,5984:30721/TCP   17m
service/peer1st-orgb          LoadBalancer   10.0.218.48    13.06.25.13    7051:31892/TCP,7052:30326/TCP,7053:31419/TCP,5984:31882/TCP   17m
service/peer2nd-orga          LoadBalancer   10.0.86.64     <pending>      7051:30590/TCP,7052:31870/TCP,7053:30362/TCP,5984:30036/TCP   17m
service/peer2nd-orgb          LoadBalancer   10.0.195.212   52.13.58.3     7051:30476/TCP,7052:30091/TCP,7053:30099/TCP,5984:32614/TCP   17m
service/zookeeper1st          ClusterIP      10.0.57.192    <none>         2888/TCP,3888/TCP,2181/TCP                                    17m
service/zookeeper2nd          ClusterIP      10.0.174.25    <none>         2888/TCP,3888/TCP,2181/TCP                                    17m
service/zookeeper3rd          ClusterIP      10.0.210.166   <none>         2888/TCP,3888/TCP,2181/TCP                                    17m

Interestingly, the same template is used to deploy all the related services. For example, the services prefixed with peer are all deployed by the same template.

Has anyone run into this?

Deployment template for the orderer pods

apiVersion: v1
kind: Pod
metadata:
  name: {{ orderer.name }}
  labels:
    k8s-app: {{ orderer.name }}
    type: orderer
{% if (project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %}
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /metrics
    prometheus.io/port: '8443'
    prometheus.io/scheme: 'http'
{% endif %}
spec:
{% if creds %}
  imagePullSecrets:
  - name: regcred
{% endif %}
  restartPolicy: OnFailure
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: fabriccerts
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: type
              operator: In
              values:
                - orderer
          topologyKey: kubernetes.io/hostname
  containers:
    - name: {{ orderer.name }}
      image: {{ fabric.repo.url }}fabric-orderer:{{ fabric.baseimage_tag }}
{% if 'latest' in project_version or 'stable' in project_version %}
      imagePullPolicy: Always
{% else %}
      imagePullPolicy: IfNotPresent
{% endif %}
      env:
{% if project_version is version('1.3.0','<') %}
        - { name: "ORDERER_GENERAL_LOGLEVEL", value: "{{ fabric.logging_level | default('ERROR') | lower }}" }
{% elif project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version %}
        - { name: "FABRIC_LOGGING_SPEC", value: "{{ fabric.logging_level | default('ERROR') | lower }}" }
{% endif %}
        - { name: "ORDERER_GENERAL_LISTENADDRESS", value: "0.0.0.0" }
        - { name: "ORDERER_GENERAL_GENESISMETHOD", value: "file" }
        - { name: "ORDERER_GENERAL_GENESISFILE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/genesis.block" }
        - { name: "ORDERER_GENERAL_LOCALMSPID", value: "{{ orderer.org }}" }
        - { name: "ORDERER_GENERAL_LOCALMSPDIR", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/msp" }
        - { name: "ORDERER_GENERAL_TLS_ENABLED", value: "{{ tls | lower }}" }
{% if tls %}
        - { name: "ORDERER_GENERAL_TLS_PRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" }
        - { name: "ORDERER_GENERAL_TLS_CERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" }
        - { name: "ORDERER_GENERAL_TLS_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" }
{% endif %}
{% if (project_version is version_compare('2.0.0','>=') or ('stable' in project_version or 'latest' in project_version)) and fabric.consensus_type is defined and fabric.consensus_type == 'etcdraft' %}
        - { name: "ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" }
        - { name: "ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" }
        - { name: "ORDERER_GENERAL_CLUSTER_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" }
{% elif fabric.consensus_type | default('kafka') == 'kafka' %}
        - { name: "ORDERER_KAFKA_RETRY_SHORTINTERVAL", value: "1s" }
        - { name: "ORDERER_KAFKA_RETRY_SHORTTOTAL", value: "30s" }
        - { name: "ORDERER_KAFKA_VERBOSE", value: "true" }
{% endif %}
{% if mutualtls %}
{% if project_version is version('1.1.0','>=') or 'stable' in project_version or 'latest' in project_version %}
        - { name: "ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED", value: "true" }
{% else %}
        - { name: "ORDERER_GENERAL_TLS_CLIENTAUTHENABLED", value: "true" }
{% endif %}
        - { name: "ORDERER_GENERAL_TLS_CLIENTROOTCAS", value: "[{{ rootca | list | join (", ")}}]" }
{% endif %}
{% if (project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %}
        - { name: "ORDERER_OPERATIONS_LISTENADDRESS", value: ":8443" }
        - { name: "ORDERER_OPERATIONS_TLS_ENABLED", value: "false" }
        - { name: "ORDERER_METRICS_PROVIDER", value: "prometheus" }
{% endif %}
{% if fabric.orderersettings is defined and fabric.orderersettings.ordererenv is defined %}
{% for pkey, pvalue in fabric.orderersettings.ordererenv.items() %}
        - { name: "{{ pkey }}", value: "{{ pvalue }}" }
{% endfor %}
{% endif %}
{% include './resource.j2' %}
      volumeMounts:
        - { mountPath: "/etc/hyperledger/fabric/artifacts", name: "task-pv-storage" }
      command: ["orderer"]

Deployment configuration for the LoadBalancer service

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: {{ orderer.name }}
  name: {{ orderer.name }}
spec:
  selector:
    k8s-app: {{ orderer.name }}
{% if fabric.k8s.exposeserviceport %}
  type: LoadBalancer
{% endif %}
  ports:
    - name: port1
      port: 7050
{% if fabric.metrics is defined and fabric.metrics %}
    - name: scrapeport
      port: 8443
{% endif %}

Interestingly, for the services which did not get an external IP, I do not see any events (when running kubectl describe service orderer1st-orgc):

Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
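Since the failure for a pending LoadBalancer on AKS is usually reported by the cloud controller rather than by the Service object itself, cluster-wide events and the Azure side may be worth checking. A rough sketch of diagnostic commands, with placeholder names for the resource group, cluster, and region:

# Look for cloud-provider errors across all namespaces
kubectl get events --all-namespaces --sort-by=.lastTimestamp

# Find the managed node resource group (usually MC_<rg>_<cluster>_<region>)
az aks show -g <resource-group> -n <cluster-name> --query nodeResourceGroup -o tsv

# Check how many public IPs were actually provisioned there
az network public-ip list -g <node-resource-group> -o table

# Check the regional usage/quota for public IP addresses
az network list-usages --location <region> -o table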

Please share your thoughts.

Something went wrong in my cluster. I am not sure what it was, but the same set of LoadBalancers never managed to get their public IPs, no matter how many times I cleaned up all the PVCs, services, and pods. I deleted the cluster and created a new one. In the new cluster, everything worked as expected.

All the LoadBalancers got their public IPs.
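For reference, a minimal sketch of the recreate step with the Azure CLI, assuming placeholder resource group and cluster names:

# Delete the broken cluster (this also removes its managed node resource group)
az aks delete -g <resource-group> -n <cluster-name> --yes --no-wait

# Recreate it, adjusting node count and size to match the original setup
az aks create -g <resource-group> -n <cluster-name> --node-count 3 --generate-ssh-keys

# Point kubectl at the new cluster, then redeploy the services
az aks get-credentials -g <resource-group> -n <cluster-name>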

The main cause may be that an existing load balancer is still running. So delete it and start a new one. In my case, that worked.

Delete the load balancer and retry.
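A sketch of how one might locate the load balancer that AKS manages, with placeholder names (the LB backing LoadBalancer services typically lives in the node resource group and is named kubernetes):

# The node resource group holds the cluster's Azure networking resources
az aks show -g <resource-group> -n <cluster-name> --query nodeResourceGroup -o tsv

# List the load balancers in that group
az network lb list -g <node-resource-group> -o table

# See which frontend IP configurations were actually created for the services
az network lb frontend-ip list -g <node-resource-group> --lb-name kubernetes -o table

Instead of deleting the Azure load balancer directly, deleting and re-applying the Kubernetes Service (kubectl delete service <name>, then kubectl apply) also forces the cloud controller to rebuild the corresponding frontend IP and rules.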
