
Failed to open topo server on vitess with etcd

I'm running a simple example with Helm. Take a look at the values.yaml file below:

cat << EOF | helm install helm/vitess -n vitess -f -
topology:
  cells:
    - name: 'zone1'
      keyspaces:
        - name: 'vitess'
          shards:
            - name: '0'
              tablets:
                - type: 'replica'
                  vttablet:
                    replicas: 1
      mysqlProtocol:
        enabled: true
        authType: secret
        username: vitess
        passwordSecret: vitess-db-password
      etcd:
        replicas: 3
      vtctld:
        replicas: 1
      vtgate:
        replicas: 3

vttablet:
  dataVolumeClaimSpec:
    storageClassName: nfs-slow
EOF

Take a look at the output of the currently running pods below:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY   STATUS                  RESTARTS   AGE
kube-system   coredns-fb8b8dccf-8f5kt                  1/1     Running                 0          32m
kube-system   coredns-fb8b8dccf-qbd6c                  1/1     Running                 0          32m
kube-system   etcd-master1                             1/1     Running                 0          32m
kube-system   kube-apiserver-master1                   1/1     Running                 0          31m
kube-system   kube-controller-manager-master1          1/1     Running                 0          32m
kube-system   kube-flannel-ds-amd64-bkg9z              1/1     Running                 0          32m
kube-system   kube-flannel-ds-amd64-q8vh4              1/1     Running                 0          32m
kube-system   kube-flannel-ds-amd64-vqmnz              1/1     Running                 0          32m
kube-system   kube-proxy-bd8mf                         1/1     Running                 0          32m
kube-system   kube-proxy-nlc2b                         1/1     Running                 0          32m
kube-system   kube-proxy-x7cd5                         1/1     Running                 0          32m
kube-system   kube-scheduler-master1                   1/1     Running                 0          32m
kube-system   tiller-deploy-8458f6c667-cx2mv           1/1     Running                 0          27m
vitess        etcd-global-6pwvnv29th                   0/1     Init:0/1                0          16m
vitess        etcd-operator-84db9bc774-j4wml           1/1     Running                 0          30m
vitess        etcd-zone1-zwgvd7spzc                    0/1     Init:0/1                0          16m
vitess        vtctld-86cd78b6f5-zgfqg                  0/1     CrashLoopBackOff        7          16m
vitess        vtgate-zone1-58744956c4-x8ms2            0/1     CrashLoopBackOff        7          16m
vitess        zone1-vitess-0-init-shard-master-mbbph   1/1     Running                 0          16m
vitess        zone1-vitess-0-replica-0                 0/6     Init:CrashLoopBackOff   7          16m
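
Both etcd pods (etcd-global-6pwvnv29th and etcd-zone1-zwgvd7spzc) are stuck in Init:0/1, so my first step would be to look at their init containers. A rough sketch of the checks (the exact init container name comes from the describe output, so it is left as a placeholder here):

# Show pod events and the names/status of the init containers
$ kubectl describe pod -n vitess etcd-global-6pwvnv29th

# Logs of a specific init container (name taken from the describe output above)
$ kubectl logs -n vitess etcd-global-6pwvnv29th -c <init-container-name>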

Checking the logs, I see this error:

$ kubectl logs -n vitess vtctld-86cd78b6f5-zgfqg
++ cat
+ eval exec /vt/bin/vtctld '-cell="zone1"' '-web_dir="/vt/web/vtctld"' '-web_dir2="/vt/web/vtctld2/app"' -workflow_manager_init -workflow_manager_use_election -logtostderr=true -stderrthreshold=0 -port=15000 -grpc_port=15999 '-service_map="grpc-vtctl"' '-topo_implementation="etcd2"' '-topo_global_server_address="etcd-global-client.vitess:2379"' -topo_global_root=/vitess/global
++ exec /vt/bin/vtctld -cell=zone1 -web_dir=/vt/web/vtctld -web_dir2=/vt/web/vtctld2/app -workflow_manager_init -workflow_manager_use_election -logtostderr=true -stderrthreshold=0 -port=15000 -grpc_port=15999 -service_map=grpc-vtctl -topo_implementation=etcd2 -topo_global_server_address=etcd-global-client.vitess:2379 -topo_global_root=/vitess/global
ERROR: logging before flag.Parse: E0422 02:35:34.020928       1 syslogger.go:122] can't connect to syslog
F0422 02:35:39.025400       1 server.go:221] Failed to open topo server (etcd2,etcd-global-client.vitess:2379,/vitess/global): grpc: timed out when dialing
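
The vtctld failure is a dial timeout against etcd-global-client.vitess:2379, which matches the etcd pods never becoming ready. To double-check reachability from inside the cluster, a quick sketch using a throwaway busybox pod (assuming the service exists; etcd normally serves a /health endpoint on the client port):

$ kubectl run -n vitess etcd-check --rm -it --restart=Never --image=busybox -- sh
/ # nslookup etcd-global-client.vitess
/ # wget -qO- -T 3 http://etcd-global-client.vitess:2379/health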

I'm running this under Vagrant with 1 master and 2 nodes. I suspect it is an issue with eth1.
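
On Vagrant, flannel tends to bind to the NAT interface (eth0) instead of the host-only eth1, which breaks pod-to-pod traffic across nodes. To see which interface flanneld picked, and where the commonly suggested --iface override would go (just a sketch, not yet confirmed to be the problem here):

$ kubectl logs -n kube-system kube-flannel-ds-amd64-bkg9z | grep -i "Using interface"

# If it reports eth0, the usual workaround is adding --iface=eth1 to the
# flanneld args in the kube-flannel DaemonSet, e.g.:
#   args:
#     - --ip-masq
#     - --kube-subnet-mgr
#     - --iface=eth1
$ kubectl -n kube-system edit daemonset kube-flannel-ds-amd64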

The storage is configured to use NFS.
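
Since the vttablet data volume claims the nfs-slow storage class, I would also rule out pending PVCs, although nothing above points at storage directly:

$ kubectl get storageclass nfs-slow
$ kubectl get pvc -n vitess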

$ kubectl logs etcd-operator-84db9bc774-j4wml
time="2019-04-22T17:26:51Z" level=info msg="skip reconciliation: running ([]), pending ([etcd-zone1-zwgvd7spzc])" cluster-name=etcd-zone1 cluster-namespace=vitess pkg=cluster
time="2019-04-22T17:26:51Z" level=info msg="skip reconciliation: running ([]), pending ([etcd-zone1-zwgvd7spzc])" cluster-name=etcd-global cluster-namespace=vitess pkg=cluster

It appears that etcd is not fully initializing. Note that neither the pod for the global lockserver (etcd-global-6pwvnv29th) nor the local one for cell zone1 (etcd-zone1-zwgvd7spzc) is ready.
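
Because neither etcd pod is ready, the client Services presumably have no endpoints, which would explain the dial timeouts from vtctld and vtgate. A quick way to confirm (etcd-global-client comes from the error message; etcd-zone1-client is only a guess based on the same naming pattern):

$ kubectl get svc -n vitess
$ kubectl get endpoints -n vitess etcd-global-client etcd-zone1-client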
