
Kubernetes cluster on bare metal by kubeadm

I am trying to create a single control-plane cluster with kubeadm on 3 bare-metal nodes (1 master and 2 workers) running Debian 10, with Docker as the container runtime. Each node has an external IP and an internal IP. I want to configure the cluster on the internal network and have it reachable from the Internet. I used this command for that (please correct me if something is wrong):

kubeadm init --control-plane-endpoint=10.10.0.1 --apiserver-cert-extra-sans={public_DNS_name},10.10.0.1 --pod-network-cidr=192.168.0.0/16
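For reference, the same flags can be expressed as a kubeadm configuration file. This is a sketch for kubeadm v1.18 (API version `v1beta2`); `public.dns.name` stands in for the `{public_DNS_name}` placeholder from the command above:

```yaml
# Hypothetical kubeadm config equivalent to the flags above (v1.18 / v1beta2)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "10.10.0.1"
apiServer:
  certSANs:
    - "public.dns.name"         # placeholder — substitute the real public DNS name
    - "10.10.0.1"
networking:
  podSubnet: "192.168.0.0/16"   # must match the CIDR the CNI add-on expects
```

It would then be applied with `kubeadm init --config <file>.yaml` instead of the individual flags.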

I got:

kubectl get no -o wide
NAME                           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
dev-k8s-master-0.public.dns    Ready    master   16h   v1.18.2   10.10.0.1     <none>        Debian GNU/Linux 10 (buster)   4.19.0-8-amd64   docker://19.3.8

The init phase completed successfully and the cluster is reachable from the Internet. All pods are up and running, except coredns, which is expected to start running only after a network add-on is applied.

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

After applying the network add-on, the coredns pods are still not ready:

kubectl get po -A
NAMESPACE     NAME                                                   READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-75d56dfc47-g8g9g               0/1     CrashLoopBackOff   192        16h
kube-system   calico-node-22gtx                                      1/1     Running            0          16h
kube-system   coredns-66bff467f8-87vd8                               0/1     Running            0          16h
kube-system   coredns-66bff467f8-mv8d9                               0/1     Running            0          16h
kube-system   etcd-dev-k8s-master-0                                  1/1     Running            0          16h
kube-system   kube-apiserver-dev-k8s-master-0                        1/1     Running            0          16h
kube-system   kube-controller-manager-dev-k8s-master-0               1/1     Running            0          16h
kube-system   kube-proxy-lp6b8                                       1/1     Running            0          16h
kube-system   kube-scheduler-dev-k8s-master-0                        1/1     Running            0          16h

Some logs from the failing pods:

kubectl -n kube-system logs calico-kube-controllers-75d56dfc47-g8g9g
2020-04-22 08:24:55.853 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
2020-04-22 08:24:55.855 [INFO][1] k8s.go 228: Using Calico IPAM
W0422 08:24:55.855525       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2020-04-22 08:24:55.856 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-04-22 08:25:05.857 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2020-04-22 08:25:05.857 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded

coredns:

[INFO] plugin/ready: Still waiting on: "kubernetes"
I0422 08:29:12.275344       1 trace.go:116] Trace[1050055850]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.274382393 +0000 UTC m=+59491.429700922) (total time: 30.000897581s):
Trace[1050055850]: [30.000897581s] [30.000897581s] END
E0422 08:29:12.275388       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.276163       1 trace.go:116] Trace[188478428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.275499997 +0000 UTC m=+59491.430818380) (total time: 30.000606394s):
Trace[188478428]: [30.000606394s] [30.000606394s] END
E0422 08:29:12.276198       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.277424       1 trace.go:116] Trace[16697023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.276675998 +0000 UTC m=+59491.431994406) (total time: 30.000689778s):
Trace[16697023]: [30.000689778s] [30.000689778s] END
E0422 08:29:12.277452       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"

Any ideas what is wrong?

This answer is to draw attention to @florin's suggestion:

I've seen similar behavior when I had multiple public interfaces on the node and calico selected the wrong one.

What I did is to set IP_AUTODETECTION_METHOD in the calico configuration.

The method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See IP autodetection methods for details of the valid methods.

Read more here: https://docs.projectcalico.org/reference/node/configuration#ip-autodetection-methods
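As a concrete sketch (assuming the internal NIC on the nodes is named `eth1` — an assumption, adjust to your environment), the variable goes into the `calico-node` container's env in the calico DaemonSet:

```yaml
# Sketch: env entry for the calico-node container in the kube-system DaemonSet
# (the interface name eth1 is an assumption; adjust for your hosts)
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth1"
# or select whichever interface can reach the internal master address:
# value: "can-reach=10.10.0.1"
```

After editing the DaemonSet, the calico-node pods restart and should then pick the internal interface instead of the public one.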
