
How to create Kubernetes cluster with multiple nodes on Windows

All Kubernetes forums and articles ask you to work with minikube, which only gives you a single-node Kubernetes cluster.

What options are available for working with a multi-node Kubernetes cluster in a Windows environment?

The problem is that a Windows node may only act as a worker node. You can only create a hybrid cluster and have Windows workloads running in Windows pods, talking to Linux workloads running in Linux pods.
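For example, in such a hybrid cluster each workload is normally pinned to the right OS with a nodeSelector on the kubernetes.io/os node label (older clusters expose it as beta.kubernetes.io/os). A minimal sketch, where the image and keep-alive command are only placeholders:

    # Schedule a Windows workload onto a Windows worker node
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: win-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows      # use "linux" for pods that must land on Linux workers
      containers:
      - name: app
        image: mcr.microsoft.com/windows/servercore:ltsc2019   # placeholder image
        command: ["ping", "-t", "localhost"]                    # placeholder keep-alive command
    EOF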

Intro to Windows support in Kubernetes:

The Kubernetes control plane, including the master components, continues to run on Linux. There are no plans to have a Windows-only Kubernetes cluster.

The full list of limitations can be found in the official docs.

Control Plane limitations:

Windows is only supported as a worker node in the Kubernetes architecture and component matrix. This means that a Kubernetes cluster must always include Linux master nodes, zero or more Linux worker nodes, and zero or more Windows worker nodes.
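To see how the nodes split between the two operating systems, the OS label can be shown as an extra column; a quick sketch assuming kubectl points at the hybrid cluster:

    # List nodes with their roles and operating-system label
    kubectl get nodes -L kubernetes.io/os -o wide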

Resource management and process isolation:

Linux cgroups are used as a pod boundary for resource controls in Linux. Containers are created within that boundary for network, process and file system isolation. The cgroups APIs can be used to gather cpu/io/memory stats. In contrast, Windows uses a Job object per container with a system namespace filter to contain all processes in a container and provide logical isolation from the host. There is no way to run a Windows container without the namespace filtering in place. This means that system privileges cannot be asserted in the context of the host, and thus privileged containers are not available on Windows. Containers cannot assume an identity from the host because the Security Account Manager (SAM) is separate.
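On a Linux worker you can peek at the same cgroup accounting files directly; a rough sketch assuming a cgroup v1 layout as on default CentOS/RHEL 8 installs (paths differ under cgroup v2 and depending on the cgroup driver):

    # Root-level memory and CPU accounting exposed by the cgroup v1 controllers
    cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # bytes of memory currently charged
    cat /sys/fs/cgroup/cpuacct/cpuacct.usage          # cumulative CPU time, in nanoseconds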

On my Windows 10 laptop, I used VirtualBox to create 2 Ubuntu VMs (each VM: 3 GB RAM and a 50 GB dynamically sized virtual disk). I used microk8s from https://microk8s.io . Installation is a very simple one-liner on each VM: sudo snap install microk8s --classic
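After the snap install it is worth waiting until the node reports ready, and optionally letting your user run microk8s without sudo; a short sketch based on the commonly documented microk8s snap commands:

    sudo snap install microk8s --classic                   # one-line install, as above
    sudo usermod -a -G microk8s $USER && newgrp microk8s   # optional: run microk8s without sudo
    microk8s status --wait-ready                           # block until all services are up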

Follow the instructions at https://microk8s.io/docs/clustering ... one VM becomes the master k8s node and the other VM becomes a worker node joined to the master.
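The clustering itself boils down to two commands; a sketch where the address, port, and token are placeholders for whatever add-node actually prints on your master VM:

    # On the master VM: generate a join command for a new node
    microk8s add-node          # prints something like: microk8s join <master-ip>:25000/<token>

    # On the worker VM: paste the printed command verbatim
    microk8s join <master-ip>:25000/<token>

    # Back on the master: both VMs should now be listed
    microk8s kubectl get nodes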

Once that is set up, you may want to set up an alias like: alias k='microk8s.kubectl'. Then you can simply do: k apply -f ...
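To make the alias stick across sessions, it can be appended to the shell profile; a minimal sketch for bash:

    echo "alias k='microk8s.kubectl'" >> ~/.bashrc   # persist the shortcut
    source ~/.bashrc
    k get nodes                                      # same as: microk8s.kubectl get nodes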

I was able to create a multi-node Kubernetes cluster on my Windows machine using Oracle VirtualBox virtual machines.

Hope this might help. I created 4 CentOS 8 VMs within VirtualBox hosted on Windows 10. Among the 4 VMs, one is set up as the master and the rest as worker nodes.

Below is my step-by-step setup procedure.

  1. Preparation

    1.1 Preparation for the basic VM template (node-master-centOS-1)

     1.1.1 (Host) Download the CentOS 8 image (CentOS-8.1.1911-x86_64-dvd1.iso) from http://isoredirect.centos.org/centos/8/isos/x86_64/
     1.1.2 (Host) Install Oracle VM VirtualBox from https://www.virtualbox.org/wiki/Downloads

    1.2 Create and configure a template VM (node-master-centOS-1) in VirtualBox
     1.2.1 (VM Box) File -> Host Network Manager -> Create a Host-only Ethernet Adapter with a manual address (e.g. 192.168.56.1/24, DHCP server @ 192.168.56.100/24, DHCP range 101-254)
     1.2.2 (VM Box) Pre-configure the VM instance
      1.2.2.1 (VM Box) System (Memory = 4096 MB, Boot Order = Hard Disk -> Optical, Processors = 2)
      1.2.2.2 (VM Box) Storage (delete the IDE controller; under the SATA controller, add an optical drive pointing to the CentOS-8.x.xxxx-arch-dvdx.iso downloaded at step 1.1.1)
      1.2.2.3 (VM Box) Network (Adapter 1 = Enable, Attached to = NAT; Adapter 2 = Enable, Attached to = Host-only Adapter, Name = VirtualBox Host-Only Ethernet Adapter). Note: Adapter 2 is the one created at step 1.2.1
      1.2.2.4 (Host) Settings -> Firewall & network protection -> Advanced settings -> Inbound rules -> New Rule -> Custom -> All programs -> Any port & protocol -> Local IP set to 192.168.56.1 (the VirtualBox host-only adapter) -> Remote IP set to the range 192.168.56.2 - 192.168.56.99 (or as needed)
      1.2.2.5 (Host) Settings -> Network and Internet -> Network Connections -> Properties of the adapter that has the internet connection -> note its working DNS address (e.g. 192.168.1.1)
      1.2.2.6 Start the VM instance
     1.2.3 (Remote VM) Set up networking (a terminal-based alternative using nmcli is sketched after this list)
      1.2.3.1 (Remote VM) Settings -> Network -> Ethernet (enp0s3): IPv4 (manual, 10.0.2.20/24, DNS 10.0.2.3)
      1.2.3.2 (Remote VM) Settings -> Network -> Ethernet (enp0s8): IPv4 (manual, 192.168.56.20/24, DNS 192.168.1.1 or the address obtained at step 1.2.2.5, so that the VM inherits the host's internet DNS)
      1.2.3.3 (Remote VM) Terminal -> sudo ifdown (then ifup) Profile_1 (or enp0s3) -> sudo ifdown (then ifup) Profile_2 (or enp0s8) -> systemctl restart network (if that does not work: systemctl restart NetworkManager.service)
     1.2.4 (Remote VM) Set up the hostname
      1.2.4.1 (Remote VM) hostnamectl set-hostname node-master-centos-1 (i.e. {node_1})
     1.2.5 Verify connectivity
      1.2.5.1 (Host) Ping: ping 192.168.56.20 (i.e. {ip_node_1}) succeeds
      1.2.5.2 (Host) SSH: ssh root@192.168.56.20 succeeds -> (SSH) wget www.google.com succeeds (indicates networking and DNS are working; if the DNS at steps 1.2.2.5 and 1.2.3.2 is not set up, DNS may fail even though IP-based internet access works)
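If you prefer configuring step 1.2.3 from the terminal rather than the desktop Settings app, nmcli can apply the same addresses; a sketch that assumes the connection profiles are named after the interfaces (on a fresh install they may instead be called e.g. "Wired connection 1") and that VirtualBox's default NAT gateway 10.0.2.2 is in use:

    # NAT adapter: static 10.0.2.20/24, VirtualBox NAT gateway and DNS
    nmcli con mod enp0s3 ipv4.method manual ipv4.addresses 10.0.2.20/24 ipv4.gateway 10.0.2.2 ipv4.dns 10.0.2.3
    # Host-only adapter: static 192.168.56.20/24, reusing the host's working DNS from step 1.2.2.5
    nmcli con mod enp0s8 ipv4.method manual ipv4.addresses 192.168.56.20/24 ipv4.dns 192.168.1.1
    # Re-activate both profiles
    nmcli con up enp0s3 && nmcli con up enp0s8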

    1.3 Prepare the VM environment
     1.3.1 Optional (Remote VM SSH)
      -> yum install vim git wget zsh
      -> sh -c "$(wget -O- https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh )" (oh-my-zsh gives the shell a colored scheme)
      -> vi .zshrc -> change to ZSH_THEME="bira" -> source .zshrc (this changes the shell color scheme)

     1.3.4 Turn off SELinux
      -> (Remote VM SSH) setenforce 0
     1.3.5 Install JDK 8
      -> (Remote VM SSH) yum install java-1.8.0-openjdk-devel
      -> (Remote VM SSH) vi /etc/profile, add "export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.272.b10-3.el8_3.x86_64" and "export PATH=$JAVA_HOME/bin:$PATH" -> source /etc/profile (to avoid duplicated PATH entries, better skip this step if 1.3.6 is to be performed)
      -> (Remote VM SSH) to verify, run: javac -version; java -version; which javac; which java; echo $JAVA_HOME; echo $PATH
     1.3.6 Install Apache Maven
      -> (Remote VM SSH) cd /opt
      -> wget https://www.strategylions.com.au/mirror/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
      -> tar xzvf apache-maven-3.6.3-bin.tar.gz
      -> vi /etc/profile -> add "export PATH=/opt/apache-maven-3.6.3/bin:$PATH" -> source /etc/profile (once is enough)
      -> to verify: mvn -v
     1.3.7 Install Python, virtualenv, TensorFlow
      -> (Remote VM SSH) Install Python 3
       -> yum update -y (update all installed packages)
       -> yum install gcc openssl-devel bzip2-devel libffi-devel -y
       -> verify Python 3: python3
      -> (Remote VM SSH) Install virtualenv and TensorFlow
       -> python3 -m venv --system-site-packages ./venv
       -> source ./venv/bin/activate  # sh, bash, or zsh
       -> pip install --upgrade pip
       -> pip install --upgrade requests bs4 numpy torch scipy (and so on)
       -> pip install tensorflow==1.15 (TF 2.3.x does not work well on my platform)
     1.3.8 Install Kubernetes and Docker (Remote VM SSH)
      -> Turn off SELinux
       -> setenforce 0
       -> sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config (sed -i "s/old text/new text/g" file)
      -> Stop and disable the firewall
       -> systemctl stop firewalld
       -> systemctl disable firewalld
      -> Disable devices and files for paging and swapping
       -> swapoff -a
       -> yes | cp /etc/fstab /etc/fstab_bak (create a backup file)
       -> cat /etc/fstab_bak | grep -v swap > /etc/fstab (keep everything except the line with 'swap', i.e. remove swap)
      -> Re-configure the network adapter (a scriptable version of this sub-step is sketched after this list)
       -> enable br_netfilter: vi /etc/modules-load.d/k8s.conf -> insert "br_netfilter" -> modprobe br_netfilter
       -> set sysctl settings: vi /etc/sysctl.d/k8s.conf -> net.bridge.bridge-nf-call-ip6tables = 1 -> net.bridge.bridge-nf-call-iptables = 1 -> sysctl --system
      -> Firewall (k8s uses TCP ports 6443, 2379-2380, and 10250-10255, which need to be enabled)
       -> systemctl enable firewalld
       -> systemctl start firewalld
       -> firewall-cmd --permanent --add-port=6443/tcp
       -> firewall-cmd --permanent --add-port=2379-2380/tcp
       -> firewall-cmd --permanent --add-port=10250-10255/tcp
       -> firewall-cmd --reload
      -> Enable network modules
       -> vi /etc/sysconfig/modules/ipvs.modules -> insert: modprobe -- ip_vs; modprobe -- ip_vs_rr; modprobe -- ip_vs_wrr; modprobe -- ip_vs_sh; modprobe -- nf_conntrack_ipv4
       -> run: modprobe -- ip_vs; modprobe -- ip_vs_rr; modprobe -- ip_vs_wrr; modprobe -- ip_vs_sh; modprobe -- nf_conntrack_ipv4
       -> verify: cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4 (shows 5 rows)
      -> Install Kubernetes
       -> Set up the repository: vi /etc/yum.repos.d/kubernetes.repo, and insert:
          [kubernetes]
          name=Kubernetes
          baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
          enabled=1
          gpgcheck=1
          repo_gpgcheck=1
          gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
       -> Install k8s: yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
       -> systemctl enable kubelet -> systemctl start kubelet -> systemctl status kubelet (error 255) -> journalctl -xe (missing yaml file /var/lib/kubelet/config.yaml, which is expected until kubeadm runs)
      -> Install Docker
       -> Set up the repository: yum install -y yum-utils -> yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
       -> Install & run Docker: yum install docker-ce docker-ce-cli containerd.io -> systemctl enable docker -> systemctl start docker
       -> verify: docker run hello-world
       -> verify: docker run -it ubuntu bash
      -> Update the Docker cgroup driver
       -> docker info | grep Cgroup (shows cgroup driver: cgroupfs; this needs to be updated to align with k8s)
       -> vi /etc/docker/daemon.json, insert: { "exec-opts": ["native.cgroupdriver=systemd"] }
       -> systemctl restart docker
       -> verify: docker info | grep Cgroup
      -> Install Node.js and npm
       -> yum install epel-release (gives access to the EPEL repository)
       -> yum install nodejs (installs Node.js and npm)
       -> verify: node --version (v10.21.0)
       -> verify: npm version (v6.14.4)
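If you would rather script the br_netfilter and sysctl edits from step 1.3.8 than open vi on every node, the same files can be written with heredocs; a small sketch of just that sub-step:

    # Load br_netfilter now and at every boot
    cat <<EOF > /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    modprobe br_netfilter

    # Make bridged traffic visible to iptables (needed by kube-proxy and flannel)
    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system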

    1.4 Create a cluster of 4 VMs by cloning the basic template (node-worker-centOS-1, node-worker-centOS-2, node-worker-centOS-3)
     -> (VM Box) Clone node-master-centOS-1 three times, each clone with a new MAC address
     -> (Remote VM) update enp0s3 with ipv4 = 10.0.2.21/22/23, respectively
     -> (Remote VM) update enp0s8 with ipv4 = 192.168.56.21/22/23, respectively
     -> (Remote VM) update hostname = node-worker-centos-1/2/3, respectively
     -> (Remote VM SSH) add host mappings (192.168.56.20/21/22/23 -> node-master/worker-centos-1/2/3) to /etc/hosts on all nodes (see the example below)
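The resulting /etc/hosts additions look the same on every node; a sketch using the addresses above:

    # /etc/hosts additions on all four nodes
    192.168.56.20  node-master-centos-1
    192.168.56.21  node-worker-centos-1
    192.168.56.22  node-worker-centos-2
    192.168.56.23  node-worker-centos-3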

    1.5 Set up the Kubernetes cluster (1 master, 3 workers)
     -> Init the master node
     -> (root@node-master-centos-1 ~) kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.20
        (--pod-network-cidr=10.244.0.0/16 is chosen because the flannel add-on used later specifies this IP block for pods in its YAML.)

     The results below are shown:

     # kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.20
     [init] Using Kubernetes version: v1.20.0
     [preflight] Running pre-flight checks
     [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
     [preflight] Pulling images required for setting up a Kubernetes cluster
     [preflight] This might take a minute or two, depending on the speed of your internet connection
     [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
     [certs] Using certificateDir folder "/etc/kubernetes/pki"
     [certs] Generating "ca" certificate and key
     [certs] Generating "apiserver" certificate and key
     [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node-master-centos-1] and IPs [10.96.0.1 192.168.56.20]
     [certs] Generating "apiserver-kubelet-client" certificate and key
     [certs] Generating "front-proxy-ca" certificate and key
     [certs] Generating "front-proxy-client" certificate and key
     [certs] Generating "etcd/ca" certificate and key
     [certs] Generating "etcd/server" certificate and key
     [certs] etcd/server serving cert is signed for DNS names [localhost node-master-centos-1] and IPs [192.168.56.20 127.0.0.1 ::1]
     [certs] Generating "etcd/peer" certificate and key
     [certs] etcd/peer serving cert is signed for DNS names [localhost node-master-centos-1] and IPs [192.168.56.20 127.0.0.1 ::1]
     [certs] Generating "etcd/healthcheck-client" certificate and key
     [certs] Generating "apiserver-etcd-client" certificate and key
     [certs] Generating "sa" key and public key
     [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
     [kubeconfig] Writing "admin.conf" kubeconfig file
     [kubeconfig] Writing "kubelet.conf" kubeconfig file
     [kubeconfig] Writing "controller-manager.conf" kubeconfig file
     [kubeconfig] Writing "scheduler.conf" kubeconfig file
     [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
     [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
     [kubelet-start] Starting the kubelet
     [control-plane] Using manifest folder "/etc/kubernetes/manifests"
     [control-plane] Creating static Pod manifest for "kube-apiserver"
     [control-plane] Creating static Pod manifest for "kube-controller-manager"
     [control-plane] Creating static Pod manifest for "kube-scheduler"
     [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
     [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
     [apiclient] All control plane components are healthy after 12.004852 seconds
     [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
     [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
     [upload-certs] Skipping phase. Please see --upload-certs
     [mark-control-plane] Marking the node node-master-centos-1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
     [mark-control-plane] Marking the node node-master-centos-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
     [bootstrap-token] Using token: m5ohft.9xi6nyvgu73sxu68
     [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
     [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
     [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
     [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
     [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
     [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
     [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
     [addons] Applied essential addon: CoreDNS
     [addons] Applied essential addon: kube-proxy

     Your Kubernetes control-plane has initialized successfully!

     To start using your cluster, you need to run the following as a regular user:

       mkdir -p $HOME/.kube
       sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
       sudo chown $(id -u):$(id -g) $HOME/.kube/config

     Alternatively, if you are the root user, you can run:

       export KUBECONFIG=/etc/kubernetes/admin.conf

     You should now deploy a pod network to the cluster.
     Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
       https://kubernetes.io/docs/concepts/cluster-administration/addons/

     Then you can join any number of worker nodes by running the following on each as root:

     kubeadm join 192.168.56.20:6443 --token m5ohft.9xi6nyvgu73sxu68 \
         --discovery-token-ca-cert-hash sha256:b04371eb9c969f27a0d8f39761e99b7fb88b33c4bf06ba2e0faa0c1c28ac3be0

     -> (root@node-master-centos-1 ~) vi /etc/kubernetes/admin.conf, and edit to replace "192.168.56.20" with "node-master-centos-1" (use the hostname instead of the IP address)
     -> (root@node-master-centos-1 ~) sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     -> (root@node-master-centos-1 ~) sudo chown $(id -u):$(id -g) $HOME/.kube/config
     -> (root@node-master-centos-1 ~) kubectl get nodes
        NAME                   STATUS     ROLES                  AGE    VERSION
        node-master-centos-1   NotReady   control-plane,master   4m3s   v1.20.0
     -> (root@node-master-centos-1 ~) kubeadm token create --print-join-command (to obtain the command to be run on the workers)

     By now, the k8s master is initialized, with the pod network set to 10.244.0.0/16 and the API server at https://node-master-centos-1:6443. At this stage the node-master-centos-1 node is NotReady because the pod network has not yet been deployed; for that we need flannel.yaml (one of the pod-network add-ons).

     -> Join the worker nodes
     -> Synchronize system time to avoid X509 certificate errors during kubeadm join. The commands below update the time offsets and adjust the system time in one step.
     -> (root@node-worker-centos-1/2/3 ~) chronyc -a 'burst 4/4'
     -> (root@node-worker-centos-1/2/3 ~) chronyc -a makestep
     -> Join each worker to the cluster
     -> (root@node-worker-centos-1/2/3 ~) kubeadm join node-master-centos-1:6443 --token cjxoym.okfgvzd8t241grea --discovery-token-ca-cert-hash sha256:b04371eb9c969f27a0d8f39761e99b7fb88b33c4bf06ba2e0faa0c1c28ac3be0 --v=2
     -> Check the worker node status on the master
     -> (root@node-master-centos-1 ~) kubectl get nodes -o wide
        NAME                   STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION          CONTAINER-RUNTIME
        node-master-centos-1   Ready    control-plane,master   4h12m   v1.20.0   192.168.56.20   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
        node-worker-centos-1   Ready    <none>                 162m    v1.20.0   192.168.56.21   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
        node-worker-centos-2   Ready    <none>                 142m    v1.20.0   192.168.56.22   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
        node-worker-centos-3   Ready    <none>                 4m41s   v1.20.0   192.168.56.23   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
     -> (root@node-master-centos-1 ~) kubectl get pods -A
        NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
        kube-system   coredns-74ff55c5b-sfjvd                         1/1     Running   0          112m
        kube-system   coredns-74ff55c5b-whjrs                         1/1     Running   0          112m
        kube-system   etcd-node-master-centos-1                       1/1     Running   0          112m
        kube-system   kube-apiserver-node-master-centos-1             1/1     Running   0          112m
        kube-system   kube-controller-manager-node-master-centos-1   1/1     Running   0          112m
        kube-system   kube-flannel-ds-dmqmw                           1/1     Running   0          61m
        kube-system   kube-flannel-ds-hqwqt                           1/1     Running   0          2m51s
        kube-system   kube-flannel-ds-qr9ml                           1/1     Running   0          22m
        kube-system   kube-proxy-4dpk9                                1/1     Running   0          22m
        kube-system   kube-proxy-6tltc                                1/1     Running   0          2m51s
        kube-system   kube-proxy-t6k24                                1/1     Running   0          112m
        kube-system   kube-scheduler-node-master-centos-1             1/1     Running   0          112m

     By now, the Kubernetes cluster is set up. As the VMs are not always running, differences in system time between the VMs may cause X509 or other errors. It may therefore be necessary to set up auto-sync scripts that run on OS startup (see the sketch below).
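Two small pieces are easy to miss above: the flannel manifest actually has to be applied on the master before the nodes turn Ready, and the clock sync should survive reboots. A sketch of both, assuming the standard kube-flannel manifest (its download URL has moved between the coreos and flannel-io GitHub organizations over time, so fetch the current one from the flannel project) and that cronie is installed:

    # On the master: deploy the flannel pod network (matches --pod-network-cidr=10.244.0.0/16)
    kubectl apply -f kube-flannel.yml          # manifest downloaded from the flannel project

    # On every VM: re-sync the clock at boot to avoid X509 errors during kubeadm join
    cat <<EOF > /etc/cron.d/chrony-makestep
    @reboot root chronyc -a 'burst 4/4' && chronyc -a makestep
    EOF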
