
Kubernetes: Nodes/Pods not showing with kubectl after building cluster with kubeadm

I am trying to create a Kubernetes cluster using the kubeadm tool. For this I installed the supported Docker version as specified here.

I also installed kubeadm successfully, and initialized the cluster with the command below:

 sudo kubeadm init --pod-network-cidr=10.244.0.0/14 --apiserver-advertise-address=172.16.0.11

and I got the message below, including the kubeadm join command for adding worker nodes:

 Your Kubernetes control-plane has initialized successfully!

 To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

 You should now deploy a pod network to the cluster.
 Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

 Then you can join any number of worker nodes by running the following on each as root:

 kubeadm join 172.16.0.11:6443 --token ptdibx.r2uu0s772n6fqubc \
--discovery-token-ca-cert-hash sha256:f3e36a8e82fb8166e0faf407235f12e256daf87d0a6d0193394f4ce31b50255c   

I used Flannel for networking:

  $ kubectl apply -f    https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
  podsecuritypolicy.policy/psp.flannel.unprivileged created
  clusterrole.rbac.authorization.k8s.io/flannel created
  clusterrolebinding.rbac.authorization.k8s.io/flannel created
  serviceaccount/flannel created
  configmap/kube-flannel-cfg created
  daemonset.apps/kube-flannel-ds-amd64 created
  daemonset.apps/kube-flannel-ds-arm64 created
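
As a sanity check once kubectl is working, the Flannel pods can be listed as sketched below; note that the kube-system namespace and app=flannel label are assumptions read off that manifest revision:

    $ kubectl get pods -n kube-system -l app=flannel   # DaemonSet pods should reach Running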

After that, when I use kubectl to check the pod/node status, it fails:

   $ sudo kubectl get nodes
   error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

   $ sudo kubectl get pods --all-namespaces
   error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

   $ echo $KUBECONFIG 
   /etc/kubernetes/admin.conf:/home/ltestzaman/.kube/config
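
For context, KUBECONFIG holds a colon-separated list of files that kubectl merges in order. One way to inspect what kubectl actually resolves from that list (a sketch, run without sudo so the variable is visible):

    $ kubectl config view --flatten | head   # merges every file listed in $KUBECONFIG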

The Docker and Kubernetes versions are as follows:

    $ sudo kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:54:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

   $ sudo docker version
   Client:
    Version:      17.03.0-ce
    API version:  1.26
    Go version:   go1.7.5
    Git commit:   3a232c8
    Built:        Tue Feb 28 08:10:07 2017
    OS/Arch:      linux/amd64

   Server:
    Version:      17.03.0-ce

How can I make the cluster work?

The contents of admin.conf are as follows:

$ sudo cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXhPREV6TlRnek1sb1hEVE13TURReE5qRXpOVGd6TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjdvCnhERGVPUWd4TGsyWE05K0pxc0hYRkRPWHB2YXYxdXNyeXNYMldOZzlOOVFaeDVucFA2UTRpYkQrZDFjWFcyL0oKY0ZXUjhBSDliaG5hZlhLRUhiUVZWT0R2UTcwbFRmNXVtVlQ4Qk5ZUjRQTmlCWjFxVDNMNnduWlYrekl6a0ZKMwp0eVVNK0prUy80K2dMazI3b01HMFZ4Rnpjd1ozREMxWEFqRXVxb3FrYVF5UGUzMk9XZmZ2N082TjhxeWNCNkdnClNxbWxNWldBMk1DL0J1cFpZWXRYNkUyYUtNUloxZjkzRlpCaFdYNG9DYjVQSGdSUEdoNTFiNERKZExoYlk4aWMKdVRSa0EyTi95UDVrTlRIMW5pSTU3bTlUY2phcDZpV0p3dFhsdlpOTUpCYmhQS1VjTEFhZG1tTHFtWTNMTmhiaApGZ2orK0s4T3hXVk5KYWVuQnI4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHS2VwQURtZTEva0orZWpob3p4RXVIdXFwQTYKT3dkK3VNQlNPMWYzTTBmSzkxQmhWYkxWakZZeEUwSjVqc1BLNzNJM3cxRU5rb2p2UGdnc0pVNHBjNnoyeGdsVgpCQ0tESWhWSEVPOVlzRVNpdERnd2g4QUNyQitpeEc4YjBlbnFXTzhBVjZ6dGNESGtJUXlLdDAwNmgxNUV1bi9YCmg0ZUdBMDQrRmNTZVltZndSWHpMVmFFS3F2UHZZWVdkTHBJTktWRFNHZ3J3U3cvbnU5K2g1U09Ddms1YncwbEYKODhZNnlTaHk3U1B6amRNUHdRcks5cmhWY1ZXK1VvS3d6SE80aUZHdWpmNDR0WHRydTY4L1NGVm5uVnRHWkYyKwo2WmJYeE81Z3I2c1NBQU9NK0E1RmtkQlNmcXZDdmwvUzZFQk04V2czaGNjOUZLcEFCV0tadHNoRlMxOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.16.0.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJS2RLRGs4MUpNKzh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEoxWlhSbGN6QWVGdzB5TURBME1UZ3hNelU0TXpKYUZ3MHlNVEEwTVRneE16VTRNelphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW44elI0RlVVM2F1QjluVGkKeWxJV1ZpZ3JkV1dHSXY0bmhsRnZuQU1mWVJyVklrMGN6eTZPTmZBQzNrb01tZ3ZPQnA5MmpsWmlvYXpJUGg1aAovaURxalE3dzN4cFhUN1QxT1kySy9mVyt1S1NRUVI3VUx1bjM4MTBoY1ZRSm5NZmV4UGJsczY2R3RPeE9WL2RQCm1tcEEyUFlzL0lwWWtLUEhqNnNvb0NXU1JEMUZIeG1SdWFhYXhpL0hYQXdJODZEN01uWS90KzZJQVIyKzZRM0sKY2pPRFdEWlRpbHYyMXBCWFBadW9FTndoZ0s2bWhKUU5VRmc5VmVFNEN4NEpEK2FYbmFSUW0zMndid29oYXk1OAo3L0FnUjRoMzNXTjNIQ1hPVGUrK2Z4elhDRnhwT1NUYm13Nkw1c1RucFkzS2JFRXE5ZXJyNnorT2x5ZVlGMGMyCkZCV3J4UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCVDVKckdCS1NxS1VkclNYNUEyQVZHNmFZSHl1TkNkeWp1cQpIVzQrMU5HdWZSczJWZW1kaGZZN2VORDJrSnJXNldHUnYyeWlBbDRQUGYzYURVbGpFYm9jVmx0RjZHTXhRRVU2CkJtaEJUWGMrVndaSXRzVnRPNTUzMHBDSmZwamJrRFZDZWFFUlBYK2RDd3hiTnh0ZWpacGZXT28zNGxTSGQ3TFYKRDc5UHAzYW1qQXpyamFqZE50QkdJeHFDc3lZWE9Rd1BVL2Nva1RJVHRHVWxLTVV5aUdOQk5rQ3NPc3RiZHI2RApnQVRuREg5YWdNck9CR2xVaUlJN0Qvck9rU3IzU2QvWnprSGdMM1c4a3V5dXFUWWpwazNyNEFtU3NXS1M4UUdNCjZ6bHUwRk4rYnVJbGRvMnBLNmhBZlBnSTBjcDZWQWk1WW5lQVVGQ2EyL2pJeXI3cXRiND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBbjh6UjRGVVUzYXVCOW5UaXlsSVdWaWdyZFdXR0l2NG5obEZ2bkFNZllSclZJazBjCnp5Nk9OZkFDM2tvTW1ndk9CcDkyamxaaW9heklQaDVoL2lEcWpRN3czeHBYVDdUMU9ZMksvZlcrdUtTUVFSN1UKTHVuMzgxMGhjVlFKbk1mZXhQYmxzNjZHdE94T1YvZFBtbXBBMlBZcy9JcFlrS1BIajZzb29DV1NSRDFGSHhtUgp1YWFheGkvSFhBd0k4NkQ3TW5ZL3QrNklBUjIrNlEzS2NqT0RXRFpUaWx2MjFwQlhQWnVvRU53aGdLNm1oSlFOClVGZzlWZUU0Q3g0SkQrYVhuYVJRbTMyd2J3b2hheTU4Ny9BZ1I0aDMzV04zSENYT1RlKytmeHpYQ0Z4cE9TVGIKbXc2TDVzVG5wWTNLYkVFcTllcnI2eitPbHllWUYwYzJGQldyeFFJREFRQUJBb0lCQVFDQXhjRHJFaVQ2Yk5jUwpFQ2NoK3Z4YytZbnIxS0EvV3FmbktZRFRMQUVCYzJvRmRqYWREbHN6US9KTHgwaFlhdUxmbTJraVVxS3d2bGV2CkZ6VElZU1loL2NSRlJTak81bmdtcE5VNHlldWpSNW1ub0h4RVFlNjVnbmNNcURnR3kxbk5SMWpiYnV6R3B4YUsKOUpTRlR0SnJCQlpFZkFmYXB1Q04rZE9IR2ovQUZJbWt2ZXhSckwyTXdIem0zelJkMG5UdkRyOUh1dy9IMjE1RAprNXBHZjluV1ZsNnZxSGZFYVF0S0NNemY2WE5MdEFjcEJMcmNwSExwWEFObVNMWTAvcFFnV0s5eVpkbVA5b0xCCjhvU1J0eFRsZlU0V1RLdERpNlozK0tTSytidnF4MDRGZTJYb2RlVUM3eDN1d3lwamszOXZjSG55UkM2Tmhlem4KTExJcnVEbVJBb0dCQU1VbG8zRkpVTUJsczYrdHZoOEpuUjJqN2V6bU9SNllhRmhqUHVuUFhkeTZURFVxOFM2aQprSTZDcG9FZEFkeUE4ejhrdU01ZlVFOENyOStFZ05DT3lHdGVEOFBaV2FCYzUxMit6OXpuMXEzSVg3QjY1M01lCk5hS2Y1Z3FYbllnMmdna2plek1lbkhQTHFRLzZDVjZSYm93Q3lFSHlrV0FXS3I4cndwYXNNRXQ3QW9HQkFNK0IKRGZZRU50Vmk5T3VJdFNDK0pHMHY1TXpkUU9JT1VaZWZRZTBOK0lmdWwrYnRuOEhNNGJaZmRUNmtvcFl0WmMzMQptakhPNDZ5NHJzcmEwb1VwalFscEc5VGtVWDRkOW9zRHoydlZlWjBQRlB2em53R1JOUGxzaTF1cUZHRkdyY0dTClJibzZiTjhKMmZqV0hGb2ppekhVb3Rkb1BNbW1qL0duM0RmVEw2Ry9Bb0dBQk8rNVZQZlovc2ROSllQN003bkEKNW1JWmJnb2h1Z05rOFhtaXRLWU5tcDVMbERVOERzZmhTTUE2dlJibDJnaWNqcU16d1c4ZmlxcnRqbkk1NjM3Mwp3OEI2TXBRNXEwdElPOCt3VXI2M1lGMWhVQUR6MUswWCtMZDZRaCtqd1Nwa1BTaFhTR05tMVh0dkEwaG1mYWkwCmxPcm82c1hSSUEvT0NEVm5UUENJMFFzQ2dZQWZ3M0dQcHpWOWxKaEpOYlFFUHhiMFg5QjJTNmdTOG40cTU0WC8KODVPSHUwNGxXMXFKSUFPdEZ3K3JkeWdzTk9iUWtEZjZSK0V5SDFNaVdqeS9oWXpCVkFXZW9SU1lhWjNEeWVHRwpjRGNkZzZHQ3I5ZzNOVE1XdXpiWjRUOGRaT1JVTFQvZk1mSlljZm1iemFxcFlhZDlDVCtrR2FDMGZYcXJVemF5CmxQRkZvUUtCZ0E4ck5IeG1DaGhyT1ExZ1hET2FLcFJUUWNxdzd2Nk1PVEcvQ0lOSkRiWmdEU3g3dWVnVHFJVmsKd3lOQ0o2Nk9kQ3VOaGpHWlJld2l2anJwNDhsL2plM1lsak03S2k1RGF6NnFMVllqZkFDSm41L0FqdENSZFRxTApYQ3B1REFuTU5HMlFib0JuSU1UaXNBaXJlTXduS1ZBZjZUMzM4Tjg5ZEo2Wi93QUlOdWNYCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

I am not sure why most of the entries are null, as shown below:

$ sudo kubectl config view
  apiVersion: v1
  clusters: null
  contexts: null
  current-context: ""
  kind: Config
  preferences: {}
  users: null

Most probably the kubeconfig file is not set up properly. Run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
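
After copying the file, kubectl reads $HOME/.kube/config automatically, so a quick check (no sudo needed) would be:

    $ kubectl config view --minify   # should now show the cluster, context, and user
    $ kubectl get nodes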

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
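
To make this persist across shells, the export can be appended to root's shell profile (an optional sketch, assuming bash):

    $ echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc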

This should also work, since --kubeconfig points kubectl directly at the file regardless of any environment variables:

sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf 

I changed the environment variable KUBECONFIG to /home/ltestzaman/.kube/config and it works:

 $ echo $KUBECONFIG 
 /home/ltestzaman/.kube/config
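
This also explains the earlier failures: sudo resets the environment by default, so a KUBECONFIG variable set in the user's shell never reaches kubectl when run under sudo. A quick way to confirm, assuming the default sudoers env_reset policy:

    $ sudo env | grep KUBECONFIG                         # stripped by sudo, prints nothing
    $ sudo --preserve-env=KUBECONFIG kubectl get nodes   # keeps the variable, so this works too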

$ kubectl get nodes --all-namespaces
  NAME             STATUS   ROLES    AGE    VERSION
  kubemaster-001   Ready    master   117m   v1.18.2

Or you need to pass --kubeconfig, as identified by @Arghya Sadhu:

 $ sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
   NAME             STATUS   ROLES    AGE    VERSION
   kubemaster-001   Ready    master   120m   v1.18.2

You have to start the Kubernetes cluster, either with minikube start or, if you are connecting to a cloud service, by making sure Kubernetes is started.
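
For a local minikube setup (a different scenario from the kubeadm cluster above), that would look like:

    $ minikube start      # starts a local cluster and writes its kubeconfig entry
    $ kubectl get nodes   # should list the minikube node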
