
AKS using Kubernetes: not able to connect to cluster nodes once logged in to the cluster through azure-cli on Ubuntu

I am running into issues when trying to get information about the nodes created using AKS (Azure Kubernetes Service), after creating the cluster and getting its credentials.

I am using the azure-cli on an Ubuntu Linux machine.

I followed this URL for creating the cluster: https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough

I get the following error when using the command kubectl get nodes, after connecting to the cluster with

az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>

Error:

  kubectl get nodes

Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)

I get the same error when I use:

kubectl get pods -n kube-system -o=wide

When I connect back as another user with the following commands, i.e.,

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

I am able to retrieve the nodes, i.e.:

 kubectl get nodes

NAME             STATUS    ROLES     AGE       VERSION

<host-name>   Ready     master    20m       v1.10.0



~$ kubectl get pods -n kube-system -o=wide

NAME                                     READY     STATUS    RESTARTS   AGE
etcd-actaz-prod-nb1                      1/1       Running   0
kube-apiserver-actaz-prod-nb1            1/1       Running   0
kube-controller-manager-actaz-prod-nb1   1/1       Running   0
kube-dns-86f4d74b45-4qshc                3/3       Running   0
kube-flannel-ds-bld76                    1/1       Running   0
kube-proxy-5s65r                         1/1       Running   0
kube-scheduler-actaz-prod-nb1            1/1       Running   0

But this actually overwrites the newly created cluster's information in the file $HOME/.kube/config.
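As an aside, a minimal sketch of one way to keep the two configurations from clobbering each other (the separate file name aks-config is only a placeholder; az aks get-credentials accepts a --file path, and kubectl merges every file listed in KUBECONFIG):

  # Keep the AKS credentials in a separate file so copying admin.conf over $HOME/.kube/config does not clobber them
  az aks get-credentials --resource-group <resource_group_name> --name <cluster_name> --file $HOME/.kube/aks-config

  # Point kubectl at both files; it merges them and exposes both clusters as contexts
  export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/aks-config
  kubectl config get-contexts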

Am I missing something when connecting to the AKS cluster with the get-credentials command that is leading me to the error

*Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get nodes)*

After you run

az aks get-credentials -n cluster-name -g resource-group

it should have merged the credentials into your local configuration:

/home/user-name/.kube/config

Can you check your config with

kubectl config view

and check whether it is pointing to the right cluster?
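For example, you can list the contexts and switch to the AKS one (a sketch; by default az aks get-credentials names the context after the cluster, so the name below is a placeholder):

  # List all contexts in the merged kubeconfig; the active one is marked with *
  kubectl config get-contexts

  # Show only the currently active context
  kubectl config current-context

  # Switch to the AKS context if another cluster's context is currently active
  kubectl config use-context <cluster_name>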

Assuming you chose the default configuration while deploying AKS, you need to create an SSH key pair to log in to an AKS node.

Push the public key created above to the AKS node using "az vm user update" (see the command's help for the switches you need to pass; it is quite simple).
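A minimal sketch of these two steps (the resource-group and VM names are placeholders; AKS keeps its node VMs in a separate node resource group, and the default node admin user is typically azureuser):

  # Create an SSH key pair (skip if you already have one)
  ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa

  # Find the node resource group and the node VM names
  az aks show --resource-group <resource_group_name> --name <cluster_name> --query nodeResourceGroup -o tsv
  az vm list --resource-group <node_resource_group> -o table

  # Push the public key to a node VM
  az vm user update \
    --resource-group <node_resource_group> \
    --name <node_vm_name> \
    --username azureuser \
    --ssh-key-value ~/.ssh/id_rsa.pub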

To create an SSH connection to an AKS node, you run a helper pod in your AKS cluster. This helper pod gives you SSH access into the cluster, and from there SSH access to the node.

To create and use this helper pod, complete the following steps:

  • Run a Debian (or any other, e.g. CentOS 7) container image and attach a terminal session to it. This container can be used to create an SSH session with any node in the AKS cluster: kubectl run -it --rm aks-ssh --image=debian

  • The base Debian image doesn't include SSH components, so install them inside the pod: apt-get update && apt-get install openssh-client -y

  • Copy the private key (the one you created at the beginning) to the pod using the kubectl cp command; the kubectl toolkit must be present on the machine from which you created the SSH key pair.

  • You will now see the private key file in the container; change the private key's permissions to 600, and you will be able to SSH to your AKS node (a sketch of the whole sequence follows below).
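A minimal sketch of the whole helper-pod sequence (the pod name, key path and node IP are placeholders; the node's internal IP can be read from kubectl get nodes -o wide):

  # From your machine: start the helper pod and attach to it
  kubectl run -it --rm aks-ssh --image=debian

  # Inside the pod: install the SSH client
  apt-get update && apt-get install openssh-client -y

  # From your machine (second terminal): copy the private key into the pod
  kubectl get pods                                    # note the exact aks-ssh pod name
  kubectl cp ~/.ssh/id_rsa <aks_ssh_pod_name>:/id_rsa

  # Inside the pod: restrict the key permissions and SSH to a node's internal IP
  chmod 600 /id_rsa
  ssh -i /id_rsa azureuser@<node_internal_ip>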

Hope this helps.
