
How to properly access multiple Kubernetes clusters using kubectl

I have two clusters, and their config files are stored in ~/.kube. I am exporting KUBECONFIG as follows:

export KUBECONFIG=/home/vagrant/.kube/config-cluster1:/home/vagrant/.kube/config-cluster2

Checking the contexts:

kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin   
          cluster-2   cluster-2   kubernetes-admin   

But when I choose cluster-2 as my current context, I get an error:

kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin   
          cluster-2   cluster-2   kubernetes-admin   

kubectl config use-context cluster-2
Switched to context "cluster-2".


kubectl get pods -A
error: You must be logged in to the server (Unauthorized)

If I export only the config for cluster-2 and try running kubectl, it works fine.

My question is whether I am exporting the config files properly, or whether I should be doing something more.

You need a separate AUTHINFO (context.user in the config file) for each cluster, each pointing at that cluster's credentials. In your case both contexts reference a user named kubernetes-admin, and when kubectl merges the two files it keeps only the first entry with that name, so cluster-2 ends up being queried with cluster-1's credentials and the API server rejects the request with Unauthorized.

For example:

apiVersion: v1
clusters:
- cluster:
    server: https://192.168.10.190:6443
  name: cluster-1
- cluster:
    server: https://192.168.99.101:8443
  name: cluster-2
contexts:
- context:
    cluster: cluster-1
    user: kubernetes-admin-1
  name: cluster-1
- context:
    cluster: cluster-2
    user: kubernetes-admin-2
  name: cluster-2
kind: Config
preferences: {}
users:
- name: kubernetes-admin-1
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-1.crt
    client-key: /home/user/.minikube/credential-for-cluster-1.key
- name: kubernetes-admin-2
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-2.crt
    client-key: /home/user/.minikube/credential-for-cluster-2.key
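
If your two existing files both define a user named kubernetes-admin, one way to reach a layout like the one above is to give each file's user a unique name before merging. A minimal sketch using kubectl config subcommands (the certificate/key paths below are placeholders; if your files embed the credentials as client-certificate-data instead, rename the user entries in the files directly):

# Rename the user in config-cluster1 and point its context at the new name
KUBECONFIG=/home/vagrant/.kube/config-cluster1 kubectl config set-credentials kubernetes-admin-1 \
  --client-certificate=/path/to/cluster1-admin.crt \
  --client-key=/path/to/cluster1-admin.key
KUBECONFIG=/home/vagrant/.kube/config-cluster1 kubectl config set-context cluster-1 --user=kubernetes-admin-1

# Same for config-cluster2
KUBECONFIG=/home/vagrant/.kube/config-cluster2 kubectl config set-credentials kubernetes-admin-2 \
  --client-certificate=/path/to/cluster2-admin.crt \
  --client-key=/path/to/cluster2-admin.key
KUBECONFIG=/home/vagrant/.kube/config-cluster2 kubectl config set-context cluster-2 --user=kubernetes-admin-2

After that, exporting KUBECONFIG with both files (as you already do) lets each context authenticate with its own credentials.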

You can find more useful tips in the following article:

Using different kubectl versions with multiple Kubernetes clusters:

When you are working with multiple Kubernetes clusters, it's easy to mix up contexts and run kubectl against the wrong cluster. Beyond that, Kubernetes restricts the version skew allowed between the client (kubectl) and the server (the Kubernetes API server), so running commands in the right context does not guarantee you are running the right client version.

To overcome this:

  • Use asdf to manage multiple kubectl versions
  • Set the KUBECONFIG env var to change between multiple kubeconfig files
  • Use kube-ps1 to keep track of your current context/namespace
  • Use kubectx and kubens to change fast between clusters/namespaces
  • Use aliases to combine them all together (see the sketch after this list)
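
For example, a few shell aliases that tie these together might look like this (a sketch; the paths and alias names are just examples, and kubectx/kubens must be installed separately):

# in ~/.bashrc or ~/.zshrc
export KUBECONFIG=$HOME/.kube/config-cluster1:$HOME/.kube/config-cluster2
alias kc1='kubectl config use-context cluster-1'
alias kc2='kubectl config use-context cluster-2'
alias kx='kubectx'    # switch clusters quickly
alias kn='kubens'     # switch namespaces quickly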


I wrote a script to switch kubeconfig and namespace easily. Hope it can help you.

. k-use -k <kubeconfig> -n <namespace>

https://github.com/kingonion/k-use
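
The leading dot sources the script, presumably so that the exported KUBECONFIG change persists in your current shell. If you only need the basics, a small shell function along the same lines (a sketch; the function name kuse is made up) could be:

# add to ~/.bashrc: switch kubeconfig and default namespace for the current shell
kuse() {
  export KUBECONFIG="$1"                                  # e.g. ~/.kube/config-cluster2
  kubectl config set-context --current --namespace="$2"   # e.g. kube-system
}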
