
How to configure kubectl with cluster information from a .conf file?

I have an admin.conf file containing info about a cluster, so that the following command works fine:

kubectl --kubeconfig ./admin.conf get nodes

How can I config kubectl to use the cluster, user and authentication from this file as default in one command? I only see separate set-cluster, set-credentials, set-context, use-context etc. I want to get the same output when I simply run:

kubectl get nodes

Here is the official documentation on how to configure kubectl:

http://kubernetes.io/docs/user-guide/kubeconfig-file/

You have a few options. Specific to this question, you can just copy your admin.conf to ~/.kube/config.
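For example, a minimal sketch (using the paths from the question, and backing up any existing config first):

```shell
mkdir -p ~/.kube
# keep a backup if a config already exists
if [ -f ~/.kube/config ]; then
  cp ~/.kube/config ~/.kube/config.backup
fi
cp ./admin.conf ~/.kube/config
```

After this, a plain kubectl get nodes uses the copied file.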

The best way I've found is to use an environment variable:

export KUBECONFIG=/path/to/admin.conf

I just alias the kubectl command into separate ones for my dev and production environments via .bashrc

alias k8='kubectl'
alias k8prd='kubectl --kubeconfig ~/.kube/config_prd.conf'

I prefer this method as it requires me to specify the environment for each command, whereas using an environment variable could potentially lead you to running a command against the wrong environment.

The previous answers have been very solid and informative; I will try to add my 2 cents here.

Configure kubeconfig file knowing its precedence

If you're using kubectl, here's the precedence that takes effect when determining which kubeconfig file is used:

  1. the --kubeconfig flag, if specified
  2. the KUBECONFIG environment variable, if specified
  3. the $HOME/.kube/config file

With this, you can easily override which kubeconfig file is used per kubectl command:

#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2

#
# or 
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods

#
# or 
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2

NOTE: The --minify flag extracts only the info for a single context, and the --flatten flag keeps the credentials unredacted in the output.
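To pull a single context out of a merged config, --minify can be combined with --flatten (cluster-1 here is just the placeholder context name from the examples above):

```shell
# write a standalone kubeconfig containing only the cluster-1 context,
# with credentials embedded rather than redacted
kubectl config view --minify --flatten --context=cluster-1 > cluster-1.kubeconfig
```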

For your example

kubectl get pods --kubeconfig=/path/to/admin.conf

#
# or:
#
KUBECONFIG=/path/to/admin.conf kubectl get pods

#
# or: 
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:/path/to/admin.conf kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2

Although this precedence list is not officially specified in the documentation, it is codified here. If you're developing client tools for Kubernetes, you should consider using the cli-runtime library, which will bring the standard --kubeconfig flag and $KUBECONFIG detection to your program.

ref article: https://ahmet.im/blog/mastering-kubeconfig/

I name all cluster configs .kubeconfig, and this lives in the project directory.

Then in .bashrc or .bash_profile I have the following export:

export KUBECONFIG=.kubeconfig:$HOME/.kube/config

This way, when I'm in the project directory, kubectl will load the local .kubeconfig. Hope that helps!

kubectl uses ~/.kube/config as the default configuration file. So you could just copy your admin.conf over it.

Because there is no built-in kubectl config merge command at the moment (follow this), you can add this function to your .bashrc (or .zshrc):

function kmerge() {
  if [ $# -eq 0 ]; then
    echo "Please pass the location of the kubeconfig you wish to merge"
    return 1
  fi
  KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub && mv ~/.kube/mergedkub ~/.kube/config
}

Then you can just run from the terminal:

kmerge /path/to/admin.conf

and the config file will be merged into ~/.kube/config.

You can now switch to the new context with:

kubectl config use-context <new-context-name>

Or if you're using kubectx (recommended) you can run: kubectx <new-context-name> .


(The kmerge function is based on @MichaelSp's answer at this post.)

Kubernetes keeps the list of paths to search for config files in $KUBECONFIG.

If you want to add one more config path on top of the existing KUBECONFIG without overriding it (while keeping ~/.kube/config as the default search path), just run the following each time you want to add a conf file to the KUBECONFIG path:

export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/admin.conf

You can check that it worked by listing the available contexts:

kubectl config get-contexts

Then select the one you want to use

kubectl config use-context <context-name>
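The ${KUBECONFIG:-~/.kube/config} expansion falls back to the default path only when KUBECONFIG is unset or empty, so repeated use appends to whatever is already set. A quick plain-shell illustration (the admin.conf path is the one from the question; other.conf is a made-up second file):

```shell
unset KUBECONFIG
# KUBECONFIG is unset, so the default ~/.kube/config is used as the base
export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/admin.conf
echo "$KUBECONFIG"   # e.g. /home/you/.kube/config:/path/to/admin.conf

# running it again appends to the now-set variable instead of the default
export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/other.conf
echo "$KUBECONFIG"
```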

Manage your config files properly: place the snippet below in your profile file, then source your .profile / .bash_profile.

for kconfig in $HOME/.kube/config $(find $HOME/.kube/ -iname "*.config"); do
  if [ -f "$kconfig" ]; then
    export KUBECONFIG=$KUBECONFIG:$kconfig
  fi
done

Then switch between the contexts with kubectl.

When you type kubectl, I guess you prefer to know which cluster you are pointing at. Maybe it's worth creating an alias for that?

alias kube-mycluster='kubectl --kubeconfig ~/.kube/mycluster.conf'

This is possible:

export KUBECONFIG=~/.kube/config:~/.kube/cluster0:~/.kube/cluster1:~/.kube/cluster3

and:

kubectl config use-context cluster0
