I'm facing an interesting challenge: I'm trying to run kubectl in a Docker image with a proper configuration, so it can reach my cluster.
I've been able to create the image, kubecod, with the following Dockerfile:
FROM ubuntu:xenial
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
ENTRYPOINT ["kubectl"]
#
CMD ["version"]
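For reference, building and running this image can be sketched as follows (assuming the Dockerfile sits in the current directory; the tag kubecod and container name kubecont match the names used in this question):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t kubecod .

# Run it; the ENTRYPOINT/CMD pair executes `kubectl version`
docker run --rm --name kubecont kubecod
```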
When I run the image, the container functions correctly, giving me the expected answer:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
However, my aim is to create an image with kubectl connecting to my cluster. Reading the docs, I need to add a configuration file at ~/.kube/config.
I've created another Dockerfile to build another image, kubedock, with the proper config file and the creation of the requisite directory, .kube:
FROM ubuntu:xenial
#setup a working directory
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
#create the directory
RUN mkdir .kube
#Copy the config file to the .kube folder
COPY ./config .kube
ENTRYPOINT ["kubectl"]
CMD ["cluster-info dump"]
However, when I run the new image in a container, I get the following message:
me@os:~/_projects/kubedock$ docker run --name kubecont kubedock
Error: unknown command "cluster-info dump" for "kubectl"
Run 'kubectl --help' for usage.
Not sure what I'm missing.
Any hints are welcome.
Cheers.
It's not clear to me where your K8s cluster is running. If you run your cluster in GKE you will need to run something like:
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE
which will create the ~/.config/gcloud tree of files in the user's home directory.
On AWS EKS you will need to set up ~/.aws/credentials and other IAM settings.
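Since your server version string suggests EKS, a sketch of generating the kubeconfig with the AWS CLI (the cluster name and region below are placeholders, and this assumes your AWS credentials are already configured):

```shell
# Writes/updates an entry for the EKS cluster in ~/.kube/config.
# "my-cluster" and "eu-west-1" are placeholders for your own values.
aws eks update-kubeconfig --name my-cluster --region eu-west-1
```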
I suggest you post the details of where your K8s cluster is running and we can take it from there.
PS: maybe if you mount/copy the home directory of a working user from the host into the container, it will work.
The answer is to write the command in exec form as
CMD ["cluster-info","dump"]
In other words, when there is a space in the kubectl command line, split it into separate comma-separated arguments.
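Applied to the Dockerfile above, the last two lines become (in exec form, each whitespace-separated argument is its own JSON array element):

```dockerfile
ENTRYPOINT ["kubectl"]
# "cluster-info dump" is two arguments, so two array elements
CMD ["cluster-info", "dump"]
```

The same rule is why overriding the command at run time works with plain spaces: docker run kubedock cluster-info dump passes two separate arguments to the kubectl entrypoint.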