Configure kubectl command to access remote Kubernetes cluster on Azure

I have a Kubernetes cluster running on Azure. What is the way to access the cluster from the local kubectl command? I referred to the instructions here, but there is no kubeconfig file on the Kubernetes master node. Also, kubectl config view results in:

apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

I found a way to access the remote Kubernetes cluster without SSH'ing into one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:

apiVersion: v1 
clusters:    
- cluster:
    server: http://<master-ip>:<port>
  name: test 
contexts:
- context:
    cluster: test
    user: test
  name: test

Then set the context by executing:

kubectl config use-context test

After this you should be able to interact with the cluster.

Note: to add the certificate and key, see the following link: http://kubernetes.io/docs/user-guide/kubeconfig-file/
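As a minimal sketch, the same kubeconfig could look like the following once the certificate and key entries are added; the credential file paths below are placeholders for wherever you copied the files locally:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /path/to/ca.crt    # CA used to verify the API server (placeholder path)
    server: https://<master-ip>:<port>
  name: test
contexts:
- context:
    cluster: test
    user: test
  name: test
current-context: test
users:
- name: test
  user:
    client-certificate: /path/to/client.crt   # client certificate for authentication (placeholder path)
    client-key: /path/to/client.key           # matching private key (placeholder path)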

Alternatively, you can also try the following commands:

kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
kubectl config use-context test-cluster
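Note that set-cluster by itself does not create a context, and the --api-version flag has been removed from newer kubectl releases, so the two commands above may not be enough on their own. A rough equivalent with current kubectl, using placeholder names, would be:

# Define the cluster endpoint
kubectl config set-cluster test-cluster --server=http://<master-ip>:<port>

# Define credentials (only needed if the API server requires authentication)
kubectl config set-credentials test-user --client-certificate=/path/to/client.crt --client-key=/path/to/client.key

# Tie the cluster and user together in a context, then switch to it
kubectl config set-context test-context --cluster=test-cluster --user=test-user
kubectl config use-context test-context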

You can also define the file path of the kubeconfig by passing the --kubeconfig parameter.

For example, copy the remote Kubernetes host's ~/.kube/config to your local project at ~/myproject/.kube/config. In ~/myproject you can then list the pods of the remote Kubernetes server by running kubectl get pods --kubeconfig ./.kube/config.

Do note that when copying the values from the remote Kubernetes server, a simple kubectl config view won't be sufficient, as it won't display the secrets of the config file. Instead, you have to do something like cat ~/.kube/config or use scp to get the full file contents.
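A rough sketch of that workflow (the SSH user, host, and paths are placeholders):

# Copy the full kubeconfig (including embedded secrets) from the remote host
scp azureuser@<master-ip>:~/.kube/config ~/myproject/.kube/config

# Use it without touching your default ~/.kube/config
cd ~/myproject
kubectl get pods --kubeconfig ./.kube/config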

See: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/

For anyone landing on this question, the az CLI solves the problem.

az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup

This will merge the Azure context into your local .kube\config (in case you already have a connection set up; mine was C:\Users\[user]\.kube\config) and switch to the Azure Kubernetes Service connection.
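A typical sequence, assuming the Azure CLI is installed and you have access to the resource group (cluster and group names are the placeholders from above):

az login

# Merge the AKS credentials into your local kubeconfig and switch to that context
az aks get-credentials --name MyManagedCluster --resource-group MyResourceGroup

# Verify which context is now active and that the cluster responds
kubectl config current-context
kubectl get nodes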

Reference

Locate the .kube directory on your k8s machine.
On Linux/Unix it will be at /root/.kube
On Windows it will be at C:/User//.kube
Copy the config file from the .kube folder of the k8s cluster to the .kube folder of your local machine.
Copy client-certificate: /etc/cfc/conf/kubecfg.crt and client-key: /etc/cfc/conf/kubecfg.key to the .kube folder of your local machine.
Edit the config file in the .kube folder of your local machine and update the paths of kubecfg.crt and kubecfg.key to their locations on your local machine:
/etc/cfc/conf/kubecfg.crt --> C:\Users\.kube\kubecfg.crt
/etc/cfc/conf/kubecfg.key --> C:\Users\.kube\kubecfg.key
Now you should be able to interact with the cluster. Run 'kubectl get pods' and you will see the pods on the k8s cluster.
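A sketch of how the user entry in the copied config might look after updating the paths (the user name and Windows paths are placeholders for your own setup):

users:
- name: admin
  user:
    client-certificate: C:\Users\<user>\.kube\kubecfg.crt   # was /etc/cfc/conf/kubecfg.crt
    client-key: C:\Users\<user>\.kube\kubecfg.key            # was /etc/cfc/conf/kubecfg.key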

How did you set up your cluster? To access the cluster remotely you need a kubeconfig file (it looks like you don't have one), and the setup scripts generate a local kubeconfig file as part of the cluster deployment process (because otherwise the cluster you just deployed wouldn't be usable). If someone else deployed the cluster, you should follow the instructions on the page you linked to in order to get a copy of the required client credentials for connecting to the cluster.

The Azure setup only exposes the SSH ports externally. The generated SSH configuration can be found under ./output/kube_xxxxxxxxxx_ssh_conf. What I did is make the API available on my machine by adding an SSH port tunnel. Go into the above file and, under the "Host *" section, add another line like the one below:

LocalForward 8080 127.0.0.1:8080

This maps my local machine's port 8080 (where kubectl looks for the default context) to port 8080 on the remote machine, where the master listens for API calls. When you open SSH to kube-00 as the regular docs show, you can now make calls from your local kubectl without any extra configuration.
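Putting it together, roughly (the conf file name and host alias are placeholders matching the generated SSH config):

# In ./output/kube_xxxxxxxxxx_ssh_conf, under "Host *":
#   LocalForward 8080 127.0.0.1:8080

# Open the tunnel by SSH'ing to the master as usual
ssh -F ./output/kube_xxxxxxxxxx_ssh_conf kube-00

# In another terminal, kubectl can now reach the API server through the tunnel
kubectl -s http://localhost:8080 get nodes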

I was trying to set up kubectl on a different client from the one I originally created the kops cluster from. Not sure if this would work on Azure, but it worked on an AWS-backed (kops) cluster:

kops / kubectl - how do I import state created on another server?

For clusters that are created manually on a cloud provider's VMs, just get the kubeconfig from ~/.kube/config. However, for managed services like GKE you will have to rely on gcloud to generate the kubeconfig at runtime with the right token.

Generally, a service account can be created that will help in getting the right kubeconfig, with a token generated for you. Something similar can also be found in Azure.
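A minimal sketch of that approach, assuming kubectl v1.24+ (where kubectl create token is available) and placeholder names; the cluster name "test" refers to the example entry above, and the broad role binding should be narrowed in practice:

# Create a service account and give it (read-only) cluster access
kubectl create serviceaccount remote-user -n default
kubectl create clusterrolebinding remote-user-view --clusterrole=view --serviceaccount=default:remote-user

# Issue a token for it and wire it into a kubeconfig user and context
TOKEN=$(kubectl create token remote-user -n default)
kubectl config set-credentials remote-user --token="$TOKEN"
kubectl config set-context remote --cluster=test --user=remote-user
kubectl config use-context remote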

If you are on Windows, check your %HOME% environment variable; it should point to your user directory. Then create the folder ".kube" in "C:/users/your_user" and within that folder create your "config" file as described by "Phagun Baya".

echo %HOME%
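For example, from a Windows command prompt, assuming %HOME% points at C:\Users\your_user:

echo %HOME%

mkdir "%HOME%\.kube"

REM Paste the remote cluster's kubeconfig contents into this file
notepad "%HOME%\.kube\config"

kubectl get pods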
