How to automatically authenticate against the kubernetes cluster after creating it with terraform in Azure?
I am trying to create a kubernetes cluster, a namespace, and secrets via terraform. The cluster is created, but the resources that build upon the cluster fail to be created.

This is the error message thrown by terraform after the kubernetes cluster has been created, when the namespace is about to be created:
azurerm_kubernetes_cluster_node_pool.mypool: Creation complete after 6m4s [id=/subscriptions/aaabcde1-abcd-abcd-abcd-aaaaaaabdce/resourcegroups/myrg/providers/Microsoft.ContainerService/managedClusters/my-aks/agentPools/win]
Error: Post https://my-aks-abcde123.hcp.australiaeast.azmk8s.io:443/api/v1/namespaces: dial tcp: lookup my-aks-abcde123.hcp.australiaeast.azmk8s.io on 10.128.10.5:53: no such host
on mytf.tf line 114, in resource "kubernetes_namespace" "my":
114: resource "kubernetes_namespace" "my" {
I can resolve this by manually authenticating against the kubernetes cluster via the command line and then applying the outstanding terraform changes with another terraform apply:

az aks get-credentials -g myrg -n my-aks --overwrite-existing
My attempt to automate this authentication step failed. I have tried a local-exec provisioner inside the definition of the kubernetes cluster, without success:
resource "azurerm_kubernetes_cluster" "myCluster" {
  name                = "my-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "my-aks"

  network_profile {
    network_plugin = "azure"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_B2s"
  }

  service_principal {
    client_id     = azuread_service_principal.tfapp.application_id
    client_secret = azuread_service_principal_password.tfapp.value
  }

  tags = {
    Environment = "demo"
  }

  windows_profile {
    admin_username = "myself"
    admin_password = random_string.password.result
  }

  provisioner "local-exec" {
    command = "az aks get-credentials -g myrg -n my-aks --overwrite-existing"
  }
}
This is an example of a resource that fails to be created:
resource "kubernetes_namespace" "my" {
  metadata {
    name = "my-namespace"
  }
}
Is there a way to fully automate the creation of my resources, including those that are based on the kubernetes cluster, without manual authentication?
For your requirements, I think you can separate the creation of the AKS cluster from the creation of the resources in the AKS cluster.
During the creation of the AKS cluster, you just need to put the local-exec provisioner in a null_resource, like this:
resource "null_resource" "example" {
  provisioner "local-exec" {
    command = "az aks get-credentials -g ${azurerm_resource_group.rg.name} -n my-aks --overwrite-existing"
  }
}
When the AKS cluster creation is finished, you create your namespace through Terraform again. This way, you do not need to authenticate manually; just execute the Terraform code.
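As written, nothing forces the null_resource to wait for the cluster, since it only references the resource group. A minimal sketch of a more robust variant, assuming the cluster resource from the question (the resource name "kubeconfig" and the trigger are illustrative):

```hcl
resource "null_resource" "kubeconfig" {
  # Re-fetch credentials whenever the cluster is replaced
  triggers = {
    cluster_id = azurerm_kubernetes_cluster.myCluster.id
  }

  # Explicit dependency: run only after the cluster has been created
  depends_on = [azurerm_kubernetes_cluster.myCluster]

  provisioner "local-exec" {
    command = "az aks get-credentials -g ${azurerm_resource_group.rg.name} -n ${azurerm_kubernetes_cluster.myCluster.name} --overwrite-existing"
  }
}
```

The depends_on makes the ordering explicit even though the command also interpolates cluster attributes.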
In the documentation for the Terraform AKS resource there is an example of creating an authenticated Kubernetes provider:
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.main.kube_config.0.host}"
  username               = "${azurerm_kubernetes_cluster.main.kube_config.0.username}"
  password               = "${azurerm_kubernetes_cluster.main.kube_config.0.password}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.main.kube_config.0.cluster_ca_certificate)}"
}
Then you can create a Kubernetes namespace or secret with Terraform.
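Since the question also asks about secrets: with the provider wired to the cluster's kube_config like this, a secret can be created in the same apply without any kubeconfig file on disk. A hedged sketch, reusing the namespace from the question (the secret name and data values are illustrative):

```hcl
resource "kubernetes_secret" "example" {
  metadata {
    name      = "my-secret"
    namespace = kubernetes_namespace.my.metadata.0.name
  }

  # The provider base64-encodes these values for the Kubernetes API
  data = {
    username = "myself"
    password = random_string.password.result
  }
}
```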
I eventually got this to work, without any requirement to use az login, or az aks get-credentials in a null_resource or local-exec provisioner as suggested above. Instead, I used a data block in main.tf to obtain the AKS cluster (output from the AKS module), and used the kube_admin_config from the data source as credentials for the Kubernetes provider block. See below:
data "azurerm_kubernetes_cluster" "aks" {
  name                = local.aks_cluster_name
  resource_group_name = module.infra_resource_group.rg.name

  depends_on = [
    module.aks.aks_cluster
  ]
}
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)
}
NOTE: I found that using kube_config in the Kubernetes provider block did not work. It needed higher permissions, which is why I used the kube_admin_config attribute instead.
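For the depends_on = [module.aks.aks_cluster] reference above to work, the AKS module has to expose the cluster as an output. A minimal sketch, assuming the module names its cluster resource azurerm_kubernetes_cluster.this (both the file layout and resource name are hypothetical):

```hcl
# modules/aks/outputs.tf (hypothetical layout)
output "aks_cluster" {
  description = "ID of the managed cluster, exposed so callers can depend on its creation"
  value       = azurerm_kubernetes_cluster.this.id
}
```

Depending on an output like this ensures the data block is only read after the module has finished creating the cluster.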