Create GKE cluster and namespace with Terraform

I need to create a GKE cluster, then create a namespace and install a db through helm into that namespace. Right now I have gke-cluster.tf, which creates the cluster with a node pool, and helm.tf, which has the kubernetes provider and a helm_release resource. It first creates the cluster, but then tries to install the db while the namespace doesn't exist yet, so I have to run terraform apply again and then it works. I want to avoid a multiple-folder setup and run terraform apply only once. What's the good practice for a situation like this? Thanks for the answers.

The create_namespace argument of the helm_release resource can help you.

create_namespace - (Optional) Create the namespace if it does not yet exist. Defaults to false.

https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release#create_namespace
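For example, a minimal sketch that reuses the chart from the answer below (the chart path and namespace name are placeholders):

resource "helm_release" "arango-crd" {
  name      = "arango-crd"
  chart     = "./kube-arangodb-crd"
  namespace = "prod"

  # create the "prod" namespace automatically if it does not exist yet
  create_namespace = true
}

With create_namespace = true there is no need for a separate kubernetes_namespace resource or an explicit depends_on ordering for the namespace.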

Alternatively, you can define a dependency between the namespace resource and helm_release like below:

resource "kubernetes_namespace" "prod" {
  metadata {
    annotations = {
      name = "prod-namespace"
    }

    labels = {
      namespace = "prod"
    }

    name = "prod"
  }
}
resource "helm_release" "arango-crd" { 
  name = "arango-crd" 
  chart = "./kube-arangodb-crd"
  namespace = "prod"  

  depends_on = [ kubernetes_namespace.prod ]
}

The solution posted by user adp is correct, but I wanted to give more insight into using Terraform for this particular example with regard to running a single command:

  • $ terraform apply --auto-approve

Based on the following comments:

Can you tell how you are creating your namespace? Is it with the kubernetes provider? - Dawid Kruk

resource "kubernetes_namespace" - Jozef Vrana资源“kubernetes_namespace” - Jozef Vrana

This setup needs a specific order of execution: first the cluster, then the resources. By default, Terraform will try to create all of the resources at the same time, so it is crucial to use the depends_on = [VALUE] meta-argument.

The next issue is that the kubernetes provider will try to fetch the credentials from ~/.kube/config at the start of the process. It will not wait for the cluster provisioning to get the actual credentials. It could:

  • fail when there is no .kube/config
  • fetch credentials for the wrong cluster.

There is an ongoing feature request to resolve this kind of use case (there are also some workarounds):
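One commonly used workaround is to configure the kubernetes provider from the cluster's attributes instead of ~/.kube/config. A minimal sketch, assuming the google_container_cluster.gke-terraform resource defined below:

# access token of the identity running Terraform
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${google_container_cluster.gke-terraform.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.gke-terraform.master_auth[0].cluster_ca_certificate)
}

Since the provider arguments reference the cluster resource directly, the credentials always point at the cluster that was just created, not at whatever happens to be in the local kubeconfig.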

As an example of the problematic behavior:

# Create cluster
resource "google_container_cluster" "gke-terraform" {
  project = "PROJECT_ID"
  name     = "gke-terraform"
  location = var.zone
  initial_node_count = 1
}

# Get the credentials
resource "null_resource" "get-credentials" {
  depends_on = [google_container_cluster.gke-terraform]

  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${google_container_cluster.gke-terraform.name} --zone=europe-west3-c"
  }
}

# Create a namespace
resource "kubernetes_namespace" "awesome-namespace" {
  depends_on = [null_resource.get-credentials]

  metadata {
    name = "awesome-namespace"
  }
}

Assuming that you had earlier configured a cluster to work on and didn't delete it:

  • Credentials for the Kubernetes cluster are fetched.

  • Terraform will create a cluster named gke-terraform.

  • Terraform will run a local command to get the credentials for the gke-terraform cluster.

  • Terraform will create a namespace (using the old information):

    • if you had another cluster configured in .kube/config, it will create the namespace in that (previous) cluster
    • if you deleted your previous cluster, it will try to create the namespace in that cluster and fail
    • if you had no .kube/config, it will fail at the start

Important!

Using "helm_release" resource seems to get the credentials when provisioning the resources, not at the start!使用“helm_release”资源似乎是在配置资源时获取凭据,而不是在开始时!

As said, you can use the helm provider to provision the resources on your cluster and avoid the issues described above.
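The helm provider can also be pointed at the new cluster explicitly, mirroring the kubernetes provider sketch above and reusing its google_client_config data source (a sketch; depending on the provider version you may additionally need load_config_file = false inside the kubernetes block):

provider "helm" {
  kubernetes {
    host                   = "https://${google_container_cluster.gke-terraform.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(google_container_cluster.gke-terraform.master_auth[0].cluster_ca_certificate)
  }
}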

An example of running a single command to create a cluster and provision resources on it:

variable "zone" {
  type    = string
  default = "europe-west3-c"
}

resource "google_container_cluster" "gke-terraform" {
  project = "PROJECT_ID"
  name     = "gke-terraform"
  location = var.zone
  initial_node_count = 1
}

data "google_container_cluster" "gke-terraform" { 
  project = "PROJECT_ID"
  name     = "gke-terraform"
  location = var.zone
}

resource "null_resource" "get-credentials" {

 # do not start before resource gke-terraform is provisioned
 depends_on = [google_container_cluster.gke-terraform] 

 provisioner "local-exec" {
   command = "gcloud container clusters get-credentials ${google_container_cluster.gke-terraform.name} --zone=${var.zone}"
 }
}


resource "helm_release" "mydatabase" {
  name  = "mydatabase"
  chart = "stable/mariadb"
  
  # do not start before the get-credentials resource is run 
  depends_on = [null_resource.get-credentials] 

  set {
    name  = "mariadbUser"
    value = "foo"
  }

  set {
    name  = "mariadbPassword"
    value = "qux"
  }
}

Using the above configuration will yield:

data.google_container_cluster.gke-terraform: Refreshing state...
google_container_cluster.gke-terraform: Creating...
google_container_cluster.gke-terraform: Still creating... [10s elapsed]
<--OMITTED-->
google_container_cluster.gke-terraform: Still creating... [2m30s elapsed]
google_container_cluster.gke-terraform: Creation complete after 2m38s [id=projects/PROJECT_ID/locations/europe-west3-c/clusters/gke-terraform]
null_resource.get-credentials: Creating...
null_resource.get-credentials: Provisioning with 'local-exec'...
null_resource.get-credentials (local-exec): Executing: ["/bin/sh" "-c" "gcloud container clusters get-credentials gke-terraform --zone=europe-west3-c"]
null_resource.get-credentials (local-exec): Fetching cluster endpoint and auth data.
null_resource.get-credentials (local-exec): kubeconfig entry generated for gke-terraform.
null_resource.get-credentials: Creation complete after 1s [id=4191245626158601026]
helm_release.mydatabase: Creating...
helm_release.mydatabase: Still creating... [10s elapsed]
<--OMITTED-->
helm_release.mydatabase: Still creating... [1m40s elapsed]
helm_release.mydatabase: Creation complete after 1m44s [id=mydatabase]
