I have launched a private GKE cluster using the terraform resource "google_container_cluster" with a private_cluster_config block in it.
I have added master_authorized_networks_config to allow my own IP address in the authorized networks for the GKE master.
And I have added a k8s namespace using the terraform resource "kubernetes_namespace".
I have also configured the google and kubernetes providers, the k8s token, cluster_ca_certificate, etc. correctly, and the namespace was indeed provisioned by this terraform.
resource "google_container_cluster" "k8s_cluster" {
  # .....
  # .....
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
  ip_allocation_policy {} # enables VPC-native
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "0.0.0.0/0"
      display_name = "World"
    }
  }
  # .....
  # .....
}
data "google_client_config" "google_client" {}

data "google_container_cluster" "k8s_cluster" {
  name     = google_container_cluster.k8s_cluster.name
  location = var.location
}

provider "kubernetes" {
  # following this example https://www.terraform.io/docs/providers/google/d/datasource_client_config.html#example-usage-configure-kubernetes-provider-with-oauth2-access-token
  version          = "1.11.1"
  load_config_file = false
  host             = google_container_cluster.k8s_cluster.endpoint
  token            = data.google_client_config.google_client.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.k8s_cluster.master_auth.0.cluster_ca_certificate
  )
}
resource "kubernetes_namespace" "namespaces" {
  depends_on = [google_container_node_pool.node_pool]
  for_each   = toset(["my-ns"]) # for_each requires a set or map, not a list
  metadata {
    name = each.value
  }
}
Then I ran terraform apply and the namespace was created fine ✅✅✅
kubernetes_namespace.namespaces["my-ns"]: Creating...
kubernetes_namespace.namespaces["my-ns"]: Creation complete after 1s [id=my-ns]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
However, when I run terraform apply or terraform plan again and terraform tries to refresh the namespace resource,
data.google_container_cluster.k8s_cluster: Refreshing state...
kubernetes_namespace.namespaces["my-ns"]: Refreshing state... [id=my-ns]
it intermittently throws the following error. ❌ ❌ ❌
Error: Get http://localhost/api/v1/namespaces/my-ns: dial tcp 127.0.0.1:80: connect: connection refused
It sometimes passes and sometimes fails - intermittently.
Where would you advise I look to fix this intermittent error?
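For reference, the http://localhost in the error suggests the provider's host attribute was empty at refresh time, so the provider fell back to its default. A quick way to confirm this is a debug output (the output name gke_endpoint_debug is just illustrative):

```hcl
# Hypothetical debug output: if this prints empty during plan/refresh,
# the kubernetes provider's "host" is empty too and the provider falls
# back to http://localhost, matching the intermittent error above.
output "gke_endpoint_debug" {
  value = google_container_cluster.k8s_cluster.endpoint
}
```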
It may be an issue with k8s contexts. You should create a dedicated, unique k8s context to access your GKE cluster and specify it in the terraform provider:
provider "kubernetes" {
  config_context = var.K8S_CONTEXT
  version        = "1.10"
}
Run kubectl config get-contexts to list all your k8s contexts.
The following Terraform resource may be useful to create the context for your GKE cluster automatically:
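For illustration, a GKE context entry in ~/.kube/config typically follows the gke_&lt;project&gt;_&lt;location&gt;_&lt;cluster&gt; naming convention; a sketch with placeholder names:

```yaml
# Excerpt of ~/.kube/config - all names here are placeholders.
contexts:
- context:
    cluster: gke_my-project_us-central1-a_my-cluster
    user: gke_my-project_us-central1-a_my-cluster
  name: gke_my-project_us-central1-a_my-cluster
current-context: gke_my-project_us-central1-a_my-cluster
```

The kubectl config rename-context trick below relies on exactly this naming convention.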
resource "null_resource" "local_k8s_context" {
  depends_on = [google_container_cluster.gke_cluster_0]
  provisioner "local-exec" {
    command = "gcloud container clusters get-credentials ${var.GKE_CLUSTER_NAME} --project=${var.GCP_PROJECT_ID} --zone=${var.GKE_MASTER_REGION} && ( kubectl config delete-context ${var.K8S_CONTEXT}; kubectl config rename-context gke_${var.GCP_PROJECT_ID}_${var.GKE_MASTER_REGION}_${var.GKE_CLUSTER_NAME} ${var.K8S_CONTEXT} )"
  }
}
I think you can report the issue on https://github.com/terraform-providers/terraform-provider-google/issues; it's a good place to report issues with Terraform and GCP.
Regards.
In my case, the source of the issue was this limitation of terraform import:
"The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs"
In your case, your kubernetes provider block has several config options that are computed (non-variable) inputs:
host = google_container_cluster.k8s_cluster.endpoint
token = data.google_client_config.google_client.access_token
My workaround was to create a kubeconfig.yaml file and temporarily replace the provider config with something like the following:
provider "kubernetes" {
  config_path = "kubeconfig.yaml"
}
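For reference, a kubeconfig.yaml like the one referenced above can be generated with gcloud container clusters get-credentials (pointing KUBECONFIG at the file first). A rough sketch of its shape, with all names and the endpoint as placeholders:

```yaml
# Rough sketch of a gcloud-generated kubeconfig - all values are placeholders.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA certificate>
    server: https://172.16.0.2
  name: gke_my-project_us-central1-a_my-cluster
contexts:
- context:
    cluster: gke_my-project_us-central1-a_my-cluster
    user: gke_my-project_us-central1-a_my-cluster
  name: gke_my-project_us-central1-a_my-cluster
current-context: gke_my-project_us-central1-a_my-cluster
users:
- name: gke_my-project_us-central1-a_my-cluster
  user:
    auth-provider:
      name: gcp # delegates authentication to gcloud credentials
```

Because every value in this file is static, it satisfies the terraform import limitation on non-variable provider inputs.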
This allowed me to run the import, and then I restored the previous variable-based config.