
terraform output Google Kubernetes cluster ingress load balancer IP

I've managed to automate Kubernetes cluster deployment with Terraform. After bringing up the cluster, Terraform also deploys my apps to it using provisioning (running an .sh script with local-exec). I am also adding an ingress to the cluster, and I need to get the ingress load balancer IP once it is created. The preferred option is a Terraform output. The way I am getting it now is by running this snippet at the end of my script:

IP="$(kubectl get ingress appname --no-headers | awk '{print $3}')"
echo "Load Balancer IP $IP"

However, this approach has its issues: I need to add a sleep before running this command to be sure the IP is already assigned, and I can't be sure the added sleep time is enough. What I actually need is something like the outputs below, but for my ingress load balancer IP:

output "google_container_cluster_endpoint" {
  value = "${google_container_cluster.k8s.endpoint}"
}

output "google_container_cluster_master_version" {
  value = "${google_container_cluster.k8s.master_version}"
}
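For reference, the kubectl lookup in the question can be made less brittle with jsonpath and a bounded wait instead of positional awk columns. A sketch; the ingress name appname comes from the question, while the retry count and sleep interval are assumptions:

```shell
#!/bin/bash
# Sketch: poll for the ingress address via jsonpath instead of awk column
# positions, so the lookup survives changes to kubectl's table layout.
# The retry/sleep values below are arbitrary assumptions.
wait_for_ingress_ip() {
  local name=$1 tries=${2:-30} ip=""
  for _ in $(seq 1 "$tries"); do
    ip=$(kubectl get ingress "$name" \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
    [ -n "$ip" ] && break
    sleep 10
  done
  echo "$ip"
}

# Usage: IP=$(wait_for_ingress_ip appname); echo "Load Balancer IP $IP"
```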

I have managed to get the external ingress IP with a fully declarative approach. It is based on several providers: azurerm, kubernetes, and helm. I am targeting Azure Kubernetes Service (AKS), but the solution is cloud agnostic.

Solution explanation:

Use the kubernetes provider to connect to the cluster after the ingress is created. The kubernetes provider allows reading service data such as the external IP.

Providers overview:

  • The azurerm provider is used for Azure communication,
    • It is possible to create the Kubernetes (K8s) cluster via a different provider,
  • The helm provider is used for the ingress installation,
    • It is possible to create the ingress using a different approach,
  • The kubernetes provider allows me to query the LoadBalancer service.

Short snippet

provider "kubernetes" { }

provider "helm" { }

resource "helm_release" "nginx-ingress" {
  name             = "nginx-ingress"
  namespace        = "nginx-ingress"
  create_namespace = true
  repository       = "https://kubernetes-charts.storage.googleapis.com"
  chart            = "nginx-ingress"

  set {
    name  = "controller.replicaCount"
    value = "2"
  }
}

data "kubernetes_service" "service_ingress" {
  metadata {
    name      = "nginx-ingress-controller"
    namespace = "nginx-ingress"
  }

  depends_on = [ helm_release.nginx-ingress ] 
}

output "ip" {
  value = data.kubernetes_service.service_ingress.load_balancer_ingress.0.ip
}

Complete snippet

variable "subscription_id" {
  type = string
}

variable "client_id" {
  type = string
}

variable "client_secret" {
  type = string
}

variable "tenant_id" {
  type = string
}

variable "resource_location"{
  type = string
}

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "2.29.0"
    }

    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "1.13.2"
    }

    helm = {
      source = "hashicorp/helm"
      version = "1.3.1"
    }
  }
}

provider "azurerm" {
  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
  features {}
}

provider "kubernetes" {
  load_config_file       = "false"
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    load_config_file       = "false"
    host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
  }
}

data "kubernetes_service" "service_ingress" {
  metadata {
    name      = "nginx-ingress-controller"
    namespace = "nginx-ingress"
  }

  depends_on = [ helm_release.nginx-ingress ] 
}

resource "azurerm_resource_group" "rg" {
  name     = "myapp"
  location = var.resource_location
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "myapp"
  location            = var.resource_location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "myapp"
  kubernetes_version  = "1.17.11"

  default_node_pool {
    name            = "default"
    node_count      = 2
    vm_size         = "Standard_B2s"
    os_disk_size_gb = 30
    type = "VirtualMachineScaleSets"
    enable_auto_scaling = false
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  role_based_access_control {
    enabled = true
  }
}

resource "helm_release" "nginx-ingress" {
  name             = "nginx-ingress"
  namespace        = "nginx-ingress"
  create_namespace = true
  repository       = "https://kubernetes-charts.storage.googleapis.com"
  chart            = "nginx-ingress"

  set {
    name  = "controller.replicaCount"
    value = "2"
  }

  set {
    name  = "controller.nodeSelector.kubernetes\\.io/os"
    value = "linux"
  }

  set {
    name  = "defaultBackend.nodeSelector.kubernetes\\.io/os"
    value = "linux"
  }
}

output "ip" {
  value = data.kubernetes_service.service_ingress.load_balancer_ingress.0.ip
}
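Since the question targets GKE rather than AKS, the same pattern can be sketched against a google_container_cluster resource. This is a hypothetical adaptation: the resource name k8s comes from the question's outputs, and authenticating the kubernetes provider with a gcloud access token is one option among several:

```hcl
# Hypothetical GKE adaptation of the same pattern (not from the answer above).
# "k8s" is the cluster resource name used in the question's outputs.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${google_container_cluster.k8s.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.k8s.master_auth.0.cluster_ca_certificate)
}
```

The helm_release, kubernetes_service data source, and output blocks then stay exactly as in the AKS snippet.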

After a long Saturday, I've ended up with a solution to the problem. I had almost the same issue, so here is my solution; it can surely be improved.

I divided it in two parts:

1.- I use local-exec to run a script that waits for a valid IP on the LoadBalancer.
2.- Terraform, using an External Data Source, calls a "program" that answers in JSON format. My "program" is a bash script that grabs the IP. As a result, I have my desired data in a variable.

I did it this way because I didn't know how to debug issues using the External Data Source, and I was seeing strange behavior.

First I run the code that waits for a valid IP. Terraform calls it via local-exec:


provisioner "local-exec" {
  command = "./public-ip.sh"
  interpreter = ["/bin/bash", "-c"]
}
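The snippet above is only the provisioner block; it has to live inside some resource. One possible home (an assumption on my part, not spelled out in the answer) is a null_resource that depends on the cluster resource google_container_cluster.tests that this answer references:

```hcl
# Sketch: wrapping the local-exec provisioner in a null_resource so it
# runs only after the cluster exists. The resource name
# google_container_cluster.tests comes from this answer.
resource "null_resource" "wait_for_public_ip" {
  depends_on = [google_container_cluster.tests]

  provisioner "local-exec" {
    command     = "./public-ip.sh"
    interpreter = ["/bin/bash", "-c"]
  }
}
```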

And this is the script I've used


#!/bin/bash
# public-ip.sh
# Exit if any of the intermediate steps fail
set -e

function valid_ip()
{
    local  ip=$1
    local  stat=1

    if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
        OIFS=$IFS
        IFS='.'
        ip=($ip)
        IFS=$OIFS
        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

########################
# Grab the Public IP   #
########################
WaitingTime=0
HaveIP=NO

echo "Let's check that LoadBalancer IP..."
MyPublicIP=$(kubectl get services --all-namespaces| grep LoadBalancer | awk '{print $5}')
valid_ip $MyPublicIP && HaveIP="OK"

until [ "$HaveIP" = "OK" -o "$WaitingTime" -ge 30 ]; do
    echo "sleeeping...."
    sleep 10
    echo "Play it again Sam..."
    MyPublicIP=$(kubectl get services --all-namespaces| grep LoadBalancer | awk '{print $5}')
    valid_ip $MyPublicIP && HaveIP="OK"
    WaitingTime=$((WaitingTime+1))
    echo $WaitingTime
done
if [ "$HaveIP" = "OK" ]; then echo "And the public IP is... $MyPublicIP"; else echo "WT_ has happened now!!!"; fi
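The valid_ip helper can be sanity-checked on its own, without a cluster. Here it is copied verbatim, with a few illustrative inputs (the addresses are placeholders I chose):

```shell
#!/bin/bash
# Standalone sanity check for the valid_ip function above (copied verbatim):
# it accepts only dotted quads whose octets are all <= 255.
function valid_ip()
{
    local  ip=$1
    local  stat=1

    if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
        OIFS=$IFS
        IFS='.'
        ip=($ip)
        IFS=$OIFS
        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

valid_ip "10.0.0.1"  && echo "10.0.0.1 is valid"
valid_ip "999.0.0.1" || echo "999.0.0.1 is rejected"
valid_ip "<pending>" || echo "<pending> is rejected"
```

This matters because kubectl prints `<pending>` in the EXTERNAL-IP column before the address is assigned, and the function correctly rejects it.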

Once I know the IP is ready, I just need to grab it. Pay attention to the depends_on, which ensures the data is read only after my resource (google_container_cluster.tests) has been created, and not whenever Terraform wants. Test it; it's tricky...


data "external" "insights-public-ip" {
  program = ["sh", "test-jq.sh" ]
  depends_on = ["google_container_cluster.tests"]
}

output "insights-public-ip" {
  value = "${data.external.insights-public-ip.result}"
}

And this is test-jq.sh ("test" because it's the first time I've used jq :S), the script I'm calling to print out the data in JSON format:


#!/bin/bash
#test-jq.sh
set -e
MyPublicIP=$(kubectl get services --all-namespaces | grep insights | grep LoadBalancer | awk '{print $5}')
jq -n --arg foobaz "$MyPublicIP" '{"extvar":$foobaz}'
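Terraform's external data source requires the program to print exactly one JSON object whose values are all strings, which is what the jq call produces. For illustration (203.0.113.10 is a placeholder address, and the -c flag just compacts the output to one line):

```shell
# What test-jq.sh emits for a sample address. jq -n builds JSON from
# scratch; --arg passes the shell variable in safely (no quoting bugs).
MyPublicIP="203.0.113.10"   # placeholder; the real script gets this from kubectl
jq -cn --arg foobaz "$MyPublicIP" '{"extvar":$foobaz}'
# prints {"extvar":"203.0.113.10"}
```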

Hope it helps. At least I've resolved my stuff.

This can also be done using only kubectl, in this way:

#!/bin/bash
# Poll until the istio-ingressgateway service has an external IP, then
# print it as JSON ({"ip": "x.x.x.x"}) for Terraform's external data source.
while true;
do
  OUT=$(kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{"{\"ip\": "}{"\""}{.status.loadBalancer.ingress[0].ip}{"\"}"}')

  if echo "$OUT" | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" >/dev/null; then
      echo "$OUT"
      exit 0
  fi
  sleep 5   # avoid busy-waiting while the address is still pending
done

Now you can call the script from your external data source as follows:

data "external" "external-public-ip" {
  program = ["sh", "get-ip.sh" ]
  depends_on = [kubernetes_service.foo]
}

output "external-public-ip" {
  value = "${data.external.external-public-ip.result}"
}

A slightly modified version of the answers above:

Create the service:

resource "kubernetes_service" "service" {
  metadata {
    name      = var.service_name
    namespace = var.deployment_namespace
  }
   ...
}

Data source for the service created:

data "kubernetes_service" "service_ingress" {
  metadata {
    name      = var.service_name
    namespace = var.deployment_namespace
  }
  depends_on = [kubernetes_service.service]
}

Output the IP:

output "gke_deployment_lb_ip" {
  value       = data.kubernetes_service.service_ingress.status[0].load_balancer[0].ingress[0].ip
  description = "Deployment ALB IP"
}
