
Terraform: connect to AKS cluster without running az login

My goal is to create an Ubuntu VM that can connect to an AKS cluster without running `az login`.

The idea is to let other people connect only to that AKS cluster, without being able to read/write any other resources on Azure. So far I've tried to achieve this by creating a new role and assigning it to the VM, but with no luck.

My question is: is it possible to run `az aks get-credentials ...` without running `az login`?
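In other words, I'd like something like the following flow to work on the VM, with no interactive login (resource group and cluster names here are placeholders matching the template below):

```shell
# Desired flow on the VM, authenticating as its managed identity
# instead of a user account:
az login --identity
az aks get-credentials --resource-group aksrg --name akscluster-0
kubectl get pods
```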

Terraform template:

# Create AKS Cluster
resource "azurerm_kubernetes_cluster" "akscluster" {
  count               = var.cluster_count
  name                = "${var.cluster_name}-${count.index}"
  location            = var.location
  resource_group_name = azurerm_resource_group.aksrg.name
  dns_prefix          = var.dns

  default_node_pool {
    name       = var.node_pool_name
    node_count = var.node_count
    vm_size    = var.vm_size
    type       = "VirtualMachineScaleSets"
  }

  service_principal {
    client_id     = var.kubernetes_client_id
    client_secret = var.kubernetes_client_secret
  }

  tags = {
    Environment = var.tags
  }
}

# Create virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
  count                 = var.cluster_count
  name                  = "aks-${count.index}"
  location              = var.location
  resource_group_name   = azurerm_resource_group.aksrg.name
  # NOTE: the NIC is not created per-VM here; with cluster_count > 1,
  # each VM would need its own network interface.
  network_interface_ids = [azurerm_network_interface.myterraformnic.id]
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "myOsDisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = var.admin
    admin_password = var.pass
  }

  os_profile_linux_config {
    disable_password_authentication = false
    # ssh_keys {
    #     path     = "/home/azureuser/.ssh/authorized_keys"
    #     key_data = "ssh-rsa AAAAB3Nz{snip}hwhqT9h"
    # }
  }

  identity {
    type = "SystemAssigned"
  }

  boot_diagnostics {
    enabled     = true
    storage_uri = azurerm_storage_account.mystorageaccount.primary_blob_endpoint
  }

  tags = {
    environment = var.tags
  }
}


resource "azurerm_virtual_machine_extension" "example" {
  count                = var.cluster_count
  name                 = "hostname"
  virtual_machine_id   = azurerm_virtual_machine.myterraformvm[count.index].id
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"

  settings = <<SETTINGS
    {
        "commandToExecute": "curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash"
    }
SETTINGS


  tags = {
    environment = var.tags
  }
}
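One piece the template above appears to be missing is the role assignment that lets the VM's system-assigned identity fetch the cluster credentials. A hedged sketch of what that could look like — the built-in role name is real, and the `principal_id` wiring assumes the `identity` block shown above:

```hcl
# Grant each VM's system-assigned identity permission to fetch the
# cluster's user kubeconfig. This uses a built-in Azure role, so no
# custom role definition is needed for this part.
resource "azurerm_role_assignment" "aks_user" {
  count                = var.cluster_count
  scope                = azurerm_kubernetes_cluster.akscluster[count.index].id
  role_definition_name = "Azure Kubernetes Service Cluster User Role"
  principal_id         = azurerm_virtual_machine.myterraformvm[count.index].identity[0].principal_id
}
```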

You can create a remote file using the remote-exec provisioner, passing in the `azurerm_kubernetes_cluster.aks.kube_config_raw` attribute. I made an example for you here: https://github.com/ams0/terraform-templates/tree/master/aks-vm .

It creates a vnet with 2 subnets, an AKS cluster in one and an Ubuntu VM in the other, and writes a kubeconfig to /home/ubuntu/.kube/config on the VM. You just need to download kubectl and you're good to go.
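The approach can be sketched roughly like this — `kube_config_raw` is a real attribute of `azurerm_kubernetes_cluster`, but the connection details and resource names here are placeholders, assuming SSH access to the VM and a single cluster:

```hcl
# Push the cluster's raw kubeconfig onto the VM over SSH, then move it
# into place so kubectl picks it up automatically.
resource "null_resource" "kubeconfig" {
  connection {
    type     = "ssh"
    host     = azurerm_public_ip.vm_ip.ip_address # placeholder
    user     = var.admin
    password = var.pass
  }

  provisioner "file" {
    content     = azurerm_kubernetes_cluster.akscluster[0].kube_config_raw
    destination = "/tmp/kubeconfig"
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir -p ~/.kube",
      "mv /tmp/kubeconfig ~/.kube/config",
    ]
  }
}
```

With a kubeconfig delivered this way, the VM never needs `az login` at all; access is controlled purely by what that kubeconfig grants.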
