
Terraform: referencing values of looped resources

I'm trying to use a provisioner to write the public IP address of a newly created Azure instance into a file.

I was able to do it for a single instance.

resource "azurerm_public_ip" "helloterraformips" {
    name = "terraformtestip"
    location = "East US"
    resource_group_name = "${azurerm_resource_group.test.name}"
    public_ip_address_allocation = "dynamic"

    tags {
        environment = "TerraformDemo"
    }
}


resource "null_resource" "ansible-provision" {
  depends_on = ["azurerm_virtual_machine.master-vm"]
  count      = "${var.node-count}"

  provisioner "local-exec" {
    command = "echo \"[masters]\n ansible_ssh_host=${azurerm_public_ip.helloterraformips.ip_address}\" >> /home/osboxes/ansible-kube/ansible/inventory/testinv"
  }
}

The trouble is that when I try to do the same for VMs created through Terraform looping (count), I run into issues accessing their attributes.

resource "azurerm_public_ip" "mysvcs-k8sip" {
  count                        = "${var.node-count}"
  name                         = "mysvcs-k8s-ip-${count.index}"
  location                     = "East US"
  resource_group_name          = "${azurerm_resource_group.mysvcs-res.name}"
  public_ip_address_allocation = "dynamic"
}

resource "null_resource" "ansible-provision" {
  provisioner "local-exec" {
    command = "echo \"[masters]\n${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address, count.index)}\" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}

I'm getting this error:

Resource 'azurerm_public_ip.mysvcs-k8sip' does not have attribute 'ip_address' for variable 'azurerm_public_ip.mysvcs-k8sip.*.ip_address'
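
A likely cause: with public_ip_address_allocation = "dynamic", Azure only assigns the address once the public IP is attached to a running NIC, so ip_address is empty when the null_resource is evaluated and the splat lookup fails. One possible workaround, assuming static addresses are acceptable for this setup, is to switch the allocation so the address exists as soon as the resource is created:

```hcl
resource "azurerm_public_ip" "mysvcs-k8sip" {
  count               = "${var.node-count}"
  name                = "mysvcs-k8s-ip-${count.index}"
  location            = "East US"
  resource_group_name = "${azurerm_resource_group.mysvcs-res.name}"

  # With "static" the address is assigned at creation time, so
  # azurerm_public_ip.mysvcs-k8sip.*.ip_address is populated immediately
  # instead of only after the IP is attached to a running machine.
  public_ip_address_allocation = "static"
}
```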

I'm digging into the semantics of Terraform and trying various things, but so far nothing works, and each iteration takes time because all the resources have to be created again. Any help or hint would be very useful.

Thanks,

One workaround that got this working was to first run "terraform apply -target=azurerm_virtual_machine.master-vm", which creates the VM, and then run terraform apply again, which then runs a provisioner containing this:

resource "null_resource" "ansible-k8snodes" {
  count = "${var.node-count}"

  provisioner "local-exec" {
    command = "echo \"\n[nodes]\n ${element(azurerm_public_ip.mysvcs-k8sip.*.ip_address, count.index+1)} ansible_ssh_user=testadmin ansible_ssh_pass=Password1234!\" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}
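
The two-step -target workaround can probably be avoided by reading the addresses back through a data source that depends on the VMs, so the lookup only happens after the machines are up and the dynamic IPs have actually been assigned. This is a sketch, not a verified fix: it assumes the VM resource is azurerm_virtual_machine.master-vm and that your provider/Terraform versions support the azurerm_public_ip data source and depends_on inside data blocks.

```hcl
data "azurerm_public_ip" "mysvcs-k8sip" {
  count               = "${var.node-count}"
  name                = "${element(azurerm_public_ip.mysvcs-k8sip.*.name, count.index)}"
  resource_group_name = "${azurerm_resource_group.mysvcs-res.name}"

  # Defer the read until the VMs exist, when the dynamically
  # allocated addresses are actually populated.
  depends_on = ["azurerm_virtual_machine.master-vm"]
}

resource "null_resource" "ansible-k8snodes" {
  count = "${var.node-count}"

  provisioner "local-exec" {
    # Note the data. prefix: the IPs come from the post-creation lookup.
    command = "echo \"\n[nodes]\n ${element(data.azurerm_public_ip.mysvcs-k8sip.*.ip_address, count.index)} ansible_ssh_user=testadmin ansible_ssh_pass=Password1234!\" >> /home/osboxes/ansible-kube/ansible/inventory/inventory"
  }
}
```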

@Martin - Count or no count, it doesn't matter; it fails every time. In fact, it seems the single-instance code posted above in my question worked only once; when I tried it again, it didn't work. Thanks for your help.
