
Dynamically add resources in Terraform

I set up a Jenkins pipeline that runs Terraform on every run to create a new EC2 instance in our VPC and register it in our private hosted zone on Route 53 (which is created at the same time).

I also managed to save the state in S3, so it doesn't fail when the hosted zone would otherwise be re-created.

The main issue is that on every run Terraform replaces the previous instance with the new one instead of adding it to the pool of instances.

How can I avoid this?

Here's a snippet of my code:

terraform {
  backend "s3" {
    bucket = "<redacted>"
    key    = "<redacted>/terraform.tfstate"
    region = "eu-west-1"
  }
}

provider "aws" {
  region = var.region
}

data "aws_ami" "image" {

  # limit search criteria for performance
  most_recent = var.ami_filter_most_recent
  name_regex  = var.ami_filter_name_regex
  owners      = [var.ami_filter_name_owners]

  # filter on tag purpose
  filter {
    name   = "tag:purpose"
    values = [var.ami_filter_purpose]
  }

  # filter on tag os
  filter {
    name   = "tag:os"
    values = [var.ami_filter_os]
  }

}

resource "aws_instance" "server" {

  # use extracted ami from image data source
  ami = data.aws_ami.image.id

  availability_zone = data.aws_subnet.most_available.availability_zone
  subnet_id         = data.aws_subnet.most_available.id

  instance_type          = var.instance_type
  vpc_security_group_ids = [var.security_group]
  user_data              = var.user_data
  iam_instance_profile   = var.iam_instance_profile

  root_block_device {
    volume_size = var.root_disk_size
  }

  ebs_block_device {
    device_name = var.extra_disk_device_name
    volume_size = var.extra_disk_size
  }

  tags = {
    Name = local.available_name
  }

}

resource "aws_route53_zone" "private" {
  name = var.hosted_zone_name

  vpc {
    vpc_id = var.vpc_id
  }
}

resource "aws_route53_record" "record" {
  zone_id = aws_route53_zone.private.zone_id
  name    = "${local.available_name}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [aws_instance.server.private_ip]

  depends_on = [
    aws_route53_zone.private
  ]
}

The outcome is that my previously created instance is destroyed and a new one is created. What I want is to keep adding instances with this code. Thank you.

Your code declares only one instance, aws_instance.server, and any change to its properties modifies that one instance only. Because your backend is in S3, it acts as a single shared state across all pipeline runs. The same goes for aws_route53_record.record and everything else in your configuration.

If you want different pipeline runs to reuse the exact same configuration, you should either use different workspaces or create a separate Terraform state for each run. The other alternative is to rework your configuration to take a map of instances as an input variable and use for_each to create multiple instances.
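A minimal sketch of the for_each approach, adapted to the resources in the question. The variable name servers, its keys, and its contents are illustrative; adding a key to the map adds an instance, and removing a key destroys only that instance:

```hcl
# Hypothetical input variable: one map entry per instance to keep.
variable "servers" {
  type = map(object({
    instance_type = string
  }))
  default = {
    "web-1" = { instance_type = "t3.micro" }
    "web-2" = { instance_type = "t3.micro" }
  }
}

resource "aws_instance" "server" {
  for_each = var.servers

  ami           = data.aws_ami.image.id
  instance_type = each.value.instance_type

  tags = {
    Name = each.key
  }
}

# One DNS record per instance, addressed by the same map key.
resource "aws_route53_record" "record" {
  for_each = var.servers

  zone_id = aws_route53_zone.private.zone_id
  name    = "${each.key}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = 300
  records = [aws_instance.server[each.key].private_ip]
}
```

The pipeline would then only need to append an entry to the map (e.g. via a .tfvars file) instead of re-declaring the single resource.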

If those instances should be identical, you should manage their count with an aws_autoscaling_group and its desired capacity.
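A sketch of the autoscaling alternative, reusing the AMI data source and subnet from the question. The capacity numbers and name prefix are illustrative; note that Terraform then manages the group, not the individual instances:

```hcl
resource "aws_launch_template" "server" {
  name_prefix   = "server-"
  image_id      = data.aws_ami.image.id
  instance_type = var.instance_type
}

resource "aws_autoscaling_group" "servers" {
  # raising desired_capacity adds instances without replacing existing ones
  desired_capacity    = 3
  min_size            = 1
  max_size            = 5
  vpc_zone_identifier = [data.aws_subnet.most_available.id]

  launch_template {
    id      = aws_launch_template.server.id
    version = "$Latest"
  }
}
```

Per-instance Route 53 A records don't fit this model, though; you would typically point DNS at a load balancer in front of the group instead.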
