
Unable to add capacity provider to AWS ECS cluster with terraform

I'm trying to add a capacity provider to an ECS cluster using terraform so that it can autoscale. The autoscaling group needs to know the cluster in order to create instances in it, but the cluster also needs to know the autoscaling group through its capacity provider. How can I resolve this circular dependency using terraform and a capacity provider?

Here is my infrastructure code for the cluster creation

# The ECS cluster
resource "aws_ecs_cluster" "my_cluster" {
  name = "my-cluster"
  capacity_providers = [aws_ecs_capacity_provider.my_cp.name]
}

# The capacity provider
resource "aws_ecs_capacity_provider" "my_cp" {
  name = "my-cp"

  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.my_asg.arn
    managed_termination_protection = "DISABLED"

    managed_scaling {
      maximum_scaling_step_size = 1000
      minimum_scaling_step_size = 1
      status                    = "ENABLED"
      target_capacity           = 10
    }
  }
}

Here is the infrastructure code for the autoscaling group and its dependencies

# The image for the cluster instances 
data "aws_ssm_parameter" "instance_image" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

# The launch config of the instances
resource "aws_launch_configuration" "my_launch_config" {
  name          = "my-launch-config"
  image_id      = data.aws_ssm_parameter.instance_image.value
  instance_type = "t3.small"
  iam_instance_profile = my_iam_profile
  security_groups = my_security_groups
  associate_public_ip_address = false
  key_name = "my-keypair"

}

# The placement group of the autoscaling group
resource "aws_placement_group" "my_pg" {
  name     = "my-pg"
  strategy = "spread"
}

# The autoscaling group
resource "aws_autoscaling_group" "my_asg" {
  name                      = "my-asg"
  max_size                  = 2
  min_size                  = 1
  desired_capacity          = 1

  health_check_type         = "EC2"
  health_check_grace_period = 300

  force_delete              = true
  placement_group           = aws_placement_group.my_pg.id
  launch_configuration      = aws_launch_configuration.my_launch_config.id
  vpc_zone_identifier       = my_subnets_ids


  tag {
    key                 = "Name"
    value               = "myInstance"
    propagate_at_launch = true
  }
}

When applying this terraform, I do get a capacity provider on my cluster, but the instances register with the default cluster instead of my-cluster. Some will say I just have to add

  user_data = <<-EOF
    #!/bin/bash
    echo ECS_CLUSTER=${aws_ecs_cluster.my_cluster.name} >> /etc/ecs/ecs.config
  EOF

to the launch config, but I cannot reference the cluster in the launch config, because the cluster depends on the capacity provider, which depends on the autoscaling group, which depends on the launch config. So I would have a circular dependency. That being said, support for capacity providers in terraform seems completely useless if we cannot add the capacity provider after the cluster creation.

The way I deal with this issue is based on the fact that your launch configuration (LC) only needs to know the cluster name. At present you are hard-coding the name of the cluster in its definition:

name = "my-cluster"

Thus, the way I do it is to put the name in a variable:

variable "cluster_name" {
  default = "my-cluster"
}

Now you can reference the name anywhere it is needed, without creating a dependency on the cluster resource itself:

# The ECS cluster
resource "aws_ecs_cluster" "my_cluster" {
  name = var.cluster_name
  capacity_providers = [aws_ecs_capacity_provider.my_cp.name]
}

and in the launch configuration's user data, reference the variable instead of the cluster resource:

  user_data = <<-EOF
    #!/bin/bash
    echo ECS_CLUSTER=${var.cluster_name} >> /etc/ecs/ecs.config
  EOF
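
For completeness, here is a minimal sketch of the whole launch configuration with the variable in place. It keeps the same placeholders as the question (my_iam_profile, my_security_groups, the my-keypair key pair), so treat it as an illustration rather than copy-paste-ready code:

# The launch config now only references var.cluster_name, not the
# aws_ecs_cluster resource, so there is no dependency cycle:
# cluster -> capacity provider -> autoscaling group -> launch config -> variable.
resource "aws_launch_configuration" "my_launch_config" {
  name                        = "my-launch-config"
  image_id                    = data.aws_ssm_parameter.instance_image.value
  instance_type               = "t3.small"
  iam_instance_profile        = my_iam_profile
  security_groups             = my_security_groups
  associate_public_ip_address = false
  key_name                    = "my-keypair"

  # Point the ECS agent at the cluster by name so instances register
  # with my-cluster instead of the default cluster.
  user_data = <<-EOF
    #!/bin/bash
    echo ECS_CLUSTER=${var.cluster_name} >> /etc/ecs/ecs.config
  EOF
}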
