
Terraform & OpenStack - Zero downtime flavor change

I'm using openstack_compute_instance_v2 to create instances in OpenStack. The resource has the lifecycle setting create_before_destroy = true, and it works just fine when I, for example, change the volume size, where instances need to be replaced.

But when I change the flavor, which OpenStack can do via its resize-instance operation, Terraform does just that and ignores HA entirely: all instances in the cluster are unavailable for 20-30 seconds until the resize finishes.

How can I change this behaviour?

Something like the serial setting from Ansible, or a similar option, would come in handy, but I can't find anything. I just need some way to say "at least half of the instances must be online at all times".

Terraform version: 0.12.20.

TF plan: https://pastebin.com/ECfWYYX3

The OpenStack Terraform provider knows that it can update the flavor with a resize API call instead of having to destroy the instance and recreate it.

Unfortunately there is currently no lifecycle option that forces an otherwise-mutable attribute to trigger a destroy/create (or create/destroy, when coupled with the create_before_destroy lifecycle customisation) cycle, so you can't easily force Terraform to replace the instance instead.

One option in these circumstances is to find a parameter that can't be modified in place (these are noted by the ForceNew flag on the schema in the underlying provider source code for the resource) and then have a change in the mutable parameter also cascade a change to the immutable parameter.

A common example here would be replacing an AWS autoscaling group when the launch template (which is mutable compared to the immutable launch configurations) changes so you can immediately roll out the changes instead of waiting for the ASG to slowly replace the instances over time. A simple example would look something like this:

variable "ami_id" {
  default = "ami-123456"
}

resource "random_pet" "ami_random_name" {
  keepers = {
    # Generate a new pet name each time we switch to a new AMI id
    ami_id = var.ami_id
  }
}

resource "aws_launch_template" "example" {
  name_prefix            = "example-"
  image_id               = var.ami_id
  instance_type          = "t2.small"
  vpc_security_group_ids = ["sg-123456"]
}

resource "aws_autoscaling_group" "example" {
  name                = "${aws_launch_template.example.name}-${random_pet.ami_random_name.id}"
  vpc_zone_identifier = ["subnet-123456"]
  min_size            = 1
  max_size            = 3

  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }

  lifecycle {
    create_before_destroy = true
  }
}

In the above example, a change to the AMI generates a new random pet name, which changes the ASG name. Since the name is an immutable field, this triggers replacing the ASG. Because the ASG has the create_before_destroy lifecycle customisation, Terraform will create the new ASG, wait for the minimum number of instances to pass EC2 health checks, and only then destroy the old ASG.

For your case you can also use the name parameter on the openstack_compute_instance_v2 resource, as that is an immutable field as well. A basic example might look like this:

variable "flavor_name" {
  default = "FLAVOR_1"
}

resource "random_pet" "flavor_random_name" {
  keepers = {
    # Generate a new pet name each time we switch to a new flavor
    flavor_name = var.flavor_name
  }
}

resource "openstack_compute_instance_v2" "example" {
  name            = "example-${random_pet.flavor_random_name.id}"
  image_id        = "ad091b52-742f-469e-8f3c-fd81cadf0743"
  flavor_name     = var.flavor_name
  key_pair        = "my_key_pair_name"
  security_groups = ["default"]

  metadata = {
    this = "that"
  }

  network {
    name = "my_network"
  }
}

So, at first I started digging into using a random instance name, as @ydaetskcoR proposed.

The name wasn't an option, both because in OpenStack it is a mutable parameter and because I have a fixed naming schema that I can't change.

I started looking for other parameters I could modify to force the instance to be recreated instead of modified in place. I found personality: https://www.terraform.io/docs/providers/openstack/r/compute_instance_v2.html#instance-with-personality

But it didn't work either, mainly because personality no longer seems to be supported:

The use of personality files is deprecated starting with the 2.57 microversion. Use metadata and user_data to customize a server instance. https://docs.openstack.org/api-ref/compute/

I'm not sure whether Terraform doesn't support it or there is some other issue, but I went with user_data instead. I already use user_data in my compute instance module, so adding some flavor data there shouldn't be an issue.

So, within user_data I added the following:

  # The "#cloud-config" header is required for cloud-init to actually run runcmd;
  # for forcing replacement, any change to user_data suffices.
  user_data = "#cloud-config\nruncmd:\n  - echo ${var.host["flavor"]} > /tmp/tf_flavor"

No need for random pet names, no need to change instance names: just change their "personality" by embedding the flavor name somewhere. This does force the instance to be recreated when the flavor changes.
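For context, here is a minimal sketch of how that user_data line fits into the instance resource. The resource name, image ID, network name, and the var.host map are placeholders (not from the original module); the mechanism is simply that user_data is a ForceNew attribute on openstack_compute_instance_v2, so embedding the flavor in it makes any flavor change force a replacement:

```hcl
# Hypothetical per-host settings map; only "flavor" matters here.
variable "host" {
  type = map(string)
  default = {
    flavor = "FLAVOR_1"
  }
}

resource "openstack_compute_instance_v2" "server" {
  name        = "my-fixed-name-01" # existing naming schema stays unchanged
  image_id    = "ad091b52-742f-469e-8f3c-fd81cadf0743"
  flavor_name = var.host["flavor"]

  # Changing the flavor changes user_data, which cannot be updated in place,
  # so Terraform replaces the instance instead of resizing it.
  user_data = "#cloud-config\nruncmd:\n  - echo ${var.host["flavor"]} > /tmp/tf_flavor"

  lifecycle {
    create_before_destroy = true
  }

  network {
    name = "my_network"
  }
}
```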

So instead of simply:

  # module.instance.openstack_compute_instance_v2.server[0] will be updated in-place
  ~ resource "openstack_compute_instance_v2" "server" {

I now have:

-/+ destroy and then create replacement
+/- create replacement and then destroy

Terraform will perform the following actions:

  # module.instance.openstack_compute_instance_v2.server[0] must be replaced
+/- resource "openstack_compute_instance_v2" "server" {
