
Terraform (0.12.29) import not working as expected; import succeeded but plan shows destroy & recreate

Some background: We have Terraform code to create various AWS resources. Some of these resources are created per AWS account and hence are structured to live in an account-scope folder in our project. This was fine when we had only one AWS region. Now our application has been made multi-region, so these resources need to be created per region for each AWS account.

To do that, we have moved these TF scripts to a region-scope folder which will be run per region. Since these resources are no longer part of the account scope, we have removed them from the account-scope Terraform state. Now I am trying to import these resources into the region-scope state.
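(Removing a resource from the account-scope state can be done with terraform state rm; a minimal sketch, run from the account-scope directory, with one of the addresses from this setup shown as an example:)

terraform state rm module.buckets.aws_s3_bucket.cloudtrail_logging_bucket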

I imported the resources by running this from the xyz-region-scope directory:

terraform import -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color <RESOURCE_NAME> <RESOURCE_ID>

One of the examples of a resource is:

RESOURCE_NAME=module.buckets.aws_s3_bucket.cloudtrail_logging_bucket 
RESOURCE_ID="ab-xyz-stage-cloudtrail-logging-72a2c5cd"

I was expecting the imports to update the Terraform state file on my local machine, but the state file created under xyz-region-scope/state/xyz-stage/terraform.tfstate is not updated.

I verified the imports with:

terraform show
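(terraform show dumps the full attributes; a quicker way to confirm just which addresses ended up in the state is:)

terraform state list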

I then ran terraform plan:

terraform plan -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color

But the terraform plan output shows Plan: 6 to add, 0 to change, 5 to destroy. In other words, those resources will be destroyed and recreated.

I am not clear why. Am I missing something or not doing this right?

Please note that we store remote state in an S3 bucket, but I currently do not have a remote TF state file in S3 for the region scope (I do have one for the account scope, though). I was expecting the import/plan/apply process to create one for the region scope as well.
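(For context, a region-scope S3 backend configuration looks roughly like this; a sketch only, where the bucket name matches the one passed above and the key and other settings are illustrative rather than our actual backend block:)

terraform {
  backend "s3" {
    bucket  = "ab-xyz-stage-tfstate-5b8873b8"            # passed as tfstate_bucket above
    key     = "region-scope/us-west-2/terraform.tfstate" # illustrative key
    region  = "us-west-2"
    profile = "xyz-stage"
  }
}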

EDIT: I do see the remote TF state file created in S3 for the region scope after running the imports. One difference I see between this new region-scope TF state file and the old account-scope one: the new file does not have any "depends_on" block under any of the resources[] > instances[] entries.

Environment:

Local machine: macOS v10.14.6

Terraform v0.12.29
+ provider.aws v3.14.1
+ provider.null v2.1.2
+ provider.random v2.3.1
+ provider.template v2.1.2


EDIT 2:

Here are my imports and the terraform plan:

terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ
terraform import module.buckets.aws_s3_bucket.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import  module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket "ab-xyz-stage-cloudtrail-logging-72a2c5cd"
terraform import  module.buckets.module.access_logging_bucket.aws_s3_bucket.default "ab-xyz-stage-access-logging-9d8e94ff"
terraform import  module.buckets.module.access_logging_bucket.random_id.bucket_suffix  nY6U_w
terraform import module.encryption.module.data_key.aws_iam_policy.decrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt"
terraform import module.encryption.module.data_key.aws_iam_policy.encrypt "arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt"



mymachine:xyz-region-scope kuldeepjain$ ../scripts/terraform.sh xyz-stage plan -no-color
+ set -o posix
+ IFS='
    '
++ blhome
+ BASH_LIB_HOME=/usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT
+ source /usr/local/lib/mycompany/ab/bash_library/0.0.1-SNAPSHOT/s3/bucket.sh
+ main xyz-stage plan -no-color
+ '[' 3 -lt 2 ']'
+ local env=xyz-stage
+ shift
+ local command=plan
+ shift
++ get_region xyz-stage
++ local env=xyz-stage
++ shift
+++ aws --profile xyz-stage configure get region
++ local region=us-west-2
++ '[' -z us-west-2 ']'
++ echo us-west-2
+ local region=us-west-2
++ _get_bucket xyz-stage xyz-stage-tfstate
++ local env=xyz-stage
++ shift
++ local name=xyz-stage-tfstate
++ shift
+++ _get_bucket_list xyz-stage xyz-stage-tfstate
+++ local env=xyz-stage
+++ shift
+++ local name=xyz-stage-tfstate
+++ shift
+++ aws --profile xyz-stage --output json s3api list-buckets --query 'Buckets[?contains(Name, `xyz-stage-tfstate`) == `true`].Name'
++ local 'bucket_list=[
    "ab-xyz-stage-tfstate-5b8873b8"
]'
+++ _count_buckets_in_json '[
    "ab-xyz-stage-tfstate-5b8873b8"
]'
+++ local 'json=[
    "ab-xyz-stage-tfstate-5b8873b8"
]'
+++ shift
+++ echo '[
    "ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq '. | length'
++ local number_of_buckets=1
++ '[' 1 == 0 ']'
++ '[' 1 -gt 1 ']'
+++ echo '[
    "ab-xyz-stage-tfstate-5b8873b8"
]'
+++ jq -r '.[0]'
++ local bucket_name=ab-xyz-stage-tfstate-5b8873b8
++ echo ab-xyz-stage-tfstate-5b8873b8
+ local tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8
++ get_config_file xyz-stage us-west-2
++ local env=xyz-stage
++ shift
++ local region=us-west-2
++ shift
++ local config_file=config/us-west-2/xyz-stage.tfvars
++ '[' '!' -f config/us-west-2/xyz-stage.tfvars ']'
++ config_file=config/us-west-2/default.tfvars
++ echo config/us-west-2/default.tfvars
+ local config_file=config/us-west-2/default.tfvars
+ export TF_DATA_DIR=state/xyz-stage/
+ TF_DATA_DIR=state/xyz-stage/
+ terraform get
+ terraform plan -var-file=config/us-west-2/default.tfvars -var-file=variables.tfvars -var-file=../globals.tfvars -var profile=xyz-stage -var region=us-west-2 -var tfstate_bucket=ab-xyz-stage-tfstate-5b8873b8 -no-color
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

module.encryption.module.data_key.data.null_data_source.key: Refreshing state...
module.buckets.data.template_file.dependencies: Refreshing state...
module.buckets.module.access_logging_bucket.data.template_file.dependencies: Refreshing state...
module.encryption.module.data_key.data.aws_region.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_caller_identity.current: Refreshing state...
data.aws_caller_identity.current: Refreshing state...
module.buckets.module.access_logging_bucket.data.aws_kms_alias.encryption_key_alias: Refreshing state...
module.buckets.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_caller_identity.current: Refreshing state...
module.encryption.module.data_key.data.aws_kms_alias.default: Refreshing state...
module.buckets.module.access_logging_bucket.data.template_file.encryption_configuration: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.decrypt: Refreshing state...
module.encryption.module.data_key.data.aws_iam_policy_document.encrypt: Refreshing state...
module.buckets.module.access_logging_bucket.random_id.bucket_suffix: Refreshing state... [id=nY6U_w]
module.encryption.module.data_key.aws_iam_policy.decrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_decrypt]
module.encryption.module.data_key.aws_iam_policy.encrypt: Refreshing state... [id=arn:aws:iam::123412341234:policy/ab_data_key_xyz_stage_encrypt]
module.buckets.module.access_logging_bucket.aws_s3_bucket.default: Refreshing state... [id=ab-xyz-stage-access-logging-9d8e94ff]
module.buckets.random_id.cloudtrail_bucket_suffix: Refreshing state... [id=cqLFzQ]
module.buckets.aws_s3_bucket.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]
module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail: Refreshing state...
module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket: Refreshing state... [id=ab-xyz-stage-cloudtrail-logging-72a2c5cd]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement
 <= read (data resources)

Terraform will perform the following actions:

  # module.buckets.data.aws_iam_policy_document.restrict_access_cloudtrail will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "restrict_access_cloudtrail"  {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions   = [
              + "s3:GetBucketAcl",
            ]
          + effect    = "Allow"
          + resources = [
              + (known after apply),
            ]
          + sid       = "AWSCloudTrailAclCheck"

          + principals {
              + identifiers = [
                  + "cloudtrail.amazonaws.com",
                ]
              + type        = "Service"
            }
        }
      + statement {
          + actions   = [
              + "s3:PutObject",
            ]
          + effect    = "Allow"
          + resources = [
              + (known after apply),
            ]
          + sid       = "AWSCloudTrailWrite"

          + condition {
              + test     = "StringEquals"
              + values   = [
                  + "bucket-owner-full-control",
                ]
              + variable = "s3:x-amz-acl"
            }

          + principals {
              + identifiers = [
                  + "cloudtrail.amazonaws.com",
                ]
              + type        = "Service"
            }
        }
    }

  # module.buckets.aws_s3_bucket.cloudtrail_logging_bucket must be replaced
-/+ resource "aws_s3_bucket" "cloudtrail_logging_bucket" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      ~ arn                         = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
      ~ bucket                      = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply) # forces replacement
      ~ bucket_domain_name          = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.amazonaws.com" -> (known after apply)
      ~ bucket_regional_domain_name = "ab-xyz-stage-cloudtrail-logging-72a2c5cd.s3.us-west-2.amazonaws.com" -> (known after apply)
      + force_destroy               = false
      ~ hosted_zone_id              = "Z3BJ6K6RIION7M" -> (known after apply)
      ~ id                          = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
      ~ region                      = "us-west-2" -> (known after apply)
      ~ request_payer               = "BucketOwner" -> (known after apply)
        tags                        = {
            "mycompany:finance:accountenvironment"   = "xyz-stage"
            "mycompany:finance:application"          = "ab-platform"
            "mycompany:finance:billablebusinessunit" = "my-dev"
            "name"                                = "Cloudtrail logging bucket"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      ~ lifecycle_rule {
          - abort_incomplete_multipart_upload_days = 0 -> null
            enabled                                = true
          ~ id                                     = "intu-lifecycle-s3-int-tier" -> (known after apply)
          - tags                                   = {} -> null

            transition {
                days          = 32
                storage_class = "INTELLIGENT_TIERING"
            }
        }

      - logging {
          - target_bucket = "ab-xyz-stage-access-logging-9d8e94ff" -> null
          - target_prefix = "logs/cloudtrail-logging/" -> null
        }
      + logging {
          + target_bucket = (known after apply)
          + target_prefix = "logs/cloudtrail-logging/"
        }

      ~ versioning {
          ~ enabled    = false -> (known after apply)
          ~ mfa_delete = false -> (known after apply)
        }
    }

  # module.buckets.aws_s3_bucket_policy.cloudtrail_logging_bucket must be replaced
-/+ resource "aws_s3_bucket_policy" "cloudtrail_logging_bucket" {
      ~ bucket = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply) # forces replacement
      ~ id     = "ab-xyz-stage-cloudtrail-logging-72a2c5cd" -> (known after apply)
      ~ policy = jsonencode(
            {
              - Statement = [
                  - {
                      - Action    = "s3:GetBucketAcl"
                      - Effect    = "Allow"
                      - Principal = {
                          - Service = "cloudtrail.amazonaws.com"
                        }
                      - Resource  = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd"
                      - Sid       = "AWSCloudTrailAclCheck"
                    },
                  - {
                      - Action    = "s3:PutObject"
                      - Condition = {
                          - StringEquals = {
                              - s3:x-amz-acl = "bucket-owner-full-control"
                            }
                        }
                      - Effect    = "Allow"
                      - Principal = {
                          - Service = "cloudtrail.amazonaws.com"
                        }
                      - Resource  = "arn:aws:s3:::ab-xyz-stage-cloudtrail-logging-72a2c5cd/*"
                      - Sid       = "AWSCloudTrailWrite"
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> (known after apply)
    }

  # module.buckets.random_id.cloudtrail_bucket_suffix must be replaced
-/+ resource "random_id" "cloudtrail_bucket_suffix" {
      ~ b64         = "cqLFzQ" -> (known after apply)
      ~ b64_std     = "cqLFzQ==" -> (known after apply)
      ~ b64_url     = "cqLFzQ" -> (known after apply)
        byte_length = 4
      ~ dec         = "1923270093" -> (known after apply)
      ~ hex         = "72a2c5cd" -> (known after apply)
      ~ id          = "cqLFzQ" -> (known after apply)
      + keepers     = {
          + "aws_account_id" = "123412341234"
          + "env"            = "xyz-stage"
        } # forces replacement
    }

  # module.buckets.module.access_logging_bucket.aws_s3_bucket.default must be replaced
-/+ resource "aws_s3_bucket" "default" {
      + acceleration_status         = (known after apply)
      + acl                         = "log-delivery-write"
      ~ arn                         = "arn:aws:s3:::ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply)
      ~ bucket                      = "ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply) # forces replacement
      ~ bucket_domain_name          = "ab-xyz-stage-access-logging-9d8e94ff.s3.amazonaws.com" -> (known after apply)
      ~ bucket_regional_domain_name = "ab-xyz-stage-access-logging-9d8e94ff.s3.us-west-2.amazonaws.com" -> (known after apply)
      + force_destroy               = false
      ~ hosted_zone_id              = "Z3BJ6K6RIION7M" -> (known after apply)
      ~ id                          = "ab-xyz-stage-access-logging-9d8e94ff" -> (known after apply)
      ~ region                      = "us-west-2" -> (known after apply)
      ~ request_payer               = "BucketOwner" -> (known after apply)
        tags                        = {
            "mycompany:finance:accountenvironment"   = "xyz-stage"
            "mycompany:finance:application"          = "ab-platform"
            "mycompany:finance:billablebusinessunit" = "my-dev"
            "name"                                = "Access logging bucket"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      - grant {
          - permissions = [
              - "READ_ACP",
              - "WRITE",
            ] -> null
          - type        = "Group" -> null
          - uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery" -> null
        }
      - grant {
          - id          = "0343271a8c2f184152c171b223945b22ceaf5be5c9b78cf167660600747b5ad8" -> null
          - permissions = [
              - "FULL_CONTROL",
            ] -> null
          - type        = "CanonicalUser" -> null
        }

      - lifecycle_rule {
          - abort_incomplete_multipart_upload_days = 0 -> null
          - enabled                                = true -> null
          - id                                     = "intu-lifecycle-s3-int-tier" -> null
          - tags                                   = {} -> null

          - transition {
              - days          = 32 -> null
              - storage_class = "INTELLIGENT_TIERING" -> null
            }
        }

      + logging {
          + target_bucket = (known after apply)
          + target_prefix = "logs/access-logging/"
        }

      ~ versioning {
          ~ enabled    = false -> (known after apply)
          ~ mfa_delete = false -> (known after apply)
        }
    }

  # module.buckets.module.access_logging_bucket.random_id.bucket_suffix must be replaced
-/+ resource "random_id" "bucket_suffix" {
      ~ b64         = "nY6U_w" -> (known after apply)
      ~ b64_std     = "nY6U/w==" -> (known after apply)
      ~ b64_url     = "nY6U_w" -> (known after apply)
        byte_length = 4
      ~ dec         = "2643367167" -> (known after apply)
      ~ hex         = "9d8e94ff" -> (known after apply)
      ~ id          = "nY6U_w" -> (known after apply)
      + keepers     = {
          + "aws_account_id" = "123412341234"
          + "env"            = "xyz-stage"
        } # forces replacement
    }

Plan: 6 to add, 0 to change, 5 to destroy.

Snippet of the diff of my current remote TF state (left) vs the old account-scope state (right) for cloudtrail_bucket_suffix:


The plan shows a difference in the name of the bucket (bucket forces replacement).

This triggers recreation of the bucket itself and dependent resources.

You need to get the bucket name to a stable state; then the rest will be stable as well. As you are using a random suffix for the bucket name, I suspect you forgot to import it. The random_id resource allows imports like this:

terraform import module.buckets.random_id.cloudtrail_bucket_suffix cqLFzQ

Edit:

However, you will need to remove the keepers, as they trigger the replacement of the random_id resource. keepers are used to trigger the recreation of dependent resources whenever other values change.

I assume this is not what you want for your buckets, as the keepers you defined seem to be stable/static: account_id and env are both unlikely to change for this deployment. If you really need them, you can try to manipulate the state manually instead.
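A minimal sketch of that manual state manipulation (assuming the goal is to make the imported random_id instances carry the keepers your config defines, so the plan stops proposing replacement):

terraform state pull > region.tfstate
# In region.tfstate, add the keepers map to each random_id instance's attributes, e.g.
#   "keepers": { "aws_account_id": "123412341234", "env": "xyz-stage" }
# then increment the top-level "serial" so the push is accepted.
terraform state push region.tfstate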
