
Initial setup of terraform backend using terraform

I'm just getting started with Terraform, and I'd like to use AWS S3 as the backend for storing the state of my projects.

terraform {
    backend "s3" {
      bucket = "tfstate"
      key = "app-state"
      region = "us-east-1"
    }
}

I feel it is sensible to set up my S3 bucket, IAM groups, and policies for the backend storage infrastructure with Terraform as well.

If I set up my backend state before I apply my initial Terraform infrastructure, it reasonably complains that the backend bucket is not yet created. So my question becomes: how do I set up my Terraform backend with Terraform, while keeping the state for the backend itself tracked by Terraform? It seems like a nested-dolls problem.

I have some thoughts about how to script around this, for example, checking whether the bucket exists or some state has been set, then bootstrapping Terraform, and finally copying the Terraform tfstate up to S3 from the local file system after the first run. But before going down this laborious path, I thought I'd make sure I wasn't missing something obvious.

To set this up using Terraform remote state, I usually have a separate folder called remote-state within my dev and prod Terraform folders.

The following main.tf file will set up your remote state for what you posted:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tfstate"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "app-state"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Then get into this folder using cd remote-state and run terraform init && terraform apply; this should only need to be run once. You might add something to the bucket and DynamoDB table names to separate your different environments.
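For example, one way to separate environments is to interpolate an environment name into those resource names. This is only a sketch; the variable name and the naming convention are assumptions you would adapt:

```hcl
variable "environment" {
  description = "Deployment environment, e.g. dev or prod"
  default     = "dev"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tfstate-${var.environment}" # e.g. tfstate-dev, tfstate-prod

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "app-state-${var.environment}"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

Each environment's backend block would then point at its own bucket and table name.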

Building on the great contribution from Austin Davis, here is a variation that I use, which includes a requirement for data encryption:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tfstate"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "app-state"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_s3_bucket_policy" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Id": "RequireEncryption",
  "Statement": [
    {
      "Sid": "RequireEncryptedTransport",
      "Effect": "Deny",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "RequireEncryptedStorage",
      "Effect": "Deny",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": "*"
    }
  ]
}
EOF
}

As you've discovered, you can't use Terraform to build the components Terraform needs in the first place.

While I understand the inclination to have Terraform "track everything", it is very difficult and more headache than it's worth.

I generally handle this situation by creating a simple bootstrap shell script. It creates things like:

  1. The S3 bucket for state storage
  2. Versioning on said bucket
  3. A Terraform IAM user and group with certain policies I'll need for Terraform builds

While you should only need to run this once (technically), I find that when I'm developing a new system, I spin things up and tear them down repeatedly. So having those steps in one script makes that a lot simpler.

I generally build the script to be idempotent. This way, you can run it multiple times without concern that you're creating duplicate buckets, users, etc.
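A minimal sketch of such an idempotent bootstrap script might look like the following; the bucket name, region, and the exact AWS CLI calls are assumptions you would adapt to your own setup:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assumed names; adjust to your conventions.
BUCKET="tfstate"
REGION="us-east-1"

bucket_exists() {
  # head-bucket exits non-zero if the bucket is missing or inaccessible
  aws s3api head-bucket --bucket "$BUCKET" 2>/dev/null
}

bootstrap() {
  if bucket_exists; then
    echo "bucket $BUCKET already exists, skipping"
  else
    # Outside us-east-1 you also need:
    #   --create-bucket-configuration LocationConstraint="$REGION"
    aws s3api create-bucket --bucket "$BUCKET" --region "$REGION"
    aws s3api put-bucket-versioning --bucket "$BUCKET" \
      --versioning-configuration Status=Enabled
    echo "bucket $BUCKET created with versioning"
  fi
}
```

Call bootstrap from your pipeline; because every step checks for the resource first, re-running it is harmless.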

I created a terraform module with a few bootstrap commands/instructions to solve this:

https://github.com/samstav/terraform-aws-backend

There are detailed instructions in the README, but the gist is:

# conf.tf

module "backend" {
  source         = "github.com/samstav/terraform-aws-backend"
  backend_bucket = "terraform-state-bucket"
}

Then, in your shell (make sure you haven't written your terraform {} block yet):

terraform get -update
terraform init -backend=false
terraform plan -out=backend.plan -target=module.backend
terraform apply backend.plan

Now write your terraform {} block:

# conf.tf

terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket"
    key            = "states/terraform.tfstate"
    dynamodb_table = "terraform-lock"
  }
}

And then you can re-init:

terraform init -reconfigure

Setting up a Terraform backend leveraging an AWS S3 bucket is relatively easy.

First, create a bucket in the region of your choice (eu-west-1 for the example), named terraform-backend-store (remember to choose a unique name).

To do so, open your terminal and run the following command, assuming that you have properly set up the AWS CLI (otherwise, follow the instructions in the official documentation):

aws s3api create-bucket --bucket terraform-backend-store \
    --region eu-west-1 \
    --create-bucket-configuration \
    LocationConstraint=eu-west-1
# Output:
{
    "Location": "http://terraform-backend-store.s3.amazonaws.com/"
}

The command should be self-explanatory; to learn more, check the documentation here.

Once the bucket is in place, it needs a proper configuration for security and reliability. For a bucket that holds the Terraform state, it's common sense to enable server-side encryption. Keeping it simple, try the AES256 method first (although I recommend using KMS and implementing a proper key rotation):

aws s3api put-bucket-encryption \
    --bucket terraform-backend-store \
    --server-side-encryption-configuration={\"Rules\":[{\"ApplyServerSideEncryptionByDefault\":{\"SSEAlgorithm\":\"AES256\"}}]}
# Output: expect none when the command is executed successfully

Next, it's crucial to restrict access to the bucket; create an unprivileged IAM user as follows:

aws iam create-user --user-name terraform-deployer
# Output:
{
    "User": {
        "UserName": "terraform-deployer",
        "Path": "/",
        "CreateDate": "2019-01-27T03:20:41.270Z",
        "UserId": "AIDAIOSFODNN7EXAMPLE",
        "Arn": "arn:aws:iam::123456789012:user/terraform-deployer"
    }
}

Take note of the Arn from the command's output (it looks like "Arn": "arn:aws:iam::123456789012:user/terraform-deployer").

To correctly interact with the S3 service, and with DynamoDB at a later stage to implement the locking, our IAM user must hold a sufficient set of permissions. It is recommended to have severe restrictions in place for production environments; though, for the sake of simplicity, start by assigning AmazonS3FullAccess and AmazonDynamoDBFullAccess:

aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --user-name terraform-deployer
# Output: expect none when the command execution is successful

aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess --user-name terraform-deployer
# Output: expect none when the command execution is successful

The freshly created IAM user must be enabled to execute the required actions against your S3 bucket. You can do this by creating and applying the right policy, as follows:

cat <<EOF > policy.json
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/terraform-deployer"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::terraform-remote-store"
        }
    ]
}
EOF

This basic policy file grants the principal with ARN arn:aws:iam::123456789012:user/terraform-deployer permission to execute all the available actions ("Action": "s3:*") against the bucket with ARN arn:aws:s3:::terraform-remote-store. Again, in production you will want to enforce much stricter policies. For reference, have a look at the AWS Policy Generator.

Back in the terminal, run the command shown below to enforce the policy on your bucket:

aws s3api put-bucket-policy --bucket terraform-remote-store --policy file://policy.json
# Output: none

As the last step, enable the bucket's versioning:

aws s3api put-bucket-versioning --bucket terraform-remote-store --versioning-configuration Status=Enabled

This allows saving different versions of the infrastructure's state and rolling back easily to a previous stage without struggling.
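To illustrate how versioning enables such a rollback, here is a hedged sketch using the AWS CLI; the bucket and key names are assumptions, and you should verify the s3api commands against the reference documentation before relying on them:

```shell
#!/usr/bin/env bash
set -euo pipefail

BUCKET="terraform-backend-store"  # assumed bucket name
KEY="terraform.tfstate"           # assumed state object key

# List the stored versions of the state object, newest first.
list_state_versions() {
  aws s3api list-object-versions --bucket "$BUCKET" --prefix "$KEY" \
    --query 'Versions[].[VersionId,LastModified]' --output table
}

# Download a specific prior version for inspection or manual restore.
fetch_state_version() {
  local version_id="$1"
  aws s3api get-object --bucket "$BUCKET" --key "$KEY" \
    --version-id "$version_id" "restored-${KEY}"
}
```

After fetching an older version, you would inspect it and, if appropriate, upload it back as the current state object.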

The AWS S3 bucket is ready; time to integrate it with Terraform. Listed below is the minimal configuration required to set up this remote backend:

# terraform.tf

provider "aws" {
  region                  = "${var.aws_region}"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "default"
}

terraform {  
    backend "s3" {
        bucket  = "terraform-remote-store"
        encrypt = true
        key     = "terraform.tfstate"    
        region  = "eu-west-1"  
    }
}

# the rest of your configuration and resources to deploy

Once in place, Terraform must be initialized (again) with terraform init. The remote backend is ready for a ride; test it.

What about locking? Storing the state remotely brings a pitfall, especially when working in scenarios where several tasks, jobs, and team members have access to it. Under these circumstances, the risk of multiple concurrent attempts to make changes to the state is high. Here the lock comes to help: a feature that prevents the state file from being opened while it is already in use.

You can implement the lock by creating an AWS DynamoDB table, used by Terraform to set and unset the locks. Provision the resource using Terraform itself:

# create-dynamodb-lock-table.tf
resource "aws_dynamodb_table" "dynamodb-terraform-state-lock" {
  name           = "terraform-state-lock-dynamo"
  hash_key       = "LockID"
  read_capacity  = 20
  write_capacity = 20

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name = "DynamoDB Terraform State Lock Table"
  }
}

and deploy it as shown: terraform plan -out "planfile" && terraform apply -input=false -auto-approve "planfile"

Once the command execution is completed, the locking mechanism must be added to your backend configuration as follows:

# terraform.tf

provider "aws" {
  region                  = "${var.aws_region}"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "default"
}

terraform {  
    backend "s3" {
        bucket         = "terraform-remote-store"
        encrypt        = true
        key            = "terraform.tfstate"    
        region         = "eu-west-1"
        dynamodb_table = "terraform-state-lock-dynamo"
    }
}

# the rest of your configuration and resources to deploy

All done. Remember to run terraform init again and enjoy your remote backend.

What I usually do is start without a remote backend for creating the initial infrastructure, as you said: S3, IAM roles, and other essential stuff. Once I have that, I just add the backend configuration and run terraform init to migrate the state to S3.

It's not the best case, but in most cases I don't rebuild my entire environment every day, so this semi-automated approach is good enough. I also separate the next "layers" of infrastructure (VPC, subnets, IGW, NAT, etc.) into different states.
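Separating those layers into different states can be as simple as giving each layer its own key in the same state bucket; the bucket and key names below are illustrative:

```hcl
# network layer (VPC, subnets, IGW, NAT) - e.g. network/main.tf
terraform {
  backend "s3" {
    bucket = "my-tfstate-bucket"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# an application layer in its own folder (e.g. app/main.tf) would use
# the same bucket with key = "app/terraform.tfstate"
```

Each layer then gets its own plan/apply lifecycle without touching the others' state.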

What I have been doing to address this: you can comment out the "backend" block for the initial run, and do a targeted terraform apply on only the state bucket and any related resources (like bucket policies).

# terraform {
#   backend "s3" {
#     bucket         = "foo-bar-state-bucket"
#     key            = "core-terraform.tfstate"
#     region         = "eu-west-1"
#   }
# }
provider "aws" {
  region                  = "eu-west-1"
  profile                 = "terraform-iam-user"
  shared_credentials_file = "~/.aws/credentials"
}

terraform apply --target aws_s3_bucket.foobar-terraform --target aws_s3_bucket_policy.foobar-terraform

This will provision your S3 state bucket, and will store the .tfstate file locally in your working directory.

Later, uncomment the "backend" block and reconfigure the backend with terraform init --reconfigure, which will prompt you to copy your locally present .tfstate file (tracking the state of your backend S3 bucket) to the remote backend, which is now available to be used by Terraform for any subsequent runs.

(Screenshot: prompt for copying existing state to the remote backend.)

Here's a solution with an emphasis on security around bucket access, if you plan on using the bucket only to store TF state.

Create a main.tf file in a separate folder with the following code and run terraform apply.

provider "aws" {
  region = "my-region"
  ...
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-bucket"
  acl    = "private"

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state_access" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "my-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Then, in your main Terraform folder, add the backend and run terraform init.

terraform {
  backend "s3" {
    bucket         = "my-bucket"
    key            = "terraform.tfstate"
    region         = "my-region"
    dynamodb_table = "my-table"
    encrypt        = true
  }
}

There are some great answers here, and I'd like to offer an alternative for managing your backend state:

  1. Set up a Terraform Cloud account (it's free for up to 5 users).
  2. Create a workspace for your organization (the version control workflow is typical).
  3. Select your VCS such as GitHub or Bitbucket (where you store your Terraform plans and modules).
  4. Terraform Cloud will give you the instructions needed for your new OAuth connection.
  5. Once that's set up, you'll have the option to set up an SSH keypair, which is typically not needed, and you can click the Skip & Finish button.

Once your Terraform Cloud account is set up and connected to the VCS repos where you store your Terraform plans and modules, add your Terraform module repos in Terraform Cloud by clicking on the Registry tab. You will need to ensure that your Terraform modules are versioned/tagged and follow the proper naming convention. If you have a Terraform module that creates a load balancer in AWS, you would name the Terraform module repository (in GitHub, for example) like this: terraform-aws-loadbalancer. As long as it starts with terraform-aws- you're good. Then you add a version tag to it, such as 1.0.0.

So let's say you create a Terraform plan that points to that load balancer module; this is how you point your backend config to Terraform Cloud and to the load balancer module:

backend-state.tf contents:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"
    workspaces {
    # name = ""   ## For single workspace jobs
    # prefix = "" ## for multiple workspaces
    # you can use name instead of prefix
    prefix = "terraform-plan-name-"
    }
  }
}

terraform plan main.tf contents:

module "aws_alb" {
  source  = "app.terraform.io/YOUR-TERRAFORM-CLOUD-ORG/loadbalancer/aws"
  version = "1.0.0"
  
  name = "load-balancer-test"
  security_groups = [module.aws_sg.id]
  load_balancer_type = "application"
  internal = false
  subnets = [data.aws_subnet.public.id]
  idle_timeout = 1200
  # access_logs_enabled = true
  # access_logs_s3bucket = "BUCKET-NAME"
  tags = local.tags
}

Locally, from your terminal (using macOS as an example):

terraform init
# if you're using name instead of prefix in your backend set 
# up, no need to run terraform workspace cmd
terraform workspace new test
terraform plan
terraform apply

You'll see the apply happening in Terraform Cloud under your workspaces with this name: terraform-plan-name-test. "test" is appended to your workspace prefix name, which is defined in your backend-state.tf above. You end up with a GUI/console full of your Terraform plans within your workspace, the same way you can see your CloudFormation stacks in AWS. I find that devops who are used to CloudFormation and transitioning to Terraform like this setup.

One advantage is that, within Terraform Cloud, you can easily set it up so that a plan (stack build) is triggered by a git commit or merge to the master branch.

Reference: https://www.terraform.io/docs/language/settings/backends/remote.html#basic-configuration

The way I have overcome this issue is by creating the project's remote state in the first init-plan-apply cycle and initializing the remote state in the second init-plan-apply cycle.


# first init plan apply cycle 
# Configure the AWS Provider
# https://www.terraform.io/docs/providers/aws/index.html
provider "aws" {
  version = "~> 2.0"
  region  = "us-east-1"
}

resource "aws_s3_bucket" "terraform_remote_state" {
  bucket = "terraform-remote-state"
  acl    = "private"

  tags = {
    Name        = "terraform-remote-state"
    Environment = "Dev"
  }
}

# add this snippet and execute
# the second init plan apply cycle
# https://www.terraform.io/docs/backends/types/s3.html

terraform {
  backend "s3" {
    bucket = "terraform-remote-state"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

I would highly recommend using Terragrunt to keep your Terraform code manageable and DRY (the Don't Repeat Yourself principle).

Terragrunt has many capabilities; for your specific case I would suggest following the Keep your remote state configuration DRY section.

I'll add a short and simplified summary below.


Problems with managing remote state with Terraform

Let's say you have the following Terraform infrastructure:

├── backend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
├── frontend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
├── mysql
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
└── mongo
    ├── main.tf
    └── other_resources.tf
    └── variables.tf

Each app is a Terraform module whose Terraform state you'll want to store in a remote backend.

Without Terragrunt, you'll have to write the backend configuration block for each application in order to save the current state in a remote state storage:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "frontend-app/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}

When managing a few modules like in the above example, it's not a burden to add this file for each one of them, but it won't hold up in real-world scenarios.

Wouldn't it be better if we could use some kind of inheritance (like in object-oriented programming)?

This is made easy with Terragrunt.


Terragrunt to the rescue

Back to the modules structure. With Terragrunt, we just need to add a root terragrunt.hcl with all the configurations, and for each module add a child terragrunt.hcl which contains only one statement:

├── terragrunt.hcl       #<---- Root
├── backend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
│   └── terragrunt.hcl   #<---- Child
├── frontend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
│   └── terragrunt.hcl   #<---- Child
├── mysql
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
│   └── terragrunt.hcl   #<---- Child
└── mongo
    ├── main.tf
    └── other_resources.tf
    └── variables.tf
    └── terragrunt.hcl   #<---- Child

The root terragrunt.hcl will keep your remote state configuration, and the children will only have the following statement:

include {
  path = find_in_parent_folders()
}

This include block tells Terragrunt to use the exact same Terragrunt configuration from the root terragrunt.hcl file specified via the path parameter.

The next time you run Terragrunt, it will automatically configure all the settings in the remote_state.config block, if they aren't configured already, by calling terraform init.

The backend.tf file will be created automatically for you.
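For reference, the root terragrunt.hcl that generates those backend settings typically contains a remote_state block along these lines; the bucket and table names are placeholders:

```hcl
remote_state {
  backend = "s3"

  # Tells Terragrunt to generate a backend.tf in each child module.
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket         = "my-terraform-state"
    # Each module gets its own key derived from its folder path.
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}
```

The path_relative_to_include() call is what makes the same root config yield a distinct state key per module.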


Summary

You can have hundreds of modules with a nested hierarchy (for example, divided into regions, tenants, applications, etc.) and still be able to maintain only one configuration of the remote state.

There is a version issue here within Terraform; for me it is working with the version mentioned below. Also, it is good to keep the Terraform state in a bucket.

terraform {
    required_version = "~> 0.12.12"
    backend "gcs" {
        bucket = "bucket-name"
        prefix = "terraform/state"
    }
}

As a word of caution, I would not create the Terraform state bucket with Terraform itself, in case someone inadvertently deletes it. So use scripts with the aws-cli or boto3, which do not maintain state, and keep those scripts limited to a variable for the S3 bucket name. In the long run you will rarely change the script for the Terraform state bucket, except for creating additional folders inside the bucket, which can be done outside Terraform at the resource level.

All of the answers provided are very good. I just want to emphasize the "key" attribute. When you get into advanced applications of Terraform, you will eventually need to reference these S3 keys in order to pull remote state into a current project, or to leverage terraform state mv.

It really helps to use intelligent key names when you plan your "terraform" stanza to define your backend.

I recommend the following as a base key name: account_name/{development:production}/region/module_name/terraform.tfstate

Revise it to fit your needs, but going back and fixing all my key names as I expanded my use of Terraform across many accounts and regions was not fun at all.
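As a concrete illustration of that naming scheme, a key for a hypothetical account, environment, region, and module would compose like this:

```shell
# Hypothetical values; substitute your own.
account_name="acme"
environment="production"
region="us-east-1"
module_name="vpc"

key="${account_name}/${environment}/${region}/${module_name}/terraform.tfstate"
echo "$key"
# -> acme/production/us-east-1/vpc/terraform.tfstate
```

That computed string is what you would put in the key attribute of the backend "s3" block for that module.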

You can simply use Terraform Cloud and configure your backend as follows:

terraform {
    backend "remote" {
        hostname     = "app.terraform.io"
        organization = "your-tf-organization-name"
        workspaces {
            name = "your-workspace-name"
        }
    }
}

I've made a script according to that answer. Keep in mind you'll need to import the DynamoDB table into your tf state, as it is created through the AWS CLI.

Managing the Terraform state bucket with Terraform is kind of a chicken-and-egg problem. One of the ways we can address it is:

Create the Terraform state bucket with Terraform using the local backend, then migrate the state to the newly created state bucket.

It can be a bit tricky if you are trying to achieve this with a CI/CD pipeline and trying to make the job idempotent in nature.

Modularise the backend configuration in a separate file.

terraform.tf

terraform {
  required_version = "~> 1.3.6"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.48.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
}

main.tf

module "remote_state" {
  # you can write your own module or use any community module which
  # creates an S3 bucket and DynamoDB table (ideally with replication and versioning)
  source              = "../modules/module-for-s3-bucket-and-ddtable"
  bucket_name         = "terraform-state-bucket-name"
  dynamodb_table_name = "terraform-state-lock"
}

backend.tf

terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket-name"
    key            = "state.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
  }
}

With the following steps, we can manage and create the state S3 bucket in the same state.

function configure_state() {
  # Disable the S3 bucket backend
  mv backend.tf backend.tf.backup
  # Since the S3 config is not present, a local state will be initialized
  # (or copied from the S3 bucket if it already existed)
  terraform init -migrate-state -force-copy
  # terraform apply will create the S3 bucket backend and save the state locally
  terraform apply -target module.remote_state
  # Re-enable the S3 backend configuration for storing state
  mv backend.tf.backup backend.tf
  # Migrate the state from local to the S3 bucket
  terraform init -migrate-state -force-copy
}

Assuming that you are running Terraform locally (not on some virtual server), and that you want to store the Terraform state in an S3 bucket that doesn't exist yet, this is how I would approach it:

  1. Create a Terraform script that provisions the S3 bucket.

  2. Create a Terraform script that provisions your infrastructure.

  3. At the end of the first script (the one provisioning the bucket to be used by the second script for storing state files), include code to provision a null resource.

  4. In the null resource's code block, use the local-exec provisioner to run a command that goes into the directory where your second Terraform script exists, followed by the usual terraform init to initialize the backend, then terraform plan, then terraform apply.
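A sketch of steps 3 and 4 might look like this; the directory path and resource names are hypothetical, and the null provider must be available in your configuration:

```hcl
resource "null_resource" "bootstrap_main_stack" {
  # Run only after the state bucket exists.
  depends_on = [aws_s3_bucket.terraform_state]

  provisioner "local-exec" {
    command = <<-EOT
      cd ../main-infrastructure
      terraform init
      terraform plan -out=planfile
      terraform apply -input=false planfile
    EOT
  }
}
```

Note that provisioners run only on resource creation, so re-running the inner init/plan/apply later means tainting the null resource or running those commands directly.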
