
Terraform remote state configuration for modules

Terraform v0.10.7

I am in the process of putting Terraform modules together for our DevOps team. There will be a separate module for each component we use, and we will in turn create the entire stack from those modules for different environments and requirements.

The directory hierarchy is as follows:

Terraform
  - modules
     - module-1 (main.tf, vars.tf, outputs.tf, backend.tf)
     - module-2 (main.tf, vars.tf, outputs.tf, backend.tf)
     - module-3 (main.tf, vars.tf, outputs.tf, backend.tf)
     ...
  - environments
     - qa (main.tf, vars.tf, outputs.tf, backend.tf)
     - stage (main.tf, vars.tf, outputs.tf, backend.tf)
     - prod (main.tf, vars.tf, outputs.tf, backend.tf)

In each backend.tf I have specified the backend as S3, with a full key hierarchy such as /resources/mod-1/terraform.tfstate. The same applies to the backend.tf files in the environments.
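For illustration, a module-level backend.tf along those lines might look like the following (the bucket name and region here are placeholders, not taken from the actual setup):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"                 # hypothetical bucket name
    key    = "resources/mod-1/terraform.tfstate"
    region = "us-east-1"                          # assumed region
  }
}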

When I run terraform get and terraform apply for any environment, it fetches all the specified modules, applies the changes to the AWS infrastructure, and stores the terraform.tfstate of that environment at the specified location in S3.
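Concretely, the per-environment workflow is just the following, run from the environment directory:

cd environments/qa
terraform get      # fetch the modules referenced in main.tf
terraform apply    # apply changes and push this env's state to S3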

So the question is: will the terraform.tfstate for all the modules used in an environment also get generated and pushed to S3 (with a single apply to the environment)? I haven't run terraform apply on any of the modules.

I plan to read some data from the terraform.tfstate of those modules in S3, while avoiding separate applies for each module and running a single terraform apply per environment. How can this be achieved?

Firstly, I don't think you need to rewrite the same *.tf files for each environment. For each application, depending on which modules are sourced, you should have a file structure like the one below:

Application-1
     - modules-1.tf
     - modules-2.tf
     - modules-3.tf
     - main.tf, vars.tf, outputs.tf
     - qa (qa/backend.conf, qa/qa.tfvars)
     - stag (stag/backend.conf, stag/stag.tfvars)
     - prod (prod/backend.conf, prod/prod.tfvars)

In <env>/backend.conf, you can define the S3 backend (if you use AWS):

bucket     = "<global_unique_bucket_name>"
key        = "<env>/network.tfstate"
region     = "us-east-1"
kms_key_id = "alias/terraform"
encrypt    = true
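With that layout, the backend is wired up per environment at init time and the matching tfvars file is passed at plan/apply time; for example, for qa:

terraform init -backend-config=qa/backend.conf
terraform plan -var-file=qa/qa.tfvars
terraform apply -var-file=qa/qa.tfvars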

Secondly, no backend should be required for each module (if my understanding is right, the backend is only used to store the tfstate file). The backend file belongs in each environment, as listed above.
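In other words, each module is simply sourced from the application's *.tf files, with no backend block of its own; a minimal sketch, where the module path and variable are illustrative:

module "network" {
  source   = "../modules/module-1"    # hypothetical relative path
  vpc_cidr = "${var.vpc_cidr}"        # hypothetical variable passed down
}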

Thirdly, the *.tfstate location should be defined in each environment; I have given a sample above in <env>/backend.conf.

Finally, we use several layers to manage Terraform stacks: a VPC/network layer, database/ElastiCache layers, and application layers. You can group your stack resources accordingly.
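As for reading data from another layer's state in S3 (the original question), the standard mechanism is the terraform_remote_state data source; a minimal sketch for Terraform 0.10, where the bucket, key, and output name are assumptions:

data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "<global_unique_bucket_name>"
    key    = "qa/network.tfstate"     # assumed key for the qa network layer
    region = "us-east-1"
  }
}

# outputs of the network layer can then be referenced directly, e.g.
# subnet_id = "${data.terraform_remote_state.network.subnet_id}"

This way a single terraform apply per layer is enough; downstream layers consume the published outputs rather than re-applying the modules themselves.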
