Terraform redeploys EC2 instance
I'm sure this one is an easy fix. I'm working with Terraform in AWS, deploying a VPC, subnets, a security group (this looks like the issue), and a single EC2 instance.
The first time I run terraform apply, everything creates as expected. However, immediately following up with another terraform apply or terraform plan shows changes to the EC2 instance that require it to be replaced, even though there are no changes to the underlying Terraform code. Applying again redeploys the EC2 instance, exactly as terraform plan reported.
I would like subsequent terraform apply runs to stop redeploying the EC2 instance. I'm not sure if that's possible, but if it is, I suspect it's something simple I'm missing in the documentation.
# Create a VPC
resource "aws_vpc" "vpcSandbox" {
  cidr_block = var.vpcSandboxCIDR

  tags = {
    Name      = "vpcSandbox"
    Terraform = "True"
  }
}

# Create DHCP options for the VPC
resource "aws_vpc_dhcp_options" "dhcpOptSandbox" {
  domain_name         = var.searchDomain
  domain_name_servers = ["208.67.220.220", "208.67.222.222"]

  tags = {
    Name      = "dhcpOptSandbox"
    Terraform = "True"
  }
}

# Associate the DHCP options with the VPC
resource "aws_vpc_dhcp_options_association" "dhcpOptAssocSandbox" {
  vpc_id          = aws_vpc.vpcSandbox.id
  dhcp_options_id = aws_vpc_dhcp_options.dhcpOptSandbox.id
}
# Create all subnets
resource "aws_subnet" "sub-sandbox1a" {
  vpc_id            = aws_vpc.vpcSandbox.id
  availability_zone = "us-east-1a"
  cidr_block        = "10.11.1.0/24"

  tags = {
    Terraform = "True"
  }
}

resource "aws_subnet" "sub-sandbox1b" {
  vpc_id            = aws_vpc.vpcSandbox.id
  availability_zone = "us-east-1b"
  cidr_block        = "10.11.2.0/24"

  tags = {
    Terraform = "True"
  }
}

resource "aws_subnet" "sub-sandbox1c" {
  vpc_id            = aws_vpc.vpcSandbox.id
  availability_zone = "us-east-1c"
  cidr_block        = "10.11.3.0/24"

  tags = {
    Terraform = "True"
  }
}

resource "aws_subnet" "sub-sandbox1d" {
  vpc_id            = aws_vpc.vpcSandbox.id
  availability_zone = "us-east-1d"
  cidr_block        = "10.11.4.0/24"

  tags = {
    Terraform = "True"
  }
}

resource "aws_subnet" "sub-sandbox1e" {
  vpc_id            = aws_vpc.vpcSandbox.id
  availability_zone = "us-east-1e"
  cidr_block        = "10.11.5.0/24"

  tags = {
    Terraform = "True"
  }
}

resource "aws_subnet" "sub-sandbox1f" {
  vpc_id            = aws_vpc.vpcSandbox.id
  availability_zone = "us-east-1f"
  cidr_block        = "10.11.6.0/24"

  tags = {
    Terraform = "True"
  }
}
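As an aside, the six nearly identical subnet blocks could optionally be collapsed into a single resource with for_each. This is just a sketch using the same AZs and CIDRs as above; note that references elsewhere would change to the keyed form, e.g. aws_subnet.sandbox["us-east-1a"].id.

```hcl
# Optional refactor: one subnet resource driven by a map of AZ => CIDR.
locals {
  sandbox_subnets = {
    "us-east-1a" = "10.11.1.0/24"
    "us-east-1b" = "10.11.2.0/24"
    "us-east-1c" = "10.11.3.0/24"
    "us-east-1d" = "10.11.4.0/24"
    "us-east-1e" = "10.11.5.0/24"
    "us-east-1f" = "10.11.6.0/24"
  }
}

resource "aws_subnet" "sandbox" {
  for_each          = local.sandbox_subnets
  vpc_id            = aws_vpc.vpcSandbox.id
  availability_zone = each.key
  cidr_block        = each.value

  tags = {
    Terraform = "True"
  }
}
```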
# Create Internet Gateway for VPC
resource "aws_internet_gateway" "gwSandbox" {
  vpc_id = aws_vpc.vpcSandbox.id

  tags = {
    Name      = "gwSandbox"
    Terraform = "True"
  }
}

# Adding some routes to the sandbox VPC
resource "aws_route" "default-v4-sandbox" {
  route_table_id         = aws_vpc.vpcSandbox.default_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.gwSandbox.id
}

resource "aws_route" "default-v6-sandbox" {
  route_table_id              = aws_vpc.vpcSandbox.default_route_table_id
  destination_ipv6_cidr_block = "::/0"
  gateway_id                  = aws_internet_gateway.gwSandbox.id
}
# Create security group for the test server
resource "aws_security_group" "sandbox" {
  name        = "sandbox"
  description = "Allow SSH inbound traffic from Trusted Internet Addresses and all Outbound Traffic"
  vpc_id      = aws_vpc.vpcSandbox.id

  tags = {
    Name      = "sandbox"
    Terraform = "True"
  }
}

resource "aws_security_group_rule" "workHQOfficeInbound" {
  type              = "ingress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = [var.workOfficeWAN]
  security_group_id = aws_security_group.sandbox.id
}

resource "aws_security_group_rule" "tgs_office_inbound" {
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "-1"
  cidr_blocks       = [var.devOfficeWAN]
  security_group_id = aws_security_group.sandbox.id
}

resource "aws_security_group_rule" "alloutbound" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  ipv6_cidr_blocks  = ["::/0"]
  security_group_id = aws_security_group.sandbox.id
}
## Adding a test server
# Create a new key pair
resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = var.certDeployerPub

  tags = {
    Name      = "deployer"
    Terraform = "True"
  }
}

# Creating an interface for the test server
resource "aws_network_interface" "int-tc-amazlinux" {
  subnet_id = aws_subnet.sub-sandbox1a.id
  # private_ips = ["172.16.10.100"]

  tags = {
    Name      = "int-tc-amazlinux"
    Terraform = "True"
  }
}
# Adding a test server
resource "aws_instance" "tc-amazlinux01" {
  ami                         = "ami-0e341fcaad89c3650"
  instance_type               = "t4g.small"
  key_name                    = aws_key_pair.deployer.key_name
  subnet_id                   = aws_subnet.sub-sandbox1a.id
  associate_public_ip_address = "true"

  security_groups = [
    aws_security_group.sandbox.id
  ]

  tags = {
    Name      = "tc-amazlinux01"
    Terraform = "True"
  }
}
The following is example output from running terraform apply immediately followed by another terraform plan, without any modification to the Terraform files. For brevity, it's here: https://pastebin.com/raw/2Ly0NmVr
This probably happens because your security groups are attached with the wrong argument. The security_groups argument of aws_instance is intended for EC2-Classic and default-VPC setups and takes security group names; passing security group IDs there for an instance in a VPC makes Terraform detect a difference on every plan and force a replacement. For instances in a VPC, use vpc_security_group_ids instead.

So it should be:
resource "aws_instance" "tc-amazlinux01" {
  ami                         = "ami-0e341fcaad89c3650"
  instance_type               = "t4g.small"
  key_name                    = aws_key_pair.deployer.key_name
  subnet_id                   = aws_subnet.sub-sandbox1a.id
  associate_public_ip_address = "true"

  vpc_security_group_ids = [
    aws_security_group.sandbox.id
  ]

  tags = {
    Name      = "tc-amazlinux01"
    Terraform = "True"
  }
}
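Separately, the aws_network_interface.int-tc-amazlinux resource in your config is created but never attached to the instance. If the intent was to use that ENI as the instance's primary interface, one way to wire it up is sketched below (same resource names as your config, but note this is an alternative layout, not part of the fix above): attach the ENI via a network_interface block, move the security group onto the ENI, and drop subnet_id and associate_public_ip_address, which conflict with an explicitly attached primary interface.

```hcl
# Sketch: attach the pre-created ENI as the primary interface.
# When a network_interface block is used for device_index 0, subnet_id,
# associate_public_ip_address, and vpc_security_group_ids must not be set
# on the instance; the security group moves to the ENI itself.
resource "aws_network_interface" "int-tc-amazlinux" {
  subnet_id       = aws_subnet.sub-sandbox1a.id
  security_groups = [aws_security_group.sandbox.id]

  tags = {
    Name      = "int-tc-amazlinux"
    Terraform = "True"
  }
}

resource "aws_instance" "tc-amazlinux01" {
  ami           = "ami-0e341fcaad89c3650"
  instance_type = "t4g.small"
  key_name      = aws_key_pair.deployer.key_name

  network_interface {
    network_interface_id = aws_network_interface.int-tc-amazlinux.id
    device_index         = 0
  }

  tags = {
    Name      = "tc-amazlinux01"
    Terraform = "True"
  }
}
```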