My setup is: Terraform --> AWS EC2. I'm using Terraform to create the EC2 instance with SSH access. The relevant part of my configuration is:
resource "aws_instance" "inst1" {
  instance_type          = "t2.micro"
  ami                    = data.aws_ami.ubuntu.id
  key_name               = "aws_key"
  subnet_id              = ...
  user_data              = file("./deploy/templates/user-data.sh")
  vpc_security_group_ids = [
    ...,
  ]

  provisioner "file" {
    source      = "./deploy/templates/ec2-caller.sh"
    destination = "/home/ubuntu/ec2-caller.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/ubuntu/ec2-caller.sh",
    ]
  }

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"
    private_key = file("./keys/aws_key_enc")
    timeout     = "4m"
  }
}
The above works, and I can see the provisioner copying and executing ec2-caller.sh. However, I don't want to pass my private key in clear text to the Terraform provisioner. Is there any way to copy files to the newly created EC2 instance without using a provisioner, or without passing the private key to the provisioner?
Cheers.
The Terraform documentation section Provisioners are a Last Resort raises the need to provision and pass in credentials as one of the justifications for provisioners being a "last resort", and then goes on to suggest some other strategies for passing data into virtual machines and other compute resources.
You seem to already be using user_data to specify some other script to run, so to follow the advice in that document would require combining these all together into a single cloud-init configuration. (I'm assuming that your AMI has cloud-init installed, because that's what's typically responsible for interpreting user_data as a shell script to execute.)
Cloud-init supports several different user_data formats, with the primary one being cloud-init's own YAML configuration file format, "Cloud Config". You can also use a multipart MIME message to pack together multiple different user_data payloads into a single user_data body, as long as the combined size of the payload fits within EC2's upper limit for user_data size, which is 16 KiB.
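To make the multipart format concrete, here is a rough Python sketch of the kind of body cloud-init expects. This is an illustration only, not something the original answer includes, and the payload strings are placeholders; in the Terraform approach shown below, the cloudinit_config data source assembles this message for you.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Placeholder payloads standing in for the real files; the actual
# content would come from user-data.sh and a rendered cloud-config.
script = "#!/bin/bash\necho 'bootstrap'\n"
cloud_config = "#cloud-config\nwrite_files: []\n"

# One part per payload, each tagged with the content type that tells
# cloud-init how to interpret it.
message = MIMEMultipart()
message.attach(MIMEText(script, "x-shellscript"))
message.attach(MIMEText(cloud_config, "cloud-config"))

rendered = message.as_string()

# The combined body must fit within EC2's 16 KiB user_data limit.
assert len(rendered.encode("utf-8")) <= 16 * 1024
```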
From your configuration it seems like you have two steps you'd need cloud-init to deal with in order to fully solve this problem with cloud-init:

- Run the ./deploy/templates/user-data.sh script.
- Place /home/ubuntu/ec2-caller.sh on disk with suitable permissions.

Assuming that these two steps are independent of one another, you can send cloud-init a multipart MIME message which includes both the user-data script you were originally using alone and a Cloud Config YAML configuration to place the ec2-caller.sh file on disk. The Terraform provider hashicorp/cloudinit has a data source cloudinit_config which knows how to construct multipart MIME messages for cloud-init, which you could use like this:
data "cloudinit_config" "example" {
  part {
    content_type = "text/x-shellscript"
    content      = file("${path.root}/deploy/templates/user-data.sh")
  }

  part {
    content_type = "text/cloud-config"
    content = yamlencode({
      write_files = [
        {
          encoding    = "b64"
          content     = filebase64("${path.root}/deploy/templates/ec2-caller.sh")
          path        = "/home/ubuntu/ec2-caller.sh"
          owner       = "ubuntu:ubuntu"
          permissions = "0755"
        },
      ]
    })
  }
}
resource "aws_instance" "inst1" {
  instance_type          = "t2.micro"
  ami                    = data.aws_ami.ubuntu.id
  key_name               = "aws_key"
  subnet_id              = ...
  user_data              = data.cloudinit_config.example.rendered
  vpc_security_group_ids = [
    ...,
  ]
}
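For reference, the text/cloud-config part above renders to roughly the following Cloud Config document. (This is an approximation: yamlencode's exact quoting differs slightly, and the base64 string here is illustrative rather than the real ec2-caller.sh payload.)

```yaml
write_files:
  - encoding: b64
    content: IyEvYmluL2Jhc2gK
    path: /home/ubuntu/ec2-caller.sh
    owner: ubuntu:ubuntu
    permissions: "0755"
```

Cloud-init recognizes this part by its text/cloud-config content type in the multipart message and writes the decoded file to disk during boot.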
The second part block above includes YAML based on the cloud-init example Writing out arbitrary files, which you could refer to in order to learn what other settings are possible. Terraform's yamlencode function doesn't have a way to generate the special !!binary tag used in some of the files in that example, but setting encoding: b64 allows passing the base64-encoded text as just a normal string.
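As a side note on why encoding: b64 works, here is a small Python sketch of the round trip. This is an illustration only, not part of the answer's solution, and the script content is a stand-in for ec2-caller.sh.

```python
import base64

# Stand-in for the real ec2-caller.sh content.
original = b"#!/bin/bash\necho 'called on ec2'\n"

# What filebase64() produces on the Terraform side: plain base64
# text that can be embedded as an ordinary YAML string.
encoded = base64.b64encode(original).decode("ascii")

# What cloud-init does with an encoding: b64 entry before writing
# the file to its path.
decoded = base64.b64decode(encoded)

assert decoded == original
```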