Issue using Terraform EC2 Userdata

I am deploying a bunch of EC2 instances that require a mount called /data; this is a separate disk that I am attaching with a volume attachment in AWS.

Now, when I did the following manually it worked fine, so the script I use works; however, when adding it via userdata I am seeing issues and the mkfs command is not happening.

Here is my Terraform config:

resource "aws_instance" "bastion01" {
  ami = "${var.aws_ami}"
  key_name = "a36-key"
  vpc_security_group_ids = ["${aws_security_group.bath_office_sg.id}","${aws_security_group.bastion01_sg.id}","${aws_security_group.outbound_access_sg.id}"]
  subnet_id = "${element(module.vpc.public_subnets, 0)}"
  instance_type = "t2.micro"
  tags {
    Name = "x_bastion_01"
    Role = "bastion"
  }
}

resource "aws_instance" "riak" {
  count = 5
  ami = "${var.aws_ami}"
  vpc_security_group_ids = ["${aws_security_group.bastion01_sg.id}","${aws_security_group.riak_sg.id}","${aws_security_group.outbound_access_sg.id}"]
  subnet_id = "${element(module.vpc.database_subnets, 0)}"
  instance_type = "m4.xlarge"
  tags {
    Name = "x_riak_${count.index}"
    Role = "riak"
  }
  root_block_device {
    volume_size = 20
  }
  provisioner "file" {
    source      = "datapartition.sh"
    destination = "/tmp/datapartition.sh"
  }
}

resource "aws_volume_attachment" "riak_data" {
  count = 5
  device_name = "/dev/sdh"
  volume_id  = "${element(aws_ebs_volume.riak_data.*.id, count.index)}"
  instance_id = "${element(aws_instance.riak.*.id, count.index)}"
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/datapartition.sh",
      "/tmp/datapartition.sh",
    ]
    connection {
      bastion_host = "${aws_instance.bastion01.public_ip}"
      bastion_user = "ubuntu"
    }
  }
}
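
For completeness, the aws_ebs_volume.riak_data volumes referenced above are declared separately; roughly like this (the size here is illustrative, not my real value):

resource "aws_ebs_volume" "riak_data" {
  count             = 5
  availability_zone = "${element(aws_instance.riak.*.availability_zone, count.index)}"
  size              = 100
}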

And then the partition script is as follows:

#!/bin/bash

# Create the mount point if it does not already exist
if [ ! -d /data ]; then
  mkdir /data
fi

# Wait until the attached volume shows up as a block device
while [ ! -e /dev/xvdh ]; do sleep 1; done

# Create the filesystem and mount it
/sbin/mkfs -t ext4 /dev/xvdh

mount /dev/xvdh /data

# Persist the mount across reboots
echo "/dev/xvdh /data ext4 defaults 0 2" >> /etc/fstab

Now, when I do this via Terraform, the mkfs doesn't appear to happen and I see no obvious errors in the syslog. If I copy the script over manually and just run bash script.sh, the mount is created and works as expected.

Has anyone got any suggestions here?

Edit: It's worth noting that adding this in the AWS GUI under userdata also works fine.
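
For reference, the user_data wiring I tried looks roughly like this (a simplified sketch, not my exact snippet; it just passes the same script shown above):

resource "aws_instance" "riak" {
  # ... as above ...
  user_data = "${file("datapartition.sh")}"
}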

You could try a remote-exec provisioner instead of user_data.

user_data relies on cloud-init, which can act differently depending on your cloud provider's images.
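
If you do stay with user_data, the cloud-init logs are the first place to look; on Ubuntu images the script's output is captured like this:

sudo tail -n 100 /var/log/cloud-init-output.log   # stdout/stderr of user_data scripts
sudo tail -n 100 /var/log/cloud-init.log          # cloud-init's own log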

Also, I'm not sure it's a good idea to run a script that waits for some time in the cloud-init section: this may lead to the VM considering the launch failed because of a timeout (depending on your cloud provider).

remote-exec may be better here because you will be able to wait until your /dev/xvdh is attached.

See here:

resource "aws_instance" "web" {
  # ...

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh args",
    ]
  }
}
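
And since the volume attachment happens in a separate resource, the inline commands could also wait for the device before running the script; a minimal sketch, assuming the volume shows up as /dev/xvdh:

provisioner "remote-exec" {
  inline = [
    "while [ ! -e /dev/xvdh ]; do sleep 1; done",
    "chmod +x /tmp/script.sh",
    "/tmp/script.sh args",
  ]
}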
