
AWS Cloudformation: How to reuse bash script placed in user-data parameter when creating EC2?

In CloudFormation I have two stacks (one nested).

Nested stack "ec2-setup":

{
  "AWSTemplateFormatVersion" : "2010-09-09",

  "Parameters" : {
    // (...) some parameters here

    "userData" : {
      "Description" : "user data to be passed to instance",
      "Type" : "String",
      "Default": ""
    }

  },

  "Resources" : {

    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "UserData" : { "Ref" : "userData" },
        // (...) some other properties here
       }
    }

  },
  // (...)
}

Now in my main template I want to refer to the nested template presented above and pass a bash script using the userData parameter. Additionally, I do not want to inline the content of the user-data script, because I want to reuse it for a few EC2 instances (so I do not want to duplicate the script each time I declare an EC2 instance in my main template).

I tried to achieve this by setting the content of the script as a default value of a parameter:

{
  "AWSTemplateFormatVersion": "2010-09-09",

  "Parameters" : {
    "myUserData": {
      "Type": "String",
      "Default" : { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash \n",
        "yum update -y \n",

        "# Install the files and packages from the metadata\n",
        "echo 'tralala' > /tmp/hahaha"
      ]]}}
    }
  },
(...)

    "myEc2": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/ec2-setup.json",
        "TimeoutInMinutes": "10",
        "Parameters": {
          // (...)
          "userData" : { "Ref" : "myUserData" }
        }
      }
    }

But I get the following error while trying to launch the stack:

"Template validation error: Template format error: Every Default member must be a string." “模板验证错误:模板格式错误:每个默认成员必须是一个字符串。”

The error seems to be caused by the fact that the { "Fn::Base64" : (...) } declaration is an object - not a string (although evaluating it returns a base64-encoded string).

Everything works fine if I paste my script directly into the parameters section (as an inline script) when calling my nested template, instead of referring to a string set as a parameter:

"myEc2": {
  "Type": "AWS::CloudFormation::Stack",
  "Properties": {
    "TemplateURL": "s3://path/to/ec2-setup.json",
    "TimeoutInMinutes": "10",
    "Parameters": {
      // (...)
      "userData" : { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash \n",
        "yum update -y \n",

        "# Install the files and packages from the metadata\n",
        "echo 'tralala' > /tmp/hahaha"
        ]]}}
    }
  }
}

but I want to keep the content of the userData script in a parameter/variable to be able to reuse it.

Is there any way to reuse such a bash script without the need to copy/paste it each time?

Here are a few options on how to reuse a bash script in user-data for multiple EC2 instances defined through CloudFormation:

1. Set default parameter as string

Your original attempted solution should work with a minor tweak: you must declare the default parameter as a string, as follows (using YAML instead of JSON makes it much easier to declare a multi-line string inline):

  AWSTemplateFormatVersion: "2010-09-09"
  Parameters:
    myUserData:
      Type: String
      Default: |
        #!/bin/bash
        yum update -y
        # Install the files and packages from the metadata
        echo 'tralala' > /tmp/hahaha
(...)
  Resources:
    myEc2:
      Type: AWS::CloudFormation::Stack
      Properties:
        TemplateURL: "s3://path/to/ec2-setup.yml"
        TimeoutInMinutes: 10
        Parameters:
          # (...)
          userData: !Ref myUserData

Then, in your nested stack, apply any required intrinsic functions (Fn::Base64, as well as Fn::Sub, which is quite helpful if you need to apply any Ref or Fn::GetAtt functions within your user-data script) within the EC2 instance's resource properties:

  AWSTemplateFormatVersion: "2010-09-09"
  Parameters:
    # (...) some parameters here
    userData:
      Description: user data to be passed to instance
      Type: String
      Default: ""    
  Resources:
    EC2Instance:
      Type: AWS::EC2::Instance
      Properties:
        UserData:
          "Fn::Base64":
            "Fn::Sub": !Ref userData
        # (...) some other properties here
  # (...)

2. Upload script to S3

You can upload your single Bash script to an S3 bucket, then invoke the script by adding a minimal user-data script to each EC2 instance in your template:

  AWSTemplateFormatVersion: "2010-09-09"
  Parameters:
    # (...) some parameters here
    ScriptBucket:
      Description: S3 bucket containing user-data script
      Type: String
    ScriptKey:
      Description: S3 object key containing user-data script
      Type: String
  Resources:
    EC2Instance:
      Type: AWS::EC2::Instance
      Properties:
        UserData:
          "Fn::Base64":
            "Fn::Sub": |
              #!/bin/bash
              aws s3 cp s3://${ScriptBucket}/${ScriptKey} - | bash -s
        # (...) some other properties here
  # (...)
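
For completeness, here is a minimal sketch of how the shared script might be uploaded and how a stack based on this template could be launched with the AWS CLI. The bucket name, object key, file names and stack name are placeholders, and any other required parameters are omitted. Note that the instance also needs an instance profile allowing it to read the object from S3, otherwise the aws s3 cp call in user-data will fail:

  # Upload the shared script once (bucket and key are placeholders)
  aws s3 cp init.sh s3://my-script-bucket/scripts/init.sh

  # Launch a stack from the template above, pointing it at the uploaded script
  aws cloudformation create-stack \
    --stack-name my-ec2-stack \
    --template-body file://ec2-setup.yml \
    --parameters \
      ParameterKey=ScriptBucket,ParameterValue=my-script-bucket \
      ParameterKey=ScriptKey,ParameterValue=scripts/init.sh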

3. Use preprocessor to inline script from single source

Finally, you can use a template-preprocessor tool like troposphere (or your own) to 'generate' verbose CloudFormation-executable templates from more compact/expressive source files. This approach allows you to eliminate duplication in your source files - although the generated templates will contain 'duplicate' user-data scripts, this only occurs in the generated artifacts, so it should not pose a problem. A minimal shell-based sketch of the idea follows.
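
As a very small illustration of the same idea without a dedicated tool, assume jq 1.6+ is available, the main template is kept in JSON, and a sentinel string @@USER_DATA@@ marks every place where the script should be inlined (file names are placeholders); the generation step is then a one-liner:

  # Splice the contents of init.sh into every "@@USER_DATA@@" placeholder of the source
  # template; jq handles the JSON string escaping of the multi-line script.
  jq --rawfile script init.sh \
     'walk(if . == "@@USER_DATA@@" then $script else . end)' \
     main-template.src.json > main-template.json

The generated main-template.json is what actually gets uploaded and launched, so the duplicated script only ever exists in a generated file.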

You'll have to look outside the template to provide the same user data to multiple templates. A common approach here would be to abstract your template one step further, or "template the template". Use the same method to create both templates, and you'll keep them both DRY.

I'm a huge fan of CloudFormation and use it to create most of my resources, especially for production-bound uses. But as powerful as it is, it isn't quite turn-key. In addition to creating the template, you'll also have to call the CloudFormation API to create the stack, and provide a stack name and parameters. Thus, automation around the use of CloudFormation is a necessary part of a complete solution. This automation can be simplistic (a bash script, for example; a minimal sketch follows this paragraph) or sophisticated. I've taken to using Ansible's cloudformation module to automate "around" the template, be it creating a template for the template with Jinja, or just providing different sets of parameters to the same reusable template, or doing discovery before the stack is created; whatever ancillary operations are necessary. Some folks really like troposphere for this purpose - if you're a pythonic thinker you might find it to be a good fit. Once you have automation of any kind handling the stack creation, you'll find it's easy to add steps to make the template itself more dynamic, or assemble multiple stacks from reusable components.
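
As a deliberately simplistic sketch of that "automation around the template" idea, assuming the ec2-setup.json from the question is launched directly as its own stack and the shared script lives in init.sh (stack names and file paths are placeholders, and any other required parameters are omitted):

  #!/bin/bash
  # Base64-encode the shared script once and pass it to every stack as the userData
  # parameter (GNU coreutils syntax; the BSD/macOS base64 flags differ slightly).
  USER_DATA="$(base64 -w0 init.sh)"

  for STACK in web-1 web-2; do
    aws cloudformation create-stack \
      --stack-name "$STACK" \
      --template-body file://ec2-setup.json \
      --parameters ParameterKey=userData,ParameterValue="$USER_DATA"
  done

Keep in mind that CloudFormation caps the size of a single parameter value, so a very large script may not fit when passed this way.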

At work we use CloudFormation quite a bit, and these days we tend to prefer a compositional approach, where we define the shared components of the templates we use and then compose the actual templates from those components.

The other option would be to merge the two stacks, using conditionals to control the inclusion of the defined resources in any particular stack created from the template. This works OK in simple cases, but the combinatorial complexity of all those conditions tends to make this a difficult solution in the long run, unless the differences are really simple.

Actually, I found one more solution beyond those already mentioned. On the one hand this solution is a little "hackish", but on the other hand I found it to be really useful for the "bash script" use case (and also for other parameters).

The idea is to create an extra stack - a "parameters stack" - which will output the values. Since the outputs of a stack are not limited to strings (as default values are), we can define the entire base64-encoded script as a single output from that stack.

The drawback is that every stack needs to define at least one resource, so our parameters stack also needs to define at least one resource. The solution to this issue is either to define the parameters in another template which already defines an existing resource, or to create a "fake resource" which will never be created because of a Condition which will never be satisfied.

Here I present the solution with the fake resource. First we create our new parameters-stack.json as follows:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Outputs/returns parameter values",


  "Conditions" : {
    "alwaysFalseCondition" : {"Fn::Equals" : ["aaaaaaaaaa", "bbbbbbbbbb"]}
  },

  "Resources": {
    "FakeResource" : {
      "Type" : "AWS::EC2::EIPAssociation",
      "Condition" : "alwaysFalseCondition",
      "Properties" : {
        "AllocationId" :  { "Ref": "AWS::NoValue" },
        "NetworkInterfaceId" : { "Ref": "AWS::NoValue" }
      }
    }
  },

  "Outputs": {
    "ec2InitScript": {
      "Value":
      { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash \n",
        "yum update -y \n",

        "# Install the files and packages from the metadata\n",
        "echo 'tralala' > /tmp/hahaha"
      ]]}}

    }
  }
}

Now in the main template we first declare our parameters stack, and later we refer to the output from that parameters stack:

{
  "AWSTemplateFormatVersion": "2010-09-09",

   "Resources": { 

    "myParameters": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/paramaters-stack.json",
        "TimeoutInMinutes": "10"
      }
    },

    "myEc2": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/ec2-setup.json",
        "TimeoutInMinutes": "10",
        "Parameters": {
          // (...)
          "userData" : {"Fn::GetAtt": [ "myParameters", "Outputs.ec2InitScript" ]}
        }
      }
    }
  }
}

Please note that one can create up to 60 outputs in one stack file, so it is possible to define 60 variables/parameters per single stack file using this technique.
