
AWS EC2 log userdata output to CloudWatch Logs

I'm doing pre-processing tasks using EC2.

I execute shell commands using the userdata variable. The last line of my userdata is sudo shutdown now -h, so the instance gets terminated automatically once the pre-processing task completes.

This is what my code looks like.

import boto3


userdata = '''#!/bin/bash
pip3 install boto3 pandas scikit-learn
aws s3 cp s3://.../main.py .
python3 main.py
sudo shutdown now -h
'''


def launch_ec2():
    ec2 = boto3.resource('ec2',
                         aws_access_key_id="", 
                         aws_secret_access_key="",
                         region_name='us-east-1')
    instances = ec2.create_instances(
        ImageId='ami-0c02fb55956c7d316',
        MinCount=1,
        MaxCount=1,
        KeyName='',
        InstanceInitiatedShutdownBehavior='terminate',
        IamInstanceProfile={'Name': 'S3fullaccess'},
        InstanceType='m6i.4xlarge', 
        UserData=userdata,
        InstanceMarketOptions={
            'MarketType': 'spot',
            'SpotOptions': {
                'SpotInstanceType': 'one-time',
            }
        }
    )
    print(instances)


launch_ec2()

The problem is, sometimes when there is an error in my Python script, the script dies and the instance gets terminated.

Is there a way I can collect error/info logs and send them to CloudWatch before the instance gets terminated? This way, I would know what went wrong.

You can achieve the desired behavior by leveraging shell functionality. You can in fact create a log file for the entire execution of the UserData, and use trap to make sure the log file is copied over to S3 before the instance terminates if an error occurs.

Here's how it could look:

#!/bin/bash -xe
exec &>> /tmp/userdata_execution.log

upload_log() {
  aws s3 cp /tmp/userdata_execution.log s3://... # use a bucket of your choosing here
}

trap 'upload_log' ERR

pip3 install boto3 pandas scikit-learn
aws s3 cp s3://.../main.py .
python3 main.py
sudo shutdown now -h

A log file (/tmp/userdata_execution.log) containing stdout and stderr will be generated for the UserData; if there is an error during the execution of the UserData, the log file will be uploaded to an S3 bucket.

If you wanted to, you could of course also stream the log file to CloudWatch; to do so, however, you would have to install the CloudWatch agent on the instance and configure it accordingly. I believe that for your use case, uploading the log file to S3 is the best solution.
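For reference, if you did want the CloudWatch route, the agent is driven by a JSON config file; a minimal sketch that would ship the same log file might look like this (the log group name ec2-preprocessing is an arbitrary choice, not from the original answer):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/tmp/userdata_execution.log",
            "log_group_name": "ec2-preprocessing",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

You would install and start the agent from the UserData itself (e.g. install the amazon-cloudwatch-agent package, then load the config with amazon-cloudwatch-agent-ctl -a fetch-config), and the instance profile would additionally need CloudWatch Logs write permissions on top of the S3 access it already has.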
