
Sending EMR Logs to CloudWatch


Is there a way to send EMR logs to CloudWatch instead of S3? We want all of our service logs in one place. It seems the only thing you can do is set up alarms for monitoring, but that does not cover logging.

https://docs.aws.amazon.com/emr/latest/ManagementGuide/UsingEMR_ViewingMetrics.html

Do I have to install the CloudWatch agent on the nodes in the cluster? https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html

You can install the CloudWatch agent through an EMR bootstrap action and configure it to watch your log directories. It will then start pushing the logs to Amazon CloudWatch Logs.
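For illustration, here is a minimal sketch of attaching such a bootstrap action when launching a cluster with boto3. The script path, cluster settings, and the install script itself are placeholder assumptions, not values from the answer above:

import boto3

emr = boto3.client('emr')

# Launch a cluster with a bootstrap action that installs the CloudWatch agent.
# 's3://your-bucket/bootstrap/install_cw_agent.sh' is a hypothetical script you
# would write and upload yourself (e.g. install the agent package, drop in an
# agent config that watches the log directories, and start the agent).
response = emr.run_job_flow(
    Name='cluster-with-cloudwatch-agent',
    ReleaseLabel='emr-6.3.0',
    Instances={
        'MasterInstanceType': 'm5.xlarge',
        'SlaveInstanceType': 'm5.xlarge',
        'InstanceCount': 3,
        'KeepJobFlowAliveWhenNoSteps': True
    },
    BootstrapActions=[
        {
            'Name': 'install-cloudwatch-agent',
            'ScriptBootstrapAction': {
                'Path': 's3://your-bucket/bootstrap/install_cw_agent.sh'
            }
        }
    ],
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole'
)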

You can read the logs from S3 and push them to CloudWatch using boto3, then delete them from S3 if they are not needed (see the cleanup sketch after the script below). In some use cases the stdout.gz logs need to be kept in CloudWatch for monitoring purposes.

boto3 documentation for put_log_events

import boto3
import botocore.session
import gzip
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def get_session(service_name):
    # Build a boto3 client from the credentials botocore resolves
    # (environment variables, shared config, or instance profile).
    session = botocore.session.get_session()
    credentials = session.get_credentials()
    region = session.get_config_variable('region')

    return boto3.client(
        service_name,
        region_name=region,
        aws_access_key_id=credentials.access_key,
        aws_secret_access_key=credentials.secret_key,
        aws_session_token=credentials.token
    )

def get_log_file(s3, bucket, key):
    # Fetch the gzipped log object from S3 and return its decompressed bytes.
    try:
        obj = s3.get_object(Bucket=bucket, Key=key)
        compressed_body = obj['Body'].read()
        return gzip.decompress(compressed_body)

    except Exception as e:
        logger.error(f"Error reading from bucket: {e}")
        raise

def create_log_events(logs, batch_size):
    # Split the decompressed log into batches of at most batch_size events,
    # since put_log_events only accepts a limited number of events per call.
    log_event_batch = []
    log_event_batch_collection = []

    try:
        for line in logs.splitlines():
            if not line:
                continue  # CloudWatch rejects events with empty messages

            log_event = {'timestamp': int(round(time.time() * 1000)), 'message': line.decode('utf-8')}

            if len(log_event_batch) < batch_size:
                log_event_batch.append(log_event)
            else:
                log_event_batch_collection.append(log_event_batch)
                log_event_batch = [log_event]

    except Exception as e:
        logger.error(f"Error creating log events: {e}")
        raise

    if log_event_batch:
        log_event_batch_collection.append(log_event_batch)

    return log_event_batch_collection

def create_log_stream_and_push_log_events(logs, log_group, log_stream, log_event_batch_collection, delay):
    # Create the destination stream, then push each batch in order, passing
    # the sequence token from each response into the next request. (Recent
    # API versions ignore the token, but passing it is harmless.)
    logs.create_log_stream(logGroupName=log_group, logStreamName=log_stream)
    seq_token = None

    try:
        for log_event_batch in log_event_batch_collection:
            log_event = {
                'logGroupName': log_group,
                'logStreamName': log_stream,
                'logEvents': log_event_batch
            }

            if seq_token:
                log_event['sequenceToken'] = seq_token

            response = logs.put_log_events(**log_event)
            seq_token = response['nextSequenceToken']
            time.sleep(delay)  # throttle to stay under the per-stream rate limit

    except Exception as e:
        logger.error(f"Error pushing log events: {e}")
        raise

Caller function:

def main():
    s3 = get_session('s3')
    logs = get_session('logs')

    BUCKET_NAME = 'Your_Bucket_Name'
    KEY = 'logs/emr/Path_To_Log/stdout.gz'
    BATCH_SIZE = 10000            # Maximum events per put_log_events call, per the boto3 docs
    PUSH_DELAY = 0.2              # Throttle between calls, per the boto3 docs
    LOG_GROUP = 'test_log_group'  # Destination log group
    LOG_STREAM = '{}-{}'.format(time.strftime('%Y-%m-%d'), 'logstream.log')

    log_file = get_log_file(s3, BUCKET_NAME, KEY)
    log_event_batch_collection = create_log_events(log_file, BATCH_SIZE)
    create_log_stream_and_push_log_events(logs, LOG_GROUP, LOG_STREAM, log_event_batch_collection, PUSH_DELAY)

if __name__ == '__main__':
    main()
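As a follow-up, here is a small sketch, reusing the placeholder names from the script above, that deletes the source object from S3 once it has been pushed (as the answer suggests, when the copy there is no longer needed) and reads back a few events to confirm the push worked:

import boto3

s3 = boto3.client('s3')
logs = boto3.client('logs')

BUCKET_NAME = 'Your_Bucket_Name'         # same placeholders as in main()
KEY = 'logs/emr/Path_To_Log/stdout.gz'
LOG_GROUP = 'test_log_group'
LOG_STREAM = '2024-01-01-logstream.log'  # the stream name main() generated

# Optional cleanup: remove the source object now that the events are in CloudWatch.
s3.delete_object(Bucket=BUCKET_NAME, Key=KEY)

# Spot-check: read back the first few events from the new stream.
response = logs.get_log_events(
    logGroupName=LOG_GROUP,
    logStreamName=LOG_STREAM,
    startFromHead=True,
    limit=5
)
for event in response['events']:
    print(event['message'])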

