
Sending EMR Logs to CloudWatch

Is there a way to send EMR logs to CloudWatch instead of S3? We would like to have all of our service logs in a single place. It seems the only thing you can do is set up alarms for monitoring, but that does not cover logging.

https://docs.aws.amazon.com/emr/latest/ManagementGuide/UsingEMR_ViewingMetrics.html

Do I have to install the CloudWatch agent on the nodes in the cluster? https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html

You can install the CloudWatch agent through an EMR bootstrap action and configure it to monitor the log directories. It then starts pushing the logs to Amazon CloudWatch Logs.
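As a minimal sketch of what launching a cluster with such a bootstrap action could look like via boto3: the script path install-cw-agent.sh, bucket name, release label, instance settings and roles below are placeholders and not part of the original answer; the script itself is assumed to install and start the CloudWatch agent pointed at the EMR log directories.

import boto3

emr = boto3.client('emr', region_name='us-east-1')

# Launch a cluster whose bootstrap action runs a script (uploaded to S3 beforehand)
# that installs the amazon-cloudwatch-agent and configures it to tail the EMR log
# directories (e.g. /var/log/hadoop-yarn, /var/log/spark). Script path is hypothetical.
response = emr.run_job_flow(
    Name='emr-with-cloudwatch-agent',
    ReleaseLabel='emr-6.10.0',
    Instances={
        'InstanceGroups': [
            {'Name': 'Master', 'InstanceRole': 'MASTER',
             'InstanceType': 'm5.xlarge', 'InstanceCount': 1},
        ],
        'KeepJobFlowAliveWhenNoSteps': True,
    },
    BootstrapActions=[
        {
            'Name': 'InstallCloudWatchAgent',
            'ScriptBootstrapAction': {
                'Path': 's3://your-bucket/bootstrap/install-cw-agent.sh'
            }
        }
    ],
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
)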

You can read the logs from S3 and push them to CloudWatch using boto3, and delete them from S3 afterwards if they are no longer needed (a sketch of that cleanup step follows the code below). In some use cases the stdout.gz logs need to be kept in CloudWatch for monitoring purposes.

boto3 documentation for put_log_events

import boto3
import botocore.session
import gzip
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def get_session(service_name):
    # Build a boto3 client for the given service using the credentials and
    # region resolved by botocore (env vars, profile, instance role, etc.)
    session = botocore.session.get_session()
    aws_access_key_id = session.get_credentials().access_key
    aws_secret_access_key = session.get_credentials().secret_key
    aws_session_token = session.get_credentials().token
    region = session.get_config_variable('region')

    return boto3.client(
        service_name = service_name,
        region_name = region,
        aws_access_key_id = aws_access_key_id,
        aws_secret_access_key = aws_secret_access_key,
        aws_session_token = aws_session_token
    )

def get_log_file(s3, bucket, key):
    # Download a gzip-compressed log object from S3 and return its decompressed bytes
    log_file = None

    try:
        obj = s3.get_object(Bucket=bucket, Key=key)
        compressed_body = obj['Body'].read()
        log_file = gzip.decompress(compressed_body)

    except Exception as e:
        logger.error(f"Error reading from bucket : {e}")
        raise

    return log_file

def create_log_events(logs, batch_size):
    # Turn each log line into a CloudWatch Logs event and group the events
    # into batches of at most batch_size events per put_log_events call
    log_event_batch = []
    log_event_batch_collection = []

    try:
        for line in logs.splitlines():
            log_event = {'timestamp': int(round(time.time() * 1000)), 'message':line.decode('utf-8')}
        
            if len(log_event_batch) < batch_size:
                log_event_batch.append(log_event)
            else:
                log_event_batch_collection.append(log_event_batch)
                log_event_batch = []
                log_event_batch.append(log_event)

    except Exception as e:
        logger.error(f"Error creating log events : {e}")
        raise       

    log_event_batch_collection.append(log_event_batch)

    return log_event_batch_collection

def create_log_stream_and_push_log_events(logs, log_group, log_stream, log_event_batch_collection, delay):
    # Create the destination log stream, then push each batch with put_log_events,
    # passing along the sequence token returned by the previous call
    logs.create_log_stream(logGroupName=log_group, logStreamName=log_stream)
    seq_token = None

    try:
        for log_event_batch in log_event_batch_collection:
            log_event = {
                'logGroupName': log_group,
                'logStreamName': log_stream,
                'logEvents': log_event_batch
            }

            if seq_token:
                log_event['sequenceToken'] = seq_token

            response = logs.put_log_events(**log_event)
            seq_token = response['nextSequenceToken']
            time.sleep(delay)

    except Exception as e:
        logger.error(f"Error pushing log events : {e}")
        raise

The caller function:

def main():
    s3 = get_session('s3')
    logs = get_session('logs')

    BUCKET_NAME = 'Your_Bucket_Name'
    KEY = 'logs/emr/Path_To_Log/stdout.gz'
    BATCH_SIZE = 10000         #According to boto3 docs
    PUSH_DELAY = 0.2           #According to boto3 docs 
    LOG_GROUP='test_log_group' #Destination log group
    LOG_STREAM='{}-{}'.format(time.strftime('%Y-%m-%d'),'logstream.log')

    log_file = get_log_file(s3, BUCKET_NAME, KEY)
    log_event_batch_collection = create_log_events(log_file, BATCH_SIZE)
    create_log_stream_and_push_log_events(logs, LOG_GROUP, LOG_STREAM, log_event_batch_collection, PUSH_DELAY)

if __name__ == '__main__':
    main()
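If the S3 copy is no longer needed once the events are in CloudWatch, the same S3 client can be reused to remove the object. A minimal sketch of that cleanup step; delete_log_file is a hypothetical helper, not part of the original answer, and reuses the s3 client, bucket and key from main above:

def delete_log_file(s3, bucket, key):
    # Optional cleanup: remove the log object from S3 after its contents
    # have been pushed to CloudWatch Logs
    try:
        s3.delete_object(Bucket=bucket, Key=key)
    except Exception as e:
        logger.error(f"Error deleting object from bucket : {e}")
        raise

It could be called as the last line of main(), e.g. delete_log_file(s3, BUCKET_NAME, KEY), so the object is only removed after the push has succeeded.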

