Creating AWS EMR cluster with spark step using lambda function fails with “Local file does not exist”

I am trying to start an EMR cluster with a Spark step using a Lambda function.

This is my Lambda function (Python 2.7):

import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")        
    cluster_id = conn.run_job_flow(
        Name='LSR Batch Testrun',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://aws-logs-171256445476-ap-southeast-2/elasticmapreduce/',
        ReleaseLabel='emr-5.16.0',
        Instances={
            "Ec2SubnetId": "<my-subnet>",
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark',
            'Name': 'Hive'
        }],
        Configurations=[
          {
            "Classification": "hive-site",
            "Properties": {
              "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
            }
          },
          {
            "Classification": "spark-hive-site",
            "Properties": {
              "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
            }
          }
        ],
        Steps=[{
            'Name': 'mystep',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 's3://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/home/hadoop/spark/bin/spark-submit", "--deploy-mode", "cluster",
                    "--master", "yarn-cluster", "--class", "org.apache.spark.examples.SparkPi", 
                    "s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)

The cluster starts up, but it fails when it tries to execute the step. The error log contains the following exception:

Exception in thread "main" java.lang.RuntimeException: Local file does not exist.
    at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.fetchFile(ScriptRunner.java:30)
    at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.main(ScriptRunner.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)

So it seems the script runner does not understand how to pick up the .jar file from S3?

Any help appreciated...

Not all pre-built EMR releases have the ability to copy jars and scripts from S3 on their own, so you have to do it in a bootstrap step:

BootstrapActions=[
    {
        'Name': 'Install additional components',
        'ScriptBootstrapAction': {
            # code_dir is assumed to be an S3 prefix, e.g. "s3://<yourbucket>/<path>"
            'Path': code_dir + '/scripts' + '/emr_bootstrap.sh'
        }
    }
],

This is what my bootstrap script does:

#!/bin/bash
HADOOP="/home/hadoop"
BUCKET="s3://<yourbucket>/<path>"

# Sync jars libraries
aws s3 sync ${BUCKET}/jars/ ${HADOOP}/
aws s3 sync ${BUCKET}/scripts/ ${HADOOP}/

# Install python packages
sudo pip install --upgrade pip
sudo ln -s /usr/local/bin/pip /usr/bin/pip
sudo pip install psycopg2 numpy boto3 pythonds
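
The bootstrap script itself has to be in S3 before run_job_flow is called. A minimal upload sketch (bucket and path are placeholders matching the snippet above):

import boto3

# Upload the bootstrap script to the prefix that code_dir points at.
s3 = boto3.client("s3")
s3.upload_file("emr_bootstrap.sh", "<yourbucket>", "<path>/scripts/emr_bootstrap.sh")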

Then you can call your scripts and jars like this. Note that command-runner.jar, unlike script-runner.jar, executes the given command directly on the master node, so the local /home/hadoop paths populated by the bootstrap script resolve:

{
    'Name': 'START YOUR STEP',
    'ActionOnFailure': 'TERMINATE_CLUSTER',
    'HadoopJarStep': {
        'Jar': 'command-runner.jar',
        'Args': [
            "spark-submit", "--jars", ADDITIONAL_JARS,
            "--py-files", "/home/hadoop/modules.zip",
            "/home/hadoop/<your code>.py"
        ]
    }
},

I could finally solve the problem. The main issue was the broken "Applications" configuration: in Python, a dict literal with duplicate keys silently keeps only the last entry, so {'Name': 'Spark', 'Name': 'Hive'} installed only Hive, and Spark never made it onto the cluster. It has to look like the following instead:

Applications=[{
       'Name': 'Spark'
    },
    {
       'Name': 'Hive'
    }],
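
A quick way to see why the original form fails, using nothing but Python dict semantics:

# Duplicate keys in a dict literal: the last value silently wins.
apps = {'Name': 'Spark', 'Name': 'Hive'}
print(apps)  # {'Name': 'Hive'} -- the Spark entry is gone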

And the final Steps element:

Steps=[{
    'Name': 'lsr-step1',
    'ActionOnFailure': 'TERMINATE_CLUSTER',
    'HadoopJarStep': {
        'Jar': 'command-runner.jar',
        'Args': [
            "spark-submit", "--class", "org.apache.spark.examples.SparkPi",
            "s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
        ]
    }
}]
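
To check that the step actually ran, the status can be polled afterwards; a minimal sketch, assuming cluster_id is the response dict returned by run_job_flow above:

import boto3

emr = boto3.client("emr")
# run_job_flow returns a dict whose JobFlowId is the cluster id.
response = emr.list_steps(ClusterId=cluster_id["JobFlowId"])
for step in response["Steps"]:
    print(step["Name"], step["Status"]["State"])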

Related question:

My setup is basically the same as in the question above, and I am also trying to create an EMR cluster from an AWS Lambda function, but I cannot get it to start the cluster. I have the same setup as above (without even trying to run a step), but I get the following error output from my AWS Lambda function:

Response:
{
  "errorMessage": "name 'Name' is not defined",
  "errorType": "NameError",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 6, in lambda_handler\n    Name: 'LSR Batch Testrun',\n"
  ]
}

Does anyone know what might be going on here?
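
Judging from the traceback, the arguments were most likely written as a dict literal with unquoted keys ({Name: 'LSR Batch Testrun', ...}) rather than as keyword arguments, so Python evaluates Name as an undefined variable. A minimal reproduction of the difference (the helper below is a hypothetical stand-in, not part of boto3):

# Hypothetical stand-in for boto3's run_job_flow, which takes keyword arguments.
def run_job_flow(**kwargs):
    return kwargs

# run_job_flow({Name: 'LSR Batch Testrun'})  # NameError: name 'Name' is not defined
run_job_flow(Name='LSR Batch Testrun')       # correct: Name passed as a keyword argument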
