
AWS EMR Airflow: PostgreSQL Connector

I am launching an AWS EMR job through Airflow that depends on saving data to a PostgreSQL database. Unfortunately, as far as I can tell, the connector is not available by default in EMR, hence the error:

Traceback (most recent call last):
  File "my_emr_script.py", line 725, in <module>
    main()
  File "my_emr_script.py", line 718, in main
    .mode("overwrite") \
  File "/mnt1/yarn/usercache/hadoop/appcache/application_1634133413183_0001/container_1634133413183_0001_01_000001/pyspark.zip/pyspark/sql/readwriter.py", line 1107, in save
  File "/mnt1/yarn/usercache/hadoop/appcache/application_1634133413183_0001/container_1634133413183_0001_01_000001/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
  File "/mnt1/yarn/usercache/hadoop/appcache/application_1634133413183_0001/container_1634133413183_0001_01_000001/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
  File "/mnt1/yarn/usercache/hadoop/appcache/application_1634133413183_0001/container_1634133413183_0001_01_000001/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o1493.save.
: java.lang.ClassNotFoundException: org.postgresql.Driver
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:46)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.$anonfun$driverClass$1(JDBCOptions.scala:102)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.$anonfun$driverClass$1$adapted(JDBCOptions.scala:102)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:102)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite.<init>(JDBCOptions.scala:217)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite.<init>(JDBCOptions.scala:221)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:45)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:194)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:232)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:229)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:190)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:134)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:133)
    at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
    at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
    at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
    at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
    at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:301)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
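For context, the failing call in my_emr_script.py is an ordinary PySpark JDBC write along these lines (a minimal sketch, not the exact code; host, database, table, and credentials are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a")], ["id", "value"])  # stand-in for the real data

# The save that raises ClassNotFoundException when the PostgreSQL driver jar
# is not on the cluster's classpath; connection details are placeholders.
df.write \
    .format("jdbc") \
    .option("url", "jdbc:postgresql://<host>:5432/<database>") \
    .option("dbtable", "<schema>.<table>") \
    .option("user", "<user>") \
    .option("password", "<password>") \
    .option("driver", "org.postgresql.Driver") \
    .mode("overwrite") \
    .save()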

How can I make sure that the PostgreSQL connector is included when EMR starts? I looked for a way to do this through bootstrapping but found no answer; all the official documentation refers only to the Presto version.

Edit:

Following @Emerson's suggestion, I downloaded the .jar into an S3 folder and passed it directly through the configuration in Airflow's JOB_FLOW_OVERRIDES:

"Configurations": [
        {
            "Classification": "spark-defaults",
            "Properties":
                {
                    "spark.jar": "s3://{{ var.value.s3_folder }}/scripts/postgresql-42.2.5.jar",
                },
        }
    ],

In Airflow:

instance_type: str = 'm5.xlarge'


SPARK_STEPS = [
    {
        'Name': 'emr_test',
        'ActionOnFailure': 'CANCEL_AND_WAIT',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            "Args": [
                'spark-submit',
                '--deploy-mode',
                'cluster',
                '--master',
                'yarn',
                "s3://{{ var.value.s3_folder }}/scripts/el_emr.py",
                '--execution_date',
                '{{ ds }}'
            ],
        },
    }
]


JOB_FLOW_OVERRIDES = {
    'Name': 'EMR Test',
    "ReleaseLabel": "emr-6.4.0",
    "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}],
    'Instances': {
        'InstanceGroups': [
            {
                'Name': 'Master node',
                'Market': 'SPOT',
                'InstanceRole': 'MASTER',
                'InstanceType': instance_type,
                'InstanceCount': 1,
            },
            {
                "Name": "Core",
                "Market": "SPOT",
                "InstanceRole": "CORE",
                "InstanceType": instance_type,
                "InstanceCount": 1,
            },
        ],
        'KeepJobFlowAliveWhenNoSteps': False,
        'TerminationProtected': False,
    },
    'Steps': SPARK_STEPS,
    'JobFlowRole': 'EMR_EC2_DefaultRole',
    'ServiceRole': 'EMR_DefaultRole',
    'BootstrapActions': [
        {
            'Name': 'string',
            'ScriptBootstrapAction': {
                'Path': 's3://{{ var.value.s3_folder }}/scripts/emr_bootstrap.sh',
            }
        },
    ],
    'LogUri': 's3://{{ var.value.s3_folder }}/logs',
     "Configurations": [
        {
            "Classification": "spark-defaults",
            "Properties":
                {
                    "spark.jar": "s3://{{ var.value.s3_path }}/scripts/postgresql-42.2.5.jar"
                },
        }
    ]
}


emr_creator = EmrCreateJobFlowOperator(
    task_id='create_emr',
    job_flow_overrides=JOB_FLOW_OVERRIDES,
    aws_conn_id='aws_conn',
    emr_conn_id='emr_conn',
    region_name='us-west-2',
)

Unfortunately, the problem persists.

In addition, I tried modifying the bootstrap script to download the .jar:

cd $HOME && wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar

and passed it to the configuration:

"Configurations": [
        {
            "Classification": "spark-defaults",
            "Properties":
                {
                    "spark.executor.extraClassPath": "org.postgresql:postgresql:42.2.5",
                    "spark.driver.extraClassPath": "$HOME/postgresql-42.2.5.jar",
                },
        }
    ],

In Airflow:

instance_type: str = 'm5.xlarge'


SPARK_STEPS = [
    {
        'Name': 'emr_test',
        'ActionOnFailure': 'CANCEL_AND_WAIT',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            "Args": [
                'spark-submit',
                '--deploy-mode',
                'cluster',
                '--master',
                'yarn',
                "s3://{{ var.value.s3_folder }}/scripts/emr.py",
                '--execution_date',
                '{{ ds }}'
            ],
        },
    }
]


JOB_FLOW_OVERRIDES = {
    'Name': 'EMR Test',
    "ReleaseLabel": "emr-6.4.0",
    "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}],
    'Instances': {
        'InstanceGroups': [
            {
                'Name': 'Master node',
                'Market': 'SPOT',
                'InstanceRole': 'MASTER',
                'InstanceType': instance_type,
                'InstanceCount': 1,
            },
            {
                "Name": "Core",
                "Market": "SPOT",
                "InstanceRole": "CORE",
                "InstanceType": instance_type,
                "InstanceCount": 1,
            },
        ],
        'KeepJobFlowAliveWhenNoSteps': False,
        'TerminationProtected': False,
    },
    'Steps': SPARK_STEPS,
    'JobFlowRole': 'EMR_EC2_DefaultRole',
    'ServiceRole': 'EMR_DefaultRole',
    'BootstrapActions': [
        {
            'Name': 'string',
            'ScriptBootstrapAction': {
                'Path': 's3://{{ var.value.s3_folder }}/scripts/emr_bootstrap.sh',
            }
        },
    ],
    'LogUri': 's3://{{ var.value.s3_folder }}/logs',
    "Configurations": [
        {
            "Classification": "spark-defaults",
            "Properties":
                {
                    "spark.executor.extraClassPath": "org.postgresql:postgresql:42.2.5",
                    "spark.driver.extraClassPath": "$HOME/postgresql-42.2.5.jar",
                },
        }
    ]
}


emr_creator = EmrCreateJobFlowOperator(
    task_id='create_emr',
    job_flow_overrides=JOB_FLOW_OVERRIDES,
    aws_conn_id='aws_conn',
    emr_conn_id='emr_conn',
    region_name='us-west-2',
)

This in turn causes a new error that somehow prevents Spark from reading the JSON files, treating them as corrupted:

root
 |-- _corrupt_record: string (nullable = true)

Finally, the emr_bootstrap.sh common to both attempts:

#!/bin/bash -xe

# Install Python dependencies needed by the job
sudo pip3 install -U \
    boto3 \
    typing


# Download the PostgreSQL JDBC driver into the hadoop user's home directory
cd $HOME && wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar

Answer:

I'm not sure how your EMR cluster is provisioned, but below is how you would go about it.

First, upload the Postgres JDBC jar to an S3 location, then reference it when provisioning the cluster.
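For example, the upload could be done with boto3 (a sketch; the bucket and key below are placeholders for your own S3 location):

import boto3

# Sketch only: replace bucket and key with your own S3 location.
s3 = boto3.client("s3")
s3.upload_file(
    "postgresql-42.2.11.jar",  # local path to the downloaded driver jar
    "path_to_jar",             # bucket (placeholder)
    "postgresql-42.2.11.jar",  # key; referenced below as s3://path_to_jar/...
)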

If you provision through CloudFormation, you would do something like the following:

  EMR:
    Type: AWS::EMR::Cluster
    Properties:
      Applications:
        - Name: Spark
      Configurations:
        - Classification: spark-defaults
          ConfigurationProperties:
            spark.jars: s3://path_to_jar/postgresql-42.2.11.jar

If it's a CLI command, it would be like below:

aws emr create-cluster ...... --configurations config.json

where config.json might look like this:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.jars": "s3://path_to_jar/postgresql-42.2.11.jar"
    }
  }
]
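Jars listed under spark.jars are shipped to the cluster and added to both the driver and executor classpaths, which is why this is generally preferable to setting spark.driver.extraClassPath and spark.executor.extraClassPath by hand.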

Edit:

Having seen your edited question, I can see your spark-submit arguments (the SPARK_STEPS variable). In that section, just add two more items as shown below:

'--jars',
's3://pathtodriver/postgresdriver.jar',
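In context, the step's Args would then look roughly like this (a sketch; the jar path is the same placeholder as above, and note that --jars must come before the application script):

"Args": [
    'spark-submit',
    '--deploy-mode',
    'cluster',
    '--master',
    'yarn',
    '--jars',
    's3://pathtodriver/postgresdriver.jar',  # placeholder for your uploaded driver
    "s3://{{ var.value.s3_folder }}/scripts/emr.py",
    '--execution_date',
    '{{ ds }}'
],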
