
Yarn error: Failed to create Spark client for Spark session

I am fairly new to this and have little experience, so I would appreciate your help. I am trying to install Hive on top of an existing Spark installation.

I mainly followed the instructions on this page, without any problems:
https://github.com/dryshliak/hadoop/wiki/Installing-Hive-on-existing-Hadoop-cluster

I also created a database called warehouse and a table called test_table.

hive> show tables;
OK
employee
test_table
Time taken: 0.084 seconds, Fetched: 2 row(s)
hive> desc test_table;
OK
   col1                    int                     Integer Column
   col2                    string                  String Column
   Time taken: 0.052 seconds, Fetched: 2 row(s)
hive>

The problem I am facing is that when I try to insert data into test_table using the command

hive> insert into test_table values(1,'aaa');

I get the following error message:

Query ID = hadoop_20190703135836_4b17eeac-249d-4e54-bd98-1212f3cb5b5d
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session 821e05e7-74a8-4656-b4ed-3a622c9cadcc)'
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 821e05e7-74a8-4656-b4ed-3a622c9cadcc


I am using the following software versions:
RHEL Server release 7.5
Hadoop 3.1.1
Spark 2.4.0
Hive 3.1.1

Below is the excerpt from the hive.log file where the error occurs:

2019-07-03T12:56:00,269  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Executing command(queryId=hadoop_20190703125557_f48a3966-691d-4c42-aee0-93f81fabef66): insert into test_table values(1,'aaa')   
2019-07-03T12:56:00,270  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Query ID = hadoop_20190703125557_f48a3966-691d-4c42-aee0-93f81fabef66   
2019-07-03T12:56:00,270  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Total jobs = 1   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Launching Job 1 out of 1   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Starting task [Stage-1:MAPRED] in serial mode   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: In order to change the average load for a reducer (in bytes):   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask:   set hive.exec.reducers.bytes.per.reducer=<number>   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: In order to limit the maximum number of reducers:   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask:   set hive.exec.reducers.max=<number>   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: In order to set a constant number of reducers:   
2019-07-03T12:56:00,282  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask:   set mapreduce.job.reduces=<number>   
2019-07-03T12:56:00,284  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] session.SparkSessionManagerImpl: Setting up the session manager.   
2019-07-03T12:56:00,642  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] session.SparkSession: Trying to open Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f   
2019-07-03T12:56:00,700 ERROR [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f)'   
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221)   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92)   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115)   
        at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)   
        at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115)   
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)   
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)   
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)   
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)   
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)   
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)   
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)   
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)   
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)   
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)   
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)   
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)   
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)   
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)   
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)   
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)   
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)   
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)   
        at java.lang.reflect.Method.invoke(Method.java:498)   
        at org.apache.hadoop.util.RunJar.run(RunJar.java:318)   
        at org.apache.hadoop.util.RunJar.main(RunJar.java:232)   
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf   
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263)   
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98)   
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76)   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87)   
        ... 24 more   
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf   
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)   
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)   
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)   
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)   
        ... 28 more   
2019-07-03T12:56:00,700 ERROR [6beaec32-ecac-4dc1-b118-f2c86c385005 main] spark.SparkTask: Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f)'   
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.getHiveException(SparkSessionImpl.java:221) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:92) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:115) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) ~[hive-cli-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) ~[hive-cli-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) ~[hive-cli-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) ~[hive-cli-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) ~[hive-cli-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) ~[hive-cli-3.1.1.jar:3.1.1]   
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]   
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]   
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]   
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]   
        at org.apache.hadoop.util.RunJar.run(RunJar.java:318) ~[hadoop-common-3.1.1.jar:?]   
        at org.apache.hadoop.util.RunJar.main(RunJar.java:232) ~[hadoop-common-3.1.1.jar:?]   
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/SparkConf   
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.1.jar:3.1.1]   
        ... 24 more   
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf   
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_191]   
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_191]   
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) ~[?:1.8.0_191]   
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_191]   
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.generateSparkConf(HiveSparkClientFactory.java:263) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:98) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:76) ~[hive-exec-3.1.1.jar:3.1.1]   
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:87) ~[hive-exec-3.1.1.jar:3.1.1]   
        ... 24 more   
2019-07-03T12:56:00,701  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] reexec.ReOptimizePlugin: ReOptimization: retryPossible: false   
2019-07-03T12:56:00,701 ERROR [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session e3b4aa82-29a5-4e82-b63b-742c5d35df3f   
2019-07-03T12:56:00,701  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Completed executing command(queryId=hadoop_20190703125557_f48a3966-691d-4c42-aee0-93f81fabef66); Time taken: 0.432 seconds   
2019-07-03T12:56:00,701  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] ql.Driver: Concurrency mode is disabled, not creating a lock manager   
2019-07-03T12:56:00,721  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] conf.HiveConf: Using the default value passed in for log id: 6beaec32-ecac-4dc1-b118-f2c86c385005   
2019-07-03T12:56:00,721  INFO [6beaec32-ecac-4dc1-b118-f2c86c385005 main] session.SessionState: Resetting thread name to  main   

Like you, I ran into the same problem when deploying Hive on Spark. After some research I found it was because Hive could not load the Spark jars, so I made the following changes to hive-env.sh.

Add the following in hive-env.sh:

# Pay attention to your spark path

export SPARK_HOME=/opt/module/spark-2.4.5-bin-without-hive
export SPARK_JARS=""
for jar in `ls $SPARK_HOME/jars`; do
    export SPARK_JARS=$SPARK_JARS:$SPARK_HOME/jars/$jar
done
export HIVE_AUX_JARS_PATH=$SPARK_JARS
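
A usage note (my addition, not part of the original answer): hive-env.sh is only read when the Hive CLI starts, so open a fresh session and re-run the failing statement to confirm the change took effect. A minimal sketch, reusing the test_table from the question:

# Start a new Hive CLI so the edited hive-env.sh is sourced, then retry the insert.
hive -e "insert into test_table values (1, 'aaa');"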

This answer has a small mistake.

HIVE_AUX_JARS_PATH takes a comma-separated list, not a colon-separated one. So the correct code is:

export SPARK_HOME=/home/jp/bigdata/spark/spark-3.1.1-bin-hadoop3.2
export SPARK_JARS=""
for jar in `ls $SPARK_HOME/jars`; do
    if ! echo $jar | grep -q 'slf4j\|mysql\|datanucleus\|^hive'; then
        export SPARK_JARS=$SPARK_JARS,$SPARK_HOME/jars/$jar
    fi
done
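# Strip the leading comma that the first loop iteration adds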
VAR=${SPARK_JARS#?};
export HIVE_AUX_JARS_PATH=$VAR
echo $HIVE_AUX_JARS_PATH

Note: some jars are skipped because they conflict with the Hive jars.
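
For context (this check is my addition, hedged): the root cause in the log above is java.lang.ClassNotFoundException: org.apache.spark.SparkConf, so a quick sanity check, assuming a standard Spark layout where that class ships in spark-core_*.jar, is:

# Confirm SparkConf is actually present in the Spark jars that
# hive-env.sh now appends to HIVE_AUX_JARS_PATH.
for jar in "$SPARK_HOME"/jars/spark-core_*.jar; do
    if unzip -l "$jar" | grep -q 'org/apache/spark/SparkConf.class'; then
        echo "SparkConf found in $jar"
    fi
done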
