
Hive on Spark CDH5.7 Execution Error

I recently updated my cluster to CDH 5.7 and I am trying to run Hive query processing on Spark.

I have configured the Hive client to use the Spark execution engine and set the Hive dependency on a Spark service from Cloudera Manager.
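For reference, the session-level equivalent of that configuration is roughly the following (Cloudera Manager normally writes these properties into hive-site.xml; the spark.master value below assumes a Spark-on-YARN deployment):

-- Rough sketch of the session-level settings for the Spark execution engine
SET hive.execution.engine=spark;   -- use Spark instead of MapReduce
SET spark.master=yarn-cluster;     -- assumes Spark on YARN; adjust if your deployment differs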

Via HUE, I'm simply running a simple select query but seem to get this error every time: Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

Following are the logs for the same:

ERROR operation.Operation: Error running hive query: 
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:374)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:180)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
    at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:245)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Any help to solve this would be great!

This problem is due to an open JIRA: https://issues.apache.org/jira/browse/HIVE-11519 . You should use another serialization tool.
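For example, assuming your Hive 1.x build still supports the older javaXML plan format (it was dropped in later Hive releases), you could try switching the query-plan serializer away from kryo:

-- Sketch only: move query-plan serialization off kryo.
-- Verify that javaXML is still a supported value in your Hive version.
SET hive.plan.serialization.format=javaXML;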

Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

is not the real error message; you'd better turn on DEBUG logging using the Hive CLI, like:

bin/hive --hiveconf hive.root.logger=DEBUG,console

and you will get more detailed logs. For example, here is something I got before:

16/03/17 13:55:43 [fxxxxxxxxxxxxxxxx4 main]: INFO exec.SerializationUtilities: Serializing MapWork using kryo
java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;

This is caused by some dependency conflicts; see https://issues.apache.org/jira/browse/HIVE-13301 for details.
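One quick way to check for such a conflict is to look for duplicate Jackson jars on the classpath that Hive on Spark loads; a rough sketch for a CDH parcel install (the parcel path is an assumption, adjust it to your layout):

# Assumes CDH parcels live under /opt/cloudera/parcels; adjust to your install.
find /opt/cloudera/parcels -name 'jackson-module-scala*.jar' 2>/dev/null
find /opt/cloudera/parcels -name 'jackson-databind*.jar' 2>/dev/null
# Multiple mismatched versions here usually point to the conflict described in HIVE-13301.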
