
Error: You must build Spark with Hive

I'm running Spark 1.6.2 with Hive 0.13.1 and Hadoop 2.6.0.

I'm trying to run this PySpark script:

import pyspark
from pyspark.sql import HiveContext

# Create a local SparkContext, then a HiveContext on top of it so that
# Spark SQL can query Hive tables.
sc = pyspark.SparkContext('local[*]')
hc = HiveContext(sc)
hc.sql("select col from table limit 3")

with this command line:

 ~/spark/bin/spark-submit script.py 

and I got this error message:

 File "/usr/local/hadoop/spark/python/pyspark/sql/context.py", line >552, in sql
 return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
 File "/usr/local/hadoop/spark/python/pyspark/sql/context.py", line >660, in _ssql_ctx
 "build/sbt assembly", e)
 Exception: ("You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly", Py4JJavaError(u'An error occurred while >calling None.org.apache.spark.sql.hive.HiveContext.\n', JavaObject >id=o18))

Doing what the message asked, I saw a warning saying that exporting SPARK_HIVE was deprecated and to use "-Phive -Phive-thriftserver" instead. So I did this:

 cd ~/spark/
 build/sbt -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver assembly
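
As a sanity check after a rebuild like this, something along the lines of the following sketch can confirm whether the new assembly jar actually bundles the Hive classes (the jar path is an assumption based on Spark 1.6's default sbt assembly layout; adjust the Scala/Hadoop version suffixes to your build):

import glob
import subprocess

# Sketch: look for the HiveContext class inside the freshly built assembly.
# The path below is an assumption based on Spark 1.6's default sbt layout.
jar = glob.glob("assembly/target/scala-2.10/spark-assembly-*.jar")[0]
listing = subprocess.check_output(["jar", "tf", jar])
print(b"org/apache/spark/sql/hive/HiveContext.class" in listing)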

but I got essentially the same error:

 [...]
 16/07/17 19:10:01 WARN metadata.Hive: Failed to access metastore. This class should not accessed in runtime.
 org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
     at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)
 [...]
 Traceback (most recent call last):
   File "/home/hadoop/spark3/./script.py", line 6, in <module>
     hc.sql("select timestats from logweb limit 3")
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 552, in sql
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 660, in _ssql_ctx
 Exception: ("You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly", Py4JJavaError(u'An error occurred while calling None.org.apache.spark.sql.hive.HiveContext.\n', JavaObject id=o19))

I searched the web for this error, but none of the answers worked for me...

Could someone help me please?


I also tried to use a Spark version that is supposed to work with Hadoop (suggested by Joss), and I got this error:

 Traceback (most recent call last):
   File "/home/hadoop/spark3/./script.py", line 6, in <module>
     hc.sql("select timestats from logweb limit 3")
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 552, in sql
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 660, in _ssql_ctx
 Exception: ("You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly", Py4JJavaError(u'An error occurred while calling None.org.apache.spark.sql.hive.HiveContext.\n', JavaObject id=o19))

I have an Apache Spark version that comes with HiveContext by default; this is the download link, in case you are interested:

Regarding the problem you have, it may be related to the version of Hadoop that Spark was compiled against. Check that the build profile (e.g. -Phadoop-2.6, as in your sbt command) matches the Hadoop version you are actually running.
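
If you want to verify this from PySpark itself, here is a minimal sketch (note that sc._jvm is an internal Py4J handle, not a public API, so treat it as a debugging aid only):

import pyspark

# Sketch: print the Spark version and the Hadoop version this Spark build
# was compiled against, via the internal Py4J gateway.
sc = pyspark.SparkContext('local[*]')
print(sc.version)
print(sc._jvm.org.apache.hadoop.util.VersionInfo.getVersion())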
