How to configure the Hive CLI when using the Spark execution engine?

I have set hive.execution.engine to spark and am also using a Spark-enabled queue. Spark SQL is able to access the Hive tables, and so is beeline from a directly connected cluster machine.
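For context, both of those working paths can be checked like this; a minimal sketch, where the HiveServer2 host is a placeholder and 10000 is its default port:

# <hs2-host> is a placeholder: verify table access via Spark SQL and via HiveServer2
spark-sql -e "SHOW TABLES;"
beeline -u jdbc:hive2://<hs2-host>:10000 -e "SHOW TABLES;"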

But the Hive CLI seems to need additional steps. So far the following have been done:

* Copied the Scala libraries to the $HIVE_HOME/lib dir (otherwise we get a ClassNotFoundException); see the sketch after this list

* Ran the following at the start of the hive script (or in .hiverc):

set hive.execution.engine=spark;
set mapred.job.queue.name=root.spark.sbg.hos;
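
The library copy in the first step might look like this; a rough sketch, assuming a Spark 2.x layout where the Scala runtime jars live under $SPARK_HOME/jars (both paths are assumptions for your installation):

# Assumed paths: make the Scala runtime visible on the Hive CLI classpath
cp "$SPARK_HOME"/jars/scala-library-*.jar "$HIVE_HOME"/lib/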

However, the following error now occurs: Failed to create spark client.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/Cellar/hive/2.1.1/libexec/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
hive (default)> insert into sb.test2 values (1,'ab');
Query ID = sboesch_20171030175629_dc310c9a-519e-4f84-a632-f3a44f1df8c3
Total jobs = 3
Launching Job 1 out of 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

Has anyone managed to connect to the Spark backend for Hive? I am using vanilla Hive (not Cloudera, Hortonworks, or MapR).

You have to start the Hive Metastore server separately in order to access Hive tables through Spark.

Try hive --service metastore in a new terminal; you will get a response like Starting Hive Metastore Server.
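
Once the metastore service is up, clients need to be pointed at it. A minimal sketch for hive-site.xml, assuming the metastore runs on the same host on its default Thrift port 9083 (the host name is an assumption):

<!-- localhost:9083 is an assumption: the standalone metastore's default Thrift endpoint -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
</property>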

hive-site.xml

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>**mysql metastore username**</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>**mysql metastore DB password**</value>
  </property>

  <property>
    <name>hive.querylog.location</name>
    <value>/tmp/hivequerylogs/${user.name}</value>
  </property>

  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///usr/local/hive/apache-hive-2.1.1-bin/lib/hive-hbase-handler-2.1.1.jar,file:///usr/local/hive/apache-hive-2.1.1-bin/lib/zookeeper-3.4.6.jar</value>
    <description>A comma separated list (with no spaces) of the jar files required for Hive-HBase integration</description>
  </property>

  <property>
    <name>hive.support.concurrency</name>
    <value>false</value>
  </property>

  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>

  <property>
    <name>hive.server2.authentication</name>
    <value>PAM</value>
  </property>

  <property>
    <name>hive.server2.custom.authentication.class</name>
    <value>org.apache.hive.service.auth.PamAuthenticationProvider</value>
  </property>

  <property>
    <name>hive.server2.authentication.pam.services</name>
    <value>sshd,sudo</value>
  </property>

  <property>
    <name>hive.stats.dbclass</name>
    <value>jdbc:mysql</value>
  </property>

  <property>
    <name>hive.stats.jdbcdriver</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>

  <property>
    <name>hive.session.history.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>

  <property>
    <name>hive.optimize.sort.dynamic.partition</name>
    <value>false</value>
  </property>

  <property>
    <name>hive.optimize.insert.dest.volume</name>
    <value>false</value>
  </property>

  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive/${user.name}</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
    <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description>
  </property>

  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.schema.validateConstraints</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.schema.validateColumns</name>
    <value>true</value>
  </property>

  <property>
    <name>datanucleus.schema.validateTables</name>
    <value>true</value>
  </property>
</configuration>
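
Since the failure here is Failed to create spark client., it is also worth checking that the Hive session knows how to reach Spark itself. A minimal sketch following the Hive-on-Spark getting-started settings, assuming Spark runs on YARN (all values are assumptions to adapt):

-- Assumed values: point this Hive session at a Spark-on-YARN backend
set hive.execution.engine=spark;
set spark.master=yarn;
set spark.executor.memory=1g;
set spark.eventLog.enabled=true;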
