
Spark SQL cannot find the data in Hive?

My Java app code is:

    SparkSession spark = SparkSession.builder()
        .appName(topics)
        .config("hive.metastore.uris", "thrift://device1:9083")
        .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
        .enableHiveSupport()
        .getOrCreate();

spark.sql("show databases ").show();

It only prints default:
+------------+
|databaseName|
+------------+
|     default|
+------------+

Below is the output of hdfs dfs -ls /user/hive/warehouse:

Found 8 items
drwxrwxr-x   - fangzebin hive          0 2019-08-07 10:10 /user/hive/warehouse/fangzebin.db
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_account
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_cal_dt
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_category_groupings
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_country
drwxrwxr-x   - dennis    hive          0 2020-02-10 16:53 /user/hive/warehouse/kylin_sales
drwxrwxr-x   - hive      hive          0 2020-05-06 23:47 /user/hive/warehouse/ods.db
drwxrwxr-x   - root      hive          0 2020-05-16 18:13 /user/hive/warehouse/zhihu.db

I tried adding hive-site.xml to resources/conf/ of my Maven project, but it still didn't work:

<?xml version="1.0" encoding="UTF-8"?>

<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://device1:9083</value>
  </property>
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>300</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.warehouse.subdir.inherit.perms</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.auto.convert.join</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.auto.convert.join.noconditionaltask.size</name>
    <value>20971520</value>
  </property>
  <property>
    <name>hive.optimize.bucketmapjoin.sortedmerge</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.smbjoin.cache.rows</name>
    <value>10000</value>
  </property>
  <property>
    <name>hive.server2.logging.operation.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/var/log/hive/operation_logs</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
  </property>
  <property>
    <name>hive.exec.reducers.bytes.per.reducer</name>
    <value>67108864</value>
  </property>
  <property>
    <name>hive.exec.copyfile.maxsize</name>
    <value>33554432</value>
  </property>
  <property>
    <name>hive.exec.reducers.max</name>
    <value>1099</value>
  </property>
  <property>
    <name>hive.vectorized.groupby.checkinterval</name>
    <value>4096</value>
  </property>
  <property>
    <name>hive.vectorized.groupby.flush.percent</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hive.compute.query.using.stats</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.execution.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.execution.reduce.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.use.vectorized.input.format</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.use.checked.expressions</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.use.vector.serde.deserialize</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.vectorized.adaptor.usage.mode</name>
    <value>chosen</value>
  </property>
  <property>
    <name>hive.vectorized.input.format.excludes</name>
    <value>org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat</value>
  </property>
  <property>
    <name>hive.merge.mapfiles</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.merge.mapredfiles</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.cbo.enable</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.fetch.task.conversion</name>
    <value>minimal</value>
  </property>
  <property>
    <name>hive.fetch.task.conversion.threshold</name>
    <value>268435456</value>
  </property>
  <property>
    <name>hive.limit.pushdown.memory.usage</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hive.merge.sparkfiles</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.merge.smallfiles.avgsize</name>
    <value>16777216</value>
  </property>
  <property>
    <name>hive.merge.size.per.task</name>
    <value>268435456</value>
  </property>
  <property>
    <name>hive.optimize.reducededuplication</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.optimize.reducededuplication.min.reducer</name>
    <value>4</value>
  </property>
  <property>
    <name>hive.map.aggr</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.map.aggr.hash.percentmemory</name>
    <value>0.5</value>
  </property>
  <property>
    <name>hive.optimize.sort.dynamic.partition</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.execution.engine</name>
    <value>mr</value>
  </property>
  <property>
    <name>spark.executor.memory</name>
    <value>5318325043b</value>
  </property>
  <property>
    <name>spark.driver.memory</name>
    <value>966367641b</value>
  </property>
  <property>
    <name>spark.executor.cores</name>
    <value>4</value>
  </property>
  <property>
    <name>spark.yarn.driver.memoryOverhead</name>
    <value>102m</value>
  </property>
  <property>
    <name>spark.yarn.executor.memoryOverhead</name>
    <value>895m</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.initialExecutors</name>
    <value>1</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.minExecutors</name>
    <value>1</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.maxExecutors</name>
    <value>2147483647</value>
  </property>
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.support.concurrency</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.zookeeper.quorum</name>
    <value>device1,device2,device3</value>
  </property>
  <property>
    <name>hive.zookeeper.client.port</name>
    <value>2181</value>
  </property>
  <property>
    <name>hive.zookeeper.namespace</name>
    <value>hive_zookeeper_namespace_hive</value>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.class</name>
    <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.use.SSL</name>
    <value>false</value>
  </property>
  <property>
    <name>spark.shuffle.service.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.strict.checks.orderby.no.limit</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.strict.checks.no.partition.filter</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.strict.checks.type.safety</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.strict.checks.cartesian.product</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.strict.checks.bucketing</name>
    <value>true</value>
  </property>
</configuration>
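
One thing worth checking: Spark looks for hive-site.xml at the root of the classpath, so a file placed under resources/conf/ ends up at conf/hive-site.xml and is invisible to Spark. Below is a minimal sketch (the class name is hypothetical) to confirm where the file is actually visible:

    // Minimal sketch: print where hive-site.xml is visible on the classpath.
    // Spark resolves it at the classpath root, so the first line should
    // print a real URL, not null. A file placed in src/main/resources/conf/
    // only shows up under "conf/hive-site.xml".
    public class HiveSiteCheck {
        public static void main(String[] args) {
            ClassLoader cl = Thread.currentThread().getContextClassLoader();
            System.out.println("hive-site.xml      -> " + cl.getResource("hive-site.xml"));
            System.out.println("conf/hive-site.xml -> " + cl.getResource("conf/hive-site.xml"));
        }
    }

If the first lookup prints null, moving the file from resources/conf/ to resources/ (the classpath root) may be enough.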

There are no exceptions, but I can't figure out why Spark SQL is unable to find the databases and tables in my Hive.

This is the output from the Hive console:

> show databases;
OK
default
fangzebin
kylindb
ods
zhihu
Time taken: 1.69 seconds, Fetched: 5 row(s)

My Spark version is: spark-2.4.4-bin-without-hadoop

This case is tough to solve. Basically I use all the components from CDH 6.2, but I installed a vanilla Spark (spark-2.4.4-bin-without-hadoop) on my cluster and set SPARK_HOME to it, as I prefer to use the upstream distribution.

After I switched to the Spark bundled with CDH (remember to comment out the original SPARK_HOME first, or it will cause problems), it successfully read the data in Hive!

Comparing the two Spark logs, I found some differences.

When I use the vanilla Spark, the log shows:

20/05/17 21:42:04 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
20/05/17 21:42:04 INFO internal.SharedState: loading hive config file: file:/data/software/spark-2.4.4-bin-without-hadoop/conf/hive-site.xml
20/05/17 21:42:04 INFO internal.SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/user/hive/warehouse').
20/05/17 21:42:04 INFO internal.SharedState: Warehouse path is '/user/hive/warehouse'.
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@64bfd6fd{/SQL,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2ab2710{/SQL/json,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6818d900{/SQL/execution,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@149f5761{/SQL/execution/json,null,AVAILABLE,@Spark}
20/05/17 21:42:04 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6dcd5639{/static/sql,null,AVAILABLE,@Spark}
20/05/17 21:42:06 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/05/17 21:42:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/05/17 21:42:10 INFO codegen.CodeGenerator: Code generated in 191.505638 ms
20/05/17 21:42:10 INFO codegen.CodeGenerator: Code generated in 8.313303 ms
+------------+
|databaseName|
+------------+
|     default|
+------------+

When using the Spark bundled with CDH, the log shows:

20/05/17 21:47:39 INFO client.HiveClientImpl: Warehouse location for Hive client (version 2.1.1) is /user/hive/warehouse
20/05/17 21:47:39 INFO hive.metastore: HMS client filtering is enabled.
20/05/17 21:47:39 INFO hive.metastore: Trying to connect to metastore with URI thrift://device1:9083
20/05/17 21:47:39 INFO hive.metastore: Opened a connection to metastore, current connections: 1
20/05/17 21:47:39 INFO hive.metastore: Connected to metastore.
20/05/17 21:47:39 INFO codegen.CodeGenerator: Code generated in 141.896818 ms
20/05/17 21:47:39 INFO codegen.CodeGenerator: Code generated in 7.683993 ms
+------------+
|databaseName|
+------------+
|     default|
|   fangzebin|
|     kylindb|
|         ods|
|       zhihu|
+------------+

It looks like the vanilla Spark failed to pick up the hive.metastore.uris setting, but I still can't figure out why this happened.
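
For what it's worth, one possible workaround (a sketch under an assumption, not verified on this cluster): properties prefixed with spark.hadoop. are copied by Spark into the Hadoop configuration that the Hive client sees, so the metastore URI would no longer depend on which hive-site.xml gets loaded:

    import org.apache.spark.sql.SparkSession;

    public class MetastoreCheck { // hypothetical class name
        public static void main(String[] args) {
            // "spark.hadoop.*" properties are copied into the Hadoop
            // Configuration seen by the Hive client, bypassing whichever
            // hive-site.xml Spark loads from its conf directory.
            SparkSession spark = SparkSession.builder()
                .appName("metastore-check")
                .config("spark.hadoop.hive.metastore.uris", "thrift://device1:9083")
                .enableHiveSupport()
                .getOrCreate();

            spark.sql("show databases").show();
        }
    }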

You have already added the hive-site.xml file to the classpath, but somehow it's not loading this file.

Can you try replacing device1 with the IP address - thrift://ip_address_of_system:9083 - and also adding the hive-site.xml file like below?

spark.sparkContext.addFile("hive-site.xml")
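
Put together in Java, that suggestion would look roughly like this (the IP address and the local path to hive-site.xml are placeholders for your environment):

    import org.apache.spark.sql.SparkSession;

    public class HiveTest { // placeholder class name
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("hive-test")
                // use the metastore host's IP instead of the hostname
                .config("hive.metastore.uris", "thrift://192.168.1.10:9083")
                .enableHiveSupport()
                .getOrCreate();

            // ship hive-site.xml to the executors alongside the job
            spark.sparkContext().addFile("/path/to/hive-site.xml");

            spark.sql("show databases").show();
        }
    }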
