
Unable to see data from Spark Beeline for a Hive ORC table

I have created an ORC Hive table as follows:

    create table forest41 (id int, type string)
      clustered by (id) into 2 buckets
      stored as orc
      TBLPROPERTIES ('transactional'='true');

    insert into table forest41 values (1,'red'),(2,'white'),(3,'black');

Now when I try to view the data from Spark Beeline, it does not show any data, nor does it throw any exception.

This is the query I ran: select * from default.forest40 limit 10

In the Spark jobs console, the only job related to the above query appears as "Skipped Stages (1) -- Spark JDBC Server Query".

You created the Hive table with Hive's bucketing and transactional (ACID) features, which Spark SQL does not support.

See the list of unsupported Hive functionality here: http://spark.apache.org/docs/latest/sql-programming-guide.html#unsupported-hive-functionality
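
A minimal workaround sketch, assuming you can run statements in Hive itself (not through the Spark Thrift Server) and that the table name forest41_plain is free to use (it is a hypothetical name chosen here for illustration): copy the rows of the ACID table into a plain, non-transactional ORC table, which Spark SQL can then read.

    -- Run in Hive Beeline, since only Hive can read the ACID table forest41.
    -- forest41_plain is a hypothetical name for the non-transactional copy.
    create table forest41_plain stored as orc as
    select id, type from forest41;

    -- Then, from Spark Beeline, the plain ORC table should be readable:
    select * from default.forest41_plain limit 10;

Note that the copy is a snapshot: rows inserted into forest41 after the CTAS will not appear in forest41_plain unless you refresh it.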
