
Scala Spark DataFrame HiveContext

I have a Hive table. It contains a column named "query", and there are 4 records in it. I read the table using:

val query_hive=sqlContext.sql(s"select * from hive_query limit 1")

I need to run the query stored in that column as another Hive query for a calculation.

I have tried this method:

val output = sqlContext.sql(s"$query_hive")

But I am getting an error. Can anybody suggest a solution?

You can do this; you are just not passing the query correctly. See below:

scala> val query = "select * from src limit 1"
query: String = select * from src limit 1

scala> sql(s"""$query""").show
+---+-----+
|key|value|
+---+-----+
|  1|    a|
+---+-----+
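The example above works because `query` is already a plain `String`. In the question, however, `query_hive` is a `DataFrame`, so interpolating it with `s"$query_hive"` embeds its `toString` (something like `[query: string]`) rather than the stored SQL text, which is why the call fails. A minimal sketch of the fix, assuming a spark-shell session where `sqlContext` (a `HiveContext`) is in scope and using the table and column names from the question:

```scala
// Read only the "query" column; query_hive is a DataFrame, not a String.
val query_hive = sqlContext.sql("select query from hive_query limit 1")

// Extract the stored SQL text from the first Row before re-running it.
val queryString: String = query_hive.first().getString(0)

// Now the extracted text can be executed as its own query.
val output = sqlContext.sql(queryString)
output.show()
```

`first().getString(0)` assumes the query text is the first (only) selected column; to run all 4 stored queries in turn, `collect().map(_.getString(0))` would retrieve them as a local array of strings.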

Thanks.
