As per the documentation:
"Apache Spark is a fast and general engine for large-scale data processing."
"Shark is an open source distributed SQL query engine for Hadoop data."
And Shark uses Spark as a dependency.
My question is: does Shark just parse HiveQL into Spark jobs, or does it do anything more that makes it worth using for fast responses on analytical queries?
Yes, Shark uses the same idea as Hive but translates HiveQL into Spark jobs instead of MapReduce jobs. Please read pages 13-14 of this document for the architectural differences between the two.
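To illustrate the translation idea, here is a minimal sketch in plain Scala. It is not Shark code: ordinary collections stand in for RDDs, and the table, column names, and query are all made up for the example. The point is only that a HiveQL query decomposes naturally into `filter`/`groupBy`/`map` transformations, which is the style of job Shark emits for Spark instead of a chain of MapReduce stages.

```scala
// Illustration only: how a HiveQL query conceptually maps onto
// Spark-style transformations. Plain Scala collections stand in
// for RDDs; Shark performs an analogous translation at cluster scale.
object HiveQLToSparkSketch {
  // Hypothetical table of (department, salary) rows.
  val employees = Seq(("eng", 100), ("eng", 120), ("sales", 90))

  // HiveQL: SELECT department, AVG(salary) FROM employees
  //         WHERE salary > 95 GROUP BY department
  def avgSalaryByDept: Map[String, Double] =
    employees
      .filter { case (_, salary) => salary > 95 }   // WHERE clause
      .groupBy { case (dept, _) => dept }           // GROUP BY clause
      .map { case (dept, rows) =>                   // AVG aggregate
        dept -> rows.map(_._2).sum.toDouble / rows.size
      }

  def main(args: Array[String]): Unit =
    println(avgSalaryByDept)
}
```

Because the resulting operator chain runs on Spark's in-memory RDDs rather than being materialized to HDFS between stages, iterative and interactive analytical queries can respond much faster than the equivalent Hive/MapReduce plan.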