
How to install Spark and Hadoop from tarballs separately [Cloudera]

I want to install the Cloudera distribution of Hadoop and Spark from tarballs. I have already set up Hadoop in pseudo-distributed mode on my local machine and successfully ran a YARN example.

I have downloaded the latest CDH 5.3.x tarballs from here.

However, the folder structure of the Spark tarball downloaded from Cloudera is different from the one on the Apache website. This may be because Cloudera maintains its own version separately.

I have not yet found any documentation on installing Spark separately from Cloudera's tarball. Could someone help me understand how to do it?

Spark can be extracted to any directory. You just need to run the ./bin/spark-submit command (available in the extracted Spark directory) with the required parameters to submit a job. To start the interactive Spark shell, use ./bin/spark-shell.
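As a minimal sketch, assuming the CDH Spark tarball is named spark-1.2.0-cdh5.3.x.tar.gz and is extracted to /opt (both the filename and the examples jar location are assumptions; they may differ in your download), the steps look roughly like this:

    # extract the tarball and point SPARK_HOME at it (paths are assumptions)
    tar -xzf spark-1.2.0-cdh5.3.x.tar.gz -C /opt
    export SPARK_HOME=/opt/spark-1.2.0-cdh5.3.x
    cd $SPARK_HOME

    # start the interactive shell
    ./bin/spark-shell

    # submit the bundled SparkPi example locally; the jar path may vary in the CDH build
    ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master local[2] lib/spark-examples-*.jar 10

To run against your pseudo-distributed Hadoop setup instead of local mode, you would pass --master yarn and make sure HADOOP_CONF_DIR points at your Hadoop configuration directory.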
