Failure recovery in Spark running on HDInsight
I was trying to get Apache Spark running on Azure HDInsight by following the steps from http://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-spark-install/
I was wondering whether I have to manage master/slave failure recovery myself, or whether HDInsight takes care of it.
I'm also working on Spark Streaming applications on Azure HDInsight. Within a Spark job, Spark and YARN provide some fault tolerance for both the master and the slaves: YARN restarts a failed application master (the Spark driver, in cluster mode), and failed tasks on worker nodes are retried on other executors.
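To illustrate, here is a sketch of a `spark-submit` invocation that turns on the relevant YARN retry settings. The configuration keys (`spark.yarn.maxAppAttempts`, `spark.yarn.am.attemptFailuresValidityInterval`, `spark.task.maxFailures`) are real Spark-on-YARN options, but the class name, jar, and chosen values are placeholders you would replace with your own:

```shell
# Submit in yarn-cluster mode so YARN can restart the driver (application master)
# if it dies. Class name, jar, and values below are illustrative, not prescriptive.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.maxAppAttempts=4 \                       # allow YARN to relaunch the AM up to 4 times
  --conf spark.yarn.am.attemptFailuresValidityInterval=1h \  # only count AM failures within the last hour
  --conf spark.task.maxFailures=8 \                          # retry a failed task up to 8 times across executors
  --class com.example.MyStreamingApp \
  my-streaming-app.jar
```

For a Spark Streaming job specifically, driver restarts only help if the application can rebuild its state, so you would also enable checkpointing (create the `StreamingContext` via `StreamingContext.getOrCreate` with a checkpoint directory on durable storage, e.g. the cluster's Azure Blob storage).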