Exception while running spark-submit on a Hadoop cluster with High Availability
I am facing an exception while running the spark-submit command on a Hadoop cluster with High Availability enabled.
The following command works fine on another cluster where HA is not enabled.
spark-submit --master yarn-client --executor-memory 4g --executor-cores 2 --class com.domain.app.module.mainclass target/SNAPSHOT-jar-with-dependencies.jar
The same command does not work on the cluster where HA is enabled, and throws the following exception.
Exception in thread "main" java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo
Please suggest whether I need to set any configurations in the Spark conf.
From the instructions on http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Configuration_details
Please check your hdfs-site.xml:
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value> <!-- choose a name for your cluster -->
</property>
...
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name> <!-- put the cluster name here -->
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
You should also check that other settings mentioned on that page are correctly configured:
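For reference, here is a minimal sketch of those related client-side settings, using the placeholder names from the linked guide (mycluster, nn1, nn2, machine1/machine2), not values from your actual cluster:

<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value> <!-- logical IDs of the two NameNodes -->
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>machine1.example.com:8020</value> <!-- RPC address of the first NameNode -->
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>machine2.example.com:8020</value> <!-- RPC address of the second NameNode -->
</property>

If you cannot edit hdfs-site.xml on the machine you submit from, Spark can also forward Hadoop client properties via the spark.hadoop. prefix, so something like the following should be equivalent (again with placeholder values):

spark-submit --master yarn-client \
  --conf spark.hadoop.dfs.nameservices=mycluster \
  --conf spark.hadoop.dfs.ha.namenodes.mycluster=nn1,nn2 \
  --conf spark.hadoop.dfs.namenode.rpc-address.mycluster.nn1=machine1.example.com:8020 \
  --conf spark.hadoop.dfs.namenode.rpc-address.mycluster.nn2=machine2.example.com:8020 \
  --conf spark.hadoop.dfs.client.failover.proxy.provider.mycluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
  --executor-memory 4g --executor-cores 2 \
  --class com.domain.app.module.mainclass target/SNAPSHOT-jar-with-dependencies.jar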