Is it OK to force the hosting of the applicationMaster on the same node (YARN)?

I am submitting Spark applications to my 3-node Hadoop cluster. In my spark-defaults.conf file I set

# Set SPARK_LOCAL_IP in the applicationMaster's environment, forcing it to bind to loopback
spark.yarn.appMasterEnv.SPARK_LOCAL_IP 127.0.0.1
# SPARK_MASTER_HOST is a Spark Standalone variable: the address the standalone master binds to
spark.yarn.appMasterEnv.SPARK_MASTER_HOST 0.0.0.0

So that the applicationMaster is always (in client or cluster mode) hosted on the client machine. Is it OK to do that?

Note that if I don't do that and YARN attempts to host the applicationMaster on a slave node, a binding error stops the run.
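
For context, this is the kind of plain submission that hits the binding error whenever YARN schedules the applicationMaster on a slave node, i.e. without the two spark-defaults.conf overrides above; a minimal sketch with hypothetical class and jar names:

# Plain YARN submission with no bind-address overrides; YARN picks the AM node
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar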

Thanks for clarifying this.

If this is just for you and this works, go for it.

You aren't following a "normal" Spark strategy for a YARN cluster. Is that "OK"? If you have a good reason, yes, it's OK.

Would I use this in production? No.

Are there simpler, more common ways of running a cluster? Yes.

You are mixing the strategies of running Spark Standalone and Spark on YARN. These are two fundamentally different architectures. If you can make the two work together, that's fun, but you may hit some weird problems, and since this is a custom set of settings you may not find much support to help you.
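
For contrast, a hedged sketch of how the two architectures are normally selected at submit time; the host and jar names are placeholders. SPARK_MASTER_HOST only has meaning for the Standalone master daemon, while under YARN the ResourceManager decides where the applicationMaster runs:

# Spark Standalone: you point at an explicit Spark master daemon
spark-submit --master spark://master-host:7077 my-app.jar

# Spark on YARN: no Spark master URL at all; YARN schedules the applicationMaster
spark-submit --master yarn my-app.jar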

No, it's not "OK".

One of the ideologies behind Spark is resilience. If you force one node to always host the application master, you introduce a bottleneck and a single point of failure. You are using YARN; there is no reason to specify a master.
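
To illustrate the resilience point, a hedged sketch: when YARN is free to place the applicationMaster, it can also re-launch it on another node after a failure. spark.yarn.maxAppAttempts is a standard Spark-on-YARN property (the jar name is a placeholder); pinning the AM to one machine gives up exactly this recovery path:

# Let YARN place the applicationMaster and retry it on another node on failure
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.maxAppAttempts=4 \
  my-app.jar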
