
How to kill the Flink ApplicationMaster when jobs fail

How can I make a Flink application on YARN kill itself or fail when Flink's inner jobs fail? The application keeps running no matter how many jobs fail, so problems can't be spotted immediately. Do you have any idea?
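One way to get this behavior (not part of the accepted answer, so treat it as a sketch under assumptions): submit the job in per-job mode rather than to a long-running YARN session, and disable automatic restarts, so a job failure brings down the whole YARN application. Assuming the standard flink-conf.yaml keys and the yarn-cluster per-job mode; my-flink-job.jar is a placeholder name:

# flink-conf.yaml: fail the job instead of restarting it,
# and let the ApplicationMaster give up after its first attempt
restart-strategy: none
yarn.application-attempts: 1

# per-job mode: the YARN application terminates together with the job
flink run -m yarn-cluster ./my-flink-job.jar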

You can always kill it like any other regular YARN application:

yarn application -kill <applicationId>

More examples here: https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-site/YarnCommands.html
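For completeness, a typical sequence: list the running applications to find the application id, then kill it. The id shown is only an example of the application_<timestamp>_<sequence> format:

# find the Flink application's id
yarn application -list -appStates RUNNING

# kill it, e.g.:
yarn application -kill application_1571234567890_0001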

