How to kill the Flink ApplicationMaster when jobs fail
How can a Flink application on YARN kill or fail itself when Flink's inner jobs fail? The application keeps running no matter how many jobs fail, so problems can't be noticed immediately. Do you have any idea?
You can always kill it like any other regular YARN application:
yarn application -kill <applicationId>
More examples here: https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-site/YarnCommands.html
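The question, though, asks how to make the application fail *itself* when a job fails. One way to approximate that for a YARN session cluster is a small watcher script: poll Flink's REST endpoint `/jobs/overview` on the JobManager and kill the YARN application as soon as any job reports a `FAILED` state. This is a minimal sketch, not from the original answer; the REST host/port (Flink's default is 8081), the poll interval, and the `jq` dependency are my assumptions.

#!/usr/bin/env bash
# Sketch: watch the Flink REST API and kill the YARN application
# if any job reaches the terminal FAILED state.
# Assumptions (not from the original post): the JobManager REST
# endpoint is reachable at $FLINK_HOST, and jq is installed.

FLINK_HOST="http://localhost:8081"   # JobManager REST address (default port 8081)
APP_ID="$1"                          # YARN applicationId to kill, passed as first argument

while true; do
  # /jobs/overview lists all jobs on the cluster with their current state
  failed=$(curl -s "$FLINK_HOST/jobs/overview" \
             | jq -r '.jobs[] | select(.state == "FAILED") | .jid')
  if [ -n "$failed" ]; then
    echo "Job(s) failed: $failed -- killing YARN application $APP_ID"
    yarn application -kill "$APP_ID"
    exit 1
  fi
  sleep 30
done

Note that this workaround is only needed for session clusters, which outlive their jobs by design; in per-job (or application) deployment mode, the YARN application's lifecycle is tied to the job, so it terminates on its own when the job reaches a terminal state.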