How can I make a Flink application on YARN kill or fail itself when Flink's inner jobs fail? The application keeps running no matter how many jobs have failed, so the problems can't be spotted immediately. Do you have any ideas?
You can always kill it manually, like any other regular YARN application:
yarn application -kill <applicationId>
More examples here: https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-site/YarnCommands.html
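If you want to automate the kill rather than run it by hand, you can look the application ID up first with `yarn application -list`. A minimal sketch, assuming the Hadoop CLI is on the PATH; the application name "MyFlinkApp" is a placeholder for whatever name your Flink session was submitted under:

```shell
# Find the YARN application ID of a running application by name,
# then kill it. "MyFlinkApp" is a hypothetical application name.
APP_ID=$(yarn application -list -appStates RUNNING 2>/dev/null \
  | awk '/MyFlinkApp/ {print $1}')

# Only issue the kill if a matching application was found.
if [ -n "$APP_ID" ]; then
  yarn application -kill "$APP_ID"
fi
```

A script like this could be run from a monitoring cron job, though it only tears the application down; detecting that an inner Flink job has failed would still need a separate check (e.g. against Flink's REST API).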