
On a Spark cluster, is there a parameter that controls the minimum run time of a Spark job?

My Spark program first checks whether the input data path exists and, if it does not, exits safely. But after the program exits, YARN retries the job once. So I guessed that some parameter controls the minimum run time of the job. On a Spark cluster, is there a parameter that controls the minimum run time of a Spark job, i.e. one that triggers a retry even when the job succeeds but runs for less than that time?
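For context, here is a minimal sketch of the "check the input path, then exit safely" pattern described above, assuming a Java driver, a SparkSession-based entry point, and that the input path arrives as the first program argument (the class name and argument handling are assumptions, not taken from the question):

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.spark.sql.SparkSession;

public class InputPathCheck {
    public static void main(String[] args) throws Exception {
        String inputPath = args[0];  // hypothetical: input path passed as the first argument

        SparkSession spark = SparkSession.builder()
                .appName("InputPathCheck")
                .getOrCreate();

        Path path = new Path(inputPath);
        FileSystem fs = path.getFileSystem(spark.sparkContext().hadoopConfiguration());
        if (!fs.exists(path)) {
            System.out.println("Input path " + inputPath + " does not exist; exiting safely.");
            spark.stop();      // shut Spark down cleanly first
            System.exit(0);    // zero exit status: YARN should not count this as a failed attempt
        }

        // ... normal processing of inputPath would go here ...
        spark.stop();
    }
}
```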

--------- after the first edit ---------

I set the number of retries to 1, so I no longer have to think about the retry count. The main method of my program now contains only one statement: System.out.println("MyProgram");. The log shows that everything is fine, but YARN still marks it as a failed job. I'm very confused.
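One possible explanation (an assumption here, not something confirmed in this thread): in yarn-cluster mode the ApplicationMaster waits for the user class to initialize a SparkContext, and if the main method returns without ever creating one, the attempt can be failed with a message like "SparkContext did not initialize after waiting...". A minimal sketch of a trivial driver that still creates and stops a SparkSession so the attempt can register cleanly:

```java
import org.apache.spark.sql.SparkSession;

public class MyProgram {
    public static void main(String[] args) {
        // Even though the job does nothing, initializing (and then stopping) the
        // SparkSession lets the YARN ApplicationMaster register the attempt cleanly.
        SparkSession spark = SparkSession.builder()
                .appName("MyProgram")
                .getOrCreate();

        System.out.println("MyProgram");

        spark.stop();   // clean shutdown; the driver then exits with status 0
    }
}
```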

No. A retry only happens if your job ends with a non-zero exit status.
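In other words, the retry decision is driven by the exit status of the attempt, not by how long it ran. A hedged sketch of that contrast, assuming a Java driver (the class and helper names are hypothetical); the number of attempts itself is bounded by configuration such as spark.yarn.maxAppAttempts / yarn.resourcemanager.am.max-attempts:

```java
import org.apache.spark.sql.SparkSession;

public class ExitStatusDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ExitStatusDemo")
                .getOrCreate();
        try {
            runJob(spark, args);   // hypothetical job logic
            spark.stop();
            System.exit(0);        // success: no retry, regardless of how short the run was
        } catch (Exception e) {
            e.printStackTrace();
            spark.stop();
            System.exit(1);        // non-zero: YARN records a failed attempt and may retry,
                                   // up to the configured maximum number of attempts
        }
    }

    private static void runJob(SparkSession spark, String[] args) {
        // ... actual processing would go here ...
    }
}
```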
