
Issue running multiple Flink jobs (on a Flink cluster)

Folks,

We have a few Flink jobs, each built as a separate executable JAR.

Each of these Flink jobs uses the following code to run:

>  StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> 
>  try {
>      env.execute("FLINK-JOB");
>  } catch (Exception ex) {
>      // Some message
>  }

But when we deploy these Flink jobs (5 in all), only one runs and the others close.

We deploy via bin/flink run.
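
For example, each JAR can be submitted in detached mode so the client returns immediately (the JAR names below are placeholders):

>  # Submit each job in detached mode
>  bin/flink run -d job-one.jar
>  bin/flink run -d job-two.jar
> 
>  # List the jobs currently running on the cluster
>  bin/flink list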

Thanks Much

I guess you may be using the default startup method of Flink standalone mode, via bin/start-cluster.sh and bin/stop-cluster.sh. This method relies on conf/masters and conf/workers to determine the number of cluster component instances; by default there is only one TaskManager, with one slot.

With a single slot, when the job parallelism is one, only one job can run (and when the job parallelism is greater than one, no job can run at all). If you do not have enough TaskManagers (slots), you cannot run all of your jobs, since each job needs at least one slot.

You can add TaskManagers (slots) by adjusting the standalone cluster configuration, for example as sketched below.
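
A minimal sketch of the relevant configuration, assuming the standalone setup described above (the slot count of 4 is just an illustration):

>  # conf/flink-conf.yaml
>  # Number of slots each TaskManager offers (4 is just an example)
>  taskmanager.numberOfTaskSlots: 4

To start more TaskManagers, list one host per line in conf/workers (the hostnames here are placeholders):

>  worker-host-1
>  worker-host-2

Then restart the cluster with bin/stop-cluster.sh followed by bin/start-cluster.sh so the new settings take effect.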

Flink documentation link

This might be because you are using the same job name in env.execute("FLINK-JOB"). Please try to make it different for each of your 5 jobs. Alternatively, you can pass the job name as a parameter when deploying the Flink job and call env.execute(params.get("your-job-name")) with a different name for each. Having a unique job name should be helpful. Thanks
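
A minimal sketch of that approach, using Flink's ParameterTool to read the name from the command line (the class name JobMain, the parameter key your-job-name, and the default value are assumptions):

>  import org.apache.flink.api.java.utils.ParameterTool;
>  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> 
>  public class JobMain {
>      public static void main(String[] args) throws Exception {
>          // Parses "--your-job-name <name>" style arguments
>          ParameterTool params = ParameterTool.fromArgs(args);
> 
>          StreamExecutionEnvironment env =
>                  StreamExecutionEnvironment.getExecutionEnvironment();
> 
>          // ... build the job topology here ...
> 
>          // Falls back to a default name if none was passed
>          env.execute(params.get("your-job-name", "FLINK-JOB"));
>      }
>  }

Each job can then be submitted with its own name, e.g. bin/flink run myJob.jar --your-job-name job-one.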
