
Flink: what is the proper way to submit args to a job from the cluster GUI?

My goal is to pass args to the main() function of a Flink job via the "Program Arguments" field in the cluster GUI:

[screenshot: the "Program Arguments" field in the cluster GUI]

And to access them (ideally by key name) in the main() function, something like this:

// requires: import org.apache.flink.api.java.utils.ParameterTool;

public static void main(String[] args) throws Exception {

    ParameterTool parameter = ParameterTool.fromArgs(args);

    // CustomProps is my own helper that loads defaults from a properties file
    CustomProps props = new CustomProps(DEFAULT_PROPERTIES_FILE);

    String kafkaAutoOffsetReset = props.getKafkaAutoOffsetReset();
    String cassandraClusterUrl = props.getCassandraClusterUrl();

    // intended to override the file-based defaults when arguments are passed in
    if (args.length == 1 && args[0] != null) {

        cassandraClusterUrl = parameter.get("cassandraClusterUrl");
        kafkaAutoOffsetReset = parameter.get("kafkaOffset");
    }

    // Other code...

}

I have tried the "ParameterTool" but I don't get anything from it, and if I try something like:

kafkaAutoOffsetReset = args[0];

That only works if I put a single word in the "Program Arguments" field. So if I put:

blah

it says it was set to "blah", but if I try any of these:

-kafkaOffset blah
--kafkaOffset blah
-kafkaOffset:blah
-kafkaOffset=blah

I get nothing. I know that in the CLI, an example of how to pass args to a jar is:

--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out

But it seems like I am missing a different way to do this in the GUI, and I have not been able to hunt down documentation related to it.

TL;DR

What is the proper way to submit multiple args via the "Program Arguments" field in the Flink cluster GUI, and what is the proper way to access them in the main() function?

Thanks for any and all help in advance!

Program arguments should be given in Flink as shown below:

--custom.key.one custom.value.one --custom.key.two custom.value.two
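
For reference, here is a minimal sketch of how arguments in that format could be read back inside the job. The class name ArgsExample and the fallback value are illustrative, not part of the original question or answer:

import org.apache.flink.api.java.utils.ParameterTool;

public class ArgsExample {

    public static void main(String[] args) throws Exception {
        // With "--custom.key.one custom.value.one --custom.key.two custom.value.two"
        // in the "Program Arguments" field, main() receives:
        // {"--custom.key.one", "custom.value.one", "--custom.key.two", "custom.value.two"}
        ParameterTool parameter = ParameterTool.fromArgs(args);

        String one = parameter.get("custom.key.one");              // "custom.value.one"
        String two = parameter.get("custom.key.two", "fallback");  // "custom.value.two", or "fallback" if missing

        System.out.println(one + " / " + two);
    }
}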

Figured it out. Here is how arguments have to be passed:

[screenshot: the "Program Arguments" field in the cluster GUI]
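
As an illustration only, using the keys from the question and placeholder values (not the original screenshot contents), the "Program Arguments" field would hold a single space-separated line such as:

--kafkaOffset earliest --cassandraClusterUrl 127.0.0.1

which the job then reads via parameter.get("kafkaOffset") and parameter.get("cassandraClusterUrl").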
