
Migration Flink 1.4.1 to 1.13.2

We use Flink 1.4.1 in one of our projects. We send some image-processing jobs to our processing nodes via Flink. A Flink Task Manager is installed on each processing node. Our main application submits jobs to the Flink Job Manager, and the Job Manager dispatches them to the Task Managers according to availability. We implemented a Java application (let's call it the node application) that Flink executes on the node; this application runs the processors on the processing node. This worked fine, but after we migrated Flink from 1.4.1 to 1.13.2 we get the following error.

2021-09-30 14:45:56,107 INFO  org.apache.flink.runtime.jobmaster.JobMaster                 [] - JobManager successfully registered at ResourceManager, leader id: 00000000000000000000000000000000.
2021-09-30 14:45:56,108 INFO  org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Received resource requirements from job 9603bc4718c13ab4015372e89f89dd60: [ResourceRequirement{resourceProfile=ResourceProfile{UNKNOWN}, numberOfRequiredSlots=1}]
2021-09-30 14:45:56,132 ERROR tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent  [] - An unexpected exception occurred.
org.apache.flink.api.common.InvalidProgramException: Job was submitted in detached mode. Results of job execution, such as accumulators, runtime, etc. are not available. Please make sure your program doesn't call an eager execution function [collect, print, printToErr, count]. 
    at org.apache.flink.core.execution.DetachedJobExecutionResult.getAccumulatorResult(DetachedJobExecutionResult.java:56) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.DataSet.collect(DataSet.java:419) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.DataSet.print(DataSet.java:1748) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent.main(ProcessorInvokerAgent.java:149) ~[bdb0ecae-9aa3-4962-a392-4464d0db0ae6_tr.com.sdt.mm.wfm.processor.api-0.0-DEV-SNAPSHOT.jar:0.0-DEV-SNAPSHOT]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_301]
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:84) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:70) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:102) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) [?:1.8.0_301]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_301]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_301]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_301]
2021-09-30 14:45:56,133 WARN  org.apache.flink.client.deployment.application.DetachedApplicationRunner [] - Could not execute application: 
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Job was submitted in detached mode. Results of job execution, such as accumulators, runtime, etc. are not available. Please make sure your program doesn't call an eager execution function [collect, print, printToErr, count]. 
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:84) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:70) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:102) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) [?:1.8.0_301]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_301]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_301]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_301]
Caused by: org.apache.flink.api.common.InvalidProgramException: Job was submitted in detached mode. Results of job execution, such as accumulators, runtime, etc. are not available. Please make sure your program doesn't call an eager execution function [collect, print, printToErr, count]. 
    at org.apache.flink.core.execution.DetachedJobExecutionResult.getAccumulatorResult(DetachedJobExecutionResult.java:56) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.DataSet.collect(DataSet.java:419) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.DataSet.print(DataSet.java:1748) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent.main(ProcessorInvokerAgent.java:149) ~[?:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_301]
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    ... 13 more
2021-09-30 14:45:56,136 ERROR org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Exception occurred in REST handler: Could not execute application.
2021-09-30 14:45:56,148 WARN  org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Configuring the job submission via query parameters is deprecated. Please migrate to submitting a JSON request instead.
2021-09-30 14:45:56,148 WARN  org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Configuring the job submission via query parameters is deprecated. Please migrate to submitting a JSON request instead.

Our Flink job looks like this:

public class ProcessorInvokerAgent implements MapFunction<String[], IProcessingNodeResult> {
    .....
    .......
    ........

    public static void main(String[] args) throws Exception {

        ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
        String[][] dataTaken = { args };
        DataSet<String[]> elements = environment.fromElements(dataTaken);
        MapOperator<String[], IProcessingNodeResult> map = elements.map(new ProcessorInvokerAgent());
        try {
            map.print();
            environment.execute();
        } catch (Exception exc) {
            LOGGER.error("An unexpected exception occurred.", exc);
            throw exc;
        }
    }

    /**
     * Invokes the processor controllers with specified arguments.
     *
     * @param args
     *            the arguments of processing request (first one is base64 encoded
     *            order, and second one is base64 encoded configuration)
     * @return the processing node result
     * @throws ProcessorAgentException
     *             if cannot write processor result to
     */
    public IProcessingNodeResult map(String[] args) throws ProcessorAgentException {
        IProcessingNodeResult result = null;
        // check if argument number is valid
        if (args.length == 0) {
            // cannot send result to manager raise exception
            throw new ProcessorAgentException("Invalid number of arguments(" + args.length
                    + "), must match format <command> (any-other-param)* . Agent could not be invoked", null);
        } else if ("execute".equalsIgnoreCase(args[0])) {
            if (args.length != 3) {
                // cannot send result to manager raise exception
                throw new ProcessorAgentException("Invalid number of arguments (" + args.length
                        + ") for 'execute' command. Must match 'execute' <processing-order-id> <path-to-request-file>",
                        null);
            } else {
                result = handleProcessorExecution(args);
            }
        } else {
            // cannot send result to manager raise exception
            throw new ProcessorAgentException(
                    "Invalid execution command (" + args[0] + ")is given to processor agent. Agent will not be invoked",
                    null);

        }
        return result;
    }

    private IProcessingNodeResult handleProcessorExecution(String[] args) {
        // args[0] was command thus we need to start from index 1
        try {
            // 1st position contains processing order id (same as order#getOrderIdentifier())
            String processingOrderId = getArg1_ProcessingOrderId(args);

            LOGGER.info("Processor Invoker agent is started proceeding on order {}", processingOrderId);

            // 2nd position contains handle for processing order request file (this file contains all the related parameters for processing order)
            ResourceAccessInfo processingOrderFile = getArg2_ProcessingOrderFile(args, processingOrderId);

            LOGGER.debug("Argumans are extracted and checked. Processing order file will be downloaded and will be parsed.");

            List<String> lines = downloadOrderFileAndGetLines(processingOrderFile, processingOrderId);

            LOGGER.debug("Processing order file lines are : \n{}", printLines(lines));

            IProcessingOrder order = getLine1_ProcessingOrder(lines); // order is initialized in this function
            if (order != null) { // if order read and successfully parsed
                LogUtils.putMDC(ECommonSystemProperty.MMUGS_ENTITY.getValue(), order.getInstanceId(), order.getParentProcessId());
                LOGGER.debug("Line 1: Procesing Order has extracted successfully. Order Id=\"{}\"", order.getOrderIdentifier());


                IConfiguration configuration = getLine2_ConfigurationYAML(lines); // configuration is initialized in this function

                LOGGER.debug("Line 2: Configuration YAML has extracted.Order Id=\"{}\"", order.getOrderIdentifier());

                String procConfDir = getProcessorParametersConfDir(configuration);

                LOGGER.debug("Processing Node's processor parameters conf directory =\"{}\" for Order Id=\"{}\"", procConfDir, order.getOrderIdentifier());

                // The errors occurred while extracting Line 3&4 are logged as error/info.
                getLine3And4_ProcessorParamConfAndScripts(lines, processingOrderId, procConfDir);

                // The errors occurred while extracting Line 5 are logged as error/info.
                getLine5_AdditionalQueryConf(lines, processingOrderId, procConfDir);

                // The errors occurred while extracting Line 6 are logged as error/info.
                ProcTableType wsConfFileContent = getLine6_IPFWSConf(lines, processingOrderId);

                // The errors occurred while extracting Line 7 are logged as error/info.
                getLine7_TaskTable(lines, wsConfFileContent, order.getProcessorName(), order.getProcessorVersion(), processingOrderId);

                executeProcessor(order, configuration);
            }
        } catch (ProcessorInvokerAgentException exc) {
            LOGGER.debug("Processor execution result message=\"{}\", status=\"{}\"", exc.getResult().getMessage(), exc.getResult().getRequestStatus().value());
            return exc.getResult();
        } catch (Exception exc) {
            LOGGER.error("Processor execution failed due to an exception.Ex:\"{}\"", exc.getMessage(), exc);
        }
        LOGGER.debug("handleProcessorExecution COMPLETED");
        return null;
    }
    
}

Based on the error, we removed the map.print() line, but after that change we got the following error.

2021-10-01 10:55:55,767 WARN  org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Configuring the job submission via query parameters is deprecated. Please migrate to submitting a JSON request instead.
2021-10-01 10:55:55,769 WARN  org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Configuring the job submission via query parameters is deprecated. Please migrate to submitting a JSON request instead.
2021-10-01 10:55:55,992 INFO  org.apache.flink.client.ClientUtils                          [] - Starting program (detached: true)
2021-10-01 10:55:56,091 ERROR tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent  [] - An unexpected exception occurred.
java.lang.RuntimeException: No data sinks have been created yet. A program needs at least one sink that consumes data. Examples are writing the data set or printing it.
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1170) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1145) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1041) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.ContextEnvironment.executeAsync(ContextEnvironment.java:129) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:70) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:942) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent.main(ProcessorInvokerAgent.java:150) ~[0a856782-ce46-4a33-b438-2519a2f92278_tr.com.sdt.mm.wfm.processor.api-0.0-DEV-SNAPSHOT.jar:0.0-DEV-SNAPSHOT]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_301]
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:84) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:70) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:102) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) [?:1.8.0_301]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_301]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_301]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_301]
2021-10-01 10:55:56,098 WARN  org.apache.flink.client.deployment.application.DetachedApplicationRunner [] - Could not execute application: 
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: No data sinks have been created yet. A program needs at least one sink that consumes data. Examples are writing the data set or printing it.
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:84) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:70) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:102) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) [?:1.8.0_301]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_301]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_301]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_301]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_301]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_301]
Caused by: java.lang.RuntimeException: No data sinks have been created yet. A program needs at least one sink that consumes data. Examples are writing the data set or printing it.
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1170) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:1145) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:1041) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.ContextEnvironment.executeAsync(ContextEnvironment.java:129) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:70) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:942) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    at tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent.main(ProcessorInvokerAgent.java:150) ~[?:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_301]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_301]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_301]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_301]
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist_2.11-1.13.2.jar:1.13.2]
    ... 13 more
2021-10-01 10:55:56,102 ERROR org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Exception occurred in REST handler: Could not execute application.

What should we do? How should we change our job? Are there any migration steps?

Thank you.

I suggest going through the release notes for Flink 1.5, 1.6, and so on, all the way up to 1.13. Flink 1.3 was released in 2017, and a lot has changed since then.

Restore map.print() and remove environment.execute().
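
A minimal sketch of the main method with that suggestion applied, assuming everything else stays as in the original job, would be:

public static void main(String[] args) throws Exception {

    ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
    String[][] dataTaken = { args };
    DataSet<String[]> elements = environment.fromElements(dataTaken);
    MapOperator<String[], IProcessingNodeResult> map = elements.map(new ProcessorInvokerAgent());
    try {
        // print() is an eager sink in the DataSet API: it submits the job and
        // fetches the result itself, so no separate environment.execute() follows.
        map.print();
    } catch (Exception exc) {
        LOGGER.error("An unexpected exception occurred.", exc);
        throw exc;
    }
}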

In the end, I solved this problem by applying the following changes to my implementation.

A tuple is used for the result. The class definition is updated as follows:

public class ProcessorInvokerAgent implements MapFunction<String[], Tuple1<IProcessingNodeResult>> {

The DataSet definition is updated as follows:

DataSet<Tuple1<IProcessingNodeResult>> results = elements.map(new ProcessorInvokerAgent());

map.print() is replaced with the following line:

results.writeAsText(localResultFile.getPath(), org.apache.flink.core.fs.FileSystem.WriteMode.OVERWRITE);
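
Putting the three changes together, a rough sketch of the revised main method could look like the following. This assumes the environment.execute() call from the original code is kept (writeAsText is a lazy sink, so something still has to trigger the job) and that localResultFile is a java.io.File chosen by the job; that variable is not shown in the snippets above.

public static void main(String[] args) throws Exception {

    ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
    String[][] dataTaken = { args };
    DataSet<String[]> elements = environment.fromElements(dataTaken);

    // map() now produces Tuple1<IProcessingNodeResult>, which a text sink can write out.
    DataSet<Tuple1<IProcessingNodeResult>> results = elements.map(new ProcessorInvokerAgent());

    // localResultFile is assumed to be a java.io.File pointing at the output location.
    results.writeAsText(localResultFile.getPath(),
            org.apache.flink.core.fs.FileSystem.WriteMode.OVERWRITE);

    // writeAsText is lazy, so the job is only submitted when execute() is called.
    environment.execute();
}

The map implementation keeps its existing validation and handleProcessorExecution logic and only wraps the result, e.g.:

public Tuple1<IProcessingNodeResult> map(String[] args) throws ProcessorAgentException {
    // ... same argument checks as before ...
    return new Tuple1<>(handleProcessorExecution(args));
}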

That's it.
