
When using kafka + beam + flink: Found interface org.apache.flink.streaming.api.operators.InternalTimer, but class was expected

While writing my own Apache Beam demo, which uses Kafka to publish/subscribe data from a shopping system and Beam to define the dataflow, running on Flink, I hit a very unusual exception:

    Caused by: java.lang.IncompatibleClassChangeError: Found interface org.apache.flink.streaming.api.operators.InternalTimer, but class was expected
        at org.apache.beam.runners.flink.translation.wrappers.streaming.WindowDoFnOperator.fireTimer(WindowDoFnOperator.java:129)
        at org.apache.beam.runners.flink.translation.wrappers.streaming.DoFnOperator.onProcessingTime(DoFnOperator.java:704)
        at org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.onProcessingTime(InternalTimerServiceImpl.java:235)
        at org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$TriggerTask.run(SystemProcessingTimeService.java:285)

My code is:

    package com.meikeland.dataflow;

    import org.apache.beam.runners.flink.FlinkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.*;
    import org.apache.beam.sdk.transforms.windowing.*;
    import org.apache.beam.sdk.values.KV;
    import org.apache.kafka.common.serialization.LongDeserializer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.joda.time.Duration;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class GameStats {

      private static final Logger logger = LoggerFactory.getLogger(GameStats.class);

      public static void main(String[] args) {
        KFOptions options = PipelineOptionsFactory.fromArgs(args).as(KFOptions.class);
        options.setRunner(FlinkRunner.class);
        options.setStreaming(true);
        logger.info("brokers address is: {}", options.getBrokers());
        runDemoCount(options);
      }

      private static void runDemoCount(KFOptions options) {
        Pipeline pipeline = Pipeline.create(options);

        pipeline
            // read order events from kafka
            .apply("ConsumeKafka",
                KafkaIO.<Long, String>read().withBootstrapServers(options.getBrokers()).withTopic("tracking.order.goods")
                    .withKeyDeserializer(LongDeserializer.class).withValueDeserializer(StringDeserializer.class)
                    .withLogAppendTime().withoutMetadata())
            .apply(Values.create()) // drop the Kafka keys, keep only the string payloads
            .apply("ParseOrderInfo", ParDo.of(new ParseOrderInfoFn())) // user-defined DoFn (not shown here)
            .apply("SetTimestamp", WithTimestamps.of(OrderInfo::getCreatedAt))
            .apply("ExtractOrderID", MapElements.via(new SimpleFunction<OrderInfo, Integer>() {
              public Integer apply(OrderInfo o) {
                logger.info("processed orderID: {}", o.getOrderID());
                return o.getOrderID();
              }
            }))
            // window into fixed 1-minute windows; fire early once per minute of
            // processing time, re-fire on every late element within the 1-minute
            // allowed lateness, and accumulate fired panes
            .apply("FixedWindowsOrderID",
                Window.<Integer>into(FixedWindows.of(new Duration(1000 * 60)))
                    .triggering(AfterWatermark.pastEndOfWindow()
                        .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane().plusDelayOf(new Duration(1000 * 60)))
                        .withLateFirings(AfterPane.elementCountAtLeast(1)))
                    .withAllowedLateness(new Duration(1000 * 60)).accumulatingFiredPanes())
            .apply("Count", Count.<Integer>perElement()).apply("ToString", ParDo.of(new DoFn<KV<Integer, Long>, String>() {
              @ProcessElement
              public void processElement(@Element KV<Integer, Long> element, IntervalWindow window,
                  OutputReceiver<String> r) {
                logger.info("the order is : {}, and count is : {}", element.getKey(), element.getValue());
                r.output(String.format("interval :%s, Order ID: %d, Count :%d", window.start().toString(), element.getKey(),
                    element.getValue()));
              }
            })).apply("WriteToKafka", KafkaIO.<Void, String>write().withBootstrapServers(options.getBrokers())
                .withTopic("streaming.order.count").withValueSerializer(StringSerializer.class).values());

        pipeline.run().waitUntilFinish();
      }
    }

The error seems to be in the windowing, but I can't figure it out, and googling everywhere turned up no one with a similar error, so I must be getting some small detail wrong. Can anyone help?

I ran into the same problem and solved it by checking that my Flink version is compatible with Beam:

https://beam.apache.org/documentation/runners/flink/

In my case, I had Beam 2.6 and Flink 1.5.4.
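
If you are not sure which Flink version actually ends up on your classpath (a transitive dependency can pull in a different one than you expect), here is a minimal sketch of a runtime check; it assumes flink-runtime is on the classpath and uses Flink's EnvironmentInformation utility:

    import org.apache.flink.runtime.util.EnvironmentInformation;

    public class FlinkVersionCheck {
      public static void main(String[] args) {
        // Prints the Flink version found on the classpath, e.g. "1.5.4";
        // compare it against the Beam capability matrix linked above.
        System.out.println("Flink version: " + EnvironmentInformation.getVersion());
      }
    }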

Hope this helps.

Regards, Ali

I also had this problem and finally solved it.

If your project depends on

    "org.apache.beam" % "beam-runners-flink" % beamVersion

then it uses InternalTimer as a class.

Looking at the Scala API docs for org.apache.flink.streaming, InternalTimer became an interface after Flink 1.6, so runner code compiled against the old class definition fails at class-loading time with exactly this IncompatibleClassChangeError.

To use the Apache Beam FlinkRunner correctly with the InternalTimer interface on Flink 1.6 and later, your project must depend on

    "org.apache.beam" % "beam-runners-flink-1.6" % beamVersion

or

    "org.apache.beam" % "beam-runners-flink-1.7" % beamVersion

or

    "org.apache.beam" % "beam-runners-flink-1.8" % beamVersion

and everything will work fine.
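
Those lines are sbt syntax. If your build uses Maven instead (the question's code is plain Java), the equivalent dependency would look something like the sketch below; beam.version is a property you would define yourself, and you should pick the runner artifact matching your Flink minor version:

    <!-- hypothetical Maven equivalent of the sbt lines above -->
    <dependency>
      <groupId>org.apache.beam</groupId>
      <artifactId>beam-runners-flink-1.6</artifactId>
      <version>${beam.version}</version>
    </dependency>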
