
Kafka Streams records not forwarding after windowing/aggregation

I am using Kafka Streams with a tumbling window followed by an aggregate step, but I'm observing that the number of records emitted to the aggregate function keeps declining. Any idea where I'm going wrong?

Code:

  Properties props = new Properties();
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "events_streams_local");
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
  props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
  props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
  props.put(StreamsConfig.METRIC_REPORTER_CLASSES_CONFIG, Arrays.asList(JmxReporter.class));
  props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams/data/");
  props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 20);

  props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 60000);
  props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, EventTimeExtractor.class);

  props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

  final StreamsBuilder builder = new StreamsBuilder();
  HashGenerator hashGenerator = new HashGenerator(1);
  builder
  .stream(inputTopics)
  .mapValues((key, value) -> {
    stats.incrInputRecords();
    Event event = jsonUtil.fromJson((String) value, Event.class);
    return event;
  })
  .filter(new UnifiedGAPingEventFilter(stats))
  .selectKey(new KeyValueMapper<Object, Event, String>() {

    @Override
    public String apply(Object key, Event event) {
      return (String) key;
    }
  })
  .groupByKey(Grouped.with(Serdes.String(), eventSerdes))
  .windowedBy(TimeWindows.of(Duration.ofSeconds(30)))
  .aggregate(new AggregateInitializer(), new UserStreamAggregator(), Materialized.with(Serdes.String(), aggrSerdes))
  .mapValues((k, v) -> {
    // update counter for aggregate records
    return v;
  })
  .toStream()
  .map(new RedisSink(stats));

  topology = builder.build();
  streams = new KafkaStreams(topology, props);

The number of Redis operations per second just keeps sliding down.

Kafka Streams uses caches in its state stores to reduce downstream load. If you want every update to the store to be emitted as a downstream record, you can set the cache size to zero via StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG (globally for all stores), or per store by passing Materialized.as(...).withCachingDisabled() to the corresponding operator (e.g., aggregate()).
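
For example, here is a minimal sketch of both options. It assumes a String-keyed input topic named input-topic and substitutes a simple per-key count for the question's AggregateInitializer/UserStreamAggregator, so the store name and aggregation logic are only illustrative:

  import java.time.Duration;
  import java.util.Properties;

  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.common.utils.Bytes;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;
  import org.apache.kafka.streams.kstream.Grouped;
  import org.apache.kafka.streams.kstream.Materialized;
  import org.apache.kafka.streams.kstream.TimeWindows;
  import org.apache.kafka.streams.state.WindowStore;

  public class CachingConfigSketch {
    public static void main(String[] args) {
      // Option 1: disable record caching globally, for all state stores.
      Properties props = new Properties();
      props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);

      // Option 2: disable caching only for this aggregation's store.
      // "user-aggregates" is an illustrative store name; the counting
      // aggregator stands in for the question's custom classes.
      StreamsBuilder builder = new StreamsBuilder();
      builder.<String, String>stream("input-topic")
          .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
          .windowedBy(TimeWindows.of(Duration.ofSeconds(30)))
          .aggregate(
              () -> 0L,                       // initializer
              (key, value, agg) -> agg + 1L,  // aggregator: count records per key
              Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("user-aggregates")
                  .withKeySerde(Serdes.String())
                  .withValueSerde(Serdes.Long())
                  .withCachingDisabled())     // forward every update downstream
          .toStream();
    }
  }

With caching disabled, every single update to the windowed store is forwarded downstream, at the cost of more output records and correspondingly higher load on whatever sink (Redis, in your case) consumes them.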

Check out the docs for more details: https://docs.confluent.io/current/streams/developer-guide/memory-mgmt.html
