
How to log offset in KStreams Bean using spring-kafka and kafka-streams

I have read almost all the questions about logging the offset in KStreams via the Processor API's transform() or process() method, such as this one:

How can I get the offset value in KStream

But I was not able to get a working solution from those answers, so I'm asking this question.

I want to log the partition, consumer group id, and offset each time a message is consumed by the stream, but I don't understand how to integrate the process() or transform() method with the ProcessorContext API. And if I implement the Processor interface in my CustomParser class, I would have to implement all of its methods, and I'm not sure that will work, as described in the Confluent docs on record metadata - https://docs.confluent.io/current/streams/developer-guide/processor-api.html#streams-developer-guide-processor-api

I've set up KStreams in a spring-boot application like below (for reference, I have changed the variable names):

    @Bean
    public Set<KafkaStreams> myKStreamJson(StreamsBuilder profileBuilder) {
        Serde<JsonNode> jsonSerde = Serdes.serdeFrom(jsonSerializer, jsonDeserializer);

        final KStream<String, JsonNode> pStream =
                profileBuilder.stream(inputTopic, Consumed.with(Serdes.String(), jsonSerde));

        Properties props = streamsConfig.kStreamsConfigs().asProperties();

        pStream
                .map((key, value) -> {
                    try {
                        return CustomParser.parse(key, value);
                    } catch (Exception e) {
                        LOGGER.error("Error occurred - " + e.getMessage());
                    }
                    return new KeyValue<>(null, null);
                })
                .filter((key, value) -> {
                    try {
                        return MessageFilter.filterNonNull(key, value);
                    } catch (Exception e) {
                        LOGGER.error("Error occurred - " + e.getMessage());
                    }
                    return false;
                })
                .through(
                        outputTopic,
                        Produced.with(Serdes.String(), new JsonPOJOSerde<>(TransformedMessage.class)));

        return Sets.newHashSet(
                new KafkaStreams(profileBuilder.build(), props)
        );
    }

Implement Transformer; save off the ProcessorContext in init(); you can then access the record metadata in transform() and simply return the original key/value.

Here is an example of a Transformer provided by Spring for Apache Kafka; it invokes a Spring Integration flow to transform the key/value.
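The idea above can be sketched roughly as follows. This is a minimal, hypothetical pass-through Transformer (the class name `OffsetLoggingTransformer` is my own); it assumes the kafka-streams dependency is on the classpath. Note that ProcessorContext does not expose the consumer group id directly, but for Kafka Streams the group id is always the application.id, which is available via context.applicationId().

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import com.fasterxml.jackson.databind.JsonNode;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OffsetLoggingTransformer
        implements Transformer<String, JsonNode, KeyValue<String, JsonNode>> {

    private static final Logger LOGGER =
            LoggerFactory.getLogger(OffsetLoggingTransformer.class);

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        // Save the context; it exposes the metadata of the record
        // currently being processed.
        this.context = context;
    }

    @Override
    public KeyValue<String, JsonNode> transform(String key, JsonNode value) {
        LOGGER.info("appId(groupId)={} topic={} partition={} offset={}",
                context.applicationId(), context.topic(),
                context.partition(), context.offset());
        // Return the record unchanged.
        return KeyValue.pair(key, value);
    }

    @Override
    public void close() {
        // Nothing to clean up.
    }
}
```

You would wire it in at the front of your topology, e.g. `pStream.transform(OffsetLoggingTransformer::new).map(...)`. A new Transformer instance must be created per call of the supplier, which the constructor reference guarantees here.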
