
Spring Boot based Kafka Consumer Acknowledgement policy

We have a Spring Boot based Kafka consumer for which we have created a factory like this:

@Bean
    public ConsumerFactory<String, Customer> customerConsumerFactory() {
        Map<String, Object> config = new HashMap<>();

        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "bootstrapServers");
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "${kafka.customer.consumer.group}");
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);

        config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        config.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "5000");
        config.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG,"25000");
        config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG,String.valueOf(Integer.MAX_VALUE));
        config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "2");

        return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(),
                new JsonDeserializer<>(Customer.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Customer> customerConsumerKafkaListenerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Customer> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(customerConsumerFactory());
        return factory;
    }

With the above configuration, our intent is that, in a single poll, this consumer reads at most 2 records, and that we use a manual acknowledgement policy. Here is what the consumer code looks like:

@KafkaListener(topics = "${kafka.consumer.topic}", groupId = "${kafka.consumer.group}", containerFactory = "customerConsumerKafkaListenerFactory")
public void consumeResponseEventFromPH(Customer customerObject, Acknowledgment acknowledgment) {
    acknowledgment.acknowledge();
    // Business logic.
}

Question 1.) Would the statement acknowledgment.acknowledge(); send an acknowledgement to the Kafka broker for both messages together, or would the method itself get executed twice, once for each incoming message?

Question 2.) What if something goes wrong during the processing of these messages? Would these messages be lost forever?

Question 3.) Is there some way to send a conditional acknowledgement at the individual-message level?

Question 4.) Say I never acknowledge the message. How many times would that message be redelivered by the broker?

Question 5.) What's the difference between these two ConsumerConfig properties, MAX_POLL_RECORDS_DOC and MAX_POLL_RECORDS_CONFIG?

Answers would be highly appreciated.

- Thanks Aditya

  1. You will only get an Acknowledgment if the container ack mode is MANUAL (both offsets are committed after both records have been processed) or MANUAL_IMMEDIATE (each offset is committed immediately, sync or async depending on the syncCommits container property); see the first sketch after this list.

  2. It depends on the version; with older versions, the error was just logged. With recent versions the default error handler is a SeekToCurrentErrorHandler. By default, delivery will be attempted 10 times with no delay and then logged. You can configure a recoverer to be called after the retries are exhausted (such as the DeadLetterPublishingRecoverer); see the second sketch after this list.

  3. No; Kafka only maintains an offset; discrete records are not acknowledged.

  4. It will not be redelivered unless you throw an exception (see point 2 and the third sketch after this list). See the reference manual about error handling: https://docs.spring.io/spring-kafka/docs/current/reference/html/#annotation-error-handling

  5. One (_DOC) is the documentation text for the property; the other (_CONFIG) is the property name itself (see the last sketch after this list).
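
Re point 1 — a minimal sketch, assuming a recent spring-kafka version (where the AckMode enum lives on ContainerProperties), of switching the question's listener container factory to manual acknowledgement:

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Customer> customerConsumerKafkaListenerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Customer> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(customerConsumerFactory());
    // Without one of the MANUAL modes the container commits offsets itself and the
    // Acknowledgment parameter is not populated for the listener.
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    // Use ContainerProperties.AckMode.MANUAL_IMMEDIATE to commit each offset as soon as acknowledge() is called.
    return factory;
}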
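
Re point 2 — a hedged sketch, assuming spring-kafka 2.3+ and an existing KafkaTemplate bean (the template parameter here is an assumption, not something from the original post), of a SeekToCurrentErrorHandler with a DeadLetterPublishingRecoverer:

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<Object, Object> kafkaTemplate) {
    // After retries are exhausted, publish the failed record to a "<topic>.DLT" dead-letter topic.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
    // FixedBackOff(0L, 9L): no delay between attempts, 9 retries after the first failure (10 deliveries in total).
    return new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 9L));
}

The handler would then be attached to the listener container factory with factory.setErrorHandler(...).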
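
Re point 4 — a sketch of the question's listener illustrating that redelivery only happens when the listener throws; the null check and the getId() accessor are hypothetical illustrations, not part of the original code:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

@KafkaListener(topics = "${kafka.consumer.topic}", groupId = "${kafka.consumer.group}", containerFactory = "customerConsumerKafkaListenerFactory")
public void consumeResponseEventFromPH(Customer customerObject, Acknowledgment acknowledgment) {
    // Hypothetical validation: throwing a RuntimeException hands the record to the error
    // handler, which seeks back to this offset so the record is delivered again on the next poll.
    if (customerObject.getId() == null) {
        throw new IllegalArgumentException("Customer without id");
    }
    // Merely skipping acknowledge() does NOT make the broker resend the record to this running
    // consumer; the uncommitted offset only matters after a restart or rebalance.
    acknowledgment.acknowledge();
}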
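
Re point 5 — purely for illustration, inside the customerConsumerFactory() shown in the question, the _CONFIG constant is just the property key string:

// ConsumerConfig.MAX_POLL_RECORDS_CONFIG holds the key "max.poll.records", so this line ...
config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "2");
// ... is equivalent to using the raw key string:
config.put("max.poll.records", "2");
// MAX_POLL_RECORDS_DOC is only the human-readable description of that property used in
// Kafka's generated configuration documentation; it is never used as a configuration key.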
