
How can I skip all existing Kafka messages when starting an app using MicroProfile Reactive Messaging with Quarkus?

I'm developing an app that uses Reactive Messaging with Quarkus, with Kafka as the connector between my app and another app. When the app starts, I'd like it not to retrieve all the existing messages from one channel, but only the new ones. Is it possible to configure this in application.properties?

This is my consumer configuration:

    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = kafka-consumer-list-temperature
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = smart-home
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class io.smallrye.reactive.messaging.kafka.fault.DeserializerWrapper
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 10000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    socket.connection.setup.timeout.max.ms = 127000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class io.smallrye.reactive.messaging.kafka.fault.DeserializerWrapper
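
For context, the corresponding incoming channel is declared in application.properties roughly like this (the channel name list-temperature is taken from the client.id above; the topic and deserializer here are placeholders):

    mp.messaging.incoming.list-temperature.connector=smallrye-kafka
    mp.messaging.incoming.list-temperature.topic=MOISTURE
    mp.messaging.incoming.list-temperature.group.id=smart-home
    mp.messaging.incoming.list-temperature.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer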

This is my consumer rebalance listener:

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Named;

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    import io.smallrye.reactive.messaging.kafka.KafkaConsumerRebalanceListener;

    @ApplicationScoped
    @Named("rebalanced-example.rebalancer")
    public class KafkaRebalancedConsumerRebalanceListener implements KafkaConsumerRebalanceListener {

        /**
         * When receiving the list of assigned partitions, looks up the earliest offset
         * within the last 10 minutes and seeks the consumer to it.
         *
         * @param consumer   underlying consumer
         * @param partitions set of assigned topic partitions
         */
        @Override
        public void onPartitionsAssigned(Consumer<?, ?> consumer,
                                         Collection<TopicPartition> partitions) {
            long now = System.currentTimeMillis();
            long shouldStartAt = now - 600_000L; // 10 minutes ago

            Map<TopicPartition, Long> request = new HashMap<>();
            for (TopicPartition partition : partitions) {
                System.out.println("PARTITION: " + partition);
                request.put(partition, shouldStartAt);
            }

            Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(request);
            for (Map.Entry<TopicPartition, OffsetAndTimestamp> position : offsets.entrySet()) {
                System.out.println("POSITION: " + position.getKey());
                // offsetsForTimes returns null for a partition when no record at or after the timestamp exists
                long target = position.getValue() == null ? 0L : position.getValue().offset();
                System.out.println("TARGET: " + target);
                consumer.seek(position.getKey(), target);
            }
        }
    }
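
For reference, the listener is attached to the incoming channel through the connector's consumer-rebalance-listener.name attribute, matching the @Named value above (a sketch, assuming the channel is named list-temperature):

    mp.messaging.incoming.list-temperature.consumer-rebalance-listener.name=rebalanced-example.rebalancer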

In the System.out output, both the partition and the position show the same value, MOISTURE-0, and the target is always 0. Thanks.

Set the auto.offset.reset configuration property to latest: https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset

To configure it for a specific MicroProfile Reactive Messaging incoming channel, use:

    mp.messaging.incoming.<channel>.auto.offset.reset=latest
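
For example, assuming the incoming channel is named list-temperature (the channel name here is only a placeholder), the application.properties entry would be:

    # fall back to the latest offset when the group has no committed offset
    mp.messaging.incoming.list-temperature.auto.offset.reset=latest

Note that auto.offset.reset only takes effect when the consumer group has no committed offsets; if the smart-home group has already committed offsets, the consumer resumes from those regardless of this setting.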
