
Are there any problems with this way of starting an infinite loop in a Spring Boot application?

I have a Spring Boot application that needs to process some Kafka streaming data. I added an infinite loop to a CommandLineRunner class that will run on startup. In there is a Kafka consumer that can be woken up. I added a shutdown hook with Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));. Will I run into any problems? Is there a more idiomatic way of doing this in Spring? Should I use @Scheduled instead? The code below is stripped of the Kafka-specific implementation details but is otherwise complete.

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

import java.time.Duration;
import java.util.Properties;


@Component
public class InfiniteLoopStarter implements CommandLineRunner {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public void run(String... args) {
        Consumer<AccountKey, Account> consumer = new KafkaConsumer<>(new Properties());
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

        try {
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                //process records
            }
        } catch (WakeupException e) {
            logger.info("Consumer woken up for exiting.");
        } finally {
            consumer.close();
            logger.info("Closed consumer, exiting.");
        }
    }
}

I'm not sure you'll run into any problems there, but it's a bit dirty - Spring has really nice built-in support for working with Kafka, so I would lean towards that (there's plenty of documentation on the web, but a nice one is: https://www.baeldung.com/spring-kafka ).

You'll need the following dependency:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.2.RELEASE</version>
</dependency>

Configuration is as easy as adding the @EnableKafka annotation to a configuration class and then setting up Listener and ConsumerFactory beans.
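The beans mentioned there are left implicit; a minimal sketch of such a configuration class might look like the following (the broker address `localhost:9092`, the group id `my-group`, and the `String`/`String` key and value types are placeholder assumptions, not part of the original answer):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Placeholder broker address and group id - replace with real values
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
```

With @EnableKafka in place, Spring scans for @KafkaListener methods and manages the consumer threads and the polling loop for you, which is exactly the plumbing the question's CommandLineRunner does by hand.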

配置完成后,您可以輕松設置使用者,如下所示:

@KafkaListener(topics = "topicName")
public void listenWithHeaders(
  @Payload String message, 
  @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
      System.out.println("Received Message: " + message + " from partition: " + partition);
}

The implementation looks OK, but CommandLineRunner is not made for this - it is only meant to run some task at startup, so it is not very elegant from a design perspective. I would rather use a Spring Integration adapter component for Kafka. You can find an example here: https://github.com/raphaelbrugier/spring-integration-kafka-sample/blob/master/src/main/java/com/github/rbrugier/esb/consumer/Consumer.java
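The inbound side of that approach can be sketched roughly as follows. This is not the author's code: it assumes spring-integration-kafka is on the classpath, that a `ConsumerFactory<String, String>` bean is defined elsewhere, and the topic name `"topicName"` is a placeholder:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.messaging.MessageChannel;

@Configuration
public class KafkaIntegrationConfig {

    // Channel that downstream integration flows read Kafka messages from
    @Bean
    public MessageChannel fromKafka() {
        return new DirectChannel();
    }

    // Listener container that owns the polling loop and consumer lifecycle
    @Bean
    public KafkaMessageListenerContainer<String, String> container(ConsumerFactory<String, String> cf) {
        return new KafkaMessageListenerContainer<>(cf, new ContainerProperties("topicName"));
    }

    // Adapter that bridges consumed records onto the message channel
    @Bean
    public KafkaMessageDrivenChannelAdapter<String, String> adapter(
            KafkaMessageListenerContainer<String, String> container) {
        KafkaMessageDrivenChannelAdapter<String, String> adapter =
                new KafkaMessageDrivenChannelAdapter<>(container);
        adapter.setOutputChannel(fromKafka());
        return adapter;
    }
}
```

The point of the design is the same as with spring-kafka: the container manages the infinite poll loop and graceful shutdown, so application code never writes `while (true)` itself.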

To answer my own question, I took a look at Kafka integration libraries like Spring-Kafka and Spring Cloud Stream, but the integration with Confluent's Schema Registry is either not finished or not quite clear to me. It's simply enough for primitives, but we need it for typed Avro objects that are validated through the schema registry. I now implemented a Kafka-agnostic solution, based on the answer to Spring Boot - Best way to start a background thread on deployment.

The final code looks like this:

@Component
public class AccountStreamConsumer implements DisposableBean, Runnable {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private final AccountService accountService;
    private final KafkaProperties kafkaProperties;
    private final Consumer<AccountKey, Account> consumer;

    @Autowired
    public AccountStreamConsumer(AccountService accountService, KafkaProperties kafkaProperties,
                                 ConfluentProperties confluentProperties) {

        this.accountService = accountService;
        this.kafkaProperties = kafkaProperties;

        if (!kafkaProperties.getEnabled()) {
            consumer = null;
            return;
        }

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, confluentProperties.getSchemaRegistryUrl());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, kafkaProperties.getSecurityProtocolConfig());
        props.put(SaslConfigs.SASL_MECHANISM, kafkaProperties.getSaslMechanism());
        props.put(SaslConfigs.SASL_JAAS_CONFIG, PlainLoginModule.class.getName() + " required username=\"" + kafkaProperties.getUsername() + "\" password=\"" + kafkaProperties.getPassword() + "\";");
        props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getAccountConsumerGroupId());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);

        consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(kafkaProperties.getAccountsTopicName()));

        Thread thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        if (!kafkaProperties.getEnabled())
            return;

        logger.debug("Started account stream consumer");
        try {
            //noinspection InfiniteLoopStatement
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                List<Account> accounts = new ArrayList<>();
                records.iterator().forEachRemaining(record -> accounts.add(record.value()));
                if (!accounts.isEmpty())
                    accountService.store(accounts);
            }
        } catch (WakeupException e) {
            logger.info("Account stream consumer woken up for exiting.");
        } finally {
            consumer.close();
        }
    }

    @Override
    public void destroy() {
        if (consumer != null)
            consumer.wakeup();

        logger.info("Woke up account stream consumer, exiting.");
    }
}
