
Are there any problems with this way of starting an infinite loop in a Spring Boot application?

I have a Spring Boot application and it needs to process some Kafka streaming data. I added an infinite loop to a CommandLineRunner class that will run on startup. In there is a Kafka consumer that can be woken up. I added a shutdown hook with Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));. Will I run into any problems? Is there a more idiomatic way of doing this in Spring? Should I use @Scheduled instead? The code below is stripped of Kafka-specific implementation details but is otherwise complete.

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

import java.time.Duration;
import java.util.Properties;


@Component
public class InfiniteLoopStarter implements CommandLineRunner {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public void run(String... args) {
        Consumer<AccountKey, Account> consumer = new KafkaConsumer<>(new Properties());
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

        try {
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                // process records
            }
        } catch (WakeupException e) {
            logger.info("Consumer woken up for exiting.");
        } finally {
            consumer.close();
            logger.info("Closed consumer, exiting.");
        }
    }
}

I'm not sure if you'll run into any issues there, but it's a bit dirty - Spring has really nice built-in support for working with Kafka, so I would lean towards that (there's plenty of documentation on that on the web, but a nice one is: https://www.baeldung.com/spring-kafka).

You'll need the following dependency:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.2.RELEASE</version>
</dependency>

Configuration is as easy as adding the @EnableKafka annotation to a config class and then setting up Listener and ConsumerFactory beans.
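A minimal sketch of such a config class might look like the following; the bootstrap server address, group id, and String deserializers are placeholder assumptions, not values from the question:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    // Factory that creates the underlying KafkaConsumer instances
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    // Container factory that @KafkaListener methods are wired to
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
```

With this in place, Spring manages the consumer threads and their lifecycle for you, so no hand-rolled infinite loop or shutdown hook is needed.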

Once configured, you can set up a consumer easily as follows:

@KafkaListener(topics = "topicName")
public void listenWithHeaders(
  @Payload String message, 
  @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
      System.out.println("Received Message: " + message + " from partition: " + partition);
}

The implementation looks OK, but CommandLineRunner is not made for this. CommandLineRunner is meant to run a task only once, on startup. From a design perspective it's not very elegant. I would rather use the Spring Integration Kafka adapter component. You can find an example here: https://github.com/raphaelbrugier/spring-integration-kafka-sample/blob/master/src/main/java/com/github/rbrugier/esb/consumer/Consumer.java

To just answer my own question: I had a look at Kafka integration libraries like Spring Kafka and Spring Cloud Stream, but the integration with Confluent's Schema Registry is either not finished or not quite clear to me. It's fine for primitives, but we need it for typed Avro objects that are validated by the schema registry. I now implemented a Kafka-agnostic solution, based on the answer at Spring Boot - Best way to start a background thread on deployment.

The final code looks like this:

@Component
public class AccountStreamConsumer implements DisposableBean, Runnable {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private final AccountService accountService;
    private final KafkaProperties kafkaProperties;
    private final Consumer<AccountKey, Account> consumer;

    @Autowired
    public AccountStreamConsumer(AccountService accountService, KafkaProperties kafkaProperties,
                                 ConfluentProperties confluentProperties) {

        this.accountService = accountService;
        this.kafkaProperties = kafkaProperties;

        if (!kafkaProperties.getEnabled()) {
            consumer = null;
            return;
        }

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, confluentProperties.getSchemaRegistryUrl());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, kafkaProperties.getSecurityProtocolConfig());
        props.put(SaslConfigs.SASL_MECHANISM, kafkaProperties.getSaslMechanism());
        props.put(SaslConfigs.SASL_JAAS_CONFIG, PlainLoginModule.class.getName() + " required username=\"" + kafkaProperties.getUsername() + "\" password=\"" + kafkaProperties.getPassword() + "\";");
        props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getAccountConsumerGroupId());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);

        consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(kafkaProperties.getAccountsTopicName()));

        Thread thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        if (!kafkaProperties.getEnabled())
            return;

        logger.debug("Started account stream consumer");
        try {
            //noinspection InfiniteLoopStatement
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                List<Account> accounts = new ArrayList<>();
                records.iterator().forEachRemaining(record -> accounts.add(record.value()));
                if (!accounts.isEmpty())
                    accountService.store(accounts);
            }
        } catch (WakeupException e) {
            logger.info("Account stream consumer woken up for exiting.");
        } finally {
            consumer.close();
        }
    }

    @Override
    public void destroy() {
        if (consumer != null)
            consumer.wakeup();

        logger.info("Woke up account stream consumer, exiting.");
    }
}
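The shutdown handshake in the class above can be sketched Kafka-free. In this hypothetical stand-in, `Thread.sleep` plays the role of `consumer.poll(...)`, the flag-plus-`interrupt()` pair plays the role of `consumer.wakeup()` throwing `WakeupException`, and the latch ensures `destroy()` only returns after the loop has cleaned up (the equivalent of `consumer.close()`):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical, Kafka-free sketch of the same wake-up-and-close handshake.
class PollLoop implements Runnable {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final CountDownLatch stopped = new CountDownLatch(1);
    private final Thread worker = new Thread(this, "poll-loop");

    public void start() {
        worker.start();
    }

    @Override
    public void run() {
        try {
            while (running.get()) {
                Thread.sleep(10); // stands in for consumer.poll(Duration.ofSeconds(10L))
            }
        } catch (InterruptedException e) {
            // analogous to catching WakeupException
            Thread.currentThread().interrupt();
        } finally {
            stopped.countDown(); // analogous to consumer.close()
        }
    }

    // Analogous to DisposableBean.destroy(): signal the loop, then wait for it to finish.
    public void destroy() {
        running.set(false);
        worker.interrupt();
        try {
            stopped.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public boolean isStopped() {
        return stopped.getCount() == 0;
    }
}
```

The key property, same as in the real class, is that `destroy()` blocks until the loop body has actually exited, so the Spring context never tears down resources the loop is still using.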
