
Spark Kafka streaming doesn't distribute consumer load on worker nodes

I created the following application, which counts and prints particular message events over a 20-second window:

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import static org.apache.kafka.clients.consumer.ConsumerConfig.*;

public class SparkMain {

public static void main(String[] args) {
    Map<String, Object> kafkaParams = new HashMap<>();

    kafkaParams.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092, localhost:9093");
    kafkaParams.put(GROUP_ID_CONFIG, "spark-consumer-id");
    kafkaParams.put(KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    kafkaParams.put(VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    // events topic has 2 partitions
    Collection<String> topics = Arrays.asList("events");

    // local[*] Run Spark locally with as many worker threads as logical cores on your machine.
    SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("SsvpSparkStreaming");

    // Create context with a 1 second batch interval
    JavaStreamingContext streamingContext =
            new JavaStreamingContext(conf, Durations.seconds(1));

    JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                    streamingContext,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
            );

    // extract event name from record value
    stream.map(new Function<ConsumerRecord<String, String>, String>() {
        @Override
        public String call(ConsumerRecord<String, String> rec) throws Exception {
            return rec.value().substring(0, 5);
        }})
    // filter events
    .filter(new Function<String, Boolean>() {
        @Override
        public Boolean call(String eventName) throws Exception {
            return eventName.contains("msg");
        }})
    // count with 20sec window and 5 sec slide duration
    .countByValueAndWindow(Durations.seconds(20), Durations.seconds(5))
    .print();

    streamingContext.checkpoint("c:\\projects\\spark\\");
    streamingContext.start();
    try {
        streamingContext.awaitTermination();
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
}

After running the main method, in the logs I see only a single consumer being initialized, and it gets both partitions:

2018-10-25 18:25:56,007 INFO [org.apache.kafka.common.utils.LogContext$KafkaLogger.info] - <[Consumer clientId=consumer-1, groupId=spark-consumer-id] Setting newly assigned partitions [events-0, events-1]>

Shouldn't the number of consumers be equal to the number of Spark workers? According to https://spark.apache.org/docs/2.3.2/submitting-applications.html#master-urls

local[*] means - Run Spark locally with as many worker threads as logical cores on your machine.

I have an 8-core CPU, so I expected 8 consumers, or at least 2, to be created, with each consumer getting one partition of the "events" topic (which has 2 partitions).

It seems to me that I need to run a full standalone Spark master-worker cluster with 2 nodes, where each node starts its own consumer...
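As a side note, a minimal way to see how much parallelism the direct stream actually provides is to log each micro-batch's partition count; with KafkaUtils.createDirectStream there is a one-to-one mapping between Kafka partitions and Spark partitions, so this prints 2 here regardless of the local[*] thread count. The foreachRDD call below is an illustrative sketch, not part of the original program:

    // Illustrative sketch (not in the original job): print the number of Spark
    // partitions per micro-batch. With the direct stream this equals the number
    // of Kafka partitions of the subscribed topic (2 for "events"), regardless
    // of how many local[*] worker threads are available.
    stream.foreachRDD(rdd ->
            System.out.println("Partitions in this batch: " + rdd.getNumPartitions()));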

You don't necessarily need separate workers or a running cluster manager.

It sounds like you are looking to use 2 Spark executors.

How to set the number of Spark executors?
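For example, here is a sketch of how the executor count is typically configured, assuming the job is submitted with spark-submit to a cluster manager rather than run with local[*]; the exact property depends on the manager:

    // Sketch only: assumes spark-submit against a cluster manager, e.g.
    // --master spark://<master-host>:7077 (standalone) or --master yarn.
    SparkConf conf = new SparkConf()
            .setAppName("SsvpSparkStreaming")
            // YARN: request two executors explicitly
            .set("spark.executor.instances", "2")
            // standalone: cap the total cores so two one-core executors are started
            .set("spark.executor.cores", "1")
            .set("spark.cores.max", "2");

The equivalent spark-submit flags are --num-executors (YARN) or --executor-cores together with --total-executor-cores (standalone); with 2 executors, each one can then consume one of the two partitions of the "events" topic.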

