Apache Spark - Parallel Processing of messages from Kafka - Java
JavaPairReceiverInputDStream<String, byte[]> messages = KafkaUtils.createStream(...);
JavaPairDStream<String, byte[]> filteredMessages = filterValidMessages(messages);
JavaDStream<String> useCase1 = calculateUseCase1(filteredMessages);
JavaDStream<String> useCase2 = calculateUseCase2(filteredMessages);
JavaDStream<String> useCase3 = calculateUseCase3(filteredMessages);
JavaDStream<String> useCase4 = calculateUseCase4(filteredMessages);
... ...
I retrieve messages from Kafka, filter them, and use the same messages for multiple use cases. Here useCase1 through useCase4 are independent of each other and can be calculated in parallel. However, when I look at the logs, I see that the calculations happen sequentially. How can I make them run in parallel? Any suggestion would be helpful.
Try creating a separate Kafka topic for each of your 4 use cases, then create 4 different Kafka DStreams.
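A minimal sketch of that suggestion, assuming the receiver-based `KafkaUtils.createStream` API from the question, an existing `JavaStreamingContext jssc`, and placeholder ZooKeeper quorum, consumer group, and topic names (all hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.kafka.KafkaUtils;

// One DStream per use case, each reading its own topic.
// "zk1:2181", "my-group", and "useCaseTopicN" are placeholder names.
List<JavaPairReceiverInputDStream<String, String>> streams = new ArrayList<>();
for (int n = 1; n <= 4; n++) {
    streams.add(KafkaUtils.createStream(
        jssc, "zk1:2181", "my-group",
        Collections.singletonMap("useCaseTopic" + n, 1)));
}
// Each stream gets its own receiver, so calculateUseCaseN applied to
// streams.get(n - 1) can consume and process independently.
```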
I moved all of the code inside a for loop, iterating once per partition of the Kafka topic, and I see an improvement.
for (int i = 0; i < numOfPartitions; i++) {
    JavaPairReceiverInputDStream<String, byte[]> messages =
        KafkaUtils.createStream(...);
    JavaPairDStream<String, byte[]> filteredMessages =
        filterValidMessages(messages);
    JavaDStream<String> useCase1 = calculateUseCase1(filteredMessages);
    JavaDStream<String> useCase2 = calculateUseCase2(filteredMessages);
    JavaDStream<String> useCase3 = calculateUseCase3(filteredMessages);
    JavaDStream<String> useCase4 = calculateUseCase4(filteredMessages);
}
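The referenced tutorial also shows a related pattern: create one receiver per partition for read parallelism, but union the streams so each use case is defined only once. A sketch under the same assumptions as the loop above (`jssc` is the streaming context; the `createStream` arguments are elided as in the original):

```java
// Variant: one receiver per partition, then a single union, so the
// filter and the four use-case calculations are each defined once.
List<JavaPairDStream<String, byte[]>> streams = new ArrayList<>();
for (int i = 0; i < numOfPartitions; i++) {
    streams.add(KafkaUtils.createStream(...)); // same arguments as above
}
JavaPairDStream<String, byte[]> unioned =
    jssc.union(streams.get(0), streams.subList(1, streams.size()));
JavaPairDStream<String, byte[]> filtered = filterValidMessages(unioned);
JavaDStream<String> useCase1 = calculateUseCase1(filtered);
// ... useCase2 through useCase4 likewise
```

Note that this parallelizes the *ingestion*; whether the four output operations themselves run concurrently still depends on how Spark schedules the jobs in each batch.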
Reference: http://www.michael-noll.com/blog/2014/10/01/kafka-spark-streaming-integration-example-tutorial/